An experimental CPU backend for Triton (https://github.com/openai/triton)

Triton-CPU

A long-lived development branch to build an experimental CPU backend for Triton.

This repository clones the main Triton repository, but we intend to minimize divergences in the core (and ideally upstream anything that needs to change and isn't too CPU-specific). Most of the CPU work should be in a backend subdirectory (similar to how GPU vendors are supported today). We're starting with a clone to give ourselves maximum development flexibility as this project gets off the ground!

How to use it?

Build it like normal Triton, but set TRITON_CPU_BACKEND=1 to select the CPU backend instead of a GPU backend, if one is present.

TRITON_CPU_BACKEND=1 python3 tutorials/01-vector-add.py

NOTE: This is still a work in progress.
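
For reference, tutorials/01-vector-add.py launches a simple element-wise kernel. Below is a minimal sketch of such a kernel using the standard Triton Python API; it mirrors the upstream vector-add tutorial, and device handling on the experimental CPU backend may differ from what is shown here.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Example driver (assumes plain CPU tensors are accepted when TRITON_CPU_BACKEND=1).
x = torch.rand(98432)
y = torch.rand(98432)
out = torch.empty_like(x)
n = out.numel()
grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)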


Upstream README


We're hiring! If you are interested in working on Triton at OpenAI, we have roles open for Compiler Engineers and Kernel Engineers.


Triton

This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.

The foundations of this project are described in the following MAPL2019 publication: Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations. Please consider citing this work if you use Triton!

The official documentation contains installation instructions and tutorials. See also these third-party Triton puzzles, which can all be run using the Triton interpreter -- no GPU required.

Quick Installation

You can install the latest stable release of Triton from pip:

pip install triton

Binary wheels are available for CPython 3.7-3.11 and PyPy 3.8-3.9.

And the latest nightly release:

pip install -U --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/Triton-Nightly/pypi/simple/ triton-nightly
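
To sanity-check either install, importing the package and printing its version from a Python interpreter is usually enough (a minimal check; the exact output depends on the release you installed):

import triton
print(triton.__version__)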

Install from source

git clone https://github.com/openai/triton.git;
cd triton;

pip install ninja cmake wheel; # build-time dependencies
pip install -e python

Or with a virtualenv:

git clone https://github.com/openai/triton.git;
cd triton;

python -m venv .venv --prompt triton;
source .venv/bin/activate;

pip install ninja cmake wheel; # build-time dependencies
pip install -e python

Building with a custom LLVM

Triton uses LLVM to generate code for GPUs and CPUs. Normally, the Triton build downloads a prebuilt LLVM, but you can also build LLVM from source and use that.

LLVM does not have a stable API, so the Triton build will not work at an arbitrary LLVM version.

  1. Find the version of LLVM that Triton builds against. Check cmake/llvm-hash.txt to see the current version. For example, if it says: 49af6502c6dcb4a7f7520178bd14df396f78240c

    This means that the version of Triton you have builds against LLVM 49af6502.

  2. git checkout LLVM at this revision. Optionally, make additional modifications to LLVM.

  3. Build LLVM. For example, you might run

    $ cd $HOME/llvm-project  # your clone of LLVM.
    $ mkdir build
    $ cd build
    $ cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON  ../llvm -DLLVM_ENABLE_PROJECTS="mlir;llvm"
    $ ninja
    
  4. Grab a snack, this will take a while.

  5. Build Triton as above, but set the following environment variables.

    # Modify as appropriate to point to your LLVM build.
    $ export LLVM_BUILD_DIR=$HOME/llvm-project/build
    
    $ cd <triton install>
    $ LLVM_INCLUDE_DIRS=$LLVM_BUILD_DIR/include \
      LLVM_LIBRARY_DIR=$LLVM_BUILD_DIR/lib \
      LLVM_SYSPATH=$LLVM_BUILD_DIR \
      pip install -e python
    

Tips for building

  • Set TRITON_BUILD_WITH_CLANG_LLD=true as an environment variable to use clang and lld. lld in particular results in faster builds.

  • Set TRITON_BUILD_WITH_CCACHE=true to build with ccache.

  • Pass --no-build-isolation to pip install to make nop builds faster. Without this, every invocation of pip install uses a different symlink to cmake, and this forces ninja to rebuild most of the .a files.

  • vscode intellisense has some difficulty figuring out how to build Triton's C++ (probably because, in our build, users don't invoke cmake directly, but instead use setup.py). Teach vscode how to compile Triton as follows.

    • Do a local build.
    • Get the full path to the compile_commands.json file produced by the build: find python/build -name 'compile_commands.json' | xargs readlink -f
    • In vscode, install the C/C++ extension, then open the command palette (Shift + Command + P on Mac, or Shift + Ctrl + P on Windows/Linux) and open C/C++: Edit Configurations (UI).
    • Open "Advanced Settings" and paste the full path to compile_commands.json into the "Compile Commands" textbox.

Running tests

There currently isn't a turnkey way to run all the Triton tests, but you can use the following recipe.

# One-time setup.  Note we have to reinstall local Triton because torch
# overwrites it with the public version.
$ pip install scipy numpy torch pytest lit && pip install -e python

# Run Python tests using your local GPU.
$ python3 -m pytest python/test/unit

# Move to builddir.  Fill in <...> with the full path, e.g.
# `cmake.linux-x86_64-cpython-3.11`.
$ cd python/build/cmake<...>

# Run C++ unit tests.
$ ninja test

# Run lit tests.
$ lit test

You may find it helpful to make a symlink to the builddir and tell your local git to ignore it.

$ ln -s python/build/cmake<...> build
$ echo build >> .git/info/exclude

Then you can e.g. rebuild and run lit with the following command.

$ ninja -C build && ( cd build ; lit test )

Tips for hacking

Helpful environment variables

  • MLIR_ENABLE_DUMP=1 dumps the IR before every MLIR pass Triton runs.
  • LLVM_IR_ENABLE_DUMP=1 dumps the IR before every pass run over the LLVM IR.
  • TRITON_INTERPRET=1 uses the Triton interpreter instead of running on the GPU. You can insert Python breakpoints in your kernel code! (See the sketch after this list.)
  • TRITON_ENABLE_LLVM_DEBUG=1 passes -debug to LLVM, printing a lot of debugging information to stdout. If this is too noisy, run with just LLVM_IR_ENABLE_DUMP=1, extract the IR before the LLVM pass of interest, and then run LLVM's opt standalone, perhaps passing -debug-only=foo on the command line.
  • USE_TTGIR_LOC=1 reparses the ttgir so that location information refers to line numbers in the ttgir rather than in the Python source. This gives a direct mapping from ttgir to llir/ptx and, when used with performance tools, a breakdown at the ttgir-instruction level.
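
As a sketch of the interpreter workflow mentioned above (the environment variable must be set before triton is imported; the kernel and tensor names here are only illustrative):

import os
os.environ["TRITON_INTERPRET"] = "1"  # must be set before importing triton

import torch
import triton
import triton.language as tl

@triton.jit
def copy_kernel(src_ptr, dst_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    # Under the interpreter this executes as ordinary Python, so you can
    # drop in breakpoint() or print intermediate values right here.
    tl.store(dst_ptr + offs, tl.load(src_ptr + offs, mask=mask), mask=mask)

src = torch.arange(16, dtype=torch.float32)
dst = torch.empty_like(src)
copy_kernel[(1,)](src, dst, 16, BLOCK=16)
print(dst)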

Changelog

Version 2.0 is out! New features include:

  • Many, many bug fixes
  • Performance improvements
  • Backend rewritten to use MLIR
  • Support for kernels that contain back-to-back matmuls (e.g., flash attention)

Contributing

Community contributions are more than welcome, whether it be to fix bugs or to add new features on GitHub. For more detailed instructions, please visit our contributor's guide.

Compatibility

Supported Platforms:

  • Linux

Supported Hardware:

  • NVIDIA GPUs (Compute Capability 7.0+)
  • Under development: AMD GPUs, CPUs
