# 01. Installation

This guide covers all installation scenarios for TritonParse, from basic usage to full development setup.
## Prerequisites

- Python >= 3.10
- Operating system: Linux, macOS, or Windows (WSL recommended)
- GPU (required, since Triton depends on GPU hardware):
  - NVIDIA GPUs: CUDA 11.8+ or 12.x
  - AMD GPUs: ROCm 5.0+ (supports MI100, MI200, and MI300 series)
- Node.js >= 22.0.0 (for website development only)

⚠️ Important: A GPU is required to generate traces because Triton kernels can only run on GPU hardware. The web interface can view existing traces without a GPU.
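The prerequisites above can also be checked programmatically. Below is a minimal sketch using only the standard library; `check_prerequisites` is an illustrative helper (not part of TritonParse), and it imports `torch` only if available, so it degrades gracefully before Step 1 is complete:

```python
import sys


def check_prerequisites(min_python=(3, 10)):
    """Return a dict reporting the Python version check and GPU availability."""
    results = {"python_ok": sys.version_info >= min_python}
    try:
        import torch  # only available after Step 1 below
        results["gpu_available"] = torch.cuda.is_available()
    except ImportError:
        results["gpu_available"] = None  # PyTorch not installed yet
    return results


print(check_prerequisites())
```

A `gpu_available` value of `None` means PyTorch is not installed yet, while `False` means PyTorch is present but sees no usable GPU.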
## Step 1: Install PyTorch and Triton

All installation options require PyTorch and Triton. Complete these steps before choosing an installation option below.
For NVIDIA GPUs (CUDA):

```bash
# Install PyTorch nightly with CUDA 12.8 support (recommended)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

# Alternative: install stable PyTorch with CUDA support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
For AMD GPUs (ROCm):

```bash
# Install PyTorch nightly with ROCm support (recommended)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2

# Alternative: install stable PyTorch with ROCm support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
```
Then install Triton:

```bash
# First, uninstall any existing PyTorch-bundled Triton to avoid conflicts
pip uninstall -y pytorch-triton triton || true

# Install the latest release of Triton (>= 3.4.0)
pip install triton
```

💡 Note: Installing Triton from PyPI is now the recommended approach. Building from source is only needed for development or unreleased features.
```bash
# Verify PyTorch and GPU
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
python -c "import torch; print(f'GPU device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"No GPU\"}')"

# Verify Triton
python -c "import triton; print(f'Triton: {triton.__version__}')"
```
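The one-liners above can be wrapped in a small helper that reports each package's version, or `None` if it is missing. This is an illustrative sketch (the `package_version` helper is not part of TritonParse), useful when verifying several packages at once:

```python
import importlib


def package_version(name):
    """Import a package by name and return its __version__, or None if absent."""
    try:
        module = importlib.import_module(name)
    except ImportError:
        return None
    return getattr(module, "__version__", "unknown")


for pkg in ("torch", "triton"):
    version = package_version(pkg)
    print(f"{pkg}: {version if version else 'NOT INSTALLED'}")
```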
Now choose an installation method based on your needs.

### Option 1: Install from PyPI

Quick installation from the Python Package Index:

```bash
# Install the nightly version (recommended; latest features)
pip install -U --pre tritonparse

# OR install the stable version
pip install tritonparse
```
### Option 2: Install from Source

Install from source for development or to get unreleased features:

```bash
# Clone the repository
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse

# Install in development (editable) mode
pip install -e .

# OR install directly from GitHub without cloning
pip install git+https://github.com/meta-pytorch/tritonparse.git
```
### Option 3: Python Development Setup

Complete setup for Python development, including formatting tools:

```bash
# First, follow Option 2 to clone and install
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .

# Then install development dependencies
make install-dev
```

This installs the formatting and linting tools: black, usort, and ruff.
### Option 4: Website Development Setup

Setup for working on the React-based web interface.

Prerequisites: Node.js >= 22.0.0

```bash
# First, follow Option 2 or 3
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .

# Install website dependencies
cd website
npm install

# Start the development server
npm run dev  # Access at http://localhost:5173
```
## Verify Installation

Test that TritonParse is working correctly:

```bash
# Navigate to the tests directory
cd tests  # or cd tritonparse/tests if you didn't clone

# Run the example test
TORCHINDUCTOR_FX_GRAPH_CACHE=0 python test_add.py
```

Expected output:

```
Triton kernel executed successfully
Torch compiled function executed successfully
================================================================================
📁 TRITONPARSE PARSING RESULTS
================================================================================
📂 Parsed files directory: /scratch/findhao/tritonparse/tests/parsed_output
📊 Total files generated: 2
...
✅ Parsing completed successfully!
================================================================================
```
To view traces in the web interface:

- Generate trace files using the Python API (see the Usage Guide)
- Visit https://meta-pytorch.org/tritonparse/
- Load your trace files (.ndjson or .gz format)
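Trace files in `.ndjson` format are newline-delimited JSON: one event object per line, optionally gzip-compressed. As a sketch of how such a file can be consumed (the `read_ndjson` helper and the sample event fields are illustrative, not TritonParse's actual schema or API):

```python
import gzip
import json
from pathlib import Path


def read_ndjson(path):
    """Yield one parsed JSON object per non-empty line of an .ndjson file.

    Transparently handles gzip-compressed traces (.ndjson.gz).
    """
    path = Path(path)
    opener = gzip.open if path.suffix == ".gz" else open
    with opener(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


# Example with a hypothetical two-event trace file:
sample = Path("sample_trace.ndjson")
sample.write_text('{"event": "compilation"}\n{"event": "launch"}\n')
events = list(read_ndjson(sample))
print(len(events))  # 2
```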
## Development Workflow

For Python development (Option 3):

```bash
# Check code formatting
make format-check

# Run linting
make lint-check

# Run tests
python -m unittest tests.test_tritonparse -v
```

For website development (Option 4):

```bash
npm run dev           # Development server
npm run build         # Production build
npm run build:single  # Standalone HTML build
npm run lint          # Linting
npm run preview       # Preview production build
```
## Troubleshooting

Error: "CUDA not available" or "ROCm not available"

Diagnosis:

```bash
python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
python -c "import torch; print(f'Device count: {torch.cuda.device_count()}')"
```

Solution: Reinstall PyTorch with GPU support following Step 1 above.
Error: "No module named 'triton'" or "Triton version mismatch"

Solution:

```bash
pip uninstall -y pytorch-triton triton || true
pip install --upgrade triton
```
Error: Permission denied during installation

Solution: Use a virtual environment:

```bash
python -m venv tritonparse-env
source tritonparse-env/bin/activate  # Linux/macOS
# OR
tritonparse-env\Scripts\activate     # Windows
```
Error: "black not found" or similar

Solution:

```bash
make install-dev
# OR manually: pip install black usort ruff
```
Error: Node.js version too old

Solution:

```bash
# Update Node.js to >= 22.0.0
conda install 'nodejs>=22.0.0' -c conda-forge

# Remove installed modules and the lockfile, then reinstall
rm -rf node_modules package-lock.json
npm install
```
## Environment Variables

```bash
# TritonParse
export TRITONPARSE_DEBUG=1               # Enable debug logging
export TRITON_TRACE_GZIP=1               # Enable gzip compression
export TRITON_TRACE=/path/to/traces     # Custom trace directory

# PyTorch / TorchInductor
export TORCHINDUCTOR_FX_GRAPH_CACHE=0   # Disable FX graph cache (for testing)
export TORCH_LOGS="+dynamo,+inductor"   # Enable PyTorch debug logs

# GPU control
export CUDA_VISIBLE_DEVICES=0            # Limit to a specific GPU (NVIDIA)
export ROCR_VISIBLE_DEVICES=0            # Limit to a specific GPU (AMD)
export CUDA_LAUNCH_BLOCKING=1            # Synchronous CUDA execution (for debugging)
```
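Boolean flags like these are typically read once from the process environment. The sketch below shows a generic way such flags might be interpreted; the `env_flag` helper is illustrative and not TritonParse's actual parsing logic:

```python
import os


def env_flag(name, default=False):
    """Interpret an environment variable as a boolean flag.

    Unset -> default; "", "0", "false", "no", "off" -> False; anything else -> True.
    """
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() not in ("", "0", "false", "no", "off")


os.environ["TRITONPARSE_DEBUG"] = "1"
print(env_flag("TRITONPARSE_DEBUG"))                        # True
print(env_flag("SOME_UNSET_FLAG_XYZ", default=False))       # False
```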
## Getting Help

If you encounter issues:

- Check the Troubleshooting section above
- Review the FAQ for frequently asked questions
- Search GitHub Issues
- Open a new issue with your system info (`python --version`, `pip list`) and the full error message
## Next Steps

After successful installation:

- Read the Usage Guide to learn how to generate traces
- Explore the Web Interface Guide to master the visualization
- Check out Basic Examples for practical usage scenarios
- Join the GitHub Discussions for community support