01. Installation

FindHao edited this page Oct 5, 2025 · 11 revisions

This guide covers all installation scenarios for TritonParse, from basic usage to full development setup.

📋 Prerequisites

System Requirements

  • Python >= 3.10
  • Operating System: Linux, macOS, or Windows (WSL recommended)
  • GPU Required (Triton depends on GPU):
    • NVIDIA GPUs: CUDA 11.8+ or 12.x
    • AMD GPUs: ROCm 5.0+ (supports MI100, MI200, MI300 series)
  • Node.js >= 22.0.0 (for website development only)

⚠️ Important: A GPU is required to generate traces because Triton kernels can only run on GPU hardware. The web interface can view existing traces without a GPU.


🔧 Install Required Dependencies

All installation options require PyTorch and Triton. Complete these steps first before choosing your installation option below.

Step 1: Install PyTorch with GPU Support

For NVIDIA GPUs (CUDA)

# Install PyTorch nightly with CUDA 12.8 support (recommended)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

# Alternative: Install stable PyTorch with CUDA support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

For AMD GPUs (ROCm)

# Install PyTorch nightly with ROCm support (recommended)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2

# Alternative: Install stable PyTorch with ROCm support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1

Step 2: Install Triton

# First, uninstall any existing PyTorch-bundled Triton to avoid conflicts
pip uninstall -y pytorch-triton triton || true

# Install the latest version of Triton (>= 3.4.0)
pip install triton

💡 Note: PyPI installation of Triton is now recommended. Building from source is only needed for development or unreleased features.

Step 3: Verify GPU Setup

# Verify PyTorch and GPU
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
python -c "import torch; print(f'GPU device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"No GPU\"}')"

# Verify Triton
python -c "import triton; print(f'Triton: {triton.__version__}')"
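The one-liners above can also be bundled into a single script; this sketch uses `importlib.util.find_spec` so a missing package is reported instead of raising an `ImportError`:

```python
# Report which required packages are importable (sketch).
from importlib.util import find_spec

def report(packages):
    """Return {package: installed?} without importing the heavy modules."""
    return {name: find_spec(name) is not None for name in packages}

status = report(["torch", "triton", "tritonparse"])
for name, ok in status.items():
    print(f"{name:12s} {'found' if ok else 'MISSING'}")
```

Because `find_spec` only locates the module without importing it, this check runs in milliseconds even for large packages like PyTorch.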

🎯 Installation Options

Now choose your installation method based on your needs:

Option 1: PyPI Installation (Recommended for Most Users)

Quick installation from Python Package Index

# Install nightly version (recommended, latest features)
pip install -U --pre tritonparse

# OR install stable version
pip install tritonparse

Option 2: GitHub Installation (For Development/Latest Features)

Install from source for development or to get unreleased features

# Clone repository
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse

# Install in development mode (editable)
pip install -e .

# OR install directly from GitHub without cloning
pip install git+https://github.com/meta-pytorch/tritonparse.git

Option 3: Full Development Setup (For Contributors)

Complete setup for Python development with formatting tools

# First, follow Option 2 to clone and install
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .

# Then install development dependencies
make install-dev

This installs the formatting and linting tools: black, usort, and ruff.

Option 4: Website Development Setup (For Web UI Contributors)

Setup for working on the React-based web interface

Prerequisites: Node.js >= 22.0.0

# First, follow Option 2 or 3
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .

# Install website dependencies
cd website
npm install

# Start development server
npm run dev  # Access at http://localhost:5173

✅ Verify Installation

Test that TritonParse is working correctly:

# Navigate to tests directory
cd tests  # from the repository root; if you installed from PyPI, clone the repo first to get the tests

# Run example test
TORCHINDUCTOR_FX_GRAPH_CACHE=0 python test_add.py
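For scripted or CI setups, the same invocation can be driven from Python. This is a minimal sketch, assuming it is run from the repository root; it only adds `TORCHINDUCTOR_FX_GRAPH_CACHE=0` on top of the inherited environment:

```python
# Run the example test with the FX graph cache disabled (sketch).
import os
import subprocess

# Copy the current environment and override the one variable we need.
env = dict(os.environ, TORCHINDUCTOR_FX_GRAPH_CACHE="0")

if os.path.isdir("tests"):
    result = subprocess.run(["python", "test_add.py"], cwd="tests", env=env)
    print("exit code:", result.returncode)
else:
    print("run this from the tritonparse repository root")
```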

Expected output:

Triton kernel executed successfully
Torch compiled function executed successfully
================================================================================
📁 TRITONPARSE PARSING RESULTS
================================================================================
📂 Parsed files directory: /scratch/findhao/tritonparse/tests/parsed_output
📊 Total files generated: 2
...
✅ Parsing completed successfully!
================================================================================

Using the Web Interface

  1. Generate trace files using the Python API (see Usage Guide)
  2. Visit https://meta-pytorch.org/tritonparse/
  3. Load your trace files (.ndjson or .gz format)
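Trace files are newline-delimited JSON (optionally gzip-compressed), so you can also sanity-check one locally before uploading it. A minimal sketch, where the file name is hypothetical and `iter_trace` is our helper, not a TritonParse API:

```python
# Peek at a TritonParse trace: one JSON object per line (sketch).
import gzip
import json
from pathlib import Path

def iter_trace(path):
    """Yield parsed records from an .ndjson or .ndjson.gz trace file."""
    opener = gzip.open if str(path).endswith(".gz") else open
    with opener(path, "rt") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

path = Path("parsed_output/example_trace.ndjson")  # hypothetical file name
if path.exists():
    records = list(iter_trace(path))
    print(f"{len(records)} records; first record keys: {sorted(records[0])[:5]}")
```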

Additional Commands for Development

For Python development (Option 3):

# Check code formatting
make format-check

# Run linting
make lint-check

# Run tests
python -m unittest tests.test_tritonparse -v

For website development (Option 4):

npm run dev          # Development server
npm run build        # Production build
npm run build:single # Standalone HTML build
npm run lint         # Linting
npm run preview      # Preview production build

🐛 Troubleshooting

Common Issues

1. GPU Not Available

Error: "CUDA not available" or "ROCm not available"

Diagnosis:

python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
python -c "import torch; print(f'Device count: {torch.cuda.device_count()}')"

Solution: Reinstall PyTorch with GPU support following Step 1 above.

2. Triton Installation Issues

Error: "No module named 'triton'" or "Triton version mismatch"

Solution:

pip uninstall -y pytorch-triton triton || true
pip install --upgrade triton

3. Permission Issues

Error: Permission denied during installation

Solution: Use a virtual environment

python -m venv tritonparse-env
source tritonparse-env/bin/activate  # Linux/Mac
# OR
tritonparse-env\Scripts\activate  # Windows

4. Development Tools Not Found

Error: "black not found" or similar

Solution:

make install-dev
# OR manually: pip install black usort ruff

5. Website Build Issues

Error: Node.js version too old

Solution:

# Update Node.js to >= 22.0.0
conda install 'nodejs>=22.0.0' -c conda-forge

# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install

Useful Environment Variables

# TritonParse
export TRITONPARSE_DEBUG=1                   # Enable debug logging
export TRITON_TRACE_GZIP=1                   # Enable gzip compression
export TRITON_TRACE=/path/to/traces          # Custom trace directory

# PyTorch/TorchInductor
export TORCHINDUCTOR_FX_GRAPH_CACHE=0        # Disable FX graph cache (for testing)
export TORCH_LOGS="+dynamo,+inductor"        # Enable PyTorch debug logs

# GPU control
export CUDA_VISIBLE_DEVICES=0                # Limit to specific GPU (NVIDIA)
export ROCR_VISIBLE_DEVICES=0                # Limit to specific GPU (AMD)
export CUDA_LAUNCH_BLOCKING=1                # Synchronous CUDA execution (for debugging)
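As a sketch of how boolean flags like `TRITONPARSE_DEBUG` are typically consumed on the Python side (this mirrors common practice, not TritonParse's exact internals; `env_flag` is our helper):

```python
# Read a debug-style environment flag (sketch; not TritonParse's actual code).
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Treat '1', 'true', 'yes' (any case) as enabled; anything else as disabled."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes"}

os.environ["TRITONPARSE_DEBUG"] = "1"   # normally set in the shell, as above
print("debug logging:", env_flag("TRITONPARSE_DEBUG"))
```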

Getting Help

If you encounter issues:

  1. Check the Troubleshooting section above
  2. Review the FAQ for frequently asked questions
  3. Search GitHub Issues
  4. Open a new issue with system info (python --version, pip list) and error messages

🚀 Next Steps

After successful installation:

  1. Read the Usage Guide to learn how to generate traces
  2. Explore the Web Interface Guide to master the visualization
  3. Check out Basic Examples for practical usage scenarios
  4. Join the GitHub Discussions for community support