FindHao edited this page Oct 5, 2025 · 11 revisions

Welcome to the TritonParse Wiki 🚀

TritonParse is a visualization and analysis tool for Triton IR files that helps developers analyze, debug, and understand the Triton kernel compilation process.


🎯 Quick Navigation

📚 Getting Started

📖 User Guide

🔧 Developer Guide

🎓 Advanced Topics

📝 Quick Reference

🌟 Key Features

🔍 Visualization & Analysis

  • Interactive Kernel Explorer - Browse kernel information and stack traces
  • Multi-format IR Support - View TTGIR, TTIR, LLIR, PTX, and AMDGCN
  • IR Code View - Side-by-side IR viewing with synchronized highlighting and line mapping
  • Interactive Code Views - Click-to-highlight corresponding lines across IR stages
  • Launch Diff Analysis - Compare kernel launch events
  • File Diff View - Compare kernels across different trace files side-by-side

📊 Structured Logging

  • Compilation Tracing - Capture detailed Triton compilation events
  • Launch Tracing - Capture detailed kernel launch events
  • Stack Trace Integration - Full Python stack traces for debugging
  • Metadata Extraction - Comprehensive kernel metadata and statistics
  • NDJSON Output - Structured logging format for easy processing
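
Because each line of an NDJSON trace is a standalone JSON object, traces can be post-processed with nothing but the Python standard library. A minimal sketch, assuming an `event_type` field on each event (the field name here is illustrative, not the confirmed tritonparse schema):

```python
import json

def count_events(ndjson_text):
    """Tally events in an NDJSON trace by their (assumed) event_type field."""
    counts = {}
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        event = json.loads(line)
        kind = event.get("event_type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

# Hypothetical sample trace: one compilation event, two launches.
sample = "\n".join([
    '{"event_type": "compilation", "kernel": "add_kernel"}',
    '{"event_type": "launch", "kernel": "add_kernel"}',
    '{"event_type": "launch", "kernel": "add_kernel"}',
])
print(count_events(sample))  # {'compilation': 1, 'launch': 2}
```

The same line-by-line pattern scales to filtering, joining, or exporting events to other tools.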

🔧 Reproducer Generation

  • Standalone Scripts - Generate self-contained Python scripts to reproduce kernels
  • Tensor Reconstruction - Rebuild tensors from statistical data or saved blobs
  • Template System - Customize reproducer output with flexible templates
  • Minimal Dependencies - Scripts run independently for debugging and testing
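
As a rough illustration of rebuilding tensors from statistical data, the sketch below synthesizes a stand-in array from a recorded shape, dtype, mean, and standard deviation. The metadata field names and the `rebuild_tensor` helper are hypothetical, not the actual reproducer format:

```python
import numpy as np

def rebuild_tensor(meta, seed=0):
    """Synthesize a stand-in tensor matching recorded statistics.

    `meta` is a hypothetical record with "shape", "dtype", "mean", and
    "std" keys; a fixed seed keeps the reconstruction deterministic.
    """
    rng = np.random.default_rng(seed)
    data = rng.normal(meta["mean"], meta["std"], size=meta["shape"])
    return data.astype(meta["dtype"])

meta = {"shape": [2, 3], "dtype": "float32", "mean": 0.0, "std": 1.0}
t = rebuild_tensor(meta)
print(t.shape, t.dtype)  # (2, 3) float32
```

When exact values matter (e.g. data-dependent control flow), saved blobs rather than statistics are the safer input.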

🌐 Deployment Options

  • GitHub Pages - Ready-to-use online interface
  • Local Development - Full development environment
  • Standalone HTML - Self-contained deployments

⚑ Quick Start

1. Installation

# Clone the repository
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse

# Install dependencies
pip install -e .

2. Generate Traces

import tritonparse.structured_logging

# Initialize logging
tritonparse.structured_logging.init("./logs/", enable_trace_launch=True)

# Your Triton/PyTorch code here
...

# Parse logs
import tritonparse.utils
tritonparse.utils.unified_parse(source="./logs/", out="./parsed_output")
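
For repeated use, the two calls above can be wrapped in a single helper. The `trace_and_parse` wrapper below is a hypothetical convenience sketch, not part of the tritonparse API; it only composes the documented `init` and `unified_parse` calls:

```python
def trace_and_parse(workload, log_dir="./logs/", out_dir="./parsed_output"):
    """Run `workload` under tritonparse tracing, then parse the logs.

    Hypothetical helper: imports are deferred so the function can be
    defined even before tritonparse is installed.
    """
    import tritonparse.structured_logging
    import tritonparse.utils

    tritonparse.structured_logging.init(log_dir, enable_trace_launch=True)
    workload()  # your Triton/PyTorch code, traced
    tritonparse.utils.unified_parse(source=log_dir, out=out_dir)
```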

3. Analyze Results

Visit https://meta-pytorch.org/tritonparse/ and load your trace files!

4. (Optional) Generate Reproducer

# Generate standalone reproducer script
tritonparse reproduce ./parsed_output/trace.ndjson --line 1 --out-dir repro_output

Important Links

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details on:

  • Development setup and prerequisites
  • Code formatting standards (Code Formatting Guide)
  • Pull request and code review process
  • Issue reporting guidelines

📄 License

This project is licensed under the BSD 3-Clause License. See the LICENSE file for details.


Note: This tool is designed for developers working with Triton kernels and GPU computing. Basic familiarity with GPU programming concepts (CUDA for NVIDIA or ROCm/HIP for AMD) and with the Triton language is recommended for effective use.