Repositories
- llm-compressor (Public)
  Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a usage sketch follows this list)
- flash-attention (Public, forked from Dao-AILab/flash-attention)
  Fast and memory-efficient exact attention
- speculators (Public)
  A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM
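
For context, a minimal sketch of how llm-compressor is typically driven: a one-shot quantization pass that produces a compressed checkpoint which vLLM can then serve. The `oneshot` entry point, `GPTQModifier`, and the parameter names below are assumptions based on the project's published examples and may differ between releases; consult the repository README for the current API.

```python
# Assumed usage sketch: one-shot W4A16 GPTQ quantization with llm-compressor.
# Import paths, modifier names, and arguments follow typical published examples
# and are illustrative, not authoritative.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any Transformers-compatible causal LM
    dataset="open_platypus",                     # calibration data for GPTQ
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="TinyLlama-1.1B-Chat-W4A16",      # compressed checkpoint directory
)
```

The resulting directory can then be loaded by vLLM as an ordinary model path (for example, `vllm serve TinyLlama-1.1B-Chat-W4A16`).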