Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 45.5k stars · 7k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.2k stars · 119 forks
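
A minimal offline-inference sketch using the pinned vllm engine's Python API. The model name and prompt are illustrative, not the project's official quickstart:

    # Offline batch inference with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # any Hugging Face model vLLM supports
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # generate() returns one RequestOutput per prompt.
    for out in llm.generate(["The capital of France is"], params):
        print(out.outputs[0].text)

The same engine also exposes an OpenAI-compatible HTTP server for online serving; see the vllm repository's documentation for details.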

Repositories

Showing 10 of 16 repositories
  • aibrix Public

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Jupyter Notebook · 3,473 stars · Apache-2.0 · 331 forks · 146 open issues (11 need help) · 8 open PRs · Updated Apr 22, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a minimal quantization sketch follows this list)

    Python · 1,242 stars · Apache-2.0 · 119 forks · 41 open issues (9 need help) · 40 open PRs · Updated Apr 22, 2025
  • vllm-spyre Public

    Community maintained hardware plugin for vLLM on Spyre

    Python · 20 stars · Apache-2.0 · 11 forks · 24 open issues (3 need help) · 5 open PRs · Updated Apr 22, 2025
  • ci-infra Public
    HCL · 8 stars · 22 forks · 0 open issues · 7 open PRs · Updated Apr 22, 2025
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    Python · 508 stars · Apache-2.0 · 97 forks · 97 open issues · 34 open PRs · Updated Apr 22, 2025
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 45,539 stars · Apache-2.0 · 7,013 forks · 1,724 open issues (16 need help) · 593 open PRs · Updated Apr 22, 2025
  • production-stack Public

    vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization

    Python · 1,093 stars · Apache-2.0 · 156 forks · 43 open issues (2 need help) · 20 open PRs · Updated Apr 21, 2025
  • flash-attention Public Forked from Dao-AILab/flash-attention

    Fast and memory-efficient exact attention

    Python · 63 stars · BSD-3-Clause · 1,636 forks · 0 open issues · 10 open PRs · Updated Apr 22, 2025
  • vllm-project.github.io Public
    HTML · 7 stars · 16 forks · 0 open issues · 2 open PRs · Updated Apr 20, 2025
  • vllm-openvino Public
    2 stars · Apache-2.0 · 1 fork · 1 open issue · 0 open PRs · Updated Mar 20, 2025
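
A hedged sketch of llm-compressor's one-shot quantization flow, referenced in the llm-compressor entry above. Import paths and argument names follow the project's published README examples and may shift between releases; the model, dataset, and output directory values here are illustrative:

    # One-shot W4A16 (GPTQ) quantization with llm-compressor; the saved
    # checkpoint can then be served directly by vLLM.
    from llmcompressor.modifiers.quantization import GPTQModifier
    from llmcompressor.transformers import oneshot

    oneshot(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",   # illustrative model
        dataset="open_platypus",                      # calibration data
        recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
        output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",
        max_seq_length=2048,
        num_calibration_samples=512,
    )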

Sponsors

  • @imkero
  • @comet-ml
  • @HiddenPeak
  • @JenZhao
  • @terrytangyuan
  • @mhupfauer
  • @dvlpjrs
  • @vincentkoc
  • @robertgshaw2-redhat
  • Private Sponsor
