Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"
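As a rough illustration of the AdderNet entry above: the layer's output is the negative L1 distance between each input patch and the filter, so the multiply-accumulate of a convolution becomes an additions-only accumulate. The 1-D shape, the `adder_conv1d` helper, and the toy inputs below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def adder_conv1d(x, w):
    """Valid 1-D 'adder' filter: out[i] = -sum(|x[i:i+k] - w|)."""
    k = len(w)
    out = np.empty(len(x) - k + 1)
    for i in range(len(out)):
        # Only additions/subtractions here -- no multiplications.
        out[i] = -np.abs(x[i:i + k] - w).sum()
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 0.0])
print(adder_conv1d(x, w))  # -> [-2. -4. -6.]
```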
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
[NeurIPS 2024 Spotlight]"LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS", Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zhangyang Wang
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
List of papers related to neural network quantization in recent AI conferences and journals.
[ICCV 2017] Learning Efficient Convolutional Networks through Network Slimming
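The network-slimming entry above prunes channels whose BatchNorm scale factor gamma has been pushed toward zero by an L1 penalty during training. Below is a minimal sketch of the channel-selection step only; the layer size, gamma values, and `keep_ratio` are made up for illustration.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(8)
with torch.no_grad():
    # Pretend training with an L1 penalty left these scale factors.
    bn.weight.copy_(torch.tensor([0.9, 0.01, 0.5, 0.02, 0.7, 0.03, 0.8, 0.04]))

keep_ratio = 0.5  # fraction of channels to keep (illustrative)
n_keep = int(bn.num_features * keep_ratio)

# Rank channels by |gamma| and keep the largest ones.
keep_idx = bn.weight.abs().argsort(descending=True)[:n_keep].sort().values
print(keep_idx.tolist())  # -> [0, 2, 4, 6]: high-|gamma| channels survive
```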
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
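For context on the entry above: KV cache quantization stores attention keys/values at low bit-width so context length is limited by memory rather than precision. The sketch below shows only generic per-channel symmetric quantization; KVQuant's actual scheme adds refinements (e.g., outlier handling and pre-RoPE key quantization) not modeled here, and the function names are hypothetical.

```python
import numpy as np

def quantize_per_channel(kv, bits=4):
    """Symmetric per-channel quantization of a (tokens, head_dim) tensor."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = np.abs(kv).max(axis=0, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)        # avoid divide-by-zero
    q = np.clip(np.round(kv / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

keys = np.random.randn(16, 8).astype(np.float32)    # toy (tokens, head_dim)
q, s = quantize_per_channel(keys)
print(np.abs(dequantize(q, s) - keys).max())        # small reconstruction error
```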
Explorations into some recent techniques surrounding speculative decoding
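To make the entry above concrete, here is a minimal sketch of the standard accept/reject loop used in speculative sampling, with fixed toy distributions standing in for the draft and target models. Real decoders condition both models on the prefix, verify all drafted tokens in one target forward pass, and sample a bonus token when every draft is accepted; those details are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 4
p_draft = np.array([0.4, 0.3, 0.2, 0.1])        # cheap draft model (toy)
p_target = np.array([0.25, 0.25, 0.25, 0.25])   # expensive target model (toy)

def speculative_step(k=4):
    """Draft up to k tokens; accept each with prob min(1, p_target/p_draft)."""
    out = []
    for _ in range(k):
        t = rng.choice(V, p=p_draft)
        if rng.random() < min(1.0, p_target[t] / p_draft[t]):
            out.append(t)                        # accepted: matches target dist
        else:
            # On rejection, resample from the normalized residual distribution.
            resid = np.maximum(p_target - p_draft, 0)
            resid /= resid.sum()
            out.append(int(rng.choice(V, p=resid)))
            break                                # stop at first rejection
    return out

print(speculative_step())
```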
[CVPR 2021] Exploring Sparsity in Image Super-Resolution for Efficient Inference
On-device LLM Inference Powered by X-Bit Quantization
(CVPR 2021, Oral) Dynamic Slimmable Network
[ECCV 2022] Efficient Long-Range Attention Network for Image Super-resolution
📚 Collection of awesome generation acceleration resources.
[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Deep Face Model Compression
[ECCV 2022] Official implementation of the paper "DeciWatch: A Simple Baseline for 10x Efficient 2D and 3D Pose Estimation"
Official code repository for Sketch-of-Thought (SoT)