
Initial ParetoQ commit #1876


Merged
merged 1 commit into main from paretoq on Apr 9, 2025
Conversation

andrewor14 (Contributor)

This PR contains the training code for ParetoQ, introduced in "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" (https://arxiv.org/abs/2502.02631). All code was written by @liuzechun and @zxdmike and migrated from
https://github.com/facebookresearch/ParetoQ.

ParetoQ is the first unified framework that enables rigorous comparisons across 1-bit, 1.58-bit, 2-bit, 3-bit, and 4-bit quantization settings. By optimizing training schemes and refining quantization functions, ParetoQ surpasses all previous methods tailored to specific bit widths. In particular, the 1.58-bit ParetoQ LLaMA-3 8B model reduces the performance gap to full precision by a relative 37.8% compared to the 1-bit Era's 1.58-bit LLaMA-3 8B model, while using only 30% of the training tokens.
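To make the sub-2-bit settings above concrete, here is a minimal, hypothetical sketch of a per-row ternary ("1.58-bit") fake-quantizer with a straight-through estimator in PyTorch. The function name and the mean-absolute-value scaling rule are illustrative assumptions only; this is not ParetoQ's actual quantization function, which is defined in the migrated training code and described in the paper.

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Hypothetical per-row ternary (1.58-bit) fake-quantizer with a
    straight-through estimator. Not ParetoQ's actual quantization function;
    shown only to illustrate what an extremely low-bit setting looks like."""
    # Per-row scale from the mean absolute weight (an illustrative choice).
    scale = w.abs().mean(dim=-1, keepdim=True).clamp(min=eps)
    # Quantize each row to the three values {-scale, 0, +scale}.
    w_q = torch.clamp(torch.round(w / scale), -1, 1) * scale
    # Straight-through estimator: forward pass uses w_q, gradients flow to w.
    return w + (w_q - w).detach()

# Example: quantization-aware training runs the forward pass on the
# fake-quantized weights while updating the full-precision weights underneath.
weight = torch.randn(4, 8, requires_grad=True)
q_weight = ternary_quantize(weight)
q_weight.sum().backward()                            # gradients reach `weight` via the STE
print(q_weight[0].detach().unique().numel() <= 3)    # each row holds at most 3 distinct values
```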


pytorch-bot bot commented Mar 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1876

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 24191f4 with merge base 6726b0b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the 'CLA Signed' label on Mar 12, 2025
@andrewor14 marked this pull request as draft on March 12, 2025 20:14
@andrewor14 added the 'topic: new feature' label on Mar 12, 2025
@andrewor14 marked this pull request as ready for review on March 13, 2025 20:31
@andrewor14 force-pushed the paretoq branch 2 times, most recently from 29400c6 to 77b1bcc on March 14, 2025 16:16
vkuzo (Contributor) commented Mar 28, 2025

should there be a test of some sort? Otherwise it's likely this will break soon without anyone knowing.

@andrewor14 force-pushed the paretoq branch 2 times, most recently from ca0fdaa to 87638de on April 9, 2025 16:05
andrewor14 (Contributor, Author)

> should there be a test of some sort? Otherwise it's likely this will break soon without anyone knowing.

Added.

@andrewor14 merged commit 31f119e into main on Apr 9, 2025
18 checks passed
Labels
CLA Signed, topic: new feature
4 participants