Failed to save the static quantized model #1950
@jerryzh168 can you help with this?
Yeah, I'll take a look. Sorry I didn't see this one.
Hi @yiliu30, sorry for the late reply. I took a look, and I think the reason is that a few local functions are defined in ao/torchao/quantization/quant_api.py Line 1612 in 34421b1, specifically:
ao/tutorials/calibration_flow/static_quant.py Line 108 in 34421b1
ao/tutorials/calibration_flow/static_quant.py Line 143 in 34421b1
ao/tutorials/calibration_flow/static_quant.py Line 147 in 34421b1
The last two might be a bit harder to fix; you can follow the float8 example to do it.
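For context on why local functions cause the save to fail: `torch.save` uses pickle by default, and pickle serializes functions by their qualified name, so a function defined inside another function cannot be looked up at load time. A minimal sketch of the failure mode and the usual fix (the names `make_handler` and `module_level_handler` are hypothetical, not the actual torchao code):

```python
import pickle

def make_handler():
    # A function defined inside another function: pickle stores functions
    # by qualified name, and "make_handler.<locals>.handler" cannot be
    # resolved at load time, so serialization fails.
    def handler(x):
        return x * 2
    return handler

def module_level_handler(x):
    # Defined at module scope, so pickle can reference it by name.
    return x * 2

local_fn = make_handler()
try:
    pickle.dumps(local_fn)
except (pickle.PicklingError, AttributeError) as e:
    print("local function fails to pickle:", e)

# The usual fix: move the callable to module scope (or make it a class
# with __call__) so it is importable by qualified name.
restored = pickle.loads(pickle.dumps(module_level_handler))
print(restored(21))
```

This is presumably why the suggestion above is to lift those local functions out, following the float8 example.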
Hi, I am following this example and want to save the INT8 static quantization result, but saving fails.
Could you take a look? Thanks!
cc @jerryzh168