Error loading mixtral-8x7b-v0.1.Q6_K.gguf #357
Comments
Same error with mixtral-8x7b-v0.1.Q2_K.gguf (file size 15 GB): error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found
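Since the error says a specific tensor is missing, one way to sanity-check is to list the tensor names actually stored in the GGUF file before blaming the loader. Below is a minimal sketch of a GGUF tensor-name scanner, assuming the GGUF v2/v3 little-endian layout documented in the llama.cpp repository; it is an illustration, not part of LlamaSharp.

```python
import struct

# Byte sizes of the simple GGUF metadata value types
# (uint8, int8, uint16, int16, uint32, int32, float32, bool, uint64, int64, float64).
_SIMPLE_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}

def _read_string(f):
    # GGUF string: uint64 length followed by that many UTF-8 bytes.
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def _skip_value(f, vtype):
    # Skip one metadata value without materializing it.
    if vtype == 8:                          # string
        (n,) = struct.unpack("<Q", f.read(8))
        f.seek(n, 1)
    elif vtype == 9:                        # array: element type, count, payload
        etype, count = struct.unpack("<IQ", f.read(12))
        for _ in range(count):
            _skip_value(f, etype)
    else:                                   # fixed-size scalar
        f.seek(_SIMPLE_SIZES[vtype], 1)

def gguf_tensor_names(path):
    """Return the list of tensor names stored in a GGUF file."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))
        if version < 2:
            raise ValueError(f"unsupported GGUF version {version}")
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
        for _ in range(n_kv):               # skip the metadata key/value section
            _read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            _skip_value(f, vtype)
        names = []
        for _ in range(n_tensors):          # tensor-info records follow the metadata
            names.append(_read_string(f))
            (n_dims,) = struct.unpack("<I", f.read(4))
            f.seek(8 * n_dims + 4 + 8, 1)   # skip dims, ggml type, data offset
        return names
```

Usage would be `"blk.0.ffn_gate.weight" in gguf_tensor_names("mixtral-8x7b-v0.1.Q2_K.gguf")`: if the tensor is present in the file but the loader still fails, the problem is the llama.cpp binaries being too old for Mixtral, not a corrupt download.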
I see, thanks for the info! Given the PR mentioned, it might just take a few weeks for the update to trickle down to LlamaSharp :-) (feel free to close this issue)
Expectations for this model are very high. It will certainly be a good idea to update the llama.cpp binaries once the model is supported on llama.cpp master.
I was already planning to start a binary update later this week anyway, since it's been about a month since the last set. So that should pick up support for Mixtral MoE :)
Code:
Model:
System:
Perhaps not enough memory?
Error: