Update GPT4All/llama.cpp #25
Thanks for this! I looked it up and tried to build the new gpt4all-backend. From a quick look, it seems to support only dynamic linking, unlike this project. There is a good reason for dynamic linking: one does not need separate builds for AVX1 and AVX2, and it can support multiple llama.cpp versions at the same time. But it also means a binary compiled on one machine cannot simply be trusted to work on another. With this project one is now unfortunately stuck with the old format. It would be good to leave this issue open so that people know it does not work with the new ggml formats.
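To illustrate the trade-off described above: with separate AVX1/AVX2 builds, a launcher can detect CPU features at runtime and load the matching shared library. This is a minimal sketch, assuming Linux and hypothetical library names (`libllama-avx2.so` etc.) — not the actual gpt4all-backend mechanism.

```python
# Sketch: runtime CPU-feature detection to pick a prebuilt backend library.
# The library filenames below are hypothetical placeholders, not real artifacts.
import ctypes

def cpu_flags():
    """Return the set of CPU feature flags on Linux; empty set elsewhere."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def pick_backend(flags):
    """Choose the most capable build the current CPU supports."""
    if "avx2" in flags:
        return "libllama-avx2.so"
    if "avx" in flags:
        return "libllama-avx.so"
    return "libllama-generic.so"

# Usage (commented out: requires the shared libraries to actually exist):
# lib = ctypes.CDLL(pick_backend(cpu_flags()))
```

The statically linked alternative avoids shipping several `.so` files, but then a binary built with `-mavx2` will crash with an illegal-instruction error on older CPUs, which is the portability concern raised above.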
GPT4All uses a newer version of llama.cpp which can handle the new ggml formats. Currently, this project throws an error similar to the following if you attempt to load a model in one of the newer formats:
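A loader can detect the format mismatch up front by sniffing the file's magic number instead of failing deep inside the backend. The sketch below uses the magic values llama.cpp historically used for its `ggml`/`ggmf`/`ggjt` containers; treat the exact constants as assumptions, and `sniff_ggml` as a hypothetical helper rather than part of either project.

```python
# Sketch: identify the on-disk ggml container variant before loading,
# so a clear "unsupported format" error can be raised early.
import struct

# Magic numbers as historically defined in llama.cpp (assumption).
MAGICS = {
    0x67676D6C: ("ggml", False),  # original unversioned format
    0x67676D66: ("ggmf", True),   # versioned format
    0x67676A74: ("ggjt", True),   # newer mmap-friendly format
}

def sniff_ggml(path):
    """Return (variant, version); version is None for unversioned files."""
    with open(path, "rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        name, versioned = MAGICS.get(magic, ("unknown", False))
        version = struct.unpack("<I", f.read(4))[0] if versioned else None
        return name, version
```

A project stuck on the old format could call this at load time and report, e.g., "model is ggjt v1; only unversioned ggml is supported", which is exactly the information a user hitting this issue needs.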