
Will this work with quantized versions of llama models? #9

Open
thistleknot opened this issue Apr 28, 2023 · 0 comments
@thistleknot
What about Alpaca?

For example, I have a smaller version of LLaMA that runs locally:

python server.py --model ggml-alpaca-7b-q4 --listen

I'm going to give it a shot with this reduced model and get back to you over the weekend.
