Using local Ollama models #201
Comments
Thanks for this, Glen! This is impressive and great to see. I'd been meaning to create a higher-level abstraction that reuses more chatgpt-shell things than shell-maker https://xenodium.com/a-shell-maker. I've not had a chance to play with these models. I'm guessing they're also implementing OpenAI's API/schema, which would make reusing more things easier for chatgpt-shell.
This is neither a feature request nor a bug, but hopefully others may find it useful.
I wanted to experiment with code refactoring using local models but still using the awesome chatgpt-shell. Here is how I got it to work:
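The original configuration snippet did not survive in this copy of the thread, but a minimal sketch of this kind of setup might look like the following. It assumes Ollama is serving its OpenAI-compatible API locally on the default port (11434), and it uses chatgpt-shell's endpoint and model customization variables; the specific model names listed are illustrative, not a recommendation.

```elisp
;; Sketch only: point chatgpt-shell at a local Ollama server instead of
;; api.openai.com. Assumes Ollama's OpenAI-compatible API is enabled and
;; listening on the default port.
(setq chatgpt-shell-api-url-base "http://localhost:11434")

;; Illustrative local model names; use whatever `ollama list` shows
;; on your machine.
(setq chatgpt-shell-model-versions '("gemma" "llama2" "codellama"))
```

With something like this in place, `M-x chatgpt-shell` talks to the local server, and model switching works through the usual chatgpt-shell mechanisms.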
I have found that the `gemma` models integrate best, with correct code formatting, etc., but your mileage may vary. The majority of chatgpt-shell features work, and you can even change models with `C-c C-v`.