Support for offline models like Ollama #188
With the release of new LLMs dedicated to coding (e.g. codegemma), it would be great to be able to connect to a choice of LLMs running locally.
Depending on your organization's preferences, sending your code to OpenAI may not be necessary, as CodiumAI offers on-premises solutions. However, local models, such as those running on an edge machine, have not yet achieved the desired quality. Nonetheless, we are closely monitoring advancements in this area.
Do you plan to support local models powered by a GPU, instead of having to send our code and pay for ChatGPT 3.5/4?
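For context, a locally running model served by Ollama is reachable over its HTTP API. The sketch below only builds a request for Ollama's documented `/api/generate` endpoint at the default address `http://localhost:11434`, so no server needs to be running; the model name and prompt are illustrative, not part of any existing integration.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for a local Ollama /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local address
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Illustrative model name and prompt (assumptions, not from this issue thread)
req = build_generate_request("codegemma", "Write a function that reverses a string.")

# With an Ollama server running, the response could be read like this:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the request targets `localhost`, no code ever leaves the machine, which is the privacy property the comments above are asking for.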