
Support for offline models like Ollama #188

Open
anodynos opened this issue Feb 14, 2024 · 2 comments
Comments

@anodynos

Do you plan to support local models powered by a GPU, instead of having to send our code out and pay for ChatGPT 3.5/4?

@qhreul commented Apr 18, 2024

With the release of new LLMs dedicated to coding (e.g., codegemma), it would be great to be able to connect to a choice of LLMs running locally.

@GadiZimerman (Collaborator)

Depending on your organization's preferences, sending your code to OpenAI may not be required, as CodiumAI offers on-premises solutions. However, local models, such as those running on your edge machine, have not yet achieved the desired quality. Nonetheless, we are closely monitoring advancements in this area.
