feat: Custom OpenAI compatible server support with preserved inference parameters #2840
Comments
Hi @qnixsynapse,
The value from the settings will be visible in the UI and will be applied to the request.
@Van-QA Thank you! This helped. BTW, is it possible to provide a custom OpenAI-compatible endpoint for embeddings, which is needed in "Knowledge retrieval"?
Hi @qnixsynapse, this page will guide you through modifying the chat completion endpoint: https://jan.ai/docs/remote-models/openai#how-to-integrate-openai-api-with-jan
This is for the large language model only. I use a smaller sentence transformer for embeddings, which is significantly faster than using embeddings from the main large model. So if I want to add an endpoint to the Jan app on some other port, will that be possible? Alternatively, sentence-transformer support in the app via nitro would also be a viable option. A sketch of what such a call would look like is below.
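For anyone landing here, this is roughly what an embeddings request against a separate OpenAI-compatible server would look like. A minimal sketch only: the port, URL, and model name are placeholders for illustration, not something Jan currently exposes.

```python
import requests

# Hypothetical sentence-transformer server exposing the OpenAI-compatible
# embeddings route on a separate port; URL and model name are assumptions.
EMBEDDINGS_URL = "http://localhost:8001/v1/embeddings"

payload = {
    "model": "all-MiniLM-L6-v2",           # small sentence-transformer model
    "input": ["What is knowledge retrieval?"],
}

resp = requests.post(EMBEDDINGS_URL, json=payload, timeout=30)
resp.raise_for_status()

# OpenAI-style response shape: {"data": [{"embedding": [...], "index": 0}], ...}
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding), "dimensions")
```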
Problem
Sometimes, people like me run an OpenAI-compatible server instead of running the model through Jan's nitro, for better hardware compatibility and availability. OpenAI inference partially works when set up like this:
However, inference parameters such as top_k, top_p, etc. are not available, and some parameters such as temperature are not preserved. The model name is also incorrect, as shown in the screenshot:
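To make the parameter issue concrete, here is the kind of request body a local OpenAI-compatible server typically accepts and that Jan would need to forward intact. A minimal sketch: the URL and model name are placeholders, and `top_k` is a llama.cpp-style extension rather than part of the official OpenAI schema.

```python
import requests

# Placeholder endpoint for a locally hosted OpenAI-compatible server.
CHAT_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "my-local-model",             # should match the served model's name
    "messages": [{"role": "user", "content": "Hello!"}],
    # Parameters Jan should preserve and forward as-is:
    "temperature": 0.7,
    "top_p": 0.95,
    # top_k is an extension many local servers (e.g. llama.cpp) accept,
    # even though it is not part of the official OpenAI API schema:
    "top_k": 40,
}

resp = requests.post(CHAT_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```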
Success Criteria
It would be better to have full support for a custom OpenAI-compatible server. The list of available models can be queried through the /v1/models endpoint (if available).
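A sketch of how that model listing could be queried, using the standard OpenAI route; the base URL is a placeholder:

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder for the custom server

resp = requests.get(f"{BASE_URL}/v1/models", timeout=10)
resp.raise_for_status()

# OpenAI-style response shape: {"object": "list", "data": [{"id": "...", ...}]}
for model in resp.json().get("data", []):
    print(model["id"])
```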