
feat: Custom OpenAI compatible server support with preserved inference parameters #2840

Open · qnixsynapse opened this issue Apr 29, 2024 · 4 comments
Labels: P2: nice to have · type: feature request

Comments

qnixsynapse commented Apr 29, 2024

Problem
Sometimes, people like me run an OpenAI-compatible server instead of running the model through Jan's Nitro, for better hardware compatibility and availability. OpenAI inference partially works when set up like this:
[screenshot]

However, inference parameters such as top_k, top_p, etc. are not available, and some parameters such as temperature are not preserved. The model name is also incorrect, as shown in the screenshot:
[screenshot]

Success Criteria
It would be better to have support for a custom OpenAI-compatible server. The list of available models can be queried through the v1/models endpoint (if available).
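
For reference, querying that endpoint is a single HTTP call. A minimal sketch in Python, where the base URL and API key are placeholders for whatever the local server expects:

# Query an OpenAI-compatible server for its model list via GET /v1/models.
# BASE_URL and the bearer token are placeholders, not values Jan actually uses.
import requests

BASE_URL = "http://localhost:8000/v1"
resp = requests.get(f"{BASE_URL}/models", headers={"Authorization": "Bearer sk-xxx"})
resp.raise_for_status()
for model in resp.json().get("data", []):  # standard OpenAI list shape
    print(model["id"])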

qnixsynapse added the type: feature request label on Apr 29, 2024
Van-QA added the P2: nice to have label on May 2, 2024
Van-QA (Contributor) commented May 15, 2024

Hi @qnixsynapse,
You can achieve the same result by modifying the model.json file. Take OpenAI GPT-4 Turbo, for example:

{
  "sources": [
    {
      "url": "https://openai.com"
    }
  ],
  "id": "gpt-4-turbo",
  "object": "model",
  "name": "OpenAI GPT 4 Turbo",
  "version": "1.2",
  "description": "OpenAI GPT 4 Turbo model is extremely good",
  "format": "api",
  "settings": {},
  "parameters": {
    "max_tokens": 4096,
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "OpenAI",
    "tags": [
      "General"
    ]
  },
  "engine": "openai"
}

The values from these settings will be visible in the UI and will be applied to the request:
[screenshot]
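
To make the mapping concrete, here is a sketch of the kind of request the parameters block above translates into, written as a direct call against the OpenAI-compatible endpoint (the API key is a placeholder):

# The "parameters" block in model.json maps one-to-one onto the chat completion body.
import requests

payload = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 4096,
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": False,  # model.json uses true; False here so the JSON reply can be read directly
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0,
}
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer sk-xxx"},  # placeholder key
    json=payload,
)
print(resp.json()["choices"][0]["message"]["content"])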

Related article:
https://jan.ai/docs/remote-models/generic-openai

Van-QA self-assigned this on May 15, 2024
qnixsynapse (Author) commented

@Van-QA Thank you! This helped. BTW, is it possible to provide a custom OpenAI-compatible endpoint for embeddings, which is needed for "Knowledge Retrieval"?

Van-QA (Contributor) commented May 23, 2024

> @Van-QA Thank you! This helped. BTW, is it possible to provide a custom OpenAI-compatible endpoint for embeddings, which is needed for "Knowledge Retrieval"?

Hi @qnixsynapse, this page will guide you through modifying the chat completion endpoint: https://jan.ai/docs/remote-models/openai#how-to-integrate-openai-api-with-jan

qnixsynapse (Author) commented

This is for the large language model only. I use a smaller sentence transformer for the embeddings, which is significantly faster than using embeddings from the main large model. So if I want to add an endpoint to the Jan app on some other port, would that be possible? Alternatively, sentence-transformer support in the app via Nitro would also be a viable option.
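
For context, the request being asked about is the standard OpenAI-compatible embeddings call, pointed at a separate local server. A minimal sketch, where the port, model name, and key are hypothetical:

# Hypothetical sentence-transformer server exposing POST /v1/embeddings
# on a different port than the chat model.
import requests

resp = requests.post(
    "http://localhost:8081/v1/embeddings",  # hypothetical port
    headers={"Authorization": "Bearer sk-xxx"},  # placeholder key
    json={"model": "all-MiniLM-L6-v2", "input": ["What is in this document?"]},
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # e.g. 384 dimensions for all-MiniLM-L6-v2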
