Enhancement: Pure code output formatting for Ollama models #2535
**What features would you like to see added?**

I have added an Ollama endpoint and am testing it with the orca-mini:13b model. I would like to see the code in the output properly formatted (font, indentation, syntax highlighting).

More details:

```yaml
- name: "Ollama"
  apiKey: "ollama"
  # use 'host.docker.internal' instead of localhost if running LibreChat in a docker container
  baseURL: "http://host.docker.internal:11434/v1/chat/completions"
  models:
    default: [
      "orca-mini:13b",
      "dolphin-mixtral:latest",
      "llama3:latest",
      "starcoder2:15b",
    ]
    fetch: false # fetching list of models is not supported
  titleConvo: true
  titleModel: "orca-mini:13b"
  summarize: false
  summaryModel: "orca-mini:13b"
  forcePrompt: false
  modelDisplayLabel: "Ollama"
```

Not entirely sure if this is an issue of the model or of LibreChat.

**Which components are impacted by your request?**

General
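A renderer can only syntax-highlight what it can recognize as code. As an illustrative sketch (not LibreChat's actual renderer), a markdown pipeline typically detects triple-backtick fenced blocks before applying highlighting, so pure code output without fences falls through as plain text:

```python
import re

# Matches triple-backtick fenced blocks with an optional language tag,
# e.g. ```python ... ``` -- a deliberate simplification of what a
# markdown renderer does before syntax highlighting.
FENCE_RE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def extract_fenced_code(markdown_text):
    """Return (language, code) pairs for each fenced block found."""
    return [(m.group(1) or "plaintext", m.group(2))
            for m in FENCE_RE.finditer(markdown_text)]

fenced = "Here is the function:\n```python\nprint('hi')\n```"
bare = "print('hi')"  # pure code, no fences

print(extract_fenced_code(fenced))  # one block found -> can be highlighted
print(extract_fenced_code(bare))    # nothing found -> rendered as plain text
```

This is why a model that emits code without backticks produces unformatted output regardless of the client.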
**Answered by** danny-avila · Apr 25, 2024

Replies: 1 comment · 1 reply
In fact, this is already working if the model outputs text first and code second (assuming triple backticks?). It would be nice if pure code output were formatted properly as well.
This is all down to the model's training. You can add a custom instruction for this via a preset:

> "Please always output multi-line code in markdown format with backticks"
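Outside the LibreChat UI, the same instruction can be applied by prepending it as a system message in a request to the OpenAI-compatible endpoint. A minimal sketch (`build_payload` is a hypothetical helper; the model name and URL come from the config above, and the payload shape follows the standard chat-completions format):

```python
import json

# Hypothetical helper: build a chat-completions payload that always
# prepends the code-formatting instruction as a system message.
def build_payload(user_prompt, model="orca-mini:13b"):
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Please always output multi-line code in "
                        "markdown format with backticks"},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_payload("Write a Python function that reverses a string.")
# This payload would be POSTed to
# http://host.docker.internal:11434/v1/chat/completions
print(json.dumps(payload, indent=2))
```

Whether the model honors the instruction still depends on its training, as noted above.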