🎅 I WISH LITELLM HAD... #361
Comments
[LiteLLM Client] Add new models via UI. Thinking aloud, it seems intuitive that you'd be able to add new models / remap completion calls to different models via the UI. Unsure on the real problem though.
User / API Access Management. Different users have access to different models. It'd be helpful if there was a way to maybe leverage the BudgetManager to gate access. E.g. GPT-4 is expensive; I don't want to expose that to my free users, but I do want my paid users to be able to use it.
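A rough sketch of how that gating could look with the existing BudgetManager (method names follow the litellm budget-manager docs; the user id, models, and budget amounts are made up):
from litellm import BudgetManager, completion

budget_manager = BudgetManager(project_name="my_app")
user_id = "paid-user-1"  # hypothetical user id
budget_manager.create_budget(total_budget=10, user=user_id)  # USD cap for this user

# only route to the expensive model while the user still has budget left
if budget_manager.get_current_cost(user=user_id) <= budget_manager.get_total_budget(user_id):
    response = completion(model="gpt-4", messages=[{"role": "user", "content": "Hey!"}])
    budget_manager.update_cost(completion_obj=response, user=user_id)
else:
    # over budget (or free tier): fall back to a cheaper model
    response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey!"}])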
cc: @yujonglee @WilliamEspegren @zakhar-kogan @ishaan-jaff @PhucTranThanh feel free to add any requests / ideas here.
[Spend Dashboard] View analytics for spend per LLM and per user.
Auto-select the best LLM for a given task. If it's a simple task like responding to "hello", LiteLLM should auto-select a cheaper but faster LLM like j2-light.
Integration with NLP Cloud.
That's awesome @Pipboyguy - DM'ing on LinkedIn to learn more!
@ishaan-jaff check out this truncate param in the Cohere API. This looks super interesting - similar to your token trimmer. If the prompt exceeds the context window, trim it in a particular manner. I would maybe only run trimming on user/assistant messages and not touch the system prompt (works for RAG scenarios as well).
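A minimal sketch of that idea - trim only the user/assistant turns and leave the system prompt untouched (count_tokens here is a stand-in for whatever tokenizer you actually use):
def count_tokens(text: str) -> int:
    # stand-in tokenizer: roughly 4 characters per token
    return max(1, len(text) // 4)

def trim_messages(messages: list[dict], max_tokens: int) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    # keep the most recent user/assistant turns that still fit in the budget
    for m in reversed(rest):
        cost = count_tokens(m["content"])
        if cost > budget:
            break
        kept.insert(0, m)
        budget -= cost
    return system + kept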
Option to use the Inference API so we can use any model from Hugging Face 🤗
@haseeb-heaven you can already do this -
from litellm import completion
response = completion(model="huggingface/gpt2", messages=[{"role": "user", "content": "Hey, how's it going?"}])
print(response)
Wow, great, thanks, it's working. Nice feature.
Support for inferencing using models hosted on Petals swarms (https://github.com/bigscience-workshop/petals), both public and private.
@smig23 what are you trying to use Petals for? We found it to be quite unstable and it would not consistently pass our tests.
Fine-tuning wrapper for OpenAI, Hugging Face, etc.
@shauryr I created an issue to track this - feel free to add any missing details here.
Specifically for my aims, I'm running a private swarm as an experiment with a view to deploying it within a private organization that has idle, but distributed, GPU resources. The initial target would be inferencing, and if LiteLLM were able to be the abstraction layer, it would allow the flexibility to go another direction with hosting in the future.
I wish LiteLLM had direct support for fine-tuning models. Based on the blog post below, I understand that in order to fine-tune, one needs a specific understanding of each LLM provider and then has to follow their instructions or library for fine-tuning. Why couldn't LiteLLM do all the abstraction and handle the fine-tuning aspects as well? https://docs.litellm.ai/docs/tutorials/finetuned_chat_gpt
I wish LiteLLM had support for open-source embeddings like sentence-transformers, hkunlp/instructor-large, etc. Based on the documentation below, it seems there's only support for the OpenAI embedding.
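For reference, the OpenAI-style embedding call that exists today looks roughly like this (a sketch based on the litellm embedding docs):
from litellm import embedding
response = embedding(model="text-embedding-ada-002", input=["good morning from litellm"])
print(response)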
I wish LiteLLM had an integration with the Cerebrium platform. Please check the link below for the pre-built models.
@ranjancse26 what models on Cerebrium do you want to use with LiteLLM?
@ishaan-jaff Cerebrium has a lot of pre-built models. The focus should be on consuming the open-source models first, e.g. Llama 2, GPT4All, Falcon, FlanT5, etc. I am mentioning this as a first step. However, it would also be good to have LiteLLM take care of the internal communication with custom-built models, in turn based on the API which Cerebrium exposes.
@smig23 We've added support for Petals to LiteLLM https://docs.litellm.ai/docs/providers/petals
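Per the linked docs, usage should look roughly like this (the model name is taken from the Petals provider page; swap in your own swarm's model):
from litellm import completion
# the "petals/" prefix routes the request to a Petals swarm
response = completion(model="petals/petals-team/StableBeluga2", messages=[{"role": "user", "content": "Hello, how are you?"}])
print(response)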
I wish LiteLLM had built-in support for the majority of provider operations rather than targeting text generation alone. Consider the example of Cohere: the endpoint below allows users to have conversations with a Large Language Model (LLM) from Cohere.
I wish LiteLLM had plenty of support and examples for users developing apps with the RAG pattern. It's kind of mandatory to follow the standard best practices, and we would all like to have that support.
I wish LiteLLM had use-case-driven examples for beginners. Keeping the day-to-day use cases in mind, it would be good to come up with a great sample that covers the following aspects.
I wish LiteLLM supported various known or popular vector DBs. Here are a couple of them to begin with.
I wish LiteLLM had built-in support for web scraping or fetching real-time data using a known provider like SerpApi. It would be helpful for users building custom AI models or integrating LLMs for retrieval-augmented generation. https://serpapi.com/blog/llms-vs-serpapi/#serpapi-google-local-results-parser
Please add the redisvl module to requirements.txt for semantic Redis caching, so I do not have to build a custom Docker container. Thank you, and thanks for adding the feature! I just noticed in the commit history that it was added and then removed. Will this be coming back?
sglang support, pretty please!
It would be great if you could provide support for Groq. Essentially, Groq provides an OpenAI-based interface.
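Since the interface is OpenAI-compatible, one stopgap is litellm's generic openai/ prefix with a custom api_base (the base URL and model name below are assumptions based on Groq's docs):
import os
from litellm import completion

response = completion(
    model="openai/mixtral-8x7b-32768",          # treat Groq as an OpenAI-compatible endpoint
    api_base="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible base URL
    api_key=os.environ["GROQ_API_KEY"],
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response)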
Is support for the g4f package planned?
I wish
Not sure if this is already implemented, but... proactive routing. Instead of trying to route, failing, and falling back, maybe keep each model's max tokens so it can tell beforehand whether the inference will fail anyway. Also, perhaps a max parallelism for the number of requests that can simultaneously be sent to an endpoint. That way it could round-robin onto empty endpoints instead of overloading one endpoint and failing over.
Pre-call checks for max tokens are live - https://docs.litellm.ai/docs/routing#pre-call-checks-context-window. Max parallelism for the number of requests -> explain to me how this might work? So do you want to set a max parallel request limit per endpoint?
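A sketch of the pre-call-check behaviour from the linked doc (the deployments and model names are placeholders):
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo-16k"}},
    ],
    enable_pre_call_checks=True,  # skip deployments whose context window is too small for the prompt
)
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "a very long prompt ..."}],
)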
Plans to add Private-GPT's API? https://github.com/zylon-ai/private-gpt
I wish LiteLLM had a client library for Elixir, removing the need for me to run a separate proxy server.
I wish LiteLLM had a simple serverless option; some proxy services are not used continuously.
@RobertLiu0905 Cloudflare Python Workers are here, and we have an active issue to get LiteLLM support on Cloudflare Workers: cloudflare/workerd#1943. Is this what you wanted? Open to suggestions on other approaches.
I wish LiteLLM had support for IBM watsonx.ai. Thanks.
I wish the LiteLLM Proxy server had a config setting for proxy_base_url. For example, hosting the server at
This would simplify our infrastructure in AWS and still comply with company policies.
WISH: Expand batching. The Google AI Studio API for Gemini Pro 1.5 has very harsh restrictions on RPM & TPM (https://ai.google.dev/pricing), but you get a FREE or $7+$21/M (1M+8k) LLM API. The new OpenAI Batch API is 50% cheaper than the normal API, so GPT-4 Turbo is $5+$15/M (128k+4k), but it schedules processing, runs it very asynchronously on their end, and delivers results "later". https://help.openai.com/en/articles/9197833-batch-api-faq
It would be great to create an OpenAI-compatible Batch API abstraction which, for OpenAI, uses their Batch API directly, but for other models uses local batching, pooling, RPM & TPM limiting, etc., and works in a similar way. I imagine other API providers may follow suit with their own native, cheaper batch APIs, so an abstraction would be highly desirable. I know LiteLLM has its own batching already (which is slightly different in concept), so my request might be an extension of that.
Why? Well, many of us have use cases for MASS LLM processing: translation, summarization, rewriting (like coreference resolution, NER, etc.). We don't need "ASAP async" for those, but cheaper is always better 😃
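A very rough sketch of what the "local batching" half of that abstraction could look like, using asyncio with litellm.acompletion and a crude requests-per-minute cap (the rpm value and model name are placeholders):
import asyncio
from litellm import acompletion

async def batch_complete(model: str, prompts: list[str], rpm: int = 15) -> list:
    sem = asyncio.Semaphore(rpm)  # at most `rpm` permits held per rolling minute

    async def one(prompt: str):
        async with sem:
            resp = await acompletion(model=model, messages=[{"role": "user", "content": prompt}])
            await asyncio.sleep(60)  # hold the permit for a minute, capping throughput at ~rpm requests/min
            return resp

    return await asyncio.gather(*(one(p) for p in prompts))

# results = asyncio.run(batch_complete("gemini/gemini-1.5-pro-latest", ["translate ...", "summarize ..."]))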
I wish it were possible to specify which callbacks LiteLLM uses on a per-request basis (i.e. without modifying global state).
I wish the LiteLLM logger supported JSON logging, with a more succinct message and the longer strings moved into extra fields. Logging of requests to LLM providers is especially long and unformatted.
I wish LiteLLM would implement stronger typing for methods. As an example, when I call:
I need to do the following assertions:
since I'm working in a typed codebase enforced with pyright.
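For illustration, the kind of assertions a pyright-enforced codebase typically needs around completion() look something like this (a hypothetical sketch; the exact types and import paths depend on the litellm version):
from litellm import completion
from litellm.types.utils import ModelResponse, Choices  # import path is an assumption

response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "hi"}])

# pyright can't narrow these on its own, so the caller has to assert
assert isinstance(response, ModelResponse)
choice = response.choices[0]
assert isinstance(choice, Choices)          # rule out StreamingChoices
assert choice.message.content is not None   # content is Optional[str]
print(choice.message.content)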
Hey @andersskog just pushed the v1 for JSON logging - b46db8b. You can enable it with
I wish LiteLLM had an API to check available models from providers in real time.
I wish LiteLLM had support for Sambaverse. Thanks.
Discord alerting would be nice.
Wildcard for the model_name property in model_list:
@ggallotti would that be similar to how we do it for OpenAI today - https://docs.litellm.ai/docs/providers/openai#2-start-the-proxy
Thanks for the response.
A streamlined way to call vision and non-vision models would be great. Being LLM-agnostic is a big reason why I use the package, but I currently still have to handle different request formats depending on which model the call goes to. For example: calling GPT-4 Vision, messages.content is an array. Using the same code to call Azure's Command R+ would result in
I'm aware this is on the model provider's side, but GPT's non-vision models, for example, support both formats.
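For context, the two request shapes being contrasted look roughly like this (model names are placeholders; the content-parts form follows the OpenAI vision format):
from litellm import completion

# plain-string content: accepted by most chat models
completion(model="gpt-4o", messages=[{"role": "user", "content": "What is in this image?"}])

# content as an array of parts: needed for vision input, rejected by some non-vision models
completion(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)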
@ducnvu seems like something we need to fix - can you share the Command R call?
@krrishdholakia Thanks for the prompt response, the call is something like this. I don't have access to all the models supported by LiteLLM to test, but so far OpenAI models work with both a string messages.content and the format below; Command R is where I first encountered this error. All my calls are through Azure.
This is a ticket to track a wishlist of items you wish LiteLLM had.
COMMENT BELOW 👇
With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond with ❤️ to any request you would also like to see
P.S.: Come say hi 👋 on the Discord