Replies: 1 comment
🤖 Hey @maspotts, great to see you back! Hope those chatbots are still keeping the conversations lively. 🤖

Based on the information available in the LlamaIndex repository, there currently isn't a built-in method to update the OpenAI API key or the LLM in an existing index or query engine without recreating them. The OpenAI API key is set when the LLM is initialized.

However, there are some workarounds that might help you. One of them is to use different `ServiceContext` objects for indexing and for querying:

```python
index = VectorStoreIndex.from_documents(docs, service_context=service_context_cheap)
query_engine = index.as_query_engine(service_context=service_context_expensive)
```

In this example, the index is built with one service context while queries run against another. Another workaround is to pass a fresh `service_context` (carrying a new LLM and API key) to `as_query_engine`, so that only the query engine is recreated rather than the whole index.

Please note that these are workarounds and might not be the most efficient solutions. It would be beneficial if LlamaIndex could provide a built-in method to update the OpenAI API key or the LLM in an existing index or query engine, or a feature for deleting or rotating used-up tokens. You might want to consider opening a new issue to suggest these features.
Hi: I'm using llama_index to deploy chatbots (against our internal documents) very successfully, but a pain point is that a given OpenAI API key will randomly exceed its monthly quota, or be rate-limited due to too many embedding API calls. I can easily catch these edge cases, and I have a list of API tokens that I can rotate through, or remove quota-exceeded tokens from, but currently I can't see any way to update already-created indices and query engines to use a new key, or even to generate a new LLM and update them to use it. Instead I have to recreate the whole index or query engine, which is resource-heavy (especially for the indices). (I did try replacing the openai_api_key variable deep inside the class instance variables, but it seems to be ignored, and in any case I don't like doing that.)

So my question is: is there a good way to achieve this goal (update the API token, or even the LLM, in an existing index or query engine)? Or if not, are there any plans to implement deletion of used-up tokens, and/or rotation of tokens in a token list, in llama_index itself?
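For reference, the token-rotation logic I have in mind looks roughly like the sketch below. The `KeyPool` class and its method names are purely illustrative (not part of llama_index or the OpenAI client); the missing piece is what to do with the new key once an index and query engine already exist.

```python
class KeyPool:
    """Illustrative sketch: round-robin over API keys, dropping exhausted ones.

    This class is hypothetical; only the catch-and-rotate pattern is real.
    """

    def __init__(self, keys):
        if not keys:
            raise ValueError("need at least one API key")
        self._keys = list(keys)
        self._i = 0

    def current(self):
        """The key currently in use."""
        return self._keys[self._i % len(self._keys)]

    def rotate(self):
        """Advance to the next key (e.g. after a rate-limit error)."""
        self._i += 1
        return self.current()

    def remove_current(self):
        """Drop a quota-exceeded key from the pool entirely."""
        self._keys.pop(self._i % len(self._keys))
        if not self._keys:
            raise RuntimeError("all API keys exhausted")
        return self.current()


pool = KeyPool(["sk-aaa", "sk-bbb", "sk-ccc"])
# On a rate-limit error:   key = pool.rotate()
# On quota exceeded:       key = pool.remove_current()
# ...and then somehow tell the existing index/query engine to use `key`,
# which is exactly the step I can't find an API for.
```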