Danswer asks for an OpenAI API Key even with Ollama configuration #1414
Comments
I've been having the same issue. It was working great with Ollama for a while, until I updated; now I can't get past it asking for an API key.
@Weves, if I type in my OpenAI API key, it works. I guess the problem is that it seems to be ignoring my .env. I'm unable to see a way to use the Ollama server I was using before I updated Danswer. This is my .env (based on the template):

```
# Fill in the values and copy the contents of this file to .env in the deployment directory.
# Some valid default values are provided where applicable, delete the variables which you don't set values for.
# This is only necessary when using the docker-compose.prod.yml compose file.

# Could be something like danswer.companyname.com
WEB_DOMAIN=http://localhost:3000

GEN_AI_MODEL_PROVIDER=ollama_chat
# Model of your choice
GEN_AI_MODEL_VERSION=llama3:instruct
# Wherever Ollama is running
# Hint: To point Docker containers to http://localhost:11434, use host.docker.internal instead of localhost
GEN_AI_API_ENDPOINT=http://host.docker.internal:11434

# Let's also make some changes to accommodate the weaker locally hosted LLM
QA_TIMEOUT=240  # Set a longer timeout, running models on CPU can be slow
# Always run search, never skip
DISABLE_LLM_CHOOSE_SEARCH=True
# Don't use LLM for reranking, the prompts aren't properly tuned for these models
DISABLE_LLM_CHUNK_FILTER=True
# Don't try to rephrase the user query, the prompts aren't properly tuned for these models
DISABLE_LLM_QUERY_REPHRASE=True
# Don't use LLM to automatically discover time/source filters
DISABLE_LLM_FILTER_EXTRACTION=True
# Uncomment this one if you find that the model is struggling (slow or distracted by too many docs)
# Use only 1 section from the documents and do not require quotes
# QA_PROMPT_OVERRIDE=weak

AUTH_TYPE=basic
# If you want to setup a slack bot to answer questions automatically in Slack
# channels it is added to, you must specify the two below.
# More information in the guide here: https://docs.danswer.dev/slack_bot_setup
#DANSWER_BOT_SLACK_APP_TOKEN=

# How long before user needs to reauthenticate, default to 1 day. (cookie expiration time)
SESSION_EXPIRE_TIME_SECONDS=86400

# Use the below to specify a list of allowed user domains, only checked if user Auth is turned on
# e.g.
```
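Before blaming the .env itself, it can help to confirm that the Danswer containers can actually reach Ollama at the configured endpoint. A minimal sanity check, assuming a default stack layout (the container name below is a guess; substitute whatever `docker ps` shows for your deployment):

```shell
# From the host: Ollama answers a plain GET on its root with "Ollama is running"
curl -s http://localhost:11434

# From inside a Danswer container, localhost would be the container itself,
# hence the host.docker.internal hint in the .env template.
# Container name is an assumption; check `docker ps` for yours.
docker exec -it danswer-stack-api_server-1 \
  curl -s http://host.docker.internal:11434
```

If the second command fails while the first succeeds, the problem is container-to-host networking rather than Danswer's configuration.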
In my case, when I enter the OpenAI key, I get a red pop-up box at the bottom left that says "Not found".
@nausher I was just able to get it working via the menu, by setting up a custom provider, putting my Ollama url:port in the API Base field, and entering the model name ("llama3:instruct" in my case).
@exsodus2 - I don't see an option to set up a custom LLM provider. Also, after updating/adding the .env file, did you do a docker start with the following command, or did you do a full build and deploy?
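For reference, the two restart flavours being discussed look roughly like this. The compose file name, project name, and directory are assumptions based on the default Danswer deployment layout; adjust them to your checkout. Note that `docker compose` reads `.env` from the working directory at `up` time, so either variant should pick up new values:

```shell
cd danswer/deployment/docker_compose

# Plain restart: recreates the containers and re-reads .env,
# but keeps the existing images
docker compose -f docker-compose.dev.yml -p danswer-stack up -d --force-recreate

# Full rebuild: also rebuilds the images before starting
docker compose -f docker-compose.dev.yml -p danswer-stack up -d --build --force-recreate
```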
@nausher I also believe the .env isn't being loaded. The option to add a custom LLM is on the LLM tab, at the bottom. I always use
I tried setting up a custom LLM provider after (1) pulling/building and force-restarting the containers 2-3 times and (2) adding my OpenAI keys. However, when I try to add Ollama as llama2, llama3, or llama3:instruct, I receive the following error message:
@nausher, in your screenshots I don't see the
@Weves - thanks for spotting that and chiming in! I noticed it too. But alas, no luck:
@nausher can you try running
@nausher, I can replicate your error when not using a valid API Base address (I changed the port to a wrong one to test). I also got the same error after changing the address back to the correct one but closing Ollama. This leads me to believe your issue may be with your Ollama server itself (if you're sure you're pointing Danswer at the right address).
@exsodus2 you were right! While it wasn't quite Ollama that had the issue, it was the API base address. I posted a question and Danswer was surprisingly snappy and quoted the right local documents. Now, if I could get my other issue & code change for indexing org files accepted, that would be the cherry on top. I'd like to leave this issue open, since the
Hi team. Even using that address, I keep getting the infamous 'NoneType' object has no attribute 'request' error during Danswer setup:

```
05/13/2024 07:58:27 PM utils.py 228 : Failed to call LLM with the following error: 'NoneType' object has no attribute 'request'
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
INFO: 172.20.0.9:54288 - "POST /admin/llm/test HTTP/1.1" 400 Bad Request
```

Am I missing something more?
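One way to narrow down whether a failure like this sits in Danswer/litellm or in Ollama itself is to call Ollama's chat endpoint directly, bypassing Danswer entirely (the model name below is an assumption; use one you have actually pulled):

```shell
# Talk straight to Ollama's /api/chat endpoint, non-streaming
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3:instruct",
  "messages": [{"role": "user", "content": "Say hello"}],
  "stream": false
}'
```

If this returns a JSON reply, Ollama is healthy and the problem is on the Danswer side (API base, container networking, or litellm); if it hangs or errors, fix Ollama first.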
I managed to use the official ollama image (ollama/ollama) and not litellm/ollama. Also (if you still haven't), try adding
on the ollama service to allow the containers to communicate with each other.
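The exact setting the commenter meant isn't recoverable from the thread, but one common sketch when running Ollama as a compose service is to make it listen on all interfaces inside its container via `OLLAMA_HOST` (a real Ollama environment variable). Service name and port mapping below are assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    environment:
      # Listen on all interfaces, not just the container's localhost
      - OLLAMA_HOST=0.0.0.0
```

Alternatives include attaching both services to the same user-defined Docker network, which achieves the same container-to-container reachability.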
Would you be so kind as to summarize how to use ollama instead of litellm? Is there a documentation section for that?
Yeah, there is: https://docs.danswer.dev/gen_ai_configs/ollama
Thanks again! Unfortunately I was unable to make it work using either the Ollama Windows installer or Docker; same error.
#1458 seems to be related to this.
I have Danswer up and running on my Mac. It is indexing files, I've also updated it to use Ollama that I have running locally.
I used the configuration mentioned here - https://docs.danswer.dev/gen_ai_configs/ollama
and have created/updated a .env file in the docker_compose directory; for good measure, I have also updated the Kubernetes YAML file.
I've also restarted the service a few times. The service still continues to ask for an API key, and skipping it results in a non-working LLM chat.