Danswer asks for an OpenAI API Key even with Ollama configuration #1414

Open
nausher opened this issue May 2, 2024 · 22 comments

@nausher

nausher commented May 2, 2024

I have Danswer up and running on my Mac. It is indexing files, and I've also updated it to use the Ollama instance I have running locally.
I used the configuration described here: https://docs.danswer.dev/gen_ai_configs/ollama
and created/updated a .env file in the docker_compose directory; for good measure, I also updated the Kubernetes YAML file.

I've also restarted the service a few times. The service still continues to ask for an API key, and skipping it results in a non-working LLM chat.

@exsodus2

exsodus2 commented May 7, 2024

I've been having the same issue. It was working great with Ollama for a while until I updated, and now I can't get past it asking for an API key.

@Weves
Contributor

Weves commented May 7, 2024

@exsodus2 / @nausher what happens if you put in an API key?

@exsodus2

exsodus2 commented May 7, 2024

@Weves, if I type in my OpenAI API key, it works. So the problem seems to be that my .env is being ignored; I can't see a way to use the Ollama server I was using before I updated Danswer.

```
# Fill in the values and copy the contents of this file to .env in the deployment directory.
# Some valid default values are provided where applicable; delete the variables which you don't set values for.

# This is only necessary when using the docker-compose.prod.yml compose file.
# Could be something like danswer.companyname.com
WEB_DOMAIN=http://localhost:3000

GEN_AI_MODEL_PROVIDER=ollama_chat
# Model of your choice
GEN_AI_MODEL_VERSION=llama3:instruct
# Wherever Ollama is running.
# Hint: to point Docker containers to http://localhost:11434, use host.docker.internal instead of localhost.
GEN_AI_API_ENDPOINT=http://host.docker.internal:11434

# Let's also make some changes to accommodate the weaker locally hosted LLM.
QA_TIMEOUT=240  # Set a longer timeout; running models on CPU can be slow
# Always run search, never skip.
DISABLE_LLM_CHOOSE_SEARCH=True
# Don't use the LLM for reranking; the prompts aren't properly tuned for these models.
DISABLE_LLM_CHUNK_FILTER=True
# Don't try to rephrase the user query; the prompts aren't properly tuned for these models.
DISABLE_LLM_QUERY_REPHRASE=True
# Don't use the LLM to automatically discover time/source filters.
DISABLE_LLM_FILTER_EXTRACTION=True
# Uncomment this one if you find that the model is struggling (slow or distracted by too many docs):
# use only 1 section from the documents and do not require quotes.
QA_PROMPT_OVERRIDE=weak

AUTH_TYPE=basic

# If you want to set up a Slack bot to answer questions automatically in Slack
# channels it is added to, you must specify the two below.
# More information in the guide here: https://docs.danswer.dev/slack_bot_setup
#DANSWER_BOT_SLACK_APP_TOKEN=
#DANSWER_BOT_SLACK_BOT_TOKEN=

# How long before the user needs to reauthenticate; defaults to 1 day (cookie expiration time).
SESSION_EXPIRE_TIME_SECONDS=86400

# Use the below to specify a list of allowed user domains, only checked if user auth is turned on.
# E.g. VALID_EMAIL_DOMAINS=example.com,example.org will only allow users
# with an @example.com or an @example.org email.
#VALID_EMAIL_DOMAINS=

# Default values here are what Postgres uses by default; feel free to change.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
```
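As a quick way to rule out a malformed file, the Ollama-related keys can be checked with a few lines of stdlib Python (a sketch, not Danswer code; the path and the set of required keys are assumptions based on this thread):

```python
from pathlib import Path

# Path and key names are assumptions from this thread; adjust to your checkout.
ENV_PATH = Path("deployment/docker_compose/.env")
REQUIRED_KEYS = {"GEN_AI_MODEL_PROVIDER", "GEN_AI_MODEL_VERSION", "GEN_AI_API_ENDPOINT"}


def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comment lines."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


if ENV_PATH.exists():
    missing = REQUIRED_KEYS - parse_env(ENV_PATH.read_text()).keys()
    print("missing keys:", sorted(missing) or "none")
else:
    print(f"{ENV_PATH} not found")
```

If any of the three keys shows up as missing, the file (rather than Compose) is the problem.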

@nausher
Author

nausher commented May 7, 2024

In my case, when I enter the OpenAI key, I get a red pop-up box at the bottom left that says "Not found".
I've tried both OpenAI (1) user keys and (2) project keys; both give me the same error. I've also tried on the LLM Options page as well as the pop-up on initial use.

@Bushrxh

Bushrxh commented May 7, 2024

Have you tried setting the local LLM as the default from the user interface?
[Screenshot 2024-05-07]

@nausher
Author

nausher commented May 7, 2024

I don't see the providers. Here is the screen I see. When attempting to add an OpenAI key, I get the error 'Not Found' as a red toast in the bottom left corner.

[screenshot]

@exsodus2

exsodus2 commented May 7, 2024

@nausher I was just able to get it working via the menu by setting up a custom provider, using my Ollama url:port in the API Base field and putting in the model name (in my case "llama3:instruct").
[screenshots]

@nausher
Author

nausher commented May 7, 2024

@exsodus2 - I don't see an option to set up a custom LLM provider.
A couple of questions, since I believe the .env is not being loaded correctly:
Is your .env file in the following location?
/danswer/deployment/docker_compose/.env

Also, after updating/adding the .env file, did you start the stack with the following command:
docker compose -f docker-compose.dev.yml -p danswer-stack up

Or did you do a full build and deploy?
docker compose -f docker-compose.dev.yml -p danswer-stack up -d --build --force-recreate
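Either way, one way to confirm Compose is actually reading the .env is to inspect the rendered config and the running container (a sketch; the project, service, and container names are taken from this thread and may differ in your setup):

```shell
# Render the effective compose config; values from .env should show up interpolated.
docker compose -f docker-compose.dev.yml -p danswer-stack config | grep GEN_AI

# Or check the environment inside the running API server container.
docker exec danswer-stack-api_server-1 env | grep GEN_AI
```

If neither command prints the GEN_AI_* values, Compose never saw the file.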

@exsodus2

exsodus2 commented May 7, 2024

@nausher I also believe the .env isn't being loaded. The option to add a custom LLM is on the LLM tab at the bottom. I always use docker compose -f docker-compose.dev.yml -p danswer-stack up -d --pull always --force-recreate
[screenshot]

@nausher
Author

nausher commented May 8, 2024

I tried setting up a custom LLM provider after (1) pulling/building and force-restarting the containers 2-3 times and (2) adding my OpenAI keys.

However, when I try to add Ollama as llama2, llama3, or llama3:instruct, I receive the following error message:
'NoneType' object has no attribute 'request'

[screenshots]

@nausher
Author

nausher commented May 8, 2024

Uploading the correct screenshot for the 2nd image. I had mistakenly entered the model info in the "Fast Model" field.
I've now entered it in the "Model names" field, but I get a similar error.
[screenshot]

@Weves
Contributor

Weves commented May 8, 2024

@nausher in your screenshots I don't see the API Base being set. That could be the issue?

@nausher
Author

nausher commented May 8, 2024

@Weves - thanks for spotting that and chiming in! I noticed it too. But alas, no luck.
[screenshots]

@Weves
Contributor

Weves commented May 8, 2024

@nausher can you try running docker logs danswer-stack-api_server-1 --tail 300 and posting that here?

@exsodus2

exsodus2 commented May 8, 2024

@nausher, I can replicate your error when not using a valid API Base address (I changed the port to a wrong one to test). Additionally, I got the same error after changing the address back but closing Ollama. This leads me to believe your issue may be related to your Ollama server itself (if you're sure you're using the right address pointing to it in Danswer).

@nausher
Author

nausher commented May 8, 2024

@exsodus2 you were right! While it wasn't quite Ollama that had the issue, it was the API base address.
I'm using the rancher-desktop flavor of Docker, so I had to change the API base address to http://host.rancher-desktop.internal:11434. Ollama is working now!

[screenshot]
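A quick reachability check like the following can separate a wrong API base from a broken Ollama server (a sketch; the container name and host aliases are assumptions based on this thread):

```shell
# From inside the API server container, ask Ollama to list its models.
# Docker Desktop resolves host.docker.internal; rancher-desktop uses
# host.rancher-desktop.internal instead.
docker exec danswer-stack-api_server-1 \
  curl -s http://host.rancher-desktop.internal:11434/api/tags
```

A JSON list of models means the address is right; a connection error means the alias or port is wrong for your Docker flavor.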

I posted a question and Danswer was surprisingly snappy, quoting the right local documents.

Now, if I could get my other issue and code change for indexing org files (#1415) accepted, that would be the cherry on top.

I'd like to leave this issue open, since the .env file is still not being picked up, though it can likely be downgraded to a minor issue for now.

@gabilanbrc

gabilanbrc commented May 13, 2024

Hi team,
it seems that I'm having a similar issue.
After struggling for some time, I found on Google that the right address to use on Windows for accessing the host from Docker is http://docker.for.win.localhost:11434/
[screenshot]

Even using that address, I keep getting the infamous 'NoneType' object has no attribute 'request' error during Danswer setup.
Here is the result of docker logs danswer-stack-api_server-1 --tail 300:

```
05/13/2024 07:58:27 PM utils.py 228 : Failed to call LLM with the following error: 'NoneType' object has no attribute 'request'
05/13/2024 07:58:27 PM utils.py 228 : Failed to call LLM with the following error: 'NoneType' object has no attribute 'request'

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

INFO: 172.20.0.9:54288 - "POST /admin/llm/test HTTP/1.1" 400 Bad Request
```

Am I missing something more?
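Since Danswer talks to Ollama through LiteLLM, the failing call can be reproduced outside Danswer for a fuller traceback. A minimal sketch, assuming litellm is installed locally, a running Ollama server, and the model and address from this thread:

```python
import litellm

litellm.set_verbose = True  # the debug flag suggested by the log above


def probe_ollama(api_base: str, model: str = "ollama_chat/llama3:instruct") -> str:
    """Send a one-word prompt through LiteLLM, the client Danswer uses."""
    response = litellm.completion(
        model=model,
        api_base=api_base,
        messages=[{"role": "user", "content": "ping"}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Address taken from the comment above; adjust to your setup.
    print(probe_ollama("http://docker.for.win.localhost:11434"))
```

Run on the host first with http://localhost:11434; if that works but the Docker alias doesn't, the problem is the host mapping rather than Ollama.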

@Bushrxh

Bushrxh commented May 14, 2024

I managed to use the official Ollama image (ollama/ollama) rather than litellm/ollama. Also (if you still haven't), try adding

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```

to the ollama service to allow the containers to communicate with each other.
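For context, a minimal sketch of where that snippet sits in a docker-compose.yml (the service name, image, and port mapping here are illustrative, not Danswer's actual compose file):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP, which also
      # makes the alias work on Linux where Docker doesn't define it.
      - "host.docker.internal:host-gateway"
```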

@gabilanbrc

> I managed to use the official Ollama image (ollama/ollama) rather than litellm/ollama. Also (if you still haven't), try adding `extra_hosts: - "host.docker.internal:host-gateway"` on the ollama service to allow the containers to communicate with each other.

Would you be so kind as to summarize how to use Ollama instead of LiteLLM? Is there a documentation section for that?
Thanks in advance!

@Bushrxh

Bushrxh commented May 14, 2024

yeah there is: https://docs.danswer.dev/gen_ai_configs/ollama

@gabilanbrc

gabilanbrc commented May 14, 2024

> yeah there is: https://docs.danswer.dev/gen_ai_configs/ollama

Thanks again! Unfortunately I was unable to make it work using either the Ollama Windows installer or Docker; I get the same error.
I'm not sure how to move forward from here.

@gabilanbrc

#1458 seems to be related to this.
