
[Bug]: Model selection from settings is ignored (OpenAIException - Error code: 401) #1261

Open
2 tasks done
dagelf opened this issue Apr 21, 2024 · 12 comments
Labels
bug (Something isn't working), severity:low (Minor issues, code cleanup, etc)

Comments

@dagelf
Contributor

dagelf commented Apr 21, 2024

Is there an existing issue for the same bug?

Describe the bug

When I set the correct key and choose the correct model in settings, it still tries to use GPT-4, sending my key for another platform to OpenAI.

Current Version

ghcr.io/opendevin/opendevin:0.3.1

Installation and Configuration

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
    -v $WORKSPACE_DIR:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.3.1

Model and Agent

No response

Reproduction Steps

  1. Set the API key in the environment (see the sketch after these steps)
  2. docker run
  3. Send the first instruction
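
For concreteness, step 1 presumably looks like the line below on the host; the variable name comes from the docker command above, and the key value is a placeholder (the gsk_ prefix in the error log suggests a Groq key):

export LLM_API_KEY="gsk_..."   # forwarded into the container by the bare -e LLM_API_KEY flag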

Logs, Errors, Screenshots, and Additional Context

Error during task loop: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: gsk_vKbB********************************************PIaY. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
@dagelf dagelf added the bug label Apr 21, 2024
@dagelf
Contributor Author

dagelf commented Apr 21, 2024

Of course it works if you run with:

docker run \
    -e OPENAI_API_BASE \
    -e LLM_MODEL \
    -e LLM_BASE_URL \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
    -v $WORKSPACE_DIR:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.3.1

But it shouldn't expose settings that don't work!
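
For anyone reproducing this workaround: docker's bare -e VAR syntax forwards the variable's current value from the host shell, so each of those variables has to be exported before docker run. A minimal sketch with placeholder values; the endpoint and model strings below are assumptions, not taken from this thread (the gsk_ prefix in the error log suggests a Groq key):

export LLM_API_KEY="gsk_..."                          # provider key
export LLM_BASE_URL="https://api.groq.com/openai/v1"  # assumed OpenAI-compatible endpoint
export OPENAI_API_BASE="$LLM_BASE_URL"                # mirrors the extra flag in the command above
export LLM_MODEL="groq/llama3-70b-8192"               # assumed litellm-style model name
export WORKSPACE_DIR=$(pwd)/workspace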

@enyst
Collaborator

enyst commented Apr 21, 2024

Just to clarify, please: you say you chose the model in settings, but the reproduction steps only set the key and run? The app falls back to a default if you set no model.

It should work with the UI choice if you made it. Was it a different run?

@rbren
Collaborator

rbren commented Apr 21, 2024

@dagelf we have an LLM_MODEL for running OpenDevin programmatically (i.e. python opendevin/main.py), but we don't encourage setting it when running the full application.
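
A sketch of that programmatic path, for contrast with the UI flow; only LLM_MODEL and opendevin/main.py come from the sentence above, and the model string and key are placeholders:

export LLM_MODEL="gpt-3.5-turbo"   # placeholder model choice
export LLM_API_KEY="sk-..."        # key for whichever provider serves that model
python opendevin/main.py           # headless run; reads the LLM_* variables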

Which model did you choose in the UI?

@zhonggegege

zhonggegege commented Apr 25, 2024

I have the same problem. Even though the lm-studio server address is set in an environment variable, after starting the UI and selecting a model in the interface, it still reports that the wrong OpenAI key is used.

New model: MaziyarPanahi/WizardLM-2-7B-GGUF

git pull
export LLM_API_KEY="lm-studio"
export WORKSPACE_BASE=/home/agetn/OpenDevin/workspace
export LLM_BASE_URL="http://192.168.0.76:1234/v1"
docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.3.1

litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
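
A likely cause, consistent with the fix further down the thread: the docker run above forwards LLM_API_KEY but never passes LLM_BASE_URL into the container, so the exported lm-studio address is invisible inside it and litellm falls back to the default api.openai.com endpoint, which then rejects the "lm-studio" key.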

@dagelf
Contributor Author

dagelf commented Apr 25, 2024 via email

@dagelf
Contributor Author

dagelf commented Apr 25, 2024 via email

@zhonggegege

zhonggegege commented Apr 25, 2024

Passing the environment variable "LLM_BASE_URL" into the container solved the 401 problem.

@zhonggegege

zhonggegege commented Apr 25, 2024

Yes, this looks related.

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_BASE_URL="http://192.168.0.76:1234/v1" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.3.1

It's working.
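
(Note on the difference: here the value is set inline with -e LLM_BASE_URL="...", rather than forwarded from a host export, so the container gets the endpoint regardless of the host shell's environment.)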

@zhonggegege

[screenshot]
A model is selected in the web UI, but the request to the endpoint still uses the wrong model.

@enyst
Collaborator

enyst commented Apr 26, 2024

@zhonggegege are you running the latest version? Please use 0.4.0, the latest released version, or pull from git if that's how you're running.

It might also help to clear your browser's local storage if you choose a model in the UI and it doesn't seem to get applied, although that shouldn't be necessary...

@enyst
Collaborator

enyst commented Apr 26, 2024

@dagelf Can you please upgrade to 0.4.0? You were using 0.3.1 before. And if you're running with a UI (as you are), choose the model in the UI.

@zhonggegege

zhonggegege commented Apr 27, 2024

> @dagelf Can you please upgrade to 0.4.0? You were using 0.3.1 before. And if you're running with a UI (as you are), choose the model in the UI.

Thank you for your reply. Yes, I am now using the latest 0.4.0 and pulling the latest repository code. I've described the complete error from multiple attempts here: #1380

@rbren rbren added the severity:low label May 2, 2024