Error when running: 404 reported #1385
Comments
Is there any expert online who can help? |
LLM_MODEL="gpt-35-turbo" |
INFO: connection open |
just solved it |
The embedding deployment name doesn't look right (it's not a GPT model). This is in addition, @ayanjiushishuai, to the other error, the one you experienced and solved. The API version also seems like it should contain more than a single number, though I'm not sure.
@he-aook
For embeddings, this is your choice. You can choose to try local:
Or, alternative:
For the chat model: The model is the deployment name defined in your Azure account that corresponds to GPT-3.5-turbo. It might have the same name, so try GPT-3.5-turbo. |
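The deployment-name routing described above can be sketched in a few lines. This is only an illustration of how LiteLLM's `azure/<deployment>` model string is formed; `gpt-35-turbo` is an example deployment name, not necessarily yours:

```python
def azure_model_string(deployment_name: str) -> str:
    """Prefix an Azure deployment name so LiteLLM routes the call to Azure."""
    return f"azure/{deployment_name}"

# The deployment name is whatever you chose when deploying the model in
# Azure OpenAI Studio; it may or may not equal the base model's name.
print(azure_model_string("gpt-35-turbo"))  # azure/gpt-35-turbo
```

If your deployment has a different name than the base model, it is that name, not "gpt-3.5-turbo", that goes after the `azure/` prefix.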
@enyst You are right. |
@ayanjiushishuai OK, I'll give it a try #1385 (comment) |
@ayanjiushishuai I still can't start a conversation. I'm using Microsoft's models, and both GPT-3.5 and 4.0 report that they are not supported. |
What is the error? Can you invoke LiteLLM locally? Have you updated the code to the latest version? Did you select your configured model in the web UI?
#1385 (comment) |
@ayanjiushishuai I selected my model and deployed the latest version, and it still doesn't work; |
|
@ayanjiushishuai How do I run LiteLLM locally? |
An import problem: is litellm perhaps not installed in your venv/conda environment?
@ayanjiushishuai I did install it |
If you still get errors with the configuration above, the simplest fix is to rebuild the conda environment. I have tried this many times, and the easiest approach is to run it directly. Key points: `-e LLM_MODEL`, and, most importantly, selecting the right model (LLM_MODEL) in the web UI; that is where mistakes are easy to make. I run a local LLM, so you can use my setup as a reference. There is also development mode.
@he-aook Sorry, I didn't look closely earlier. Your problem is actually not a missing install: your local file has the same name as the library, so Python imported your own file instead of the package and couldn't find the module. |
@ayanjiushishuai |
@he-aook This error looks like a problem with your parameter configuration. Please check whether the parameters you configured match those in your Azure account.
from litellm import completion
import os
## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""  ## missing
os.environ["AZURE_API_VERSION"] = ""  ## missing
# azure call
response = completion(
    model="azure/<your_deployment_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
If this example runs correctly on your machine, fill the same values into OpenDevin's parameters and it should work. |
OK, thanks, I'll give it a try.
|
@ayanjiushishuai
## set ENV variables
os.environ["OPENAI_API_KEY"] = "XXXXXXXXXXXXXX"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# cohere call
response = completion(
Configured like this, it still errors when run. I executed it with Python 3.11. |
|
Below is my configuration; it still errors after running:
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "XXXXXX"
os.environ["AZURE_API_BASE"] = "https://ai-XXX-dev.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-01-25-Preview"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# cohere call
response = completion(
model = "azure/gpt4-1106-test",
messages = [{ "content": "Hello, how are you?","role": "user"}]
)
print(response)
Error message:
azure_client = AzureOpenAI(**azure_client_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/lib/python3.11/site-packages/openai/lib/azure.py", line 169, in __init__
raise OpenAIError(
openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/miniforge3/lib/python3.11/site-packages/litellm/main.py", line 842, in completion
response = azure_chat_completions.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/lib/python3.11/site-packages/litellm/llms/azure.py", line 307, in completion
raise AzureOpenAIError(status_code=500, message=str(e))
litellm.llms.azure.AzureOpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/OpenDevin-2/python/a.py", line 11, in <module>
response = completion(
^^^^^^^^^^^
File "/root/miniforge3/lib/python3.11/site-packages/litellm/utils.py", line 3077, in wrapper
raise e
File "/root/miniforge3/lib/python3.11/site-packages/litellm/utils.py", line 2975, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/lib/python3.11/site-packages/litellm/main.py", line 2148, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/root/miniforge3/lib/python3.11/site-packages/litellm/utils.py", line 8823, in exception_type
raise e
File "/root/miniforge3/lib/python3.11/site-packages/litellm/utils.py", line 8760, in exception_type
raise APIError(
litellm.exceptions.APIError: AzureException - Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
os.environ["AZURE_API_VERSION"] = "0125-Preview"
This configuration format is wrong; it should look like 2024-01-25-preview.
|
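The api-version shape can be sanity-checked before calling Azure. This is a sketch based on the format discussed above (a full date, optionally suffixed with -preview); it is not an official validator:

```python
import re

# Azure OpenAI api-version values look like "2023-05-15" or "2024-01-25-preview".
# "0125-Preview" is missing the year and the day, so the service rejects it.
API_VERSION_RE = re.compile(r"^\d{4}-\d{2}-\d{2}(-preview)?$", re.IGNORECASE)

def looks_like_api_version(value: str) -> bool:
    return bool(API_VERSION_RE.fullmatch(value))

print(looks_like_api_version("2024-01-25-preview"))  # True
print(looks_like_api_version("0125-Preview"))        # False
```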
https://litellm.vercel.app/docs/providers/azure |
Okay, I'm giving it a try
set AZURE_API_KEY instead of OPENAI_API_KEY
|
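The credential mix-up can be summarized in a short sketch: the failing script set OPENAI_API_KEY, but LiteLLM's Azure provider reads the AZURE_-prefixed variables. Placeholder values only:

```python
import os

# LiteLLM's Azure route reads these three variables; OPENAI_API_KEY is ignored
# for "azure/..." models, which is why the Azure client reported missing credentials.
os.environ["AZURE_API_KEY"] = "<your-azure-key>"                        # placeholder
os.environ["AZURE_API_BASE"] = "https://<resource>.openai.azure.com/"   # placeholder
os.environ["AZURE_API_VERSION"] = "2024-01-25-preview"

required = ("AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION")
print(all(k in os.environ for k in required))  # True
```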
@SmartManoj No matter how I configure and debug it, I get the error ERROR:root:&lt;class 'KeyError'&gt;: "Please set 'LLM_API_KEY' in config.toml
(base) [root@iZrj9f205caqao6ghsyor0Z python]# cat a.py
## set ENV variables
os.environ["AZURE_API_KEY"] = "PPPPPPP"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# cohere call
response = completion( |
Please set LLM_API_KEY instead of AZURE_API_KEY, because LLM_API_KEY is a required variable. |
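For OpenDevin itself the key names differ from LiteLLM's: the KeyError above shows it reads LLM_API_KEY from config.toml. A hypothetical sketch of the relevant entries follows; LLM_API_KEY and LLM_MODEL appear in this thread, while the other key names are assumptions that should be checked against your version's documentation:

```toml
# Hypothetical config.toml sketch; verify key names against the OpenDevin docs.
LLM_MODEL = "azure/<your_deployment_name>"
LLM_API_KEY = "<your-azure-key>"
LLM_BASE_URL = "https://<resource>.openai.azure.com/"
LLM_API_VERSION = "2024-01-25-preview"
```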
@SmartManoj
## set ENV variables
os.environ["LLM_API_KEY"] = "LLLLLL"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# cohere call
response = completion(
Still not right |
Please add full traceback. api_key is already passed. Could you also set |
@SmartManoj I give up; I really can't figure it out |
Please uncomment this and run. |
Describe your question
Error when running: 404 reported
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
File "/root/.cache/pypoetry/virtualenvs/opendevin-QzKVoApH-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
Additional context