
After starting the service, sending a question produces the following error #3911

Closed
cstkn opened this issue Apr 28, 2024 · 1 comment
Labels: bug (Something isn't working)

Comments


cstkn commented Apr 28, 2024

INFO: 127.0.0.1:58524 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:58524 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:58524 - "GET /openapi.json HTTP/1.1" 200 OK
2024-04-28 14:41:00,003 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58538 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-28 14:41:00,008 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-28 14:41:00,526 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58538 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-28 14:41:00,530 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58538 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-28 14:41:00,562 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-04-28 14:41:16,194 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58550 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-28 14:41:16,199 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-28 14:41:16,606 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58550 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-28 14:41:16,611 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58550 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-28 14:41:16,642 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58550 - "POST /chat/chat HTTP/1.1" 200 OK
2024-04-28 14:41:16,654 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-04-28 14:41:17 | INFO | stdout | INFO: 127.0.0.1:58553 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-04-28 14:41:17,386 - _client.py[line:1758] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-04-28 14:41:17 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-04-28 14:41:17,741 - utils.py[line:38] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\langchain-master\server\utils.py", line 36, in wrap_done
    await fn
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain\chains\base.py", line 385, in acall
    raise e
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain\chains\base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain\chains\llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain\chains\llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_community\chat_models\openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 87, in agenerate_from_stream
    async for chunk in stream:
  File "C:\Users\Administrator\Desktop\langchain-master\venv\Lib\site-packages\langchain_community\chat_models\openai.py", line 493, in _astream
    if len(chunk["choices"]) == 0:
       ^^^^^^^^^^^^^^^^^^^^^
TypeError: object of type 'NoneType' has no len()
2024-04-28 14:41:17,744 - utils.py[line:40] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()

cstkn added the bug label on Apr 28, 2024

dosubot bot commented Apr 28, 2024

The error TypeError: object of type 'NoneType' has no len() means len() was called on a value that is None. It is raised by the line if len(chunk["choices"]) == 0:. Note that if chunk itself were None, subscripting it would instead raise "'NoneType' object is not subscriptable"; this particular message therefore indicates that chunk is a dict whose "choices" value is None (for example, the upstream response contained "choices": null).
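For reference, the exact message in the traceback can be reproduced in isolation by calling len() on None:

```python
choices = None  # what chunk["choices"] evaluates to on the failing request

try:
    if len(choices) == 0:  # same shape as the failing line in _astream
        pass
except TypeError as e:
    print(e)  # object of type 'NoneType' has no len()
```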

To address this, verify that chunk actually holds a non-empty "choices" list before taking its length. Inside the streaming loop you can guard like this:

if chunk is None or not chunk.get("choices"):
    continue  # skip chunks that are None, lack "choices", or carry an empty/None value

This guard covers every malformed shape (chunk is None, "choices" missing, "choices" set to None, or an empty list) before any len() call or indexing, preventing the TypeError.
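As a self-contained illustration (the chunk values below are invented; real chunks come from the streaming API), a guard of this kind skips the malformed shapes while normal chunks still flow through:

```python
# Simulated streaming chunks, including the malformed shapes seen in this issue.
stream = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": None},   # "choices" is null -> would crash len()
    None,                # whole chunk missing -> would crash indexing
    {"choices": []},     # empty choices -> nothing to read
    {"choices": [{"delta": {"content": " world"}}]},
]

parts = []
for chunk in stream:
    if chunk is None or not chunk.get("choices"):
        continue  # defensive guard: nothing usable in this chunk
    parts.append(chunk["choices"][0]["delta"]["content"])

print("".join(parts))  # Hello world
```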

If "choices" is missing or None, the upstream OpenAI-compatible endpoint (http://127.0.0.1:20000/v1/chat/completions in the log above) most likely returned an error or malformed payload for that request. Check the model worker's logs and the response handling around that endpoint for more insight.

For enhanced error handling, wrap the chunk processing in a try-except block and log the offending chunk; this gives more detailed insight into what actually arrived at runtime:

import logging

try:
    if chunk is None or not chunk.get("choices"):
        continue  # inside the streaming loop: skip malformed chunks
    # ... process chunk["choices"] here ...
except TypeError:
    logging.exception("Error while processing chunk: %r", chunk)

This will help identify why malformed chunks arrive and allow for more targeted troubleshooting.
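If you prefer to patch the failing site directly, the traceback points at langchain_community/chat_models/openai.py, line 493, in _astream. A small helper, shown here as a local stopgap sketch rather than an official fix, captures the whole check:

```python
def has_choices(chunk) -> bool:
    """True only when chunk is a dict carrying a non-empty "choices" list."""
    return isinstance(chunk, dict) and bool(chunk.get("choices"))

# The failing check at line 493,
#     if len(chunk["choices"]) == 0:
# can then be hardened to:
#     if not has_choices(chunk):

print(has_choices(None), has_choices({"choices": None}), has_choices({"choices": [{}]}))
# False False True
```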

