
[BUG] Chat fails when using a proxy (科学上网): API communication error: peer closed connection without sending complete message body (incomplete chunked read) #3950

Open
jiasu-hezhip opened this issue May 6, 2024 · 1 comment
Assignees: zRzRzRzRzRzRzR
Labels: bug (Something isn't working)

Comments

@jiasu-hezhip

Problem Description
I'm using Clash as a proxy (科学上网), configured manually as follows:
(screenshot: manual proxy configuration)
On first opening the app it errored because api.list_running_models() returned None.
I resolved that by modifying def get_httpx_client in server/utils.py as follows:
proxies = {
    "http://": "socks5://127.0.0.1:7891",
    "https://": "socks5://127.0.0.1:7891"
}
kwargs.update(timeout=timeout, proxies=proxies)
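(For context, the change above plugs into get_httpx_client roughly as sketched below. This is a simplified sketch, not the actual function from server/utils.py: the real signature takes more parameters, and socks5:// proxy URLs require the httpx socks extra, i.e. pip install "httpx[socks]".)

```python
# Simplified sketch; the real get_httpx_client in server/utils.py has a longer
# signature. socks5:// proxies need: pip install "httpx[socks]"
# Note: on httpx >= 0.26 the proxies= argument is deprecated in favour of mounts=.
import httpx

def get_httpx_client(use_async: bool = False, timeout: float = 300.0):
    proxies = {
        "http://": "socks5://127.0.0.1:7891",   # Clash socks5 port from this report
        "https://": "socks5://127.0.0.1:7891",
    }
    kwargs = dict(timeout=timeout, proxies=proxies)
    # Return an async or sync client depending on the caller.
    return httpx.AsyncClient(**kwargs) if use_async else httpx.Client(**kwargs)
```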
However, after asking a question I now get: API communication error (API通信遇到错误): peer closed connection without sending complete message body (incomplete chunked read).
I'm not sure what else needs to change; any guidance would be appreciated.
Full log below:
==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-6.5.0-18-generic-x86_64-with-glibc2.35.
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Running LLM models: ['Qwen1.5-7B-Chat'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/jiasu/model/Qwen/Qwen1.5-7B-Chat',
'model_path_exists': True,
'port': 20002}
Current Embeddings model: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================

2024-05-06 17:01:11,878 - startup.py[line:655] - INFO: Starting services:
2024-05-06 17:01:11,878 - startup.py[line:656] - INFO: To view the llm_api logs, go to /home/jiasu/pythonproject/Langchain-Chatchat-master/logs
/root/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: Model startup will be rewritten for Langchain-Chatchat 0.3.x with support for more modes and faster startup; the related functionality in 0.2.x will be deprecated
warn_deprecated(
2024-05-06 17:01:14 | ERROR | stderr | INFO: Started server process [8067]
2024-05-06 17:01:14 | ERROR | stderr | INFO: Waiting for application startup.
2024-05-06 17:01:14 | ERROR | stderr | INFO: Application startup complete.
2024-05-06 17:01:14 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-05-06 17:01:14 | INFO | model_worker | Loading the model ['Qwen1.5-7B-Chat'] on worker 17d17830 ...
Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 25%|████████████████████████▊ | 1/4 [00:00<00:01, 2.44it/s]
Loading checkpoint shards: 50%|█████████████████████████████████████████████████▌ | 2/4 [00:00<00:00, 2.45it/s]
Loading checkpoint shards: 75%|██████████████████████████████████████████████████████████████████████████▎ | 3/4 [00:01<00:00, 2.40it/s]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.50it/s]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.47it/s]
2024-05-06 17:01:16 | ERROR | stderr |
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-05-06 17:01:18 | INFO | model_worker | Register to controller
INFO: Started server process [8179]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-6.5.0-18-generic-x86_64-with-glibc2.35.
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Running LLM models: ['Qwen1.5-7B-Chat'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/jiasu/model/Qwen/Qwen1.5-7B-Chat',
'model_path_exists': True,
'port': 20002}
Current Embeddings model: bge-large-zh-v1.5 @ cuda

Server runtime information:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

2024-05-06 17:47:22,635 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58916 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-05-06 17:47:22,637 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-05-06 17:47:22,785 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58916 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-05-06 17:47:22,787 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58916 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-05-06 17:47:22,793 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-05-06 17:47:42,547 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49492 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-05-06 17:47:42,549 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-05-06 17:47:42,594 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49492 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-05-06 17:47:42,597 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49492 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-05-06 17:47:42,604 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49492 - "POST /chat/chat HTTP/1.1" 200 OK
/root/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
warn_deprecated(
2024-05-06 17:47:42,638 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 269, in call
await wrap(partial(self.listen_for_disconnect, receive))
File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 580, in receive
await self.message_event.wait()
File "/root/anaconda3/envs/langchain/lib/python3.11/asyncio/locks.py", line 213, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fc3afefb390

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
  |     return await self.app(scope, receive, send)
  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/applications.py", line 119, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__
  |     await self.app(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 762, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 782, in app
  |     await route.handle(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
  |     await self.app(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
  |     await response(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    |     await func()
    |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
    |     async for data in self.body_iterator:
    |   File "/home/jiasu/pythonproject/Langchain-Chatchat-master/server/chat/chat.py", line 53, in chat_iterator
    |     model = get_ChatOpenAI(
    |             ^^^^^^^^^^^^^^^
    |   File "/home/jiasu/pythonproject/Langchain-Chatchat-master/server/utils.py", line 60, in get_ChatOpenAI
    |     model = ChatOpenAI(
    |             ^^^^^^^^^^^
    |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 180, in warn_if_direct_instance
    |     return wrapped(self, *args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    |     super().__init__(**kwargs)
    |   File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
    | pydantic.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
    | __root__
    |   Unknown scheme for proxy URL URL('socks://127.0.0.1:7891/') (type=value_error)
    +------------------------------------
2024-05-06 17:47:42,640 - utils.py[line:197] - ERROR: RemoteProtocolError: API communication error (API通信遇到错误): peer closed connection without sending complete message body (incomplete chunked read)
jiasu-hezhip added the bug (Something isn't working) label on May 6, 2024

dosubot bot commented May 6, 2024

The error you're encountering, "API communication error: peer closed connection without sending complete message body (incomplete chunked read)," means the connection was closed before the client received the complete response body. This can stem from several factors, including proxy misconfiguration, timeout settings, or mishandling of HTTP/1.1 chunked transfer encoding. In the log above, the streamed /chat/chat response ends early because constructing ChatOpenAI fails with a proxy validation error (Unknown scheme for proxy URL URL('socks://127.0.0.1:7891/')), which points back at the proxy configuration.

To address this issue, consider the following steps:

  1. Verify Proxy Configuration: Ensure your proxy is correctly set up to handle HTTP/1.1 chunked transfer encoding. Incorrect proxy settings can lead to incomplete reads.

  2. Increase Timeout Settings: Check and possibly increase the timeout settings in both your proxy and application. A low timeout setting might not accommodate longer request processing times.

  3. Check for Content-Length Header: If the response is being modified by the proxy or your application, ensure the Content-Length header accurately reflects the response size. An incorrect Content-Length can cause the client to expect more data, resulting in an incomplete read error.

  4. Review Chunked Transfer Encoding Handling: Make sure both the proxy and your application correctly process chunked transfer encoding. Issues in handling chunked responses could lead to the observed error.

  5. Inspect Application and Proxy Logs: Detailed logs from your application and the proxy may reveal anomalies or errors that point to the source of the communication breakdown.

  6. Test Without Proxy: As a diagnostic step, try bypassing the proxy, at least for the project's own local endpoints (see the sketch after this list), and check whether the issue persists. If the error does not occur without the proxy, the problem likely resides in the proxy's request or response handling.

By systematically going through these steps, you should be able to identify and resolve the underlying cause of the "incomplete chunked read" error.
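As a concrete illustration of steps 1 and 6, one way to keep the proxy for external traffic while exempting the project's own local endpoints is httpx's per-host proxy routing. This is only a sketch under the configuration described in this issue (Clash socks5 on 127.0.0.1:7891, local servers on 127.0.0.1/localhost); it is not the project's actual code, and socks5:// support requires the httpx socks extra:

```python
# Sketch: route external requests through Clash's socks5 port, but never proxy the
# local Langchain-Chatchat / FastChat servers (ports 20000, 20001, 7861 in the log).
# Needs: pip install "httpx[socks]". On httpx >= 0.26, proxies= is deprecated in
# favour of mounts=, but the routing idea is the same.
import httpx

proxies = {
    "all://": "socks5://127.0.0.1:7891",  # default route: through the proxy
    "all://127.0.0.1": None,              # loopback traffic bypasses the proxy
    "all://localhost": None,
}

with httpx.Client(proxies=proxies, timeout=300.0) as client:
    # Endpoint taken from the log above; this request goes straight to the local
    # Chatchat API server rather than through the socks5 proxy.
    resp = client.post("http://127.0.0.1:7861/llm_model/list_running_models", json={})
    print(resp.status_code)
```

A related check: the `Unknown scheme for proxy URL URL('socks://127.0.0.1:7891/')` error in the traceback suggests a proxy URL is also being picked up from the environment (Clash setups often export all_proxy=socks://...), and httpx only understands http, https, and socks5-style proxy schemes, not socks://. Changing that variable to socks5:// (with the socks extra installed), or unsetting it before starting the services and relying on NO_PROXY=127.0.0.1,localhost for local traffic, should at least remove that validation error.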


zRzRzRzRzRzRzR self-assigned this on May 7, 2024