[BUG] Concise description of the issue #3904

Closed
paidaxinbao opened this issue Apr 26, 2024 · 1 comment
@paidaxinbao

Problem Description
After the web UI starts, the model does not display any dialogue.

Steps to Reproduce

  1. Run `python startup.py -a`

Expected Result

Actual Result
No dialogue is produced on the web page.

Environment Information
==============================Langchain-Chatchat Configuration==============================
OS: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17.
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/pxy/.cache/huggingface/hub/THUDM/chatglm3-6b',
'model_path_exists': True,
'port': 20002}
Current Embeddings model: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================

2024-04-26 17:36:39,024 - startup.py[line:655] - INFO: Starting services:
2024-04-26 17:36:39,024 - startup.py[line:656] - INFO: To view llm_api logs, go to /home/pxy/Langchain-Chatchat/logs
/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The model startup functionality will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
warn_deprecated(
2024-04-26 17:36:43 | ERROR | stderr | INFO: Started server process [74012]
2024-04-26 17:36:43 | ERROR | stderr | INFO: Waiting for application startup.
2024-04-26 17:36:43 | ERROR | stderr | INFO: Application startup complete.
2024-04-26 17:36:43 | ERROR | stderr | ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 20000): address already in use
2024-04-26 17:36:43 | ERROR | stderr | INFO: Waiting for application shutdown.
2024-04-26 17:36:43 | ERROR | stderr | INFO: Application shutdown complete.
2024-04-26 17:36:43 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 75f76125 ...
2024-04-26 17:36:43 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-04-26 17:36:43 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-04-26 17:36:43 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|████████| 7/7 [00:02<00:00, 2.98it/s]
2024-04-26 17:36:46 | ERROR | stderr |
2024-04-26 17:36:49 | INFO | model_worker | Register to controller
INFO: Started server process [74571]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
OS: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17.
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/pxy/.cache/huggingface/hub/THUDM/chatglm3-6b',
'model_path_exists': True,
'port': 20002}
Current Embeddings model: bge-large-zh @ cuda

Server runtime information:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

2024-04-26 17:36:56.653 Port 8501 is already in use
2024-04-26 17:37:22,269 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46932 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-26 17:37:22,272 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-26 17:37:22,389 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46932 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-26 17:37:22,391 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46932 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-26 17:37:22,414 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-04-26 17:37:26,198 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46952 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-26 17:37:26,201 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-26 17:37:26,227 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46952 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-26 17:37:26,229 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46952 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-26 17:37:26,246 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46952 - "POST /chat/chat HTTP/1.1" 200 OK
/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
warn_deprecated(
2024-04-26 17:37:26,354 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-04-26 17:37:26,378 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 0.788325 seconds
2024-04-26 17:37:27,172 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 1.532114 seconds
2024-04-26 17:37:28,711 - utils.py[line:38] - ERROR: Connection error.
Traceback (most recent call last):
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 371, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
raise exc from None
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/connection.py", line 101, in handle_async_request
return await self._connection.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/http11.py", line 143, in handle_async_request
raise exc
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/http11.py", line 113, in handle_async_request
) = await self._receive_response_headers(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/http11.py", line 186, in _receive_response_headers
event = await self._receive_event(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpcore/_async/http11.py", line 238, in _receive_event
raise RemoteProtocolError(msg)
httpcore.RemoteProtocolError: Server disconnected without sending a response.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1435, in _request
response = await self._client.send(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_client.py", line 1646, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_client.py", line 1674, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_client.py", line 1711, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_client.py", line 1748, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 370, in handle_async_request
with map_httpcore_exceptions():
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/contextlib.py", line 155, in exit
self.gen.throw(typ, value, traceback)
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.RemoteProtocolError: Server disconnected without sending a response.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/pxy/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
await fn
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain/chains/base.py", line 385, in acall
raise e
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain/chains/base.py", line 379, in acall
await self._acall(inputs, run_manager=run_manager)
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain/chains/llm.py", line 275, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain/chains/llm.py", line 142, in agenerate
return await self.llm.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 554, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 514, in agenerate
raise exceptions[0]
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 617, in _agenerate_with_cache
return await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
return await agenerate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 87, in agenerate_from_stream
async for chunk in stream:
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 488, in _astream
async for chunk in await acompletion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 105, in acompletion_with_retry
return await llm.async_client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1300, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1703, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1406, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1459, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1528, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1459, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1528, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/pxy/miniconda3/envs/langchain-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1469, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
2024-04-26 17:37:28,714 - utils.py[line:40] - ERROR: APIConnectionError: Caught exception: Connection error.

Additional Information
I looked at the suggested fixes for the CPU-startup issue, but I am starting on GPU, deployed on a server. After the second round of dialogue, the error shown above appears.

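To check whether the OpenAI API server that the chat requests go through is actually reachable, I can run a minimal probe (a sketch of my own, not part of the project; it assumes the default endpoint http://127.0.0.1:20000/v1 from the banner above):

```python
# Sketch: probe the OpenAI-compatible endpoint from the startup banner above.
# Assumption: the server should answer on 127.0.0.1:20000 (see server logs).
import httpx

try:
    resp = httpx.get("http://127.0.0.1:20000/v1/models", timeout=10)
    print(resp.status_code, resp.text[:200])
except httpx.HTTPError as exc:
    print("OpenAI API server unreachable:", exc)
```

If this probe fails, it matches the earlier bind failure on port 20000 in the startup log.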

@paidaxinbao paidaxinbao added the bug Something isn't working label Apr 26, 2024

dosubot bot commented Apr 26, 2024

To address the issues you're encountering with your Langchain-Chatchat setup, here are some targeted steps:

  1. Address Already in Use Error: The log shows that port 20000 (the OpenAI API server) and port 8501 (the WebUI) are already occupied, so the newly started services never bind. Use lsof -i :PORT to identify and terminate the offending processes, or adjust the port settings in configs/server_config.py to use free ports; a port-check sketch follows after this list.

  2. LangChainDeprecationWarning: The warning about model startup functionality deprecation in version 0.3.x of Langchain-Chatchat suggests preparing for future updates by familiarizing yourself with upcoming changes. This warning itself doesn't directly impact the current issue but is important for future compatibility.

  3. RemoteProtocolError and APIConnectionError: These errors can stem from unstable connections or misconfiguration, but in this log they line up with the failed bind on port 20000: the chat request is retried against an OpenAI API server that never actually started. Check your server's network settings and consider increasing HTTPX_DEFAULT_TIMEOUT in the configuration to accommodate slower networks or long model-loading times (see the config sketch below).

  4. GPU Configuration: Ensure your GPU is correctly configured for use by the server. Verify that LLM_DEVICE is set to "cuda" so GPU acceleration is used, and adjust GPU-related settings as needed (see the sketch below).

  5. Debugging Tips:

    • Review the service logs (or Docker container logs, if containerized) for additional error details.
    • Confirm all dependencies are installed and the environment, including GPU access, is correctly configured.
    • Ensure there are no port conflicts with other applications or containers and that the necessary network permissions are in place.
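As a quick alternative to lsof, a minimal standard-library sketch (port numbers taken from the configuration dump above; adjust to your setup) reports which of the relevant ports are already taken:

```python
# Sketch: report which of the default Langchain-Chatchat ports are in use.
# Port numbers come from the configuration dump above; adjust as needed.
import socket

for port in (20000, 20001, 20002, 7861, 8501):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
        print(f"port {port}: {'in use' if in_use else 'free'}")
```

Any port reported as in use before startup.py runs usually points to a stale process left over from a previous launch.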

By following these steps and adjusting your setup accordingly, you should be able to resolve the issues preventing dialogue generation on your webpage.
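For steps 3 and 4, a sketch of the relevant settings (HTTPX_DEFAULT_TIMEOUT and LLM_DEVICE are the names used above; the exact files may differ between 0.2.x releases):

```python
# Sketch of the relevant config values; exact file may vary by release
# (commonly server_config.py / model_config.py in 0.2.x):
HTTPX_DEFAULT_TIMEOUT = 300.0  # seconds; raise for slow networks or model loads

LLM_DEVICE = "cuda"  # keep GPU inference selected, matching the log above
```

Before restarting, running python -c "import torch; print(torch.cuda.is_available())" is a quick way to confirm the GPU is visible to PyTorch.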


@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this May 5, 2024