Running `python startup.py -a` only prints a single port number #3889
Comments
Your service has successfully started on port 20002, as indicated by the log messages. The deprecation warning about model startup functionality (to be rewritten in Langchain-Chatchat 0.3.x) and the TypedStorage warning from PyTorch are informational and do not prevent the service from starting.
Test thoroughly to make sure these warnings do not affect your application's functionality. For adjustments or further issues, the Langchain-Chatchat documentation and community support channels can provide specific guidance.
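To confirm the worker really is listening, a quick TCP probe against the configured host and port can help. This is a minimal sketch, not part of the project; the host `127.0.0.1` and port `20002` are taken from the configuration dump in this thread:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the model worker port from the config dump
# print(port_is_open("127.0.0.1", 20002))
```

If this returns False while the log claims the service started, the process may have bound a different port or exited after the banner was printed.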
I deployed the model locally and am not using an online API.
Current text splitter: ChineseRecursiveTextSplitter
2024-04-26 14:50:19,705 - startup.py[line:655] - INFO: Starting services:
```
$ python server/llm_api.py
def list_running_models(
def list_config_models(
def get_model_config(
def stop_llm_model(
def change_llm_model(
def list_search_engines() -> BaseResponse:
```
Your dependencies are probably not installed correctly, or the model files are incomplete.
==============================Langchain-Chatchat Configuration==============================
OS: Windows-10-10.0.22631-SP0.
Python version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35
Current text splitter: ChineseRecursiveTextSplitter
LLM models being started: ['chatglm2-6b-int4', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cpu',
'host': '127.0.0.1',
'infer_turbo': False,
'model_path': 'D:\\NLP\\Langchain-Chatchat\\chatglm2-6b-int4',
'model_path_exists': True,
'port': 20002}
{'api_key': '',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'online_api': True,
'port': 21001,
'provider': 'ChatGLMWorker',
'version': 'glm-4',
'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1',
'api_key': '',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'model_name': 'gpt-4',
'online_api': True,
'openai_proxy': '',
'port': 20002}
Current Embeddings model: m3e-base @ cpu
==============================Langchain-Chatchat Configuration==============================
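Note that in the configuration dump above, both the local chatglm2-6b-int4 worker and the openai-api entry list 'port': 20002, which could explain why only one port number shows up in the startup log. A small sketch (not project code) for spotting such collisions in a set of worker configs; the dicts below mirror the ones printed above, trimmed to the relevant key:

```python
from collections import defaultdict

def find_port_collisions(workers: dict[str, dict]) -> dict[int, list[str]]:
    """Map each port to the workers configured on it; keep only collisions."""
    by_port: dict[int, list[str]] = defaultdict(list)
    for name, cfg in workers.items():
        by_port[cfg["port"]].append(name)
    return {port: names for port, names in by_port.items() if len(names) > 1}

# Ports taken from the configuration dump in this issue
workers = {
    "chatglm2-6b-int4": {"port": 20002},
    "zhipu-api": {"port": 21001},
    "openai-api": {"port": 20002},
}
print(find_port_collisions(workers))  # {20002: ['chatglm2-6b-int4', 'openai-api']}
```

Giving each worker a distinct port in the FSCHAT_MODEL_WORKERS configuration would be the first thing to try if this collision is the cause.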
2024-04-26 11:20:41,284 - startup.py[line:655] - INFO: Starting services:
2024-04-26 11:20:41,284 - startup.py[line:656] - INFO: To view the llm_api logs, go to D:\NLP\Langchain-Chatchat\logs
D:\Learning\miniconda3\envs\LangChain-ChatChat\Lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: Model startup functionality will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
warn_deprecated(
2024-04-26 11:20:46 | INFO | model_worker | Register to controller
2024-04-26 11:20:46 | ERROR | stderr | INFO: Started server process [18456]
2024-04-26 11:20:46 | ERROR | stderr | INFO: Waiting for application startup.
2024-04-26 11:20:46 | ERROR | stderr | INFO: Application startup complete.
2024-04-26 11:20:46 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:20002 (Press CTRL+C to quit)
2024-04-26 11:20:46 | INFO | model_worker | Loading the model ['chatglm2-6b-int4'] on worker 66924f22 ...
2024-04-26 11:20:47 | ERROR | stderr | D:\Learning\miniconda3\envs\LangChain-ChatChat\Lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
2024-04-26 11:20:47 | ERROR | stderr | return self.fget.__get__(instance, owner)()