Launching startup.py -p #3900
Comments
It looks like you're trying to use the `startup.py` script. Ensure that your environment meets all the necessary prerequisites for the project. If you're following all the setup instructions correctly and still facing issues, check for any error messages or exceptions thrown when executing the script.
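The prerequisite check suggested above can be sketched as a small script. This is a minimal sketch, not part of Langchain-Chatchat itself; the package names are taken from the startup banner below, and note that FastChat is distributed on PyPI as `fschat`, so both names are tried.

```python
import sys
from importlib.metadata import version, PackageNotFoundError

def check_python(minimum=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

def installed_version(package):
    """Return the installed version string, or None if the package is missing."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    print("python ok:", check_python())
    # Package names seen in the startup banner; "fschat" is FastChat's PyPI name.
    for pkg in ("langchain", "fschat", "fastchat"):
        print(pkg, "->", installed_version(pkg) or "NOT INSTALLED")
```

Running this inside the `langchain_chat_env` environment quickly shows whether the versions match what the banner reports (langchain 0.0.354, fastchat 0.2.35).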
(langchain_chat_env) root@3b31cc983ec9:/home/langchain/Langchain-Chatchat-master# python startup.py -p
==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-5.19.0-42-generic-x86_64-with-glibc2.31.
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354, fastchat version: 0.2.35
Text splitter in use: ChineseRecursiveTextSplitter
LLM model(s) being started: ['ollama-api'] @ cuda
{'api_base_url': 'http://192.168.1.110:11434',
'api_key': 'ollama',
'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_name': 'llama3',
'online_api': True,
'openai_proxy': '',
'port': 11434}
Current embeddings model: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================
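Since the configuration above routes the `ollama-api` worker at `http://192.168.1.110:11434`, it is worth confirming that the Ollama server is actually reachable before debugging startup.py itself. A minimal probe, assuming Ollama's standard `/api/tags` model-list endpoint:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def tags_url(api_base_url):
    """Build the Ollama model-list endpoint from the configured base URL."""
    return api_base_url.rstrip("/") + "/api/tags"

def list_models(api_base_url, timeout=5):
    """Return the model names Ollama reports, or None if unreachable."""
    try:
        with urlopen(tags_url(api_base_url), timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return None
```

If `list_models("http://192.168.1.110:11434")` returns None, the problem is network reachability rather than Langchain-Chatchat; if it returns a list that does not include `llama3`, the model still needs to be pulled on the Ollama side.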
2024-04-26 16:14:21,549 - startup.py[line:655] - INFO: Starting services:
2024-04-26 16:14:21,549 - startup.py[line:656] - INFO: To view llm_api logs, go to /home/langchain/Langchain-Chatchat-master/logs
usage: startup.py [-h] [-a] [--all-api] [--llm-api] [-o] [-m] [-n MODEL_NAME [MODEL_NAME ...]] [-c CONTROLLER_ADDRESS]
[--api] [-p] [-w] [-q] [-i]
options:
-h, --help show this help message and exit
-a, --all-webui run fastchat's controller/openai_api/model_worker servers, run api.py and webui.py
--all-api run fastchat's controller/openai_api/model_worker servers, run api.py
--llm-api run fastchat's controller/openai_api/model_worker servers
-o, --openai-api run fastchat's controller/openai_api servers
-m, --model-worker run fastchat's model_worker server with specified model name. specify --model-name if not
using default LLM_MODELS
-n MODEL_NAME [MODEL_NAME ...], --model-name MODEL_NAME [MODEL_NAME ...]
specify model name for model worker. add additional names separated by spaces to start multiple
model workers.
-c CONTROLLER_ADDRESS, --controller CONTROLLER_ADDRESS
specify controller address the worker is registered to. default is FSCHAT_CONTROLLER
--api run api.py server
-p, --api-worker run online model api such as zhipuai
-w, --webui run webui.py server
-q, --quiet              reduce fastchat service log output
-i, --lite               run in Lite mode: only online-API LLM chat and search-engine chat are supported
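The usage message above explains the behavior: per the option table, `-p, --api-worker` only enables the online-model API worker, so running `python startup.py -p` does not start the controller, api.py, or webui.py. A minimal reconstruction of the option table (assuming, as the usage line suggests, that each switch is a plain boolean flag; the real startup.py attaches more behavior to each one):

```python
import argparse

# Sketch of the flags shown in the usage message; names match the help text.
parser = argparse.ArgumentParser(prog="startup.py")
parser.add_argument("-a", "--all-webui", action="store_true")
parser.add_argument("--all-api", action="store_true")
parser.add_argument("--llm-api", action="store_true")
parser.add_argument("-o", "--openai-api", action="store_true")
parser.add_argument("-m", "--model-worker", action="store_true")
parser.add_argument("-n", "--model-name", nargs="+")
parser.add_argument("-c", "--controller")
parser.add_argument("--api", action="store_true")
parser.add_argument("-p", "--api-worker", action="store_true")
parser.add_argument("-w", "--webui", action="store_true")
parser.add_argument("-q", "--quiet", action="store_true")
parser.add_argument("-i", "--lite", action="store_true")

args = parser.parse_args(["-p"])
# `-p` alone only flags the online-API worker; api.py and webui.py stay off,
# which is why `-a` (all services) is the usual way to launch the full stack.
print(args.api_worker, args.all_webui, args.api)
```

So to get a working web UI on top of the `ollama-api` worker, `python startup.py -a` is the invocation listed for starting the controller, model worker, api.py, and webui.py together.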