ModuleNotFoundError: No module named 'configs' #3894

Closed
ChirdAnti-AK opened this issue Apr 26, 2024 · 1 comment
Labels: bug (Something isn't working)
ChirdAnti-AK commented Apr 26, 2024

python server/llm_api.py
Traceback (most recent call last):
  File "D:\NLP\Langchain-Chatchat\server\llm_api.py", line 4, in <module>
    from configs import logger, log_verbose, LLM_MODELS, HTTPX_DEFAULT_TIMEOUT
ModuleNotFoundError: No module named 'configs'
However, the source code runs without this error when launched directly (e.g. via Run in the IDE).
Source code:

from fastapi import Body
import sys
sys.path.append(r"D:\NLP\Langchain-Chatchat\configs")
from configs import logger, log_verbose, LLM_MODELS, HTTPX_DEFAULT_TIMEOUT
from server.utils import (BaseResponse, fschat_controller_address, list_config_llm_models,
                          get_httpx_client, get_model_worker_config)
from typing import List


def list_running_models(
    controller_address: str = Body(None, description="Fastchat controller server address", examples=[fschat_controller_address()]),
    placeholder: str = Body(None, description="Unused; placeholder only"),
) -> BaseResponse:
    '''
    Get the list of loaded models and their configurations from the fastchat controller.
    '''
    try:
        controller_address = controller_address or fschat_controller_address()
        with get_httpx_client() as client:
            r = client.post(controller_address + "/list_models")
            models = r.json()["models"]
            data = {m: get_model_config(m).data for m in models}
            return BaseResponse(data=data)
    except Exception as e:
        logger.error(f'{e.__class__.__name__}: {e}',
                     exc_info=e if log_verbose else None)
        return BaseResponse(
            code=500,
            data={},
            msg=f"failed to get available models from controller: {controller_address}. Error message: {e}")


def list_config_models(
    types: List[str] = Body(["local", "online"], description="Model configuration categories, e.g. local, online, worker"),
    placeholder: str = Body(None, description="Placeholder; no effect")
) -> BaseResponse:
    '''
    Get the list of models configured locally in configs.
    '''
    data = {}
    for type, models in list_config_llm_models().items():
        if type in types:
            data[type] = {m: get_model_config(m).data for m in models}
    return BaseResponse(data=data)


def get_model_config(
    model_name: str = Body(description="Name of the LLM model in the configuration"),
    placeholder: str = Body(None, description="Placeholder; no effect")
) -> BaseResponse:
    '''
    Get the (merged) configuration of an LLM model.
    '''
    config = {}
    # strip sensitive information from the ONLINE_MODEL configuration
    for k, v in get_model_worker_config(model_name=model_name).items():
        if not (k == "worker_class"
                or "key" in k.lower()
                or "secret" in k.lower()
                or k.lower().endswith("id")):
            config[k] = v

    return BaseResponse(data=config)


def stop_llm_model(
    model_name: str = Body(..., description="Name of the LLM model to stop", examples=[LLM_MODELS[0]]),
    controller_address: str = Body(None, description="Fastchat controller server address", examples=[fschat_controller_address()])
) -> BaseResponse:
    '''
    Ask the fastchat controller to stop an LLM model.
    Note: because of how Fastchat is implemented, this actually stops the model_worker hosting the LLM model.
    '''
    try:
        controller_address = controller_address or fschat_controller_address()
        with get_httpx_client() as client:
            r = client.post(
                controller_address + "/release_worker",
                json={"model_name": model_name},
            )
            return r.json()
    except Exception as e:
        logger.error(f'{e.__class__.__name__}: {e}',
                     exc_info=e if log_verbose else None)
        return BaseResponse(
            code=500,
            msg=f"failed to stop LLM model {model_name} from controller: {controller_address}. Error message: {e}")


def change_llm_model(
    model_name: str = Body(..., description="Currently running model", examples=[LLM_MODELS[0]]),
    new_model_name: str = Body(..., description="New model to switch to", examples=[LLM_MODELS[0]]),
    controller_address: str = Body(None, description="Fastchat controller server address", examples=[fschat_controller_address()])
):
    '''
    Ask the fastchat controller to switch to another LLM model.
    '''
    try:
        controller_address = controller_address or fschat_controller_address()
        with get_httpx_client() as client:
            r = client.post(
                controller_address + "/release_worker",
                json={"model_name": model_name, "new_model_name": new_model_name},
                timeout=HTTPX_DEFAULT_TIMEOUT,  # wait for new worker_model
            )
            return r.json()
    except Exception as e:
        logger.error(f'{e.__class__.__name__}: {e}',
                     exc_info=e if log_verbose else None)
        return BaseResponse(
            code=500,
            msg=f"failed to switch LLM model from controller: {controller_address}. Error message: {e}")


def list_search_engines() -> BaseResponse:
    from server.chat.search_engine_chat import SEARCH_ENGINES

    return BaseResponse(data=list(SEARCH_ENGINES))
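
A note on the sys.path workaround above: `from configs import ...` needs the directory that *contains* the `configs` package (the project root) on sys.path, not the `configs` directory itself, and the same applies to `from server.utils import ...`. A minimal sketch of a corrected workaround, assuming the layout implied by the traceback (llm_api.py in <project root>\server\):

import sys
from pathlib import Path

# llm_api.py lives in <project root>/server/, so the root is one level up.
# Putting the root (the directory that CONTAINS the "configs" and "server"
# packages) on sys.path lets both imports resolve regardless of the
# working directory the script is launched from.
ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(ROOT))

from configs import logger, log_verbose, LLM_MODELS, HTTPX_DEFAULT_TIMEOUT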
ChirdAnti-AK added the bug label on Apr 26, 2024

dosubot bot commented Apr 26, 2024

To address the ModuleNotFoundError for the 'configs' module in llm_api.py, consider a more robust fix than manually appending to sys.path. The error occurs when Python cannot find the module on its search path, which is seeded from the directory of the script being run (here, server/) rather than the project root. A better approach is to structure the project so Python can resolve 'configs' naturally: keep the script and the 'configs' package under a common root, declare packages with __init__.py files so Python treats the directories as packages, and launch from the project root. Using a virtual environment for dependency management also helps avoid path issues by isolating the project's dependencies from the global Python environment.
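
For instance, with the repository laid out as the traceback suggests (a minimal sketch; the __init__.py placement is an assumption about how the packages are declared):

Langchain-Chatchat/
├── configs/
│   └── __init__.py      # defines logger, log_verbose, LLM_MODELS, ...
└── server/
    ├── __init__.py
    └── llm_api.py

With that structure, launching the script from the project root as a module, i.e. python -m server.llm_api instead of python server/llm_api.py, prepends the root to sys.path automatically, so from configs import ... resolves without any manual sys.path edits.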

