After finishing the environment setup, I encountered the following problem:
```
(localGPT) C:\gpt\localGPT>python run_localGPT.py --device_type cpu
2023-12-08 21:29:53,539 - INFO - run_localGPT.py:241 - Running on: cpu
2023-12-08 21:29:53,540 - INFO - run_localGPT.py:242 - Display Source Documents set to: False
2023-12-08 21:29:53,540 - INFO - run_localGPT.py:243 - Use history set to: False
2023-12-08 21:29:54,415 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-12-08 21:29:57,018 - INFO - run_localGPT.py:59 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2023-12-08 21:29:57,019 - INFO - run_localGPT.py:60 - This action can take a few minutes!
2023-12-08 21:29:57,019 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "C:\gpt\localGPT\run_localGPT.py", line 282, in <module>
    main()
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\gpt\localGPT\run_localGPT.py", line 249, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type=model_type)
  File "C:\gpt\localGPT\run_localGPT.py", line 150, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\langchain\chains\question_answering\__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\langchain\chains\question_answering\__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "C:\Users\Administrator\.conda\envs\localGPT\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)
```
Can anybody tell me how to solve this problem? Thanks.
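For context, the validation error means `LLMChain` received `llm=None`, i.e. the model loader returned nothing before the chain was built (on CPU this is commonly a missing or broken `llama-cpp-python` install, or an incomplete GGUF download). A minimal sketch of a guard that would surface the real cause earlier; `ensure_llm_loaded` and its messages are my own illustration, not localGPT's actual code:

```python
def ensure_llm_loaded(llm):
    """Hypothetical guard: fail with a clear message instead of letting
    pydantic reject llm=None deep inside LLMChain's validation."""
    if llm is None:
        raise RuntimeError(
            "Model failed to load (llm is None). Check that llama-cpp-python "
            "is installed in this environment and that the GGUF model file "
            "downloaded completely."
        )
    return llm


if __name__ == "__main__":
    # A loaded model passes through unchanged; None fails fast.
    print(ensure_llm_loaded("fake-llm-object"))
```

In `run_localGPT.py` such a check would sit between the model-loading call and `RetrievalQA.from_chain_type(...)`, turning the opaque pydantic error into an actionable one.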