
Fail hard if model can't be loaded. #504

Open · wants to merge 1 commit into main

Conversation

@simi (Contributor) commented Sep 20, 2023

When the model fails to load, it returns None, which later results in:

pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain

I had a hard time finding out what the problem was (in my case it was missing AVX2 CPU support). Surfacing the original load error message would make this much easier to follow and debug.
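To illustrate the failure mode being described, here is a minimal self-contained sketch (the function and model names are hypothetical, not the project's actual API): a loader that swallows the real error and returns None produces a confusing failure far from the root cause.

```python
# Hypothetical sketch of the reported problem: the real load error is
# swallowed, and None propagates until something unrelated-looking breaks.

def load_model(path):
    try:
        raise OSError("CPU lacks AVX2 support")  # stand-in for the real load failure
    except OSError:
        return None  # original behavior: error swallowed, None returned

model = load_model("ggml-model.bin")  # illustrative path

# Much later, a cryptic error fires because model is None:
try:
    model.generate("hello")
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'generate'
```

The user sees the downstream error (here an AttributeError; in the report, a pydantic ValidationError) with no hint that the root cause was the load failure.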

@kime541200

I have the same problem, is there any solution?

@simi (Contributor, Author) commented Sep 21, 2023

> I have the same problem, is there any solution?

In my case the problem was an old CPU that is not compatible with AVX2 instructions. Compiling llama-cpp-python with AVX2 off fixed the problem:

CMAKE_ARGS="-DLLAMA_AVX2=off" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir

@matinlotfali

The PR looks similar to #491

@simi (Contributor, Author) commented Sep 28, 2023

@matinlotfali yup, it is. I'm not sure how much sense it makes to continue (returning None) if the model load fails. In this PR I changed it to fail hard (and not try to continue) to prevent "cryptic" errors later (caused by no model being present). #491 adds a warning but continues.
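The two approaches being compared can be sketched side by side. This is an illustrative stand-in, not the actual GPT4All code; `ModelLoadError`, `load_model`, and the `strict` flag are hypothetical names.

```python
# Hypothetical sketch: fail-hard (this PR) vs warn-and-continue (#491).

class ModelLoadError(RuntimeError):
    """Raised when the underlying model fails to load (illustrative)."""

def load_model(path, strict=True):
    try:
        raise OSError("CPU lacks AVX2 support")  # stand-in for the real failure
    except OSError as err:
        if strict:
            # Fail hard: surface the original error immediately, chained
            # so the root cause stays visible in the traceback.
            raise ModelLoadError(f"failed to load model at {path}") from err
        # Warn and continue: caller must handle None themselves.
        print(f"Warning: failed to load model at {path}: {err}")
        return None
```

A combination of both PRs, as suggested below, could amount to making a behavior like `strict` the default while still logging the underlying error before raising.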

@matinlotfali

We need a combination of both PRs

@ghrahul

ghrahul commented Oct 12, 2023

Hi @matinlotfali, I have updated this PR to combine both approaches. Please review. Thanks!

#491
