
Mistral not supported #778

Open
testercell opened this issue Mar 29, 2024 · 2 comments

@testercell

I'm trying to use the following as the model ID and base name:
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"

But when running run_localgpt.py I get the following error:
\miniconda3\Lib\site-packages\auto_gptq\modeling\_utils.py", line 147, in check_and_get_model_type
raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: mistral isn't supported yet.

Any help is super appreciated!!
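
The check that raises this is auto_gptq's check_and_get_model_type (visible in the traceback), which rejects any config.model_type that is not in the library's supported-model list, so the error means the installed auto-gptq predates Mistral support. Note also that the MODEL_BASENAME above names a WizardLM file rather than one from the Mistral repo; the next comment uses model.safetensors for this MODEL_ID. A minimal preflight sketch, assuming Mistral support exists in newer auto-gptq releases (the exact minimum version is an assumption; upgrading with pip install --upgrade auto-gptq is the usual fix):

# Sketch: check whether the installed auto-gptq recognizes Mistral.
# Assumes a newer auto-gptq release adds Mistral; if this still raises
# TypeError, upgrade first: pip install --upgrade auto-gptq
from auto_gptq.modeling._utils import check_and_get_model_type

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
print(check_and_get_model_type(MODEL_ID))  # prints "mistral" once supported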

@FinlandBreakfast

What is your OS?
I set the following

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "model.safetensors"

and got this

logging.INFO("GPTQ models will NOT work on Mac devices. Please choose a different model.")
TypeError: 'int' object is not callable
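
This second error is a bug in the logging call itself: logging.INFO is the integer level constant 20, not a function, so calling it raises TypeError: 'int' object is not callable. The intended call is the lowercase logging.info(). A minimal sketch of the fix:

import logging

# Buggy: logging.INFO is the integer constant 20, so calling it fails
# logging.INFO("GPTQ models will NOT work on Mac devices. ...")  # TypeError

# Fixed: use the module-level logging.info() function instead
logging.info("GPTQ models will NOT work on Mac devices. Please choose a different model.")

The underlying message still stands: GPTQ models will not work on a Mac, so the Mistral config above fails for a different reason there.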

@Bhavya031

If you are using CUDA, use a GPTQ model; if you are on a Mac, use GGUF: https://youtu.be/ASpageg8nPw?t=74
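
A GGUF configuration for a Mac might look like the following sketch; the repo and quantization filename are assumptions, so verify the exact .gguf filename on the Hugging Face repo page before using it:

# Hypothetical GGUF settings (verify the .gguf filename on the HF repo)
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
MODEL_BASENAME = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"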
