Mistral not supported #778
Comments
I'm trying to use the following as the model ID and base name:

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"

But when running run_localGPT.py I get the following error:

\miniconda3\Lib\site-packages\auto_gptq\modeling\_utils.py", line 147, in check_and_get_model_type
    raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: mistral isn't supported yet.

Any help is super appreciated!!
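For context, the traceback shows the failure happening inside AutoGPTQ's model-type check: it reads the repo's config.json and rejects any model_type that isn't in its internal list of supported architectures, and the installed version evidently has no "mistral" entry. Below is a minimal sketch of that check, reconstructed from the traceback alone (SUPPORTED_MODELS is an illustrative stand-in for the library's real, version-dependent registry):

# Sketch reconstructed from the traceback above, not copied from any
# specific auto_gptq release; SUPPORTED_MODELS is an illustrative stand-in.
from transformers import AutoConfig

SUPPORTED_MODELS = ["llama", "gptj", "gpt2", "opt", "bloom"]

def check_and_get_model_type(model_dir: str) -> str:
    # Read config.json from the model repo/directory and inspect model_type.
    config = AutoConfig.from_pretrained(model_dir)
    if config.model_type not in SUPPORTED_MODELS:
        # "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ" has model_type "mistral",
        # which produces exactly the TypeError reported here.
        raise TypeError(f"{config.model_type} isn't supported yet.")
    return config.model_type

Newer auto-gptq releases do include Mistral, so upgrading the package may also resolve this on a CUDA setup.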
What is your OS?

If you are using CUDA, use a GPTQ model; if you are on a Mac, use GGUF: https://youtu.be/ASpageg8nPw?t=74
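If you go the GGUF route on a Mac, the two constants would look something like the sketch below. Note also that in the original config MODEL_BASENAME still points at a WizardLM file, which does not exist in the Mistral repo; the basename must name a file actually published under MODEL_ID. The repo and file names here are one plausible pairing, so check the model card for the quantisation you actually want:

# Hypothetical localGPT constants for a GGUF build of the same model;
# the exact .gguf filename depends on the quantisation you download.
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
MODEL_BASENAME = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"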