
CUDA version (11.8) mismatches PyTorch (12.1) #1410

Open
1 task done
joshuacox opened this issue Feb 29, 2024 · 1 comment
Comments

@joshuacox

⚠️ Check for existing issues before proceeding. ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Where are you using SuperAGI?

Linux

Which branch of SuperAGI are you using?

Main

Do you use OpenAI GPT-3.5 or GPT-4?

GPT-3.5

Which area covers your issue best?

Installation and setup

Describe your issue.

When I run docker compose -f local-llm-gpu up --build, I get this error:

1.295 RuntimeError: 
1.295 The detected CUDA version (11.8) mismatches the version that was used to compile
1.295 PyTorch (12.1). Please make sure to use the same CUDA versions.
1.295 
------
failed to solve: process "/bin/sh -c cd /app/repositories/GPTQ-for-LLaMa/ && python3 setup_cuda.py install" did not complete successfully: exit code: 1

But aren't both PyTorch and CUDA bundled inside these Docker images?
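For context on why the build aborts: the RuntimeError above is raised by PyTorch's C++/CUDA extension build machinery when the CUDA toolkit found on the build path differs from the CUDA version PyTorch was compiled against. The sketch below is a minimal pure-Python illustration of that kind of check, not PyTorch's actual code; the function name check_cuda_match is hypothetical.

```python
# Minimal sketch of a CUDA-version compatibility check, assuming it only
# compares version strings (hypothetical; PyTorch's real check lives in
# torch.utils.cpp_extension and inspects the nvcc on the build path).
def check_cuda_match(detected: str, compiled: str) -> None:
    """Raise if the detected CUDA toolkit and the CUDA version that
    PyTorch was compiled against differ in their major version."""
    detected_major = int(detected.split(".")[0])
    compiled_major = int(compiled.split(".")[0])
    if detected_major != compiled_major:
        raise RuntimeError(
            f"The detected CUDA version ({detected}) mismatches the version "
            f"that was used to compile PyTorch ({compiled}). "
            "Please make sure to use the same CUDA versions."
        )

# The version pair from this issue trips the check:
try:
    check_cuda_match("11.8", "12.1")
except RuntimeError as e:
    print(e)
```

So even though both CUDA and PyTorch live inside the image, the image can still ship a CUDA 11.8 toolkit alongside a PyTorch wheel built for CUDA 12.1, and the extension build for GPTQ-for-LLaMa refuses that combination.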

How to replicate your Issue?

docker compose -f local-llm-gpu up --build

I haven't done anything special other than creating the config file from the template.

Upload Error Log Content

https://gist.github.com/joshuacox/f9d4aa78b84ab614af5954a361cc6b2b

@joshuacox
Author

This PR appears to address the issue, though I am still having problems with it.
