Multi-GPU error in the Qwen1.5-7B-Chat FastApi deployment tutorial #79
Comments
What environment are you running this tutorial in? You could try this image: https://www.codewithgpu.com/i/datawhalechina/self-llm/self-llm-Qwen1.5
I'm not running it on AutoDL. Should I take that to mean all the tutorials are only meant to be run on AutoDL?
The tutorials were tested in the AutoDL environment, so the steps are easiest to reproduce on the AutoDL platform. On your own machine, especially under Windows, the environment is more complex and other unknown bugs can appear.
Hello, it errors out when running with 2 GPUs:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
If I restrict it to a single GPU (CUDA_VISIBLE_DEVICES=0), it instead reports that the CPU is involved:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
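Both tracebacks point at `index_select` inside the embedding lookup, which means the input token IDs and the model weights ended up on different devices. A minimal sketch of the failure mode and the usual fix, using a bare `torch.nn.Embedding` as a stand-in for the model (the tokenizer/`generate` names mentioned in the comment below are assumptions about the tutorial script, not quoted from it):

```python
import torch

# A small embedding layer stands in for the model's input embedding.
# Its weights live on whatever device the model was loaded to (CPU here;
# on a GPU machine it would be cuda:0 or, with naive multi-GPU loading,
# spread across cuda:0 and cuda:1).
emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=4)
input_ids = torch.tensor([[1, 2, 3]])

# The "wrapper_CUDA__index_select" RuntimeError fires when input_ids and
# the embedding weights sit on different devices. The generic fix is to
# move the inputs to the model's own device before the forward pass:
device = next(emb.parameters()).device
out = emb(input_ids.to(device))
print(out.shape)  # torch.Size([1, 3, 4])
```

In the tutorial's FastAPI script the analogous change would be to build inputs with something like `model_inputs = tokenizer(...).to(model.device)` before calling `generate`, and either to load the model with `device_map="auto"` (so accelerate handles cross-GPU placement) or to pin everything to one GPU with `CUDA_VISIBLE_DEVICES=0` plus an explicit `.to("cuda")` on the model, but not mix the two. This is a sketch under those assumptions, not the tutorial's actual code.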