
Multi-GPU error, Qwen1.5-7B-Chat FastAPI deployment #79

Open
linzhonghong opened this issue Apr 11, 2024 · 3 comments

Comments

@linzhonghong

Hello, I'm seeing an error when 2 GPUs are visible:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
If I restrict it to a single card (CUDA_VISIBLE_DEVICES=0), it instead reports that the CPU is involved:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
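The traceback points at an `index_select` (typically the embedding lookup) receiving an index tensor on a different device than the weights. A common fix, not confirmed from this issue, is to move the tokenized inputs onto the model's device before calling `generate`. The device-matching rule itself can be sketched CPU-only with plain `torch` (the tensor names here are illustrative):

```python
import torch

# The embedding lookup behind the error is essentially an index_select:
# the weight matrix and the index tensor must live on the same device.
weight = torch.randn(10, 4)      # stands in for the embedding weights
idx = torch.tensor([1, 2, 3])    # stands in for the input ids

# Fix pattern: move the indices to the weights' device first
# (in a real deployment: inputs = inputs.to(model.device)).
out = weight.index_select(0, idx.to(weight.device))
print(tuple(out.shape))  # (3, 4)
```

With `transformers`, loading the model with `device_map="auto"` shards it across the visible GPUs, and calling `tokenizer(..., return_tensors="pt").to(model.device)` keeps the inputs on the same device as the model's first parameters.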

@linzhonghong linzhonghong changed the title from "Multi-GPU error, qwen1.5" to "Multi-GPU error, Qwen1.5-7B-Chat FastAPI deployment" Apr 11, 2024
@KMnO4-zx
Contributor

KMnO4-zx commented Apr 11, 2024

What environment are you running this tutorial in?

You could try this image: https://www.codewithgpu.com/i/datawhalechina/self-llm/self-llm-Qwen1.5

@linzhonghong
Author

> What environment are you running this tutorial in?
>
> You could try this image: https://www.codewithgpu.com/i/datawhalechina/self-llm/self-llm-Qwen1.5

I'm not running it on AutoDL. Should I take it that all of the tutorials only work on AutoDL?

@KMnO4-zx
Contributor

> > What environment are you running this tutorial in?
> > You could try this image: https://www.codewithgpu.com/i/datawhalechina/self-llm/self-llm-Qwen1.5
>
> I'm not running it on AutoDL. Should I take it that all of the tutorials only work on AutoDL?

Because this tutorial was tested in the AutoDL environment, it is easier to reproduce the steps on the AutoDL platform. On your own machine, and especially under Windows, the environment is more complex and other unknown bugs can surface.
