[Feature Request] multiple GPUs on a single machine #2160
When I run this script (https://github.com/pytorch/rl/blob/v0.3.1/examples/distributed/collectors/single_machine/generic.py), it reports the following error.
System info: torch 2.2.1+cu121, torchrl 0.3.1, tensordict 0.3.1, numpy 1.26.4, Python 3.9.19 (main, Mar 21 2024, 17:11:28) [GCC 11.2.0], Linux
Motivation
I want to train using multiple GPUs on a single machine, but I can't find any relevant tutorial documentation.
Could you provide an example of training with multiple GPUs on a single machine? For instance, updating the network on cuda:0 while gathering data on cuda:1?
Thanks.
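The split described above (learner on one device, collection on another) can be sketched in plain PyTorch, independent of TorchRL's collector classes. This is a hypothetical illustration of the pattern only, not the API of the linked `generic.py` example; the toy `nn.Linear` policy and the squared-error "loss" are stand-ins, and the script falls back to CPU when two GPUs are not available:

```python
# Sketch: update a policy on one device while a copy gathers data on another.
import torch
import torch.nn as nn

# Fall back to CPU when fewer than two GPUs are available.
if torch.cuda.device_count() >= 2:
    train_device, collect_device = "cuda:0", "cuda:1"
else:
    train_device = collect_device = "cpu"

policy = nn.Linear(4, 2).to(train_device)            # updated on cuda:0
collect_policy = nn.Linear(4, 2).to(collect_device)  # inference copy on cuda:1
collect_policy.load_state_dict(policy.state_dict())
optim = torch.optim.SGD(policy.parameters(), lr=0.1)

for _ in range(3):
    # 1) Gather a fake "rollout" on the collection device (no grad needed).
    with torch.no_grad():
        obs = torch.randn(32, 4, device=collect_device)
        actions = collect_policy(obs)

    # 2) Move the batch to the training device and update the policy there.
    obs, actions = obs.to(train_device), actions.to(train_device)
    loss = ((policy(obs) - actions) ** 2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()

    # 3) Push the fresh weights back to the collection copy.
    collect_policy.load_state_dict(
        {k: v.to(collect_device) for k, v in policy.state_dict().items()}
    )
```

In TorchRL itself, the collectors accept device arguments (e.g. a `device` for where the policy/env run and a storing device for the output batch), which is the intended way to pin collection to a GPU different from the training one; the multi-process collectors take one device per worker.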