DistributedDataParallel #535

Open
Clean1224 opened this issue Aug 5, 2023 · 1 comment

@Clean1224
First of all, I would like to thank you for your code. I have noticed that you have implemented multi-GPU training using DataParallel in the PyTorch framework. However, using DataParallel can lead to low GPU utilization. I was wondering if you could provide a tutorial on how to use DistributedDataParallel for parallel training?

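For reference, a minimal single-node DistributedDataParallel sketch of what this could look like. The model, dataset, and filenames below are generic placeholders, not the voxelmorph training script, and hyperparameters are arbitrary:

```python
# Minimal single-GPU-per-process DDP sketch (placeholder model/data, not voxelmorph).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; substitute the real network and dataset here.
    model = nn.Linear(32, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)  # each rank sees a distinct shard
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across ranks here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=<num_gpus> train_ddp.py` (the script name is just an example), this runs one process per GPU instead of DataParallel's single-process scatter/gather, which is what usually fixes the low GPU utilization.
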
@adalca (Collaborator) commented Aug 5, 2023

I'll ping @balakg and @ahoopes, who know a lot more about the PyTorch vxm implementation than I do. I don't know enough about PyTorch parallelism to comment meaningfully.
