Training is slow and not using GPU #176
Comments
Hi @davidfstein, thank you for letting us know about this issue. This is not the performance we expect. Could you try setting …? We will continue working on efficiency optimization.
Hi, I added a `data.to(device)` call in the encoder training loop and now the models are using the GPU. I will go back and look into why that data isn't being moved to the GPU in the first place, and will update here later.
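For reference, here is a minimal, self-contained sketch of that fix, assuming a standard PyTorch training loop; the encoder, dataset, and loss below are placeholders standing in for the GraphCL encoder and graph data, not the library's actual API:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Pick the GPU if one is visible, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Placeholder encoder and optimizer (stand-ins for the GraphCL encoder).
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8)).to(device)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Placeholder dataset and loader (stand-ins for the graph dataset).
dataset = TensorDataset(torch.randn(1024, 16))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for epoch in range(2):
    for (batch,) in loader:
        batch = batch.to(device)   # without this, batches stay on the CPU and GPU utilization stays at 0%
        optimizer.zero_grad()
        z = encoder(batch)         # forward pass now runs on the GPU
        loss = z.pow(2).mean()     # dummy loss in place of the contrastive loss
        loss.backward()
        optimizer.step()
```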
I'm attempting to run the GraphCL example with a custom dataset (n ≈ 150,000). I am passing `device='cuda'` and my GPU is available, but GPU utilization sits at 0% and the `evaluate` training loop is expected to run for ~12 hours. Is there a way to increase GPU utilization, and do you expect the implementation to scale to larger datasets?
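For anyone hitting the same symptom, a quick diagnostic sketch like the following (placeholders only, not part of the library; it assumes a CUDA-capable machine) can confirm whether CUDA is visible to PyTorch and whether the model and batches actually live on the GPU:

```python
import torch

print(torch.cuda.is_available())           # should print True if CUDA is usable
print(torch.cuda.get_device_name(0))       # name of the visible GPU

model = torch.nn.Linear(4, 4).to('cuda')   # stand-in for the GraphCL encoder
print(next(model.parameters()).device)     # should print cuda:0

x = torch.randn(8, 4)                      # a batch created on the CPU
print(x.device)                            # cpu: data left here never exercises the GPU
print(model(x.to('cuda')).device)          # cuda:0 once the batch is moved explicitly
```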