Uneven GPU utilization. #271
Hi,
I am currently trying to train a model, but it seems that my GPU is not fully utilized the whole time. As far as I can tell from the GPU power logging in HWiNFO, the GPU is idle about 50% of the time. To a novice this looks like a bottleneck somewhere. What is causing this behaviour?
My GPU is an RTX 3060 Ti and my CPU is an AMD R5 5600X.
I am running the training on Windows with WSL2 using the following arguments:
stylegan2_pytorch --name attempt5 --data ./2022_100k --num_train_steps 10000 --image_size 64 --log --transparent --save_frames --network-capacity 32 --batch-size 8 --gradient-accumulate-every 10 --save_every 300 --evaluate_every 100 --attn-layers [1,2]
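A common cause of this pattern is the data pipeline: the GPU sits idle while the CPU decodes and augments the next batch. A minimal sketch to check whether data loading is the bottleneck is to time batch fetches separately from the per-batch compute. The `RandomImages` dataset below is a hypothetical stand-in (not part of stylegan2_pytorch); in practice you would use the real image-folder dataset and your actual training step:

```python
import time

import torch
from torch.utils.data import DataLoader, Dataset


class RandomImages(Dataset):
    """Hypothetical stand-in dataset; replace with the real image folder."""

    def __len__(self):
        return 1000

    def __getitem__(self, idx):
        # --transparent implies RGBA, hence 4 channels at 64x64
        return torch.randn(4, 64, 64)


# Try raising num_workers (e.g. 2-4) and compare the timings below;
# under WSL2, disk I/O from the Windows filesystem can be slow.
loader = DataLoader(RandomImages(), batch_size=8, num_workers=0)

fetch_time = 0.0
compute_time = 0.0
it = iter(loader)
for _ in range(20):
    t0 = time.perf_counter()
    batch = next(it)          # time spent waiting on the data pipeline
    t1 = time.perf_counter()
    out = (batch * 2).sum()   # stand-in for the actual training step
    t2 = time.perf_counter()
    fetch_time += t1 - t0
    compute_time += t2 - t1

print(f"data fetch: {fetch_time:.3f}s, compute: {compute_time:.3f}s")
```

If the fetch time dominates, the GPU idle gaps are explained by data loading rather than the model itself.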
Here is a GPU power graph produced by HWiNFO. The intervals are 5-7 seconds in duration.