The MNIST_DCGAN learning is too slow #1
Hi, I don't know exactly why the learning time differs. I ran my code again with these settings: 64 x 64 MNIST (60,000 images), batch size 128, and 'd' = 128, and got a similar result. Check your data, batch size, the dimension parameter of the network (the 'd' value in my network), etc.
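For reference, a minimal sketch of the data setup described above (MNIST resized to 64 x 64, batch size 128), using the current torchvision API; older releases used transforms.Scale instead of transforms.Resize, and the variable names here are illustrative rather than taken from the repository:

```python
import torch
from torchvision import datasets, transforms

# Resize 28x28 MNIST digits to 64x64 and normalize to [-1, 1],
# the usual preprocessing for a DCGAN with a tanh output layer
transform = transforms.Compose([
    transforms.Resize(64),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5,), std=(0.5,)),
])

train_set = datasets.MNIST('data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
```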
Dear Kang:
My apologies, I didn't check my email in time. I finally found out that someone else was also using the GPU, so the GPU was busy; that's the reason.
PS:
I'm new to PyTorch; I used Caffe before. I was wondering why PyTorch's volatile GPU-Util is so high: with Caffe I could only reach around 70%, but with PyTorch it is quite easy to reach 90%+. Do you have any idea about that?
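One plausible factor (an assumption on my part, not something confirmed in this thread) is PyTorch's DataLoader, which prefetches batches in background worker processes and can pin host memory for faster transfers, both of which help keep the GPU busy. A minimal sketch with illustrative shapes:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for the 64x64 MNIST images used above
dataset = TensorDataset(torch.randn(60000, 1, 64, 64),
                        torch.zeros(60000, dtype=torch.long))

# num_workers > 0 prepares batches in background processes;
# pin_memory=True allows faster, asynchronous host-to-GPU copies
loader = DataLoader(dataset, batch_size=128, shuffle=True,
                    num_workers=2, pin_memory=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
for x, _ in loader:
    x = x.to(device, non_blocking=True)  # overlaps the copy with compute
    break  # one batch is enough for this sketch
```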
Sir, as you said, the average time for an epoch is around 180 s, while on my server it shows:
[1/20] - ptime: 372.38, loss_d: 0.597, loss_g: 5.759
My environment is:
Ubuntu 16.04 + CUDA 8.0 + cuDNN 6 + PyTorch 0.2 + Titan Xp
I also set num_workers for the train data loader to 2, so it shouldn't be an I/O problem.
Do you have any idea what's going wrong, Sir?
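As an aside, when comparing epoch times across machines it is worth making sure the measurement itself is fair: CUDA calls are asynchronous, so timing without synchronizing can mis-attribute where the time goes. A hedged sketch of timing one epoch, where the model and loader are placeholders rather than the repository's own code:

```python
import time
import torch

def time_epoch(model, loader, device):
    """Wall-clock time for one pass over the data, with proper GPU sync."""
    model.train()
    if device.type == 'cuda':
        torch.cuda.synchronize()  # flush pending GPU work before starting
    start = time.time()
    for x, _ in loader:
        x = x.to(device)
        model(x)  # forward pass only; substitute the real training step here
    if device.type == 'cuda':
        torch.cuda.synchronize()  # wait for queued kernels before stopping
    return time.time() - start
```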