[BUG]: GPU memory leak during GAN-based vocoders training #181
Comments
Hi! Thanks for your report!
Got you, thanks for commenting on this issue. I've reduced the batch size even more and now it's not failing during training.
Describe the bug
I'm trying to train GAN-based vocoders such as HiFi-GAN and APNet on a medium-sized dataset (~half of LibriTTS clean-100 + clean-300). However, training fails with a CUDA out-of-memory error at the end of the first epoch. I was able to work around it by lowering the batch size, but at the beginning of training less than half of the provided GPU memory is used, and memory usage grows toward the end of the epoch.
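A steadily growing memory footprint that only crashes near the end of an epoch is often caused by accumulating loss tensors that still carry their autograd graphs (e.g. for epoch statistics), rather than by the model itself. The sketch below is a hypothetical illustration of that pattern and its usual fix; the function names are assumptions, not code from this repository.

```python
import torch

def leaky_epoch_stats(losses):
    # Anti-pattern: summing loss *tensors* keeps every step's autograd
    # graph alive until the epoch ends, so GPU memory grows per step.
    total = torch.tensor(0.0)
    for loss in losses:
        total = total + loss  # retains the graph attached to each loss
    return total

def fixed_epoch_stats(losses):
    # Fix: convert each loss to a Python float with .item(), which
    # detaches it and lets the step's graph be freed immediately.
    total = 0.0
    for loss in losses:
        total += loss.item()
    return total
```

With this fix, each training step's graph is released as soon as the backward pass and the `.item()` call complete, so memory stays flat across the epoch.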
How To Reproduce
commit 5cb75d8d605ef12c90c64ba2e04919f4d5d834a1
Expected behavior
GPU memory usage should remain roughly constant throughout training.
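To verify whether memory really is constant, it can help to log allocated GPU memory every N steps so a leak shows up as monotonic growth well before the OOM. A minimal sketch (the helper name and logging interval are assumptions):

```python
import torch

def log_gpu_memory(step, log_every=100):
    """Print and return allocated GPU memory (MiB) every `log_every` steps."""
    if step % log_every != 0:
        return None
    if torch.cuda.is_available():
        mib = torch.cuda.memory_allocated() / 1024**2
    else:
        mib = 0.0  # no CUDA device in this environment
    print(f"step {step}: {mib:.1f} MiB allocated")
    return mib
```

Calling this inside the training loop makes it easy to see whether usage grows per step or jumps at specific points (e.g. validation or checkpointing).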
Environment Information