Memory leak #183
Comments
@amjltc295 I have the same issue: RuntimeError: CUDA error: out of memory. Have you solved the problem yet?
@YupengGao No, I just keep the batch size small. I wish someone could give me a clue.
I notice that there seems to be a memory leak issue during training.
On my K80 with a batch size of 24, GPU memory consumption starts at about 4000 MB. However, as training goes on, GPU memory consumption keeps increasing, and a
RuntimeError: CUDA error: out of memory
is raised about 30 minutes later. With a batch size of 16 the error does not occur, but memory usage still grows.
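One frequent cause of this symptom in PyTorch training loops (not confirmed from this repo's code, just a common culprit) is accumulating the loss tensor itself across iterations, e.g. `running_loss += loss`. That keeps every iteration's computation graph alive, so GPU memory grows steadily until the allocator runs out. A minimal sketch of the pattern and the fix, using a hypothetical toy model:

```python
import torch
import torch.nn.functional as F

# Hypothetical toy model and optimizer, just to illustrate the loop structure.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

running_loss = 0.0
for step in range(100):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)

    loss = F.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Leaky pattern: `running_loss += loss` would keep a reference to the
    # loss tensor and, through it, the entire autograd graph of this step.
    # Calling .item() converts it to a plain Python float, so the graph
    # can be freed immediately.
    running_loss += loss.item()
```

The same reasoning applies to any tensor stored for logging: detach it (`.detach()`) or convert it (`.item()`, `.cpu().numpy()`) before keeping it around, and wrap evaluation passes in `torch.no_grad()` so no graph is built at all.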