Custom DPO Trainer CUDA OOM #1626
Hi, I overrode parts of the DPOTrainer implementation to test this paper.
About halfway through training there is a sudden spike in GPU memory usage that is quite hard to explain. After sitting at 99.9% memory usage for a while, the run crashes with the following error:
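One way to narrow down a spike like this is to log the CUDA allocator peak at each logging step and check whether it tracks particular batches or grows monotonically. A minimal sketch, assuming a transformers-compatible Trainer loop (the callback name is made up for illustration):

```python
import torch
from transformers import TrainerCallback

class PeakMemoryCallback(TrainerCallback):
    """Hypothetical diagnostic: print the CUDA allocator peak each logging window."""

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step % max(int(args.logging_steps), 1) == 0:
            peak_gib = torch.cuda.max_memory_allocated() / 1024**3
            print(f"step {state.global_step}: peak CUDA memory {peak_gib:.2f} GiB")
            # Reset so the next window reports its own peak, not the global one.
            torch.cuda.reset_peak_memory_stats()
```

Passed to the trainer via `callbacks=[PeakMemoryCallback()]`, this shows whether the climb to 99.9% is a slow leak or a one-off unusually large batch.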
I tried different setups (2x A10G, 1x A10G, 1x T4, etc.), but nothing changed: after a while it just blows up. I also tried a smaller batch size, at the cost of under-utilizing the GPU most of the time, and even then the GPU memory is no longer enough after half of the training.
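For context, the usual way to shrink the per-step footprint without shrinking the effective batch is gradient accumulation plus gradient checkpointing. A sketch with illustrative numbers, not the reporter's actual settings:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dpo-oom-debug",        # hypothetical output path
    per_device_train_batch_size=1,     # smallest per-step activation footprint
    gradient_accumulation_steps=16,    # keeps an effective batch size of 16
    gradient_checkpointing=True,       # recompute activations in the backward pass
    fp16=True,                         # half precision fits T4 and A10G alike
    logging_steps=10,
)
```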
Since I was planning to contribute this paper's custom loss function implementation to the project, I'm asking for help. Here's the code:
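A minimal sketch of how a custom preference loss is typically wired into trl's DPOTrainer, assuming the 0.8-era API where `dpo_loss` receives the four per-sequence log-probability tensors; this is illustrative, not the reporter's actual code, and the stock sigmoid term marks where the paper's loss would go:

```python
import torch
import torch.nn.functional as F
from trl import DPOTrainer

class CustomDPOTrainer(DPOTrainer):
    def dpo_loss(
        self,
        policy_chosen_logps: torch.FloatTensor,
        policy_rejected_logps: torch.FloatTensor,
        reference_chosen_logps: torch.FloatTensor,
        reference_rejected_logps: torch.FloatTensor,
    ):
        # Standard DPO logits: the gap between policy and reference log-ratios.
        pi_logratios = policy_chosen_logps - policy_rejected_logps
        ref_logratios = reference_chosen_logps - reference_rejected_logps
        logits = pi_logratios - ref_logratios

        # Placeholder: the stock sigmoid loss. The paper's custom loss
        # would replace this single line.
        losses = -F.logsigmoid(self.beta * logits)

        chosen_rewards = self.beta * (policy_chosen_logps - reference_chosen_logps).detach()
        rejected_rewards = self.beta * (policy_rejected_logps - reference_rejected_logps).detach()
        return losses, chosen_rewards, rejected_rewards
```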
Comments

I was thinking: could this be related to the fact that some sequences later in the training have a greater length?

Hi @TheGhoul21
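If that hypothesis holds (the padded batch width grows as longer sequences appear later in training), capping the tokenized lengths bounds the peak. A sketch with illustrative caps; depending on the trl version, `max_length` and `max_prompt_length` are `DPOTrainer` init arguments or `DPOConfig` fields, and the model name and dataset below are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

trainer = DPOTrainer(
    model=model,
    ref_model=None,                # trl builds the reference model internally
    args=training_args,            # TrainingArguments sketched above
    train_dataset=train_dataset,   # assumed preference dataset (prompt/chosen/rejected)
    tokenizer=tokenizer,
    beta=0.1,
    max_length=1024,               # cap on prompt + completion tokens
    max_prompt_length=512,         # cap on prompt tokens alone
)
```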