[FEATURE] Minimum required GPU RAM for different architectures #1948
Regarding the batch size in the inference and train results tables: measuring actual GPU use is not particularly reliable. There is so much variability due to the way allocation and kernel benchmarking work that you really have to try a batch size and see it succeed or fail to know whether it works.
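The try-and-see approach described above can be sketched as a binary search over batch sizes. This is only a sketch: `fake_step` is a hypothetical stand-in for a real forward/backward pass, which in practice would raise `torch.cuda.OutOfMemoryError` instead of `MemoryError`.

```python
def max_feasible_batch(step_fn, lo=1, hi=4096):
    """Binary-search the largest batch size for which step_fn succeeds.

    step_fn(batch_size) should run one full train step and raise on
    out-of-memory (torch.cuda.OutOfMemoryError with a real model;
    MemoryError in this stand-in sketch).
    """
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        try:
            step_fn(mid)
            best = mid   # mid fits; search larger sizes
            lo = mid + 1
        except MemoryError:
            hi = mid - 1  # mid OOMs; search smaller sizes
    return best

def fake_step(batch_size):
    # Hypothetical stand-in: pretend anything above 96 runs out of memory.
    if batch_size > 96:
        raise MemoryError

print(max_feasible_batch(fake_step))  # 96
```

With a real model, each probe should also clear the CUDA cache between attempts, since a failed allocation can leave the allocator fragmented.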
Is your feature request related to a problem? Please describe.
Is the minimum required GPU memory for different architectures documented anywhere?
E.g., I want to know which GPU(s) I need to rent to be able to do a backward pass on ViT-g/14.

Describe the solution you'd like
If not, it would be very helpful to add a sheet that documents this data for each architecture.
I am not familiar with distributed inference/training; will the amount of GPU RAM needed be linearly divided when using multiple GPUs? Is the overhead of using multiple GPUs different for different models?
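As a first-order answer to the memory question, a common back-of-envelope estimate for full-precision training with Adam is about 16 bytes per parameter (4 B weights + 4 B gradients + 8 B for the two optimizer moments), before activations, which grow with batch size and resolution. A minimal sketch, assuming a hypothetical figure of roughly 1.0e9 parameters for the model being trained:

```python
def estimate_train_memory_gb(n_params, bytes_per_param=16):
    """Lower-bound estimate for fp32 training with Adam:
    4 B weights + 4 B gradients + 8 B optimizer states = 16 B/param.
    Activation memory comes on top and depends on batch size."""
    return n_params * bytes_per_param / 1024**3

# Assumed parameter count (~1.0e9) for illustration only.
print(round(estimate_train_memory_gb(1.0e9), 1))  # 14.9
```

On the distributed question: with plain data parallelism every GPU holds a full replica, so per-GPU memory does not shrink; only sharded schemes divide the weight/gradient/optimizer terms across GPUs, and the division is roughly (not exactly) linear because of communication buffers and replicated activations.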
Describe alternatives you've considered
The alternative is testing these manually, but that is expensive, since one first needs access to a big GPU. It also wastes everyone's time, since everybody has to repeat the same measurements themselves.