Question about "CUDA failed with error out of memory" error #809
Comments
Use a smaller model or use
You can't.
Thank you, @Purfview, for your reply.
Is it the same, or is it better to stay with this and specify CPU mode?
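For context on why the large model won't fit, a back-of-envelope estimate helps. The parameter counts below are the published Whisper model sizes, and the GTX 1650 has 4 GB of VRAM; this sketch only counts the weights (activations and beam search need additional memory on top):

```python
# Rough VRAM estimate for Whisper model weights only.
# Parameter counts are the published Whisper model sizes;
# a GTX 1650 has 4 GB of VRAM.

PARAMS = {  # approximate parameter counts
    "tiny": 39e6,
    "base": 74e6,
    "small": 244e6,
    "medium": 769e6,
    "large": 1550e6,
}

BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1}

def weight_gb(model: str, dtype: str) -> float:
    """Approximate size of the model weights in GB."""
    return PARAMS[model] * BYTES_PER_PARAM[dtype] / 1e9

for dtype in ("float32", "float16", "int8"):
    print(f"large @ {dtype}: ~{weight_gb('large', dtype):.1f} GB")
# large @ float32: ~6.2 GB  -- does not fit in 4 GB at all
# large @ float16: ~3.1 GB  -- weights alone nearly fill the card
# large @ int8:    ~1.6 GB  -- fits, but leaves little headroom
```

So even before activations, the large model in float16 almost fills a 4 GB card, which is consistent with the OOM above; a smaller model or CPU mode sidesteps the limit.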
Hi, I recently changed servers, so I now have an Intel i5 (10th gen) iGPU and an NVIDIA GTX 1650.
I am installing openai-whisper with the faster_whisper module; this is the docker-compose.
I was hoping to use the large model, but I think it is too big: I get the error CUDA failed with error out of memory. It seems like it wants to load everything into the video card's RAM. Maybe what I am asking is impossible, but I don't know how the whole system works, and I don't see any mounted volumes, so I ask hoping the question is not too stupid: can't you download the model locally and use it without loading it all into memory? Or is there a way to share the system RAM with the video card's RAM when needed?
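The docker-compose file itself isn't quoted in the thread, but a minimal sketch of the usual workarounds looks like the following. This assumes a whisper-asr-webservice-style image; the image name and the `ASR_ENGINE` / `ASR_MODEL` environment variables are assumptions and may differ for your setup, while the `deploy.resources` GPU block and the volume mount are standard Compose syntax:

```yaml
# Hypothetical docker-compose sketch: pick a smaller model, keep GPU access,
# and mount a host volume so the downloaded model persists locally
# instead of being re-fetched on every container start.
services:
  whisper:
    image: onerahmet/openai-whisper-asr-webservice:latest-gpu  # assumed image
    environment:
      - ASR_ENGINE=faster_whisper   # assumed variable name
      - ASR_MODEL=small             # "large" OOMs on a 4 GB GTX 1650
    volumes:
      - ./whisper-cache:/root/.cache  # persist model downloads on the host
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Note that mounting a volume only avoids re-downloading the model; the weights still have to be loaded fully into VRAM (or system RAM in CPU mode) to run inference, so a local copy does not by itself fix the OOM.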