What is the expected speedup when using OpenCL? #123
This is an excellent question that opens up plenty of passionate debate. When I am renting hardware to run my own models, I prefer renting CPU-only machines without a GPU. The motivations: AVX-capable CPUs are cheap, and I don't risk exceeding VRAM. Before you consider me crazy, I recommend having a look at:
To answer your question: small non-convolutional models might actually be slower on GPU. I would use the GPU only for bigger models with convolutions, where I would expect an improvement of 2x to 8x. My own models are trained in CPU-only environments because I have found a better price-to-performance ratio on CPU: depending on where I am renting hardware, I can get 20 CPU cores for the cost of 1 GPU. Anyway, one model can be price-effective on GPU and the next one may not be. It's a moving target.
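A back-of-envelope sketch of that trade-off, assuming the 20-cores-per-GPU price ratio above; the parallel-efficiency figure and the speedup values are illustrative assumptions, not benchmarks:

```python
# Hypothetical cost comparison: at what GPU speedup (over one CPU core)
# does renting one GPU beat renting CPU cores of equal total price?

cpu_cores_per_gpu_price = 20   # 20 CPU cores rent for the price of 1 GPU (from the comment above)
cpu_parallel_efficiency = 0.7  # assumed scaling efficiency across cores

# Effective throughput of the CPU rig, relative to a single core:
cpu_throughput = cpu_cores_per_gpu_price * cpu_parallel_efficiency  # 14.0

# The GPU wins only if its speedup over one core exceeds the CPU rig's
# effective throughput at the same rental price.
for gpu_speedup in (2, 8, 20):
    better = "GPU" if gpu_speedup > cpu_throughput else "CPU"
    print(f"GPU speedup {gpu_speedup}x -> rent {better}")
```

With these assumed numbers, even an 8x GPU speedup loses to the CPU rig on price, which matches the "moving target" point: the answer flips as the price ratio, the model, or the scaling efficiency changes.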