generated from aarnphm/bazix
feat: cuBLAS Support #104
Labels
enhancement
New feature or request
Comments
I'd also be interested in this. You don't even need a very powerful GPU: my 1070 Ti can infer at roughly 2x realtime on the large model with beam size 1.
Feature request
It would be nice to be able to compile with cuBLAS support when installing or building locally. I haven't found a way to do so, but I'm also unfamiliar with Bazel, so apologies if this is already possible; if it is, how would I go about it?
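For context, the usual Bazel pattern for an opt-in backend like this is a `config_setting` keyed on a `--define`, selected into the library's `defines` and `linkopts`. Below is a minimal sketch only: the `cublas` define name, the target names, the source layout, and the `GGML_USE_CUBLAS` macro are assumptions about how the underlying whisper.cpp/ggml sources might be wired, not this repo's actual configuration.

```
# BUILD.bazel (hypothetical sketch, not the repo's real build file)

# True when the build is invoked with --define=cublas=true.
config_setting(
    name = "use_cublas",
    define_values = {"cublas": "true"},
)

cc_library(
    name = "whisper",
    # Placeholder source layout; adjust to the repo's actual tree.
    srcs = glob(["whisper.cpp/*.c", "whisper.cpp/*.cpp"]),
    defines = select({
        # Compile the GPU path in only when the define is set.
        ":use_cublas": ["GGML_USE_CUBLAS"],
        "//conditions:default": [],
    }),
    linkopts = select({
        # Link against the CUDA runtime and cuBLAS when enabled.
        ":use_cublas": ["-lcublas", "-lcudart"],
        "//conditions:default": [],
    }),
)
```

With a setting like this in place, the build would be invoked as `bazel build --define=cublas=true //:whisper`, and a plain `bazel build //:whisper` would keep the CPU-only behavior.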
Motivation
This would offload much of the processing from the CPU to the GPU, speeding up transcription considerably for those with a powerful GPU.
Other
No response