feat: cuBLAS Support #104

Open

loukylor opened this issue May 5, 2023 · 1 comment
Labels: enhancement (New feature or request)

Comments


loukylor commented May 5, 2023

Feature request

It would be nice to be able to compile with cuBLAS support when installing/building locally. I haven't found a way to do so, but I'm also unfamiliar with Bazel, so apologies if this is already possible; if it is, how would I go about it?
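For illustration, an invocation along these lines is roughly what I had in mind. This is only a sketch: I don't know whether the BUILD files expose any cuBLAS setting, the `//whisper:main` label and the `cublas` define are placeholders, and the `GGML_USE_CUBLAS` macro is only relevant if the underlying engine is whisper.cpp.

```sh
# Purely a sketch: the //whisper:main label and the "cublas" define are
# made up for illustration, and GGML_USE_CUBLAS only applies if the
# underlying engine is whisper.cpp. Assumes CUDA is installed under
# /usr/local/cuda.
bazel build //whisper:main \
  --define=cublas=true \
  --copt=-DGGML_USE_CUBLAS \
  --copt=-I/usr/local/cuda/include \
  --linkopt=-L/usr/local/cuda/lib64 \
  --linkopt=-lcublas \
  --linkopt=-lcudart
```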

Motivation

This would offload much of the processing from the CPU onto the GPU, speeding up transcription considerably for those with a powerful GPU.

Other

No response

loukylor added the enhancement label on May 5, 2023

lachesis commented Aug 6, 2023

I'd also be interested in this. You don't even need a very powerful GPU. My 1070Ti can infer at about 2x faster than realtime on the large model with beam size 1.
