
Mini-batch with adaptation? #599

Open
EtienneAb3d opened this issue Oct 22, 2021 · 0 comments

Comments


EtienneAb3d commented Oct 22, 2021

In my spell and grammar corrector NeuroSpell (a completely new, improved version will be released soon), I'm using MMT with its adaptation feature.

When processing sentences, I only reach about 36% GPU utilization, with low video RAM usage. This suggests there is a lot of performance to gain somewhere.

Along the same lines as the idea explained in #598, do you think it would be possible to mini-batch a set of sentences + sentence pairs for the adaptation + translation job, keeping each sentence adapted individually with its own pair, but computing everything on the GPU at once? Since the video RAM is under-used, this could perhaps be achieved by loading several copies of the model on the card.
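To make the idea concrete, here is a minimal sketch in plain Python (not the MMT API; `load_model` and `translate_adapted` are hypothetical placeholders for whatever MMT does internally): group the requests into mini-batches, keep each sentence attached to its own adaptation pairs, and fan the batches out over several model copies loaded on the same card.

```python
from concurrent.futures import ThreadPoolExecutor

def load_model(slot):
    """Load one copy of the translation model on the GPU (hypothetical placeholder)."""
    raise NotImplementedError

def translate_adapted(model, batch):
    """Translate a mini-batch of (sentence, adaptation_pairs) items,
    adapting each sentence only with its own pairs (hypothetical placeholder)."""
    raise NotImplementedError

def run(requests, num_model_copies=4, batch_size=16):
    # requests = [(sentence, [(src, tgt), ...]), ...]
    # Several model copies share the card, since video RAM is under-used.
    models = [load_model(slot) for slot in range(num_model_copies)]
    batches = [requests[i:i + batch_size]
               for i in range(0, len(requests), batch_size)]
    results = []
    with ThreadPoolExecutor(max_workers=num_model_copies) as pool:
        futures = [pool.submit(translate_adapted,
                               models[i % num_model_copies], batch)
                   for i, batch in enumerate(batches)]
        for future in futures:
            results.extend(future.result())
    return results
```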

Any idea how to achieve this?
