In my spell and grammar corrector NeuroSpell (a completely new, improved version will be released soon), I'm using MMT with its adaptation feature.
When processing sentences, I only reach about 36% GPU utilization, with low video RAM usage. This suggests there is a lot of performance to gain somewhere.
Along the same lines as the idea explained in #598, do you think it would be possible to mini-batch a set of sentences plus their adaptation sentence pairs, performing the adaptation + translation job with each sentence still matched individually to its own pairs, but all processed on the GPU at once? Since the video RAM is under-used, this could be achieved by keeping several copies of the model on the card.
Any idea how to achieve this?
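The grouping described above could be sketched as follows. This is a minimal illustration only: the `Request` shape and `make_minibatches` helper are assumptions for the sake of the example, not part of MMT's actual API. The point is that each source sentence stays bundled with its own adaptation pairs, while the batches themselves could be dispatched to the GPU (or to several model replicas) in a single call.

```python
from typing import List, Tuple

# Hypothetical request type: a source sentence plus its own adaptation
# sentence pairs (tuning examples). Illustrative only, not MMT's API.
Request = Tuple[str, List[Tuple[str, str]]]

def make_minibatches(requests: List[Request], batch_size: int) -> List[List[Request]]:
    """Group requests into mini-batches while keeping each sentence
    paired with its own adaptation examples."""
    return [requests[i:i + batch_size] for i in range(0, len(requests), batch_size)]

# Example: 5 requests with batch size 2 yield mini-batches of sizes 2, 2, 1.
reqs = [(f"sentence {i}", [("source example", "target example")]) for i in range(5)]
batches = make_minibatches(reqs, 2)
print([len(b) for b in batches])  # → [2, 2, 1]
```

Each mini-batch would then be submitted as one GPU-side job, which is where the unused GPU headroom could be reclaimed.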