
Changing batch size to 16 or 32 #78

Open
Peleg-Bruck opened this issue May 15, 2023 · 0 comments

Hi, I've read some of the issues here and saw you mention that you suspect changing the batch size from 64 may make the conversion suboptimal. Did I understand that correctly?
My GPU cannot handle a batch of 64, so I changed it to 16. I trained for 150k iterations (starting from your pretrained 24 kHz checkpoint), but the conversions are not all that great: they sound crisp, but the similarity to the target speaker is not very good.
If you think the batch size is the culprit, should I keep the batch size at 64 and decrease the training wav size instead? Also, given 10 hours of data, how many iterations do you think I should train for?
Thank you!
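
In case it helps frame the question: one workaround I'm considering is gradient accumulation, which keeps the effective batch at 64 while only fitting micro-batches of 16 in GPU memory. A minimal PyTorch sketch, assuming a generic training loop; the toy model, loader, and loss below are placeholders, not this repo's code:

```python
import torch
import torch.nn as nn

# Toy stand-ins; in practice these would be the repo's model and data loader.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loader = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(8)]
loss_fn = nn.MSELoss()

accum_steps = 4  # 4 micro-batches of 16 -> one optimizer step at effective batch 64

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    (loss / accum_steps).backward()  # scale so accumulated gradients average out
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

I realize this may not be exactly equivalent to a true batch of 64 here, especially if the training involves adversarial (discriminator) updates or batch-statistics layers, so I'd appreciate your take on whether it's a reasonable substitute.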
