SLM Adversarial Training did not start when finetuning #227
Comments
Same issue.
You seem to be missing the configuration options for when the second training stage starts. See lines 6 and 7 in the LibriTTS config file.
You should be able to kick off second-stage training by loading your current model checkpoint and setting `epochs_1st` to 0.
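For reference, the options being referred to are something like the following (an illustrative sketch; the exact line numbers, key names, and values depend on your version of the config):

```yaml
# Configs/config_libritts.yml (illustrative; check your repo version)
epochs_1st: 200  # number of epochs for first-stage (pre-training)
epochs_2nd: 100  # number of epochs for second-stage (joint) training
```

Setting `epochs_1st: 0` while loading an existing checkpoint skips straight to the second stage.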
Thanks, do I also need the
This did not fix the issue, unfortunately.
I've managed to get to where things go sour. If you have a batch size of 2, then the SLM batch will always be 1, meaning SLMADV never starts. You need to change
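The arithmetic being described can be sketched as follows. The names `batch_percentage` and `slm_batch_size` are assumptions based on this thread, not the repo's exact code:

```python
# Illustration (not the repo's actual code) of why a small batch size
# can keep SLM adversarial training from ever running: the adversarial
# step trains on a fraction of the batch, and that slice is floored.

def slm_batch_size(batch_size: int, batch_percentage: float) -> int:
    # Slice of the batch handed to the SLM adversarial step.
    return int(batch_size * batch_percentage)

# With batch_size=2 and the default batch_percentage of 0.5,
# the SLM slice is a single sample, which is too small:
print(slm_batch_size(2, 0.5))  # 1

# Raising batch_percentage to 1 uses the full batch:
print(slm_batch_size(2, 1.0))  # 2
```

In other words, with a batch size of 2 you would need `batch_percentage: 1` for the SLM slice to contain more than one sample.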
@78Alpha
They're going to be zero for a while unless the conditions it's looking for are met. Over about one epoch of training, my TensorBoard only showed 60 steps' worth of SLM training when I set batch percentage to 1. I don't know what exactly it's looking for.
@78Alpha
Yeah, that's what it should look like. All graphs being filled is the sign that all parts are working.
I tried to do finetuning on a small dataset with 2 speakers. I set `epochs=25`, `diff_epoch=8`, `joint_epoch=15`. Style Diffusion training started as expected, but SLM Adversarial Training never started at any point during the finetuning run.
My config is:
What have I missed? Thanks!
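For context, the schedule described above would normally look like this in the finetuning config (a sketch; key names follow the StyleTTS2-style configs and may differ in your version):

```yaml
epochs: 25       # total finetuning epochs
diff_epoch: 8    # Style Diffusion training starts at this epoch (did start)
joint_epoch: 15  # joint / SLM adversarial training starts here (never did)
```

With these values, SLMADV would be expected to begin at epoch 15, ten epochs before the end of the run.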