Reverse KL loss fine-tuning #132
You can refer to the bert_vits_aishell3 branch. When adding the reverse KL loss for fine-tuning, its weight should be tuned according to the data you use; for example, on AISHELL3 I used 0.05 and froze the PosteriorEncoder.

What is the insight behind tuning this weight?
This comes from Microsoft's NaturalSpeech; on high-quality corpora this loss indeed causes no problems. So before applying it, first train the model as far as it will go without this loss, save that checkpoint, and use it as the starting point for the subsequent rounds of experimentation.
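The approach described above can be sketched as follows. This is a minimal PyTorch illustration, not the repository's actual code: `gaussian_kl`, `lambda_bwd`, and the `enc_q` attribute name for the PosteriorEncoder are all assumptions made for the example. It shows the two ingredients the maintainer mentions: adding a small weighted reverse KL term, KL(prior || posterior), alongside the usual forward term, and freezing the posterior encoder's parameters during fine-tuning.

```python
import torch

def gaussian_kl(m_p, logs_p, m_q, logs_q):
    """KL( N(m_p, exp(logs_p)^2) || N(m_q, exp(logs_q)^2) ) for diagonal Gaussians.

    m_* are means, logs_* are log standard deviations (the convention used in VITS).
    Returns the mean KL over all elements.
    """
    kl = logs_q - logs_p - 0.5
    kl = kl + 0.5 * (torch.exp(2.0 * logs_p) + (m_p - m_q) ** 2) * torch.exp(-2.0 * logs_q)
    return kl.mean()

# Hypothetical fine-tuning setup: weight of 0.05 as reported for AISHELL3.
lambda_bwd = 0.05

def total_kl_loss(m_q, logs_q, m_p, logs_p):
    # Forward KL (posterior || prior), as in the standard VITS objective.
    kl_fwd = gaussian_kl(m_q, logs_q, m_p, logs_p)
    # Reverse KL (prior || posterior), added with a small weight during fine-tuning.
    kl_bwd = gaussian_kl(m_p, logs_p, m_q, logs_q)
    return kl_fwd + lambda_bwd * kl_bwd

def freeze_posterior_encoder(model):
    # Freeze the PosteriorEncoder; `enc_q` is the attribute name assumed here.
    for p in model.enc_q.parameters():
        p.requires_grad = False
```

The reverse term penalizes regions where the prior puts mass that the posterior does not, which is why it only behaves well once both distributions are already close, i.e. after the model has first been trained to a good state without it.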
Roughly how long does the fine-tuning take? I tried adding the reverse KL loss while fine-tuning a model I trained myself, but the KL loss increased instead and the generated audio is very poor.