How to get a generated speech from the output of the trained Generator? #55
If your data is very different from VCTK, you probably need to re-train the F0-converter.
Many thanks for your quick answer. I am now using speech with a sampling rate of 44100 Hz; does that mean it is necessary to retrain the F0_Converter and the wavegen model? I have also found that the speech I generate is much shorter than the original speech (using my trained G model and the pretrained wavegen model from this project).
Yes. In that case, you probably need to tweak other parts of the model as well.
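One alternative to retraining everything is to resample your corpus to the rate the pretrained models were trained on before feature extraction. The 16 kHz target below is an assumption based on the repo's VCTK preprocessing; confirm the actual rate in the project's hyperparameters. A minimal sketch using SciPy:

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 44100, 16000  # source rate -> assumed rate of the pretrained models

# one second of a 440 Hz test tone standing in for a real utterance
t = np.arange(sr_in) / sr_in
y = np.sin(2 * np.pi * 440 * t).astype(np.float32)

# 16000/44100 reduces to the rational factor 160/441
y16 = resample_poly(y, up=160, down=441)

print(len(y), len(y16))  # one second is now 16000 samples instead of 44100
```

`resample_poly` applies an anti-aliasing filter as part of the rational-rate conversion, which matters when downsampling speech from 44.1 kHz.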
I have trained the Generator model on my own data. However, there does not seem to be any code for generating speech from the trained Generator. I checked "demo.ipynb" to figure out how, and it indicates that a trained F0_Converter is needed.
So I would like to ask the author: is it necessary to first train an F0_Converter in order to generate speech from the trained Generator (I found no code for training the F0_Converter), or can we just use the pretrained F0_Converter?
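For reference, the general pattern for running inference from a trained checkpoint looks like the sketch below. This is not the project's actual API: the class, checkpoint path, tensor shapes, and embedding size are all hypothetical stand-ins, and the real Generator, its inputs, and the checkpoint keys are defined in the repo's own code and demo.ipynb.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained Generator; the real class lives in
# the repo's model code and takes content/pitch/rhythm inputs.
class Generator(nn.Module):
    def __init__(self, dim_mel=80):
        super().__init__()
        self.net = nn.Linear(dim_mel, dim_mel)  # placeholder layer

    def forward(self, mel, spk_emb):
        return self.net(mel)

G = Generator().eval()
# ckpt = torch.load("checkpoints/G.ckpt", map_location="cpu")
# G.load_state_dict(ckpt["model"])  # key name depends on how you saved it

mel_in = torch.randn(1, 192, 80)   # (batch, frames, mel bins) -- illustrative shape
spk_emb = torch.randn(1, 82)       # speaker embedding; size is an assumption
with torch.no_grad():
    mel_out = G(mel_in, spk_emb)   # converted mel-spectrogram

# mel_out would then be fed to the pretrained wavegen (WaveNet) vocoder,
# as in the repo's vocoder notebook, to obtain a waveform.
print(mel_out.shape)
```

The key points are `eval()` plus `torch.no_grad()` for inference, and that the Generator outputs a mel-spectrogram which still needs a vocoder pass to become audio.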