
The bert details #24

Closed
mondorysix opened this issue May 15, 2024 · 2 comments

@mondorysix

Thank you for sharing your work. I am truly impressed by your project and have developed a keen interest in understanding it more deeply. If it's convenient for you, I have a few questions that I'd like to ask.
I noticed that you use BERT to extract prosodic features in your project. I've run some experiments of my own, but the BERT models I found on HuggingFace didn't yield results as good or as natural as yours. I've tried the WWM version and the large models, but neither seemed to work very well, and this has been a point of confusion for me. Did you train the BERT model yourself, or is it one of Google's released models? Is it the WWM version, and did you modify it in any way? Also, have you fine-tuned it on datasets other than Chinese Wikipedia? I would greatly appreciate your insights.

@Executedone
Owner

Try this model: https://github.com/ymcui/Chinese-BERT-wwm (model name: RoBERTa-wwm-ext, Chinese).
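
For anyone landing here later, here is a minimal sketch of loading the recommended checkpoint with HuggingFace Transformers and pulling per-token hidden states. The HuggingFace model ID `hfl/chinese-roberta-wwm-ext` and the use of last-layer hidden states as prosody features are assumptions; the thread only names the repository and the model variant.

```python
# Minimal sketch: load RoBERTa-wwm-ext and extract token-level features.
# Assumptions (not stated in this thread): the HuggingFace ID
# "hfl/chinese-roberta-wwm-ext" and using last-layer hidden states as
# the prosodic feature input.
import torch
from transformers import BertTokenizer, BertModel  # the ymcui repo recommends Bert* classes, not Roberta*

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
model.eval()

text = "今天天气真好。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Shape (1, seq_len, 768): token-level embeddings that could feed a prosody predictor.
token_features = outputs.last_hidden_state
print(token_features.shape)
```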

@mondorysix
Author

thanks
