Welcome to try our open-source reproduction of AnimateAnyone~ #58
Comments
Thank you for releasing this and making it open source; we appreciate the community's efforts. Have you seen this other open-source project that also aims to replicate the paper's results? I hope you two can collaborate to make faster progress. Check out the repo if you haven't already: https://github.com/guoqincode/Open-AnimateAnyone, and you can reach out to @guoqincode. Also, when can we expect the training code to be released?
As mentioned in the README of https://github.com/MooreThreads/Moore-AnimateAnyone.
Thank you @yhyu13; looking forward to the training code, as it can help the community immensely.
Looks like the training code has dropped: MooreThreads/Moore-AnimateAnyone@d31bf2a
Wow, these people are the best!! More power to you guys, @lixunsong!! Thank you for truly open-sourcing this knowledge.
By the way, a different paper on animating people, called CHAMP, was just released, and it credits MooreThreads/Moore-AnimateAnyone as the base it was built on. So this project has even helped researchers! That code is here: https://github.com/fudan-generative-vision/champ
Hello everyone, we have been working on reproducing this work and are happy to release our code and pretrained weights now. Our reproduction approximates the performance demonstrated in the original paper; for example:
(comparison video: compare-1-1.mp4)
The repo is available at https://github.com/MooreThreads/Moore-AnimateAnyone. We're hoping for your feedback and ideas!