
Congratulations to this repo on reaching 12000+ ⭐ in 70 days with no code. #57

Open
ReEnMikki opened this issue Jan 8, 2024 · 19 comments


@ReEnMikki

With the amount of hype, interest and public attention around this animation consistency technology, which will obviously be revolutionary for the animation industry, I think it's fair to understand if the authors decide not to release the code publicly, and instead continue making daily revenue via their service on the Tongyi Qianwen app. It's a reasonable and logical decision.

Why would any sane person invest their time, effort and money into creating something truly revolutionary and powerful, only to give it away to the public for free? Even OpenAI, the company that is supposed to be non-profit, saw how game-breaking the capabilities of ChatGPT, GPT-4 and DALL-E 3 would be, so they decided to change their plans.

Altruism doesn't exist under capitalism if the technological breakthrough is too game-changing and has too much potential to bring profit. Any other company in their place would do the same thing, and there's nothing wrong with that. It's just how the system works.

@youngwoo-dev

I agree with you. What's regrettable, however, is that they shouldn't have said the following, or at least they should have provided an explanation for it:

> Thank you all for your incredible support and interest in our project. We've received lots of inquiries regarding a demo or the source code. We want to assure you that we are actively working on preparing the demo and code for public release. Although we cannot commit to a specific release date at this very moment, please be certain that the intention to provide access to both the demo and our source code is firm.
>
> Our goal is to not only share the code but also ensure that it is robust and user-friendly, transitioning it from an academic prototype to a more polished version that provides a seamless experience. We appreciate your patience as we take the necessary steps to clean, document, and test the code to meet these standards.
>
> Thank you for your understanding and continuous support.

@Collin-Budrick

@youngwoo-dev you are 💯 right. That's why I changed my position on this project midway through. That announcement was very clear.

@3qdr

3qdr commented Jan 9, 2024

It is available now with limited features. However, since it is made by Alibaba, a Chinese company, it is probably impossible for non-Chinese speakers to try, since everything is in Chinese, and the demo version doesn't have that much capability. If you still want to try it, the app is called 通义千问 (Tongyi Qianwen). If you want to try something similar but can't read Chinese, there are quite a few alternatives out there, but almost all of them are worse due to stuttering and similar artifacts.

references: https://v.douyin.com/iLNk7gsy/
https://www.ithome.com/0/743/370.htm

@ReEnMikki
Author

> It is available now with limited features. However, since it is made by Alibaba, a Chinese company, it is probably impossible for non-Chinese speakers to try, since everything is in Chinese, and the demo version doesn't have that much capability. If you still want to try it, the app is called 通义千问 (Tongyi Qianwen). If you want to try something similar but can't read Chinese, there are quite a few alternatives out there, but almost all of them are worse due to stuttering and similar artifacts.
>
> references: https://v.douyin.com/iLNk7gsy/ https://www.ithome.com/0/743/370.htm

Well, translation shouldn't be the biggest problem, but it requires logging in with a Chinese phone number, doesn't it?

@3qdr

3qdr commented Jan 10, 2024

> > It is available now with limited features. However, since it is made by Alibaba, a Chinese company, it is probably impossible for non-Chinese speakers to try, since everything is in Chinese, and the demo version doesn't have that much capability. If you still want to try it, the app is called 通义千问 (Tongyi Qianwen). If you want to try something similar but can't read Chinese, there are quite a few alternatives out there, but almost all of them are worse due to stuttering and similar artifacts.
> >
> > references: https://v.douyin.com/iLNk7gsy/ https://www.ithome.com/0/743/370.htm
>
> Well, translation shouldn't be the biggest problem, but it requires logging in with a Chinese phone number, doesn't it?

Um, yes; however, I doubt you can find a Chinese SMS receiver service that works.

@chrisbward

> @youngwoo-dev you are 💯 right. That's why I changed my position on this project midway through. That announcement was very clear.

I think you owe me an apology.

@yl22011

yl22011 commented Jan 10, 2024

I am not 100% sure, but I believe the version on Tongyi is DreaMoving (https://github.com/dreamoving/dreamoving-project), which has a Hugging Face demo available here: https://huggingface.co/spaces/jiayong/Dreamoving.

The reason is that if you go to https://www.modelscope.cn/studios/vigen/video_generation/summary, which is linked on their github.io page, and translate the page, you can see that it is provided by the Tongyi Laboratory-Open Vision-Chaiying Team.

@Collin-Budrick

> > @youngwoo-dev you are 💯 right. That's why I changed my position on this project midway through. That announcement was very clear.
>
> I think you owe me an apology.

@chrisbward, you were right and I was wrong. I had confidence in the beginning, especially after their announcement. That dev update in the Readme has jaded me and, as you predicted, leaves me with a bad impression of their company. While I am unsure when they will release it, I think they should make their repo and project website private until they have some code to pair with them.

@ShawnFumo

> Why would any sane person invest their time, effort and money into creating something truly revolutionary and powerful, only to give it away to the public for free?

Eh, the thing is that it isn't THAT revolutionary when you look at the previous projects it builds on and the other things going on in the industry. This isn't a criticism of the project, since it still looks like the best version of this particular kind of generative video, but it didn't come out of nowhere. And the paper is already out there.

It would be convenient for open-source people to have PyTorch code and a pre-trained model to mess with, but people have already tried reproducing it. And the other research labs whose models AnimateAnyone built on will themselves make new versions using the techniques described in this paper. Releasing the code speeds things up a little, but in the end it won't really matter.

And whether it is Alibaba's code or someone else's, it's going to combine with all the other crazy stuff going on. Check out this thing Meta dropped just the other day that does animated pose and lip generation from just an audio clip, and they already released all the code: https://github.com/facebookresearch/audio2photoreal/assets/17986358/5cba4079-275e-48b6-aecc-f84f3108c810

@chrisbward

> > > @youngwoo-dev you are 💯 right. That's why I changed my position on this project midway through. That announcement was very clear.
> >
> > I think you owe me an apology.
>
> @chrisbward, you were right and I was wrong. I had confidence in the beginning, especially after their announcement. That dev update in the Readme has jaded me and, as you predicted, leaves me with a bad impression of their company. While I am unsure when they will release it, I think they should make their repo and project website private until they have some code to pair with them.

Look, I didn't want to be right. I had hoped we would be enjoying some brilliant innovation and getting to contribute to such an awesome concept, probably like yourself.

But this is not an apology for the ad hominem attack of "impatient and shallow-minded".

@Collin-Budrick

> But this is not an apology for the ad hominem attack of "impatient and shallow-minded".

@chrisbward, by saying you're right, I'm implying you aren't shallow-minded in your original post. You definitely had foresight about these BS repositories that garner attention through empty promises and never deliver.

Your post came prior to the Readme announcement, and I felt it was impatient. I don't think that's an ad hominem.

@ReEnMikki
Author

> > > It is available now with limited features. However, since it is made by Alibaba, a Chinese company, it is probably impossible for non-Chinese speakers to try, since everything is in Chinese, and the demo version doesn't have that much capability. If you still want to try it, the app is called 通义千问 (Tongyi Qianwen). If you want to try something similar but can't read Chinese, there are quite a few alternatives out there, but almost all of them are worse due to stuttering and similar artifacts.
> > >
> > > references: https://v.douyin.com/iLNk7gsy/ https://www.ithome.com/0/743/370.htm
> >
> > Well, translation shouldn't be the biggest problem, but it requires logging in with a Chinese phone number, doesn't it?
>
> Um, yes; however, I doubt you can find a Chinese SMS receiver service that works.

Well, I tried tons of them and none worked :( So I gave up. If anyone knows a working one, please let me know.

@G-force78

Congrats on creating the biggest bait-and-switch. The good thing is that I'm sure someone somewhere knows how it works from the paper and is replicating it.

@lixunsong

There is a repo containing the code and pretrained weights of our reproduction of AnimateAnyone: https://github.com/MooreThreads/Moore-AnimateAnyone, which approximates the performance of the original paper. You can give it a try, and we are still developing it :)

@G-force78

> There is a repo containing the code and pretrained weights of our reproduction of AnimateAnyone: https://github.com/MooreThreads/Moore-AnimateAnyone, which approximates the performance of the original paper. You can give it a try, and we are still developing it :)

Are you the DreaMoving developers?

@lixunsong

> > There is a repo containing the code and pretrained weights of our reproduction of AnimateAnyone: https://github.com/MooreThreads/Moore-AnimateAnyone, which approximates the performance of the original paper. You can give it a try, and we are still developing it :)
>
> Are you the DreaMoving developers?

No, we are from MooreThreads, a different company.

@3qdr

3qdr commented Jan 13, 2024

https://github.com/guoqincode/Open-AnimateAnyone

As said, this is built on other models, which makes it different: those usually cause flickering, and the physics logic isn't very good, so the results seem very unnatural.

