Congratulations to this repo on reaching 12000+ ⭐ in 70 days with no code. #57
I agree with you. The regrettable part is that they shouldn't have said this, or at the very least they should have provided an explanation for it.
@youngwoo-dev you are 💯 right. That's why I changed my position on this project midway through. That announcement was very clear.
It is available now with limited features. However, since it is made by Alibaba, a Chinese company, the demo is probably impossible for non-Chinese speakers to try: everything is in Chinese, and it doesn't have much capability in that demo version. If you still want to try it, the app is called 通义千问 (Tongyi Qianwen). If you want to try something similar but can't read Chinese, there are quite a lot of alternatives out there, but almost all of them are worse because of stutter and other artifacts. Reference: https://v.douyin.com/iLNk7gsy/
Well, translation shouldn't be the biggest problem, but it requires logging in with a Chinese phone number, doesn't it?
Um, yes. However, I doubt you can find a Chinese SMS-receiving service.
I think you owe me an apology. |
I am not 100% sure, but I believe the version on Tongyi is DreaMoving (https://github.com/dreamoving/dreamoving-project), which has a Hugging Face demo available here: https://huggingface.co/spaces/jiayong/Dreamoving. The reason I believe this is that if you go to https://www.modelscope.cn/studios/vigen/video_generation/summary, which is linked on their github.io page, and translate the page, you can see that it is provided by the Tongyi Laboratory-Open Vision-Chaiying Team.
@chrisbward, you were right and I was wrong. I had confidence in the beginning, especially after their announcement. That dev update in the Readme has jaded me and, as promised, leaves me with a bad impression of their company. While I am unsure when they will release it, I think they should make their repo and project website private until they have some code to pair with them.
Eh, the thing is that it isn't THAT revolutionary when you look at the previous projects it is building on and other things going on in the industry. This isn't a criticism of the project, since it still looks like the best version of this particular kind of generative video, but it didn't come out of nowhere. And the paper is already out there.

It would be convenient for open-source people to have PyTorch code and a pre-trained model to mess with, but people have already tried reproducing it. And the research labs that built the models AnimateAnyone itself builds on will make new versions using the techniques described in this paper. Releasing the code speeds things up a little, but in the end it won't really matter. Whether it's Alibaba's code or someone else's, it will combine with all the other crazy stuff going on.

For example, check out this thing Meta dropped just the other day that does animated pose and lip generation built just from an audio clip, with all the code already released: https://github.com/facebookresearch/audio2photoreal/assets/17986358/5cba4079-275e-48b6-aecc-f84f3108c810
Look, I didn't want to be right. I would have hoped we'd be enjoying some brilliant innovation and getting to contribute to such an awesome concept, probably like yourself. But this is not an apology for the ad hominem attack of "impatient and shallow-minded".
@chrisbward By saying you're right, I'm implying you aren't shallow-minded in your original post. You definitely had foresight about these BS repositories that garner attention through empty promises and never deliver. Your post came prior to the Readme announcement, and at the time I felt it was impatient. I don't think that's an ad hominem.
Well, I tried tons of them and none worked, so I gave up. If anyone knows one that works, please let me know.
Congrats on creating the biggest bait-and-switch. The good thing is I'm sure someone somewhere knows how it works from the paper and is replicating it.
There is a repo containing the code and pretrained weights of our reproduction of AnimateAnyone: https://github.com/MooreThreads/Moore-AnimateAnyone, which approximates the performance of the original paper. You can give it a try, and we are still developing it :)
Are you the DreaMoving developers?
No, we are from Moore Threads, a different company.
https://github.com/guoqincode/Open-AnimateAnyone
This, as said, is built on other models, which is different: they usually cause flickering, and the physics logic isn't very good, so it seems very unnatural.
Impressive job nonetheless. I wonder how much money it costs to train a model like this one.
With the amount of hype, interest, and public attention around this animation-consistency technology, which will obviously be revolutionary in the animation industry, I think it's understandable if the authors decide not to release the code publicly and instead continue earning daily revenue via their service in the Tongyi Qianwen app. It's a reasonable and logical decision.
Why would any sane person invest their time, effort, and money into creating something truly revolutionary and powerful, only to give it away to the public for free? Even OpenAI, a company that is supposed to be non-profit, saw how game-changing ChatGPT, GPT-4, and DALL-E 3 would be, and decided to change its plans.
Altruism doesn't exist under capitalism when a technological breakthrough is too game-changing and has too much potential for profit. Any other company in their place would do the same thing, and there's nothing wrong with that. It's just how the system works.