Attention in Motion Module of UNetMotionModel #7968
Comments
cc @DN6
Hi @huyduong7101. Cross-attention support does exist in the original AnimateDiff codebase; however, it isn't actually used in the model here, because the block types are …
Closing this for now. If you have further questions, @huyduong7101, please feel free to create a post in the Discussions section.
Describe the bug
As far as I know, UNetMotionModel is the UNet adopted for AnimateDiff. Looking into the original AnimateDiff implementation, I noticed that it uses cross-attention in its Motion Module (https://github.com/guoyww/AnimateDiff/blob/main/animatediff/models/unet_blocks.py#L382). In contrast, Diffusers uses self-attention (https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_blocks.py#L1175).
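To make the distinction concrete, here is a minimal NumPy sketch (not the actual diffusers or AnimateDiff code, and with the learned Q/K/V projections omitted) of how the same attention routine behaves as self-attention when no external context is supplied and as cross-attention when one is. This mirrors the common pattern where keys and values default to the input sequence itself unless `encoder_hidden_states`-style context is passed:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, context=None):
    """Scaled dot-product attention with identity projections.

    When `context` is None, keys and values come from `x` itself
    (self-attention); otherwise they come from `context`
    (cross-attention).
    """
    context = x if context is None else context
    q, k, v = x, context, context  # learned projections omitted for brevity
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# Hypothetical shapes: 8 frame tokens of dim 16, 4 context tokens of dim 16.
frames = np.random.default_rng(0).normal(size=(8, 16))
text = np.random.default_rng(1).normal(size=(4, 16))

self_out = attention(frames)         # self-attention: mixes only across frames
cross_out = attention(frames, text)  # cross-attention: conditions on context
```

In both cases the output has one row per query token; the difference is only where the keys and values come from, which is the gap between the two linked implementations.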
Reproduction
Logs
No response
System Info
Diffusers 0.27.2
Who can help?
@DN6 @yiyixuxu @sayakpaul