Provided pooled_prompt_embeds is overwritten via prompt_embeds[0] #7365
Golden catch. Please PR it.
So I was unsure how it was pooled before, and we were going through the code trying to figure it out. It seems like

```python
pooled_prompt_embeds = prompt_embeds[0]
```

should really be

```python
pooled_prompt_embeds = list(filter(lambda x: x is not None, [
    getattr(prompt_embeds, 'pooler_output', None),
    getattr(prompt_embeds, 'text_embeds', None),
]))[0]
```

for clarity, since the two CLIP models output completely different classes and store their pooled outputs under different attributes. My original concern was that, in the case of one of the CLIP models, instead of using the pooled output we were actually selecting the first token with
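To make the point concrete, here is a minimal, self-contained sketch of the attribute-based selection above. The `SimpleNamespace` objects are hypothetical stand-ins for the two CLIP output classes (in `transformers`, `CLIPTextModel` returns an object carrying `pooler_output`, while `CLIPTextModelWithProjection` returns one carrying `text_embeds`); no real models are loaded here.

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the two different CLIP output classes.
output_with_pooling = SimpleNamespace(pooler_output="pooled", last_hidden_state="hidden")
output_with_projection = SimpleNamespace(text_embeds="projected", last_hidden_state="hidden")

def extract_pooled(prompt_embeds):
    """Pick whichever pooled attribute the output object actually carries."""
    candidates = [
        getattr(prompt_embeds, "pooler_output", None),
        getattr(prompt_embeds, "text_embeds", None),
    ]
    pooled = [c for c in candidates if c is not None]
    return pooled[0] if pooled else None

print(extract_pooled(output_with_pooling))     # -> pooled
print(extract_pooled(output_with_projection))  # -> projected
```

Either output class resolves to its own pooled attribute, regardless of which CLIP variant produced it.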
The order in which the tokenizers and text encoders are passed matters, so I think the implementation is correct. If a comment would help clarify this, please file a PR; more than happy to work on that with priority.
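The ordering argument can be illustrated with a small sketch. This is not the pipeline's actual code; the names (`fake_encode`, the encoder labels) are hypothetical. It only shows why, as the issue title says, `pooled_prompt_embeds` ends up holding `prompt_embeds[0]` from the last encoder in the loop, so encoder order determines which pooled output survives.

```python
# Order of encoders as hypothetically passed to the pipeline.
encoders = ["clip_vit_l", "clip_vit_bigg"]

def fake_encode(name):
    # Stand-in encoder: index 0 of the returned tuple plays the
    # role of prompt_embeds[0] in the real pipeline.
    return (f"pooled_from_{name}", f"hidden_from_{name}")

pooled_prompt_embeds = None
for name in encoders:
    prompt_embeds = fake_encode(name)
    pooled_prompt_embeds = prompt_embeds[0]  # overwritten on every iteration

print(pooled_prompt_embeds)  # -> pooled_from_clip_vit_bigg
```

Only the last encoder's pooled output is kept, which is why the pipeline relies on the caller passing the tokenizer/encoder pairs in the expected order.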
The reason it's there is because it helps with
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
not stale
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
still not stale :D
@sayakpaul I opened the pull request for this, but the code in question only runs when the prompt embeds are None. Do we want to mix and match user-provided pooled embeds with generated prompt embeds?
diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py, line 386 (commit 25caf24)
Simple fix:
Sorry this isn't a PR :P