Question about Classifier-free Guidance #343
When embedding my text for conditioning, the trick for classifier-free guidance is to drop the embedding some of the time (usually 10% of the time).

My question is: what does "drop" mean? I have come across two variants: substituting a random tensor, or substituting a zero tensor.

GLIDE mentions in section 2.3 that "we sometimes replace text captions with an empty sequence" - would that be a third option, using the embedding of the empty string?

I haven't been able to find any explanation of this - does anyone know?

Comments

@varunponda that reads like a ChatGPT answer lol

@lucala I think it does not matter which method is used for zeroing out, as long as it is consistent between sampling and training. I don't have experiments on this myself, but it makes sense.
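For concreteness, here is a minimal PyTorch sketch of the three "drop" variants discussed above, together with the sampling-time guidance combination. All function and variable names are illustrative, not from any particular codebase; the key point from the comment is that the same null embedding must be used in both places.

```python
# Minimal sketch of classifier-free guidance conditioning dropout.
# All names here are illustrative, not from any particular codebase.
import torch

def drop_condition(text_emb, p_drop=0.1, null_emb=None):
    """During training, replace each sample's conditioning with a
    "null" embedding with probability p_drop (commonly 0.1).

    text_emb: (batch, seq, dim) text embeddings.
    null_emb: the unconditional embedding. The three variants from the
        question correspond to different choices here:
          - a zero tensor (the default below),
          - a random tensor drawn once and kept fixed,
          - the encoder's embedding of the empty string (GLIDE-style).
    """
    if null_emb is None:
        null_emb = torch.zeros_like(text_emb)
    # Per-sample Bernoulli mask: True means "drop" this sample's text.
    drop = torch.rand(text_emb.shape[0], device=text_emb.device) < p_drop
    return torch.where(drop[:, None, None], null_emb, text_emb)

def guided_eps(model, x_t, t, text_emb, null_emb, guidance_scale=3.0):
    """At sampling time, run the model twice and extrapolate away from
    the unconditional prediction. The null_emb here must be the same
    one the model saw during training dropout."""
    eps_cond = model(x_t, t, text_emb)
    eps_uncond = model(x_t, t, null_emb)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Note that `guidance_scale=1.0` reduces to the plain conditional prediction, which is one way to sanity-check the sampling code.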