
What loss is suitable for one anchor, multiple positives, and multiple negatives? #693

Closed
ImmortalSdm opened this issue Apr 4, 2024 · 1 comment

@ImmortalSdm
No description provided.

@KevinMusgrave (Owner) commented Apr 29, 2024

Apologies for the late reply.

You can use the ref_emb argument to separate the anchors from the positives and negatives.

For example, using ContrastiveLoss:

```python
import torch
from pytorch_metric_learning.losses import ContrastiveLoss

loss_fn = ContrastiveLoss()

# anchors has shape NxD, anchor_labels has shape N
# (the tensors below are illustrative placeholder data)
anchors = torch.randn(4, 128)
anchor_labels = torch.tensor([0, 0, 1, 1])

# ref_emb has shape MxD, ref_labels has shape M
ref_emb = torch.randn(6, 128)
ref_labels = torch.tensor([0, 0, 1, 1, 2, 2])

loss = loss_fn(anchors, anchor_labels, ref_emb=ref_emb, ref_labels=ref_labels)
```

Positive pairs will be formed by embeddings in anchors and ref_emb that have the same label.
Negative pairs will be formed by embeddings in anchors and ref_emb that have different labels.

You can have multiple positive pairs and negative pairs for any of the embeddings in anchors. In the extreme case, you could have a single embedding in anchors (shape 1xD), and many positive and negative embeddings in ref_emb.
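To make the pairing rule concrete, here is a small pure-Python sketch (the label values are made up for illustration) of which (anchor, ref) index pairs get treated as positive or negative under the same-label/different-label rule, using the extreme case of a single anchor:

```python
# Single anchor (the extreme 1xD case) paired against several refs.
anchor_labels = [0]        # labels of the anchor embeddings
ref_labels = [0, 0, 1, 2]  # labels of the ref_emb embeddings

# Positive pairs share a label; negative pairs have different labels.
pos_pairs = [(i, j) for i, a in enumerate(anchor_labels)
                    for j, r in enumerate(ref_labels) if a == r]
neg_pairs = [(i, j) for i, a in enumerate(anchor_labels)
                    for j, r in enumerate(ref_labels) if a != r]

print(pos_pairs)  # [(0, 0), (0, 1)] -> two positives for the one anchor
print(neg_pairs)  # [(0, 2), (0, 3)] -> two negatives for the one anchor
```

The actual loss computation over these pairs is up to whichever loss you choose; this only shows how the label comparison determines the pair sets.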
