
question about your Reproduced Results #10

Open
Bardielz opened this issue Dec 2, 2020 · 3 comments

Comments

Bardielz commented Dec 2, 2020

Hi, I have a question about how the four parts are set up:
Random part: is it just randomly sampling from the training set and training on that?
Reference: is this not the learned loss?
Ground truth loss: how is it different from the random part?
Learn loss: is this what the paper uses?

Mephisto405 (Owner) commented

Hi,
As you know, the LL4AL method uses the loss-learning module to select the next set of data points to be labeled at every cycle. In this sense:
Random: select random data points from the remaining unlabeled set, then label them.
Learn loss: this is the result of our reproduction.
Reference: these are the results reported in the paper, so our reproduced model should reach this accuracy.
Ground truth loss: this is our own experiment, so it does not appear in the original authors' paper. Instead of using the loss-learning module to predict the loss of the unlabeled data, it calculates the loss directly from the ground-truth labels of the unlabeled data. This is possible in CIFAR10 since every image actually has a label. This may be confusing, so note that (1) the CIFAR10 dataset in fact has labels, and the authors intentionally remove them, and (2) therefore the authors 'predict' the loss rather than 'calculate' it.
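To make the difference concrete, here is a minimal sketch of the "ground truth loss" query strategy described above: compute the real per-sample loss from the hidden labels and label the highest-loss samples. The function names and the toy data are illustrative assumptions, not code from this repository.

```python
import math

def cross_entropy(probs, label):
    # Per-sample cross-entropy loss from predicted class probabilities.
    return -math.log(probs[label])

def query_by_ground_truth_loss(unlabeled, model_probs, labels, k):
    # "Ground truth loss" strategy: since CIFAR10 labels exist but are
    # hidden, we can compute the true loss of each unlabeled sample and
    # pick the k samples with the highest loss to label next.
    losses = [cross_entropy(model_probs[i], labels[i]) for i in unlabeled]
    ranked = sorted(zip(unlabeled, losses), key=lambda t: t[1], reverse=True)
    return [i for i, _ in ranked[:k]]

# Toy example: 4 unlabeled samples, 2 classes.
probs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.5, 0.5]]
labels = [0, 1, 1, 0]
print(query_by_ground_truth_loss([0, 1, 2, 3], probs, labels, 2))  # [1, 3]
```

The "learn loss" strategy replaces `cross_entropy` over true labels with the loss-prediction module's output; the ranking-and-selection step is the same.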

Reasat commented Apr 19, 2021

Hi @Mephisto405
Isn't it weird that the query strategy based on ground-truth losses performs so poorly? Theoretically, this strategy should be as good as LL4AL or better. Do you have any intuition about why this is happening?

manza-ari commented

> Hi @Mephisto405
> Isn't it weird that the query strategy based on ground-truth losses performs so poorly? Theoretically, this strategy should be as good as LL4AL or better. Do you have any intuition about why this is happening?

I also have the same question: why does the ground-truth loss strategy perform poorly?
