
the value of `m` in "Batched Adversarial Attack" of your paper #16

Open
chenpan0103 opened this issue Oct 29, 2022 · 10 comments
Labels
question Further information is requested

Comments

@chenpan0103

Excuse me, what is the value of `m` in "Batched Adversarial Attack" in your paper? Could you give detailed guidance on reproducing the results of the table in your paper?

@cdluminate cdluminate added the question Further information is requested label Oct 29, 2022
@cdluminate
Owner

By default, `-m` is not specified; in that case the algorithm goes through the whole dataset. The step-by-step guidance is written in README.md -- please be specific about the part you don't understand. Also, this code base is connected with two papers, so please specify which paper you mean.

@cdluminate
Owner

As noted in the paper, the reported numbers come from going through the whole dataset. `-m` is mostly used for debugging and can simply be omitted.
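The semantics of `-m` described above can be sketched as follows. This is an illustrative sketch only, not the repository's actual code; the function and variable names are made up for the example.

```python
def attack_dataset(batches, m=None):
    """Run a (mock) batched attack over at most m batches.

    When m is None -- the default, matching the omitted `-m` flag -- the
    loop covers the whole dataset; a small m gives a quick debugging run.
    Returns the number of batches processed.
    """
    attacked = 0
    for i, batch in enumerate(batches):
        if m is not None and i >= m:
            break
        # ... the adversarial attack on `batch` would run here ...
        attacked += 1
    return attacked

# Full pass over a mock dataset of 10 batches:
print(attack_dataset(range(10)))       # 10
# Debugging run limited to 3 batches:
print(attack_dataset(range(10), m=3))  # 3
```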

@chenpan0103
Author

chenpan0103 commented Oct 30, 2022

OK, thanks a lot. I want to reproduce the results of "Enhancing Adversarial Robustness for Deep Metric Learning". However, my results are somewhat different from those in the paper. Is that normal?
(screenshot of results attached)

@chenpan0103
Author

I followed the steps in README.md, and I don't think any step was wrong. The results are from training on the CUB dataset.

@cdluminate
Owner

cdluminate commented Oct 30, 2022

This difference is normal -- what you see here is within the error bar. Due to different initialization and other factors (such as the number of GPUs in DDP mode), the performance differs slightly. If you want a higher ERS, try limiting the GPU count to 1 or 2 (if I remember correctly; if not, it's the reverse way -- more GPUs). There is a slight trade-off between R@1 and ERS when changing the GPU count, which is a common phenomenon in parallel training. Also try a few more initializations.

@chenpan0103
Author

chenpan0103 commented Oct 30, 2022

OK, I will try it. As for the R@1/R@2/mAP/NMI in your paper, are they the results at the end of training?
(screenshot of results attached)

@cdluminate
Owner

Yes, they are reported at the end of training, because in adversarial training these standard benign metrics may follow a U-shaped curve or even a monotonically descending curve. That's the part of adversarial training that sacrifices benign performance.

@chenpan0103
Author

Got it, thank you very much!

@chenpan0103
Author

chenpan0103 commented Oct 31, 2022

Excuse me, which metric is the mAP in "Enhancing Adversarial Robustness for Deep Metric Learning" -- plain mAP or mAP@R?

@cdluminate
Owner

It's simply the original mAP. If mAP@R were used, it would have been explicitly stated.
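For reference, the original mAP averages precision over all relevant items in the full ranking, whereas mAP@R truncates the ranking at R = the number of relevant items per query. A minimal sketch of per-query average precision in the original sense (not the repository's evaluation code):

```python
def average_precision(ranked_relevance):
    """Average precision for one query.

    ranked_relevance: list of 0/1 flags over the full retrieval ranking,
    ordered from nearest to farthest. Precision is accumulated at each
    rank where a relevant item appears, then averaged over all hits.
    """
    hits = 0
    precisions = []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Relevant items retrieved at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
print(average_precision([1, 0, 1, 0]))  # 0.8333...
```

Mean AP (mAP) is then just the mean of this quantity over all queries.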
