
Paper does not report attack success rate for targeted adversarial examples #11

Open
carlini opened this issue Feb 26, 2019 · 2 comments


carlini commented Feb 26, 2019

When measuring how well targeted attacks work, the metric should be the targeted attack success rate. However, Table V measures the model misclassification rate, which is not the right way to measure it.
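
To make the distinction concrete, here is a minimal sketch of the two metrics. The array names (`preds` for model predictions on adversarial examples, `labels` for ground-truth labels, `targets` for the attacker-chosen target labels) are hypothetical, not from the paper's code:

```python
import numpy as np

def misclassification_rate(preds, labels):
    # Fraction of adversarial examples the model gets wrong (any wrong label).
    return np.mean(preds != labels)

def targeted_success_rate(preds, targets):
    # Fraction of adversarial examples classified as the attacker's chosen target.
    return np.mean(preds == targets)
```

An untargeted attack can score well on the first metric while hitting its target label rarely, which is why the second metric is the right one for targeted attacks.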

It's also unclear why PGD and BIM are listed as untargeted attacks and not as targeted attacks, when these attacks work both ways (i.e., CW2 works both ways too and could just as easily be classified as an untargeted attack).
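
To illustrate why these attacks work both ways, here is a hedged sketch of a single BIM/PGD step: the only difference between the untargeted and targeted variants is which label the loss is computed on and the sign of the gradient step. The names (`model`, `x_adv`, `x_orig`, `y`, `eps`, `alpha`) are placeholders, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def pgd_step(model, x_adv, y, x_orig, eps, alpha, targeted=False):
    x_adv = x_adv.clone().detach().requires_grad_(True)
    # y is the true label if untargeted, the target label if targeted.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    step = alpha * x_adv.grad.sign()
    # Untargeted: ascend the loss on the true label.
    # Targeted: descend the loss on the target label (sign flip).
    x_adv = x_adv - step if targeted else x_adv + step
    # Project back into the eps-ball around the original input.
    return (x_orig + (x_adv - x_orig).clamp(-eps, eps)).clamp(0, 1).detach()
```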

ryderling (Owner) commented
> When measuring how well targeted attacks work, the metric should be the targeted attack success rate. However, Table V measures the model misclassification rate, which is not the right way to measure it.
>
> It's also unclear why PGD and BIM are listed as untargeted attacks and not as targeted attacks, when these attacks work both ways (i.e., CW2 works both ways too and could just as easily be classified as an untargeted attack).

We agree with you that, when measuring how well targeted attacks work, the metric should be the targeted attack success rate, and we do measure and analyze the targeted attack success rate of targeted attacks in Table III and Section IV.A of the paper.

However, Table V does not measure the success rate of attacks; it measures the classification accuracy of defense-enhanced models (the targeted success rate of an attack should be less than or equal to 100% minus the accuracy of the defense). Again, in non-adaptive scenarios, defenders of defense-enhanced models do not need to know which type an attack belongs to (targeted or untargeted). The defender's only goal is to classify the adversarial examples correctly, so in Table V we evaluate the classification accuracies of defense-enhanced models against successful adversarial examples.
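
A small sketch of the inequality mentioned above, with hypothetical arrays: on a fixed set of adversarial examples whose target labels differ from the true labels, an example counted as correctly classified cannot also be classified as the target, so the two rates cannot sum to more than 100%:

```python
import numpy as np

def check_bound(def_preds, labels, targets):
    assert np.all(targets != labels)        # targets differ from true labels
    acc = np.mean(def_preds == labels)      # accuracy of the defended model
    tsr = np.mean(def_preds == targets)     # targeted attack success rate
    # The two events are disjoint per example, so tsr <= 1 - acc.
    assert tsr <= 1.0 - acc + 1e-12
    return acc, tsr
```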

carlini commented Mar 16, 2019

It's great that you do measure this for the attacks against the undefended model. But from the adversary's perspective, I still care about how well targeted attacks work even against defended models.

For example, for LLC you report an average model accuracy of 39.4%, whereas ILLC has an average model accuracy of 50.9%. It may nevertheless be the case that ILLC is better at generating targeted adversarial examples on defended models; the current data can't show this either way.
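
A sketch of the measurement being asked for: the fraction of (already successful) targeted adversarial examples that still hit their target label after a defense is applied. The names `defended_model`, `x_adv`, and `targets` are hypothetical; this is not code from the DEEPSEC repository:

```python
import torch

@torch.no_grad()
def targeted_success_on_defense(defended_model, x_adv, targets):
    # Fraction of adversarial inputs the defended model assigns the target label.
    preds = defended_model(x_adv).argmax(dim=1)
    return (preds == targets).float().mean().item()
```

Reporting this number alongside the defended-model accuracy would let readers compare LLC and ILLC as targeted attacks directly.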

Compared to all the other significant issues, this point is very minor. It's just something that I would have liked to see for evaluating attacks.
