
Is it possible for the smoothed classifier to completely abstain on test set? #8

Open
kirk86 opened this issue Jul 20, 2020 · 2 comments


@kirk86

kirk86 commented Jul 20, 2020

@jmcohen Hi, thanks for releasing the code.
If you don't mind me asking, I'm trying to understand whether it's possible for a smoothed classifier trained with randomised smoothing to completely abstain on the CIFAR-10 test set when it is corrupted with a PGD l-infinity attack.

I've trained a smoothed classifier with noise=0.56, and at test time I use PGD with epsilon=0.1 under the l-infinity norm to evaluate its robustness.

e.g. running one pass over the CIFAR-10 test set:

```
for batch in test_set_minibatches:
    # produce adversarial samples: PGD with l-infinity norm, epsilon=0.1
    adversarial_samples = pgd_attack(batch, eps=0.1)
    for x in adversarial_samples:
        # compute randomized smoothing labels
        predicted_label = smooth_classifier.predict(x, n=10, alpha=0.001, batch_size=128)
```

Am I missing something, or is it completely normal in this case for the smoothed classifier to abstain from prediction on the entire CIFAR-10 test set?

Thanks!

@jmcohen
Collaborator

jmcohen commented Jul 21, 2020 via email

@kirk86
Author

kirk86 commented Jul 21, 2020

@jmcohen Hi Jeremy, thanks for getting back to me, I appreciate it!

I believe the issue is that n=10 is too few samples for an alpha=0.001
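That checks out numerically: as I understand it, `predict` runs a two-sided binomial test on the counts of the top two classes and abstains whenever the p-value exceeds alpha, so with n=10 even a unanimous vote of all ten noisy samples cannot reach significance at alpha=0.001. A quick pure-Python sketch of that test (the repo itself uses scipy's binomial test; `abstains` here is just a hypothetical helper for illustration):

```python
from math import comb

def binom_pvalue(n_a, n_b):
    """Two-sided binomial test p-value for H0: p = 0.5 (exact, symmetric case)."""
    n = n_a + n_b
    k = max(n_a, n_b)
    tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

def abstains(n_a, n_b, alpha=0.001):
    # predict abstains when the test on the top-two class counts
    # is not significant at level alpha
    return binom_pvalue(n_a, n_b) > alpha

print(abstains(10, 0))  # True: p = 2 * 0.5**10 ~= 0.00195 > 0.001, always abstain
print(abstains(55, 0))  # False: a unanimous vote of 55 samples is significant
```

So at n=10 and alpha=0.001 the classifier abstains on every single input, no matter what it sees, which explains abstention on the whole test set.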

I eventually figured out through trial and error that the combination of n and noise_std seems to be the key to success.
I used n=55, which seems to provide better results than alternative methods.
The only downside is that prediction time increases dramatically with n.
I'm pleasantly surprised, though, by how well it performs compared to other existing alternatives.
