Create backdoor-clean-label #2275

Open
OrsonTyphanel93 wants to merge 7 commits into main

Conversation


@OrsonTyphanel93 commented on Sep 7, 2023

Description

This code transforms the dirty-label audio backdoor attack into a truly robust clean-label attack.


Fixes # (issue)

Type of change

This class implements a clean-label backdoor attack, i.e., a poisoning attack in which the poisoned samples keep their original labels. The main contribution is:

A robust clean-label backdoor attack.


Test Configuration:

  • OS
  • Python version
  • ART version or commit number
  • TensorFlow / Keras / PyTorch / MXNet version

Checklist

  • My code follows the style guidelines of this project

  • This code defines a class "PoisoningAttackCleanLabelBackdoor" that performs a true, robust clean-label backdoor attack.

  • When the poison method is called, it applies the trigger function to the input data and returns the poisoned data with the same clean labels as the original data. An alpha blending factor keeps the attack imperceptible even when the audio trigger has a high volume (see the sketch after this list).
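A minimal sketch of the blending step described above, assuming NumPy audio batches. The names `trigger_fn` and `alpha` and the constructor signature are illustrative assumptions, not the actual API introduced by this PR:

```python
import numpy as np

class PoisoningAttackCleanLabelBackdoor:
    """Sketch: blend an audio trigger into clean samples while keeping labels."""

    def __init__(self, trigger_fn, alpha=0.1):
        self.trigger_fn = trigger_fn  # maps a clean audio batch to a triggered batch
        self.alpha = alpha            # blending factor; smaller means less audible

    def poison(self, x, y):
        # Apply the trigger, then blend it with the clean audio so the
        # perturbation stays imperceptible even if the trigger itself is loud.
        x_trigger = self.trigger_fn(x)
        x_poison = (1.0 - self.alpha) * x + self.alpha * x_trigger
        # Labels are returned unchanged: that is what makes the attack clean-label.
        return x_poison.astype(x.dtype), y
```

With `alpha` close to 0, the poisoned audio is nearly indistinguishable from the original, which is what lets the poisoned samples pass human inspection of the labeled training data.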

@beat-buesser self-assigned this Sep 7, 2023