This repository contains implementations of two adversarial example attack methods, FGSM and I-FGSM, and one input-transformation defense mechanism evaluated against both attacks on the ImageNet dataset.
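A minimal sketch of what such an input-transformation defense can look like, assuming a pretrained PyTorch classifier `model`; the Gaussian blur and its parameters are illustrative assumptions, not the repository's actual implementation.

```python
import torchvision.transforms as T

def defended_predict(model, x):
    # Transform the (possibly adversarial) input before classification so
    # that the crafted perturbation is partially destroyed. The blur kernel
    # and sigma here are assumed, illustrative values.
    blur = T.GaussianBlur(kernel_size=5, sigma=1.0)
    return model(blur(x)).argmax(dim=1)
```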
Experimental adversarial attack notebooks on CV models
Adversarial attacks on SRNet
Adversarially-robust Image Classifier
Fast Gradient Sign Method Adversarial Attack on Digit Recognition Model
A notebook implementing different adversarial attack approaches using Python and PyTorch.
Label Smoothing and Adversarial Robustness
Adversarial attack on a malware detector
FGSM (Fast Gradient Sign Method)
In this work, we extend FGSM by proposing multistep adversarial perturbation (MSAP) procedures to study recommenders' robustness against more powerful attacks. Holding the perturbation magnitude fixed, we show that MSAP is far more harmful than FGSM in corrupting the recommendation performance of BPR-MF.
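As a rough illustration of taking several smaller steps under a fixed total perturbation budget, here is a hedged PyTorch sketch contrasting single-step FGSM with a generic multistep loop. The cross-entropy loss, the step-size schedule `alpha = eps / steps`, and the classifier setting are assumptions for illustration; the paper's MSAP procedures operate on BPR-MF recommender embeddings rather than a generic classifier.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single-step FGSM: x_adv = x + eps * sign(grad_x L(f(x), y)).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

def msap(model, x, y, eps, steps):
    # Multistep variant: several smaller sign steps, projected back into
    # the fixed eps-ball so the total perturbation magnitude matches FGSM's.
    alpha = eps / steps  # assumed step-size schedule
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project
    return x_adv
```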
Machine Learning (2019 Spring)
This project tests multiple machine learning algorithms for detecting adversarial attacks in multi-agent reinforcement learning settings. Baselines were used to compare the performance of a proposed ensemble model. Then, using FGSM, we re-attacked the ensemble detection model with perturbed observations. Read more in the PDF titled Final…
Simple examples in MXNet
Homework of Security and Privacy of Machine Learning (SPML Lectured by Shang-Tse Chen at NTU)