Label Smoothing and Adversarial Robustness
Updated Nov 4, 2020 · Jupyter Notebook
ECE C147: Neural Networks & Deep Learning. Repository for "Developing Robust Networks to Defend Against Adversarial Examples". Implementing adversarial data augmentation on CNNs and RNNs.
This repository contains implementations of two adversarial example attack methods (FGSM and I-FGSM) and one input-transformation defense mechanism against both attacks, evaluated on the ImageNet dataset.
FGSM (Fast Gradient Sign Method)
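The FGSM update named above, and its iterative variant (I-FGSM) mentioned elsewhere on this page, can be sketched on a toy differentiable model. This is a minimal illustration, not code from any of the repositories listed here: the logistic-regression model, weights, and `eps`/`alpha` values are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    # Binary cross-entropy for a single example.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_grad(x, y, w, b):
    # Gradient of the BCE loss w.r.t. the input x for a toy
    # logistic model p = sigmoid(w.x + b): dL/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    # One-step FGSM: move eps along the sign of the input gradient.
    return x + eps * np.sign(input_grad(x, y, w, b))

def ifgsm(x, y, w, b, eps, alpha, steps):
    # Iterative FGSM: repeated small steps of size alpha, each
    # result clipped back into the L-infinity eps-ball around x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy data (illustrative values, not from any listed repository).
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0
eps = 0.1

x_fgsm = fgsm(x, y, w, b, eps)
x_ifgsm = ifgsm(x, y, w, b, eps, alpha=0.02, steps=10)

loss_clean = bce_loss(sigmoid(w @ x + b), y)
loss_fgsm = bce_loss(sigmoid(w @ x_fgsm + b), y)
```

In a deep-learning framework the input gradient would come from automatic differentiation rather than the closed form used here; the sign step and the eps-ball clipping are the same.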
Experimental Adversarial Attack notebooks on CV models
Using adversarial attacks to confuse deep-chicken-terminator 🛡️ 🐔
Adversarially-robust Image Classifier
A Comprehensive Study on Cloud-Based Model Interpretability, Accountability, and Privacy in Machine Learning with Resilience to Adversarial Attacks
Adversarial attacks on SRNet
Implementation of FGSM (Fast Gradient Sign Method) attack on fine-tuned MobileNet architecture trained for flood detection in images.
Fast Gradient Sign Method Adversarial Attack on Digit Recognition Model
A PyTorch FGSM attack module for semantic segmentation networks, with examples provided for DeepLab v3.
Machine Learning (2019 Spring)
Simple examples in MXNet
A notebook implementing different adversarial attack approaches in Python and PyTorch.