Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks
Updated Jan 16, 2018 · Python
Binary Iterative Method for a Non-Adversarial Attack
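Iterative attack methods of this family repeatedly nudge the input in the sign of the loss gradient while clipping the total change to a small ball around the original input. A minimal NumPy sketch on a toy logistic model (the model, names, and parameters here are illustrative assumptions, not this repository's actual code):

```python
import numpy as np

def iterative_gradient_attack(x, w, b, y, eps=0.3, alpha=0.05, steps=10):
    """Iterative sign-gradient perturbation on a toy logistic model.

    Each step moves x by alpha in the direction that increases the loss,
    then clips the accumulated perturbation to the eps-ball around the
    original input.
    """
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # sigmoid output
        grad = (p - y) * w                           # d(logistic loss)/dx
        x_adv = x_adv + alpha * np.sign(grad)        # ascend the loss
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)   # stay in the eps-ball
    return x_adv
```

With `alpha * steps` larger than `eps`, the clip keeps the final perturbation within the allowed budget while the loss on the perturbed input still grows.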
Evaluation of various defence mechanisms and UAPs, done as part of GD-UAP.
[Master's Thesis] Research project at the Data Analytics Lab in collaboration with Daedalean AI. The thesis was submitted to both ETH Zürich and Imperial College London.
Code for my Master's thesis: Certification of Neural Network Reinforcement Learning Policies Based on Randomized Smoothing.
Code for our USENIX Security '22 paper: Transferring Adversarial Robustness Through Robust Representation Matching.
RSS feed for adversarial example papers.
A study of the impact of adversarial attacks on an RNN-based load forecasting model.
Analysis of a shared embedding space for the natural languages English and Tamil.
Using Gaussian Processes for Deep Neural Network Predictive Uncertainty Estimation
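The core of Gaussian-process uncertainty estimation is that the predictive variance grows away from the training data. A minimal NumPy GP regression sketch (RBF kernel, Cholesky solve; kernel choice and noise level are illustrative assumptions, not taken from this repository):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP regression: predictive mean and variance at test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                           # predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)   # predictive variance
    return mean, var
```

Querying a point far from all training inputs returns a variance near the prior's, which is exactly the signal such methods use to flag unreliable predictions.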
Physical adversarial attack for fooling the Faster R-CNN object detector
A stochastic input pre-processing technique based on down-sampling/up-sampling with convolution and transposed-convolution layers, defending convolutional neural networks against adversarial attacks.
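The down-/up-sample idea can be sketched in a few lines of NumPy. Here a randomly shifted average pooling stands in for the strided convolution and nearest-neighbour repetition stands in for the transposed convolution; these stand-ins and the 2x factor are assumptions for illustration, not the repository's learned layers:

```python
import numpy as np

def stochastic_resample(img, rng=None):
    """Stochastic 2x down-/up-sampling of a 2-D image (even dims assumed).

    A random offset shifts the 2x2 pooling grid before downsampling, so
    repeated calls give slightly different reconstructions and small
    pixel-level adversarial perturbations are partially averaged away.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    dy, dx = rng.integers(0, 2, size=2)            # random grid phase
    padded = np.pad(img, ((dy, 2), (dx, 2)), mode="edge")
    crop = padded[:h, :w]
    down = crop.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x pool
    up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)        # 2x upsample
    return up[:h, :w]
```

Because the transformation is randomized per call, an attacker cannot compute one gradient through a fixed preprocessing pipeline, which is the intuition behind stochastic input defences.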
A program that uses TensorFlow and Keras to recognize images, plus extra code that confuses the model with fake recreated images. Technologies and languages used: Jupyter, TensorFlow, Keras, and Python. A personal learning project.
Employing Adversarial Machine Learning and Computer Audition for Smartphone-Based Real-Time Arrhythmia Classification in Heart Sounds
Talk presented at the 3rd SeComp at UTFPR, Apucarana, Brazil. This repository contains all code, slides, and supplementary material.
Notes, tutorials, code snippets, and templates focused on Generative Adversarial Networks for machine learning.
A repository about Robust Deep Neural Networks with Uncertainty, Local Competition, and Error-Correcting Output Codes in TensorFlow.
Adversarial attacks on CNNs using gradients of the network
Network Intrusion Detection in an Adversarial setting
Improving a model's robustness to transfer attacks by regularizing the projection of input gradients.
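Input-gradient regularization generally means adding a penalty on the norm of the loss gradient with respect to the input, so small input changes cannot move the loss much. A toy sketch on a logistic model, where that gradient has the closed form `(p - y) * w` (the model and penalty form are illustrative assumptions, not this repository's projection-based variant):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_loss(w, b, x, y, lam=1.0):
    """Logistic loss plus a penalty on the input-gradient norm.

    For a logistic model, d(loss)/dx = (p - y) * w, so the penalty term
    lam * ||(p - y) * w||^2 discourages solutions whose loss is highly
    sensitive to small input perturbations.
    """
    p = sigmoid(x @ w + b)
    ce = -y * np.log(p) - (1 - y) * np.log(1 - p)   # cross-entropy
    input_grad = (p - y) * w                         # d(loss)/dx
    return ce + lam * np.sum(input_grad ** 2)
```

Training against this combined objective trades a little clean accuracy for a flatter loss surface around each input, which is what makes gradient-based (and transferred) attacks less effective.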