📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: Black-box attacks on ML models + with use of open-source tools


[ Have a look at the presentation slides: slides-OFFZONE.pdf / slides-ODS.pdf ]
[ Related demonstration (Jupyter notebook): demo.ipynb ]

Overview | Attacks | Tools | More on the topic


An overview of black-box attacks on AI and tools that might be useful during security testing of machine learning models.

📦 Overview

demo.ipynb:
A demonstration of how multifunctional tools can be used during security testing of the machine learning models digits_blackbox & digits_keras, which are trained on the MNIST dataset and provided in Counterfit as example targets.
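
Below is a minimal, illustrative sketch (not taken from demo.ipynb) of the same black-box workflow with ART: the target model is reached only through a query function, much like Counterfit's digits_blackbox example target. The query_target() helper, the input shape and the attack parameters are assumptions made for this example.

```python
import numpy as np
from art.estimators.classification import BlackBoxClassifier
from art.attacks.evasion import HopSkipJump

NB_CLASSES = 10            # MNIST digits 0-9
INPUT_SHAPE = (28, 28, 1)  # assumed input layout of the target model


def query_target(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: send samples to the opaque model and return
    its predictions as an array of shape (n_samples, NB_CLASSES)."""
    raise NotImplementedError("replace with real queries to the target model")


# Wrap the opaque predict function so that ART attacks can drive it.
classifier = BlackBoxClassifier(
    query_target,
    input_shape=INPUT_SHAPE,
    nb_classes=NB_CLASSES,
    clip_values=(0.0, 1.0),
)

# HopSkipJump is a decision-based attack: it only needs the predicted labels,
# which is what makes it usable in the black-box setting.
attack = HopSkipJump(classifier, targeted=False, max_iter=20, max_eval=1000)

# Placeholder input; in practice this would be a real MNIST test image.
x_test = np.zeros((1, *INPUT_SHAPE), dtype=np.float32)
x_adv = attack.generate(x=x_test)
print("L2 perturbation:", float(np.linalg.norm(x_adv - x_test)))
```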

Slides:
 – Machine Learning in products
 – Threats to Machine Learning models
 – Example model overview
 – Evasion attacks
 – Model inversion attacks
 – Model extraction attacks (see the sketch after this list)
 – Defences
 – Adversarial Robustness Toolbox
 – Counterfit
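
The model extraction item above can be made concrete with a minimal hand-rolled sketch (this is not ART's or Counterfit's implementation): query the target model on data you control and fit a local surrogate on its answers. The query_target() function is a hypothetical stand-in for the real model endpoint; the query set and surrogate architecture are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


def query_target(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: return the target's predicted digit (0-9)
    for each flattened 28x28 image in x."""
    raise NotImplementedError("replace with real queries to the target model")


# 1. Build a query set (random noise here; real attacks use natural or
#    synthetic data close to the target's input distribution).
x_query = np.random.rand(5000, 28 * 28).astype(np.float32)

# 2. Label it with the target model's own predictions.
y_stolen = query_target(x_query)

# 3. Fit a local surrogate that mimics the target's decision boundary.
surrogate = MLPClassifier(hidden_layer_sizes=(128,), max_iter=50)
surrogate.fit(x_query, y_stolen)

# The surrogate can now be inspected or attacked offline (e.g. to craft
# transferable adversarial examples) without further queries to the target.
```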

⚔️ Attacks

🔧 Tools

 – [ Trusted AI, IBM ] Adversarial Robustness Toolbox (ART): :octocat: Trusted-AI/adversarial-robustness-toolbox
 – [ Microsoft Azure ] Counterfit: :octocat: Azure/counterfit

📑 More on the topic