This repository is a container for the material of the Summer School, held at Fondazione MAST, on the topics of Artificial Intelligence and Machine Learning.
Updated
Jun 24, 2022 - Jupyter Notebook
The aim of the project is to apply various global, local, and performance interpretability methods, as well as model fairness evaluations, to a dataset with protected attributes. The dataset concerns traffic violations in Montgomery, Maryland, USA. This is a fork of a group project from my Data Science for Business Master's Degree at HEC Paris.
Explainable and interpretable methods for AI and data science
Web app implementation of my thesis on XAI and interpretability of Transformer models.
Comparison of sentiment analysis conducted with a lexicon and rule-based dictionary and state-of-the-art pre-trained language models
Optimizing Mind static website v1
Final Year Project Try-Out Codes
Understanding Morphosyntactic Representations in Pretrained Language Models.
Code associated with the InterpretE research paper
JAX-based Model Explanation and Interpretation Library
ICCV2021 paper: Interpretable Image Recognition by Constructing Transparent Embedding Space (TesNet)
Personal collection of resources to get started on Interpretability in AI (... still being updated ...)
Conway's Game of Life is sequential; here, its high-dimensional states are projected into two-dimensional space and connected, and metadata is added to create interactive 2D visualizations.
A Python project for prototype-based soft feature selection
B.Tech Project
Neural Additive Models - Visualization Tool in PyTorch/Plotly-Dash
IN PROGRESS - based on the paper "Shapley-Lorenz decompositions in eXplainable Artificial Intelligence" by Giudici and Raffinetti (2020)
Prototype-based ML implementation of multiple reject thresholds for improving classification reliability
Master's thesis on determining classification label security/certainty