A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
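Two of the most common group-fairness metrics such toolkits provide are statistical parity difference and disparate impact. The sketch below computes both from scratch on a toy dataset; it is an illustration of the metrics themselves, not code from any of the repositories listed here, and all function and variable names are hypothetical.

```python
# Illustrative sketch (not toolkit code): two group-fairness metrics
# computed on toy (group, predicted_label) records.

def selection_rate(records, group):
    """Fraction of positive predictions within one group."""
    labels = [y for g, y in records if g == group]
    return sum(labels) / len(labels)

def statistical_parity_difference(records, privileged, unprivileged):
    """P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged); 0 means parity."""
    return (selection_rate(records, unprivileged)
            - selection_rate(records, privileged))

def disparate_impact(records, privileged, unprivileged):
    """Ratio of selection rates; values well below 1 flag possible bias."""
    return (selection_rate(records, unprivileged)
            / selection_rate(records, privileged))

data = [("a", 1), ("a", 1), ("a", 0), ("a", 1),   # privileged: rate 0.75
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]   # unprivileged: rate 0.25

spd = statistical_parity_difference(data, "a", "b")  # 0.25 - 0.75 = -0.5
di = disparate_impact(data, "a", "b")                # 0.25 / 0.75 = 1/3
```

A value of `spd` near 0 (or `di` near 1) indicates the model selects both groups at similar rates.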
Bias-correction command-line tool for climate research, written in C++
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes the bias measurement and mitigation in Word Embeddings models. Please feel welcome to open an issue in case you have any questions or a pull request if you want to contribute to the project!
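The kind of measurement such frameworks standardize can be seen in the WEAT-style association score: how much closer a target word sits to one attribute set than another. The sketch below computes that score with plain cosine similarity on toy 2-d "embeddings"; it is a hypothetical illustration of the metric, not WEFE's actual API.

```python
# Hedged sketch of a WEAT-style association score on toy vectors.
# s(w, A, B) = mean cos(w, a in A) - mean cos(w, b in B)
import math

def cos(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def association(w, A, B, emb):
    """Positive if w leans toward attribute set A, negative toward B."""
    toward_a = sum(cos(emb[w], emb[a]) for a in A) / len(A)
    toward_b = sum(cos(emb[w], emb[b]) for b in B) / len(B)
    return toward_a - toward_b

# Toy embeddings chosen so "he" leans toward "career", "she" toward "family".
emb = {"career": (1.0, 0.1), "family": (0.1, 1.0),
       "he": (0.9, 0.2), "she": (0.2, 0.9)}

he_score = association("he", ["career"], ["family"], emb)    # > 0
she_score = association("she", ["career"], ["family"], emb)  # < 0
```

A real evaluation would use trained embeddings and larger word sets, and would compare the scores across many target/attribute queries.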
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Estimation and inference from generalized linear models using explicit and implicit methods for bias reduction
[ICML 2022] Channel Importance Matters in Few-shot Image Classification
Bias reduction in quasi likelihood estimation
This repository contains the firth bias reduction experiments on the few-shot distribution calibration method conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
A small, simple prototype designed to alert users to the bias of a news source.

Methods for M-estimation of statistical models
🔍 In recent years, advances in ML (machine learning) have increased automation of tasks across many domains. One resulting challenge was job recruitment systems that demonstrated bias against female applicants [4]. This repo investigates some of the techniques used to overcome such bias. 👨🏽🔧
This repository contains the code to replicate the numerical studies presented in the paper "A Flexible Bias Correction Method based on Inconsistent Estimators".
This repository contains the experiments conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
Sampling algorithms and machine learning models to reduce bias and predict credit risk.
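One of the simplest sampling approaches in this family is random oversampling: duplicating minority-class rows until classes are balanced before training. The sketch below is a minimal, hypothetical illustration of that idea on toy loan data, not code from the repository above.

```python
# Hedged sketch: random oversampling of minority classes so a downstream
# credit-risk model trains on balanced data. All names are illustrative.
import random
from collections import Counter

def oversample(rows, label_of, seed=0):
    """Duplicate minority-class rows until every class matches the largest."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    target = max(len(members) for members in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(members)
        # Sample extra rows (with replacement) to reach the target count.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy imbalanced data: 90 "good" loans, 10 defaults.
loans = [("good", i) for i in range(90)] + [("default", i) for i in range(10)]
balanced = oversample(loans, label_of=lambda row: row[0])
counts = Counter(row[0] for row in balanced)  # both classes now have 90 rows
```

Oversampling is only one option; undersampling the majority class or synthetic methods such as SMOTE trade off differently between information loss and overfitting to duplicated rows.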
NeurIPS 2019 Paper: RUBi : Reducing Unimodal Biases for Visual Question Answering
Unbiased toxicity detection from comments
Critical questions to help you gain useful information, clarify the context, figure out the pain points, and overcome biases.
Tensorflow implementation of Learning Not to Learn (CVPR 2019)
A method to preprocess the training data, producing an adjusted dataset that is independent of the group variable with minimum information loss.
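A well-known preprocessing step in this spirit is reweighing: assigning each record the weight P(group) * P(label) / P(group, label), so that under the weighted distribution the label is statistically independent of the group variable while every original record is kept. The sketch below is an illustration of that idea, not the repository's actual code.

```python
# Hedged sketch of reweighing-style preprocessing on (group, label) records.
# Weight = P(group) * P(label) / P(group, label) makes the weighted data
# independent of the group variable without dropping any rows.
from collections import Counter

def reweigh(records):
    """records: list of (group, label). Returns one weight per record."""
    n = len(records)
    group_counts = Counter(g for g, _ in records)
    label_counts = Counter(y for _, y in records)
    joint_counts = Counter(records)
    return [(group_counts[g] / n) * (label_counts[y] / n)
            / (joint_counts[(g, y)] / n)
            for g, y in records]

# Toy data: group "a" is mostly positive, group "b" mostly negative.
data = [("a", 1)] * 3 + [("a", 0)] + [("b", 1)] + [("b", 0)] * 3
weights = reweigh(data)

def weighted_positive_rate(group):
    """Weighted P(label=1 | group) under the computed weights."""
    num = sum(w for (g, y), w in zip(data, weights) if g == group and y == 1)
    den = sum(w for (g, y), w in zip(data, weights) if g == group)
    return num / den

# After reweighing, both groups have the same weighted positive rate (0.5),
# even though the raw rates were 0.75 and 0.25.
```

Downstream learners that accept per-sample weights can then train on the adjusted dataset directly.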
Location-adjusted Wald statistics