IBMDeveloperUK/Trusted-AI-Workshop

Evaluating performance, fairness and robustness of models in production

Details

To trust a decision made by an algorithm, we need to know that it is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. We need to understand the rationale behind the algorithmic assessment, recommendation or outcome, and be able to interact with it, probe it – even ask questions. And we need assurance that the values and norms of our societies are also reflected in those outcomes.

In this workshop we will use Watson OpenScale, which is built with trusted AI open-source projects. Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable and compliant, wherever your models were built or are running.
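
As a taste of how this works programmatically, here is a minimal sketch of connecting to OpenScale with the `ibm-watson-openscale` Python SDK. The API key is a placeholder, and the final two calls are drawn from the SDK's usual notebook patterns (treat the exact names as assumptions): they confirm the connection and list the data marts that hold monitored deployments.

```python
# A minimal sketch (not the lab's exact code) of connecting to Watson
# OpenScale with the ibm-watson-openscale Python SDK.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

# Placeholder credentials: substitute a real IBM Cloud API key.
authenticator = IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY")
wos_client = APIClient(authenticator=authenticator)

print(wos_client.version)      # confirm the client authenticated
wos_client.data_marts.show()   # list data marts holding monitored deployments
```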

We will walk through the process of deploying a credit risk model and then monitoring the model to explore the different aspects of trusted AI.
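
To make the deployment step concrete, here is a hedged sketch of what it can look like with the `ibm-watson-machine-learning` Python client. The credentials, deployment space ID, framework type string and software spec name are illustrative placeholders, and a stand-in scikit-learn classifier takes the place of the lab's actual credit risk model.

```python
# A hedged sketch of the deploy step using the ibm-watson-machine-learning
# Python client. Credentials, space ID, framework type and software spec
# are placeholders, not the lab's exact values.
from ibm_watson_machine_learning import APIClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for the credit risk model trained earlier in the lab.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "YOUR_IBM_CLOUD_API_KEY",               # placeholder
})
client.set.default_space("YOUR_DEPLOYMENT_SPACE_ID")  # placeholder

# Store the model in the repository, then expose it as an online deployment.
stored = client.repository.store_model(
    model=model,
    meta_props={
        client.repository.ModelMetaNames.NAME: "credit-risk-model",
        client.repository.ModelMetaNames.TYPE: "scikit-learn_1.0",
        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:
            client.software_specifications.get_uid_by_name("runtime-22.1-py3.9"),
    },
)
deployment = client.deployments.create(
    client.repository.get_model_id(stored),
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "credit-risk-deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
```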

By the end of the lab, you will have:

  • Deployed a model from development to a runtime environment.
  • Monitored the operational performance of the model over time.
  • Tracked the model quality (accuracy metrics) over time.
  • Identified and explored the fairness of the model as it receives new data (see the sketch after this list).
  • Understood how the model arrived at its predictions.
  • Tracked the robustness of the model.
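
The fairness point boils down to comparing outcome rates across groups. As a framework-free illustration, the sketch below computes a disparate impact ratio, the kind of favourable-outcome-rate comparison OpenScale's fairness monitor reports; the group names, labels, data and four-fifths threshold are generic examples, not the workshop's dataset.

```python
# A framework-free illustration of the disparate impact ratio: the rate of
# favourable outcomes for a monitored group divided by the rate for a
# reference group. Groups, labels and data here are made up for illustration.
def disparate_impact(outcomes, groups, favourable="No Risk",
                     monitored="female", reference="male"):
    def favourable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(o == favourable for o in selected) / len(selected)
    return favourable_rate(monitored) / favourable_rate(reference)

predictions = ["No Risk", "Risk", "No Risk", "No Risk", "Risk", "No Risk"]
sex         = ["female", "female", "male", "male", "female", "male"]

# A ratio below ~0.8 (the "four-fifths rule") is a common warning sign of bias.
print(f"Disparate impact: {disparate_impact(predictions, sex):.2f}")  # 0.33
```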

Getting Started

  1. Sign up for an IBM Cloud account.
  2. Follow the instructions in this GitBook.

Slides

Find them here on SlideShare.
