Optimization-based deep learning models can give explainability with output guarantees and certificates of trustworthiness.

XAI-L2O: Explainable AI via Learning to Optimize

Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the "learn to optimize" (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets.
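As a minimal sketch of the L2O idea (not the authors' implementation; the models and certificates in the paper are more general), the snippet below performs dictionary-based sparse recovery by unrolling ISTA iterations, where an inference is literally the solution of an optimization problem and a fixed-point residual acts as an interpretable certificate: a small residual certifies the output numerically solves the problem, while a large one flags an untrustworthy inference. The function `l2o_infer` and the fixed `step`/`lam` values are hypothetical; in a trained L2O model these would be learned parameters.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm; promotes sparse codes."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l2o_infer(A, b, step, lam, n_iter=200):
    """Hypothetical L2O inference via unrolled ISTA for
        min_x 0.5*||A x - b||^2 + lam*||x||_1.
    `step` and `lam` stand in for parameters a trained L2O model would learn.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    # Certificate of trustworthiness: the fixed-point residual.  If x truly
    # solves the optimization problem, one more ISTA step leaves it unchanged,
    # so a small residual certifies the inference; a large one flags it.
    x_next = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    certificate = np.linalg.norm(x - x_next)
    return x, certificate

# Toy sparse-recovery instance (illustrative data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))        # dictionary
x_true = np.zeros(50)
x_true[:3] = 1.0                          # sparse ground truth
b = A @ x_true                            # observed signal
step = 1.0 / np.linalg.norm(A, 2) ** 2    # standard ISTA step size
x_hat, cert = l2o_infer(A, b, step=step, lam=0.1)
print("certificate (fixed-point residual):", cert)
```

Because every inference is the output of a transparent optimization problem, prior knowledge (e.g. a constraint set) can be encoded directly by swapping the proximal step for a projection, which is what yields the guarantees mentioned above.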

Publication

Explainable AI via Learning to Optimize, published in Scientific Reports (2023); an arXiv preprint is also available.

@article{heaton2023explainable,
  title={{Explainable AI via Learning to Optimize}},
  author={Heaton, Howard and Fung, Samy Wu},
  journal={Scientific Reports},
  year={2023},
  url={https://doi.org/10.1038/s41598-023-36249-3},
}

See the documentation site for more details.
