Decoding AI: Enhancing Transparency with Interpretability and Explainability

Explore the importance of transparency in AI decision-making with tools to enhance accountability and interpret model outputs. 
Format: Online course
Experts: Ulrich Aïvodji & Maryam Babaei
Live session date: June 4, 2025
Live session time: 12:30–3:00 PM
Individual preparatory work: 1 hour
Price: $225 + tax

About the module

Understanding how AI systems make decisions is critical for accountability and mitigating negative impacts in a complex technological landscape. This module examines algorithmic transparency, introducing methods to interpret model outputs. Participants will explore both interpretable model development techniques and post-hoc explanation methods to clarify black-box models, making AI systems more transparent and comprehensible to stakeholders. Additionally, they will learn about emerging topics in AI transparency and the limitations of current approaches.
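To give a flavor of the first approach, here is a minimal, illustrative scikit-learn sketch (not taken from the course materials): it trains a shallow decision tree, a model whose decision logic is readable by design, and prints its learned rules.

    # Interpretability by design: a shallow decision tree with human-readable rules.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
    # export_text renders the learned if/then splits as plain text.
    print(export_text(tree, feature_names=list(data.feature_names)))

Capping the depth is the design choice that keeps the model interpretable: every prediction can be traced back to at most three feature thresholds.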

Learning Outcomes

Recognize the importance of transparency in high-stakes AI-based decision-making processes.
Understand different approaches to promoting transparency in AI and their limitations.
Apply interpretability-by-design techniques and post-hoc explanation methods to concrete prediction tasks (a minimal sketch follows this list).
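For the post-hoc side, the sketch below (again illustrative, using scikit-learn's permutation importance rather than any specific method from the course) explains a black-box random forest by measuring how much shuffling each feature degrades its held-out score.

    # Post-hoc explanation: permutation importance on a black-box model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature column and record the drop in test score (mean over repeats).
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: "
              f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")

Unlike the decision tree above, this explanation is computed after training and treats the model as a black box, which is precisely the trade-off the module examines.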

Who is this module for?

All AI professionals, including executive leaders, data scientists, ML/AI engineers, AI developers, AI product managers, AI consultants, and investors.

Tailored for participants with a foundational understanding of AI concepts and basic knowledge of probability, linear algebra, and machine/deep learning. Familiarity with machine/deep learning frameworks (e.g., PyTorch, Scikit-Learn) will be helpful for practicing with integrated coding examples.

Ulrich Aïvodji

Ulrich Aïvodji is an Assistant Professor of Computer Science at ETS Montreal in the Software and Information Technology Engineering Department. His research interests span machine learning, optimization, data privacy, and computer security. His current work focuses on several aspects of trustworthy machine learning, such as fairness, privacy-preserving machine learning, and explainability.

Maryam Babaei

Maryam Babaei is a PhD candidate at the École de Technologie Supérieure (ÉTS) in Montréal, conducting her research at the TISL Lab under the supervision of Dr. Ulrich Aïvodji (ÉTS) and Dr. Sébastien Gambs (UQAM). She is also affiliated with Mila – Quebec AI Institute and the NSERC CREATE program for the responsible development of AI. Her research focuses on the privacy and security risks associated with post-hoc explanations in machine learning models, aiming to identify vulnerabilities and develop mitigation strategies to enhance the trustworthiness of AI systems.