

AI innovation is growing at a rapid pace. The way we design AI systems is poised to transform businesses and society. However, AI systems developed without careful consideration can have unintended harmful consequences. This workshop gives participants hands-on experience using Responsible AI tools to debug their machine learning models, improving model performance to be more fair, inclusive, safe and reliable, and transparent. After completing this workshop, participants will know the tools and best practices for applying responsible AI to produce AI solutions that are more trustworthy and less harmful to society.

Responsible AI is a framework that helps AI developers identify and mitigate risks and harms that could impact people, businesses, and society. It provides practices that help AI developers address compliance regulations and harms such as threats to human rights, risk of physical or psychological injury, or loss of life opportunities.

Defining What's Important

Practical tools and practices empower data scientists and AI developers to easily integrate responsible AI into their everyday development workflow and address AI risks.
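One concrete check that such fairness tooling automates is demographic parity: comparing the rate of positive predictions across demographic groups. A minimal sketch of the idea, written from scratch with NumPy (the function name and toy data below are illustrative, not part of any specific toolkit):

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the
    two groups of a binary sensitive attribute. 0 means the model
    predicts positives at the same rate for both groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_0 = y_pred[sensitive == 0].mean()
    rate_group_1 = y_pred[sensitive == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Toy binary predictions for eight people in two groups
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Here group 0 receives a positive prediction 75% of the time versus 25% for group 1, a gap a developer would want to investigate before deployment. Libraries such as Fairlearn and the Responsible AI Toolbox compute this and many related metrics out of the box.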

Operationalizing Responsible AI

Building society's trust in your AI systems
