
First Steps to AI Certification: Explainability and Verification

Overview

This second webinar in the series follows on from the creation of a dynamic system that contains an AI component, such as a reduced-order model, a machine learning model, or a trained deep network. We walk through an introduction to explainable AI, showing techniques to interrogate a model's results and establish boundaries on its performance. We also introduce a pathway to certified AI systems that must meet regulatory approval for safety-critical applications.

Highlights

  • See how to test an AI model’s workings
  • Explore results from formal AI adversarial testing
  • Learn about pathways towards safety-certified AI systems

About the Presenters

Dr Peter Brady is a Principal Application Engineer with a background in numerical simulation, big data analysis, and high-performance computing. Prior to joining MathWorks he worked at several civil and defence contractors undertaking detailed computational fluid dynamics investigations. At MathWorks Australia, Peter supports the areas of maths and statistics and machine and deep learning, and provides an Australian-based contact through which MathWorks' autonomous customers can access global resources. He holds a bachelor's degree in civil engineering and a PhD in mechanical engineering.

Shine is a Senior Application Engineer at MathWorks with a background in machine learning and the Theory of Constraints (TOC). Over the past seven years, Shine has worked as a Data Analyst at gold mining companies, contributing to a broad range of data-driven initiatives in both operational and technical domains. Shine holds an MPhil in Data Science, an MSc in Electrical and Computer Science, and a BSc in Biomedical Engineering.

This event is part of a series covering related topics. View the full list of events in this series.
