Embedded AI: Compression techniques and performance optimization using automated C/C++ code generation
| Start Time | End Time |
|---|---|
| 3 Feb 2026, 04:00 EST | 3 Feb 2026, 05:00 EST |
Overview
Deploying AI models on embedded devices is challenging due to limited memory, low computational power, and strict energy constraints. Large, accurate models often exceed the capabilities of embedded hardware, making compression and optimization essential for successful deployment.
This webinar explores how to efficiently bring AI models to embedded systems using model compression and automated C/C++ code generation. You’ll learn how to reduce model size, optimize performance, and deploy reliably across a wide range of hardware platforms. The session will showcase practical workflows using MATLAB and Simulink, including simulation, validation, and deployment strategies tailored for resource-constrained environments.
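To make the compression idea concrete, below is a minimal C sketch of datatype compression via symmetric per-tensor int8 quantization, one of the techniques the webinar covers. It is an illustrative example only: the function name and scaling scheme are assumptions for this sketch, not MathWorks-generated code or any MATLAB Coder API.

```c
#include <math.h>
#include <stdint.h>

/* Symmetric int8 quantization (illustrative sketch, not MathWorks code):
 * map float weights in [-max|w|, +max|w|] to int8 in [-127, 127] using
 * a single per-tensor scale factor. Storage drops 4x vs. float32. */
static float quantize_int8(const float *w, int8_t *q, int n)
{
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > max_abs) max_abs = a;
    }
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (int i = 0; i < n; i++) {
        q[i] = (int8_t)lrintf(w[i] / scale);  /* round to nearest */
    }
    return scale;  /* keep for dequantization: w ≈ q[i] * scale */
}
```

The returned scale factor is stored alongside the int8 weights so inference can recover approximate float values; the accuracy impact of this approximation is exactly what the simulation and validation workflows in the session are meant to quantify.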
Highlights
- Deploy deep learning models to embedded targets using automated code generation
- Reduce model size with structural and datatype compression techniques
- Optimize performance with vectorization and multi-threading (see the sketch after this list)
- Simulate and validate AI models using MATLAB and Simulink workflows
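As a language-neutral illustration of the vectorization and multi-threading point above, here is a short C sketch of a dense-layer matrix-vector product parallelized with OpenMP. This is a generic hand-written example under assumed names, not the code that MATLAB Coder or GPU Coder actually emits.

```c
#include <stddef.h>

/* Dense layer y = W * x, with W stored row-major (rows x cols).
 * The outer loop is split across threads via OpenMP; the inner dot
 * product is a contiguous loop that compilers can auto-vectorize
 * into SIMD instructions. Build with: cc -O3 -fopenmp */
void dense_forward(const float *W, const float *x, float *y,
                   size_t rows, size_t cols)
{
    #pragma omp parallel for
    for (long r = 0; r < (long)rows; r++) {
        float acc = 0.0f;
        const float *row = W + (size_t)r * cols;
        for (size_t c = 0; c < cols; c++) {
            acc += row[c] * x[c];  /* contiguous access: SIMD-friendly */
        }
        y[r] = acc;
    }
}
```

On a typical embedded multicore target, the thread-level split and SIMD inner loop combine multiplicatively, which is why the two techniques are usually applied together.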
About the Presenter
Christoph Stockhammer holds an M.Sc. in Mathematics from the Technical University of Munich, with an emphasis on optimization. He joined MathWorks in 2012 as a Technical Support Engineer and moved to Application Engineering in 2017. His focus areas include mathematics and data analytics, machine learning, deep learning, and the integration of MATLAB software components into other programming languages and environments.