Modeling and Simulation of Autonomous Mobile Robot Algorithms
Overview
Engineers who work on autonomous mobile robots (AMRs) often run into challenges with:
- Designing and evaluating different navigation algorithms in real-world scenarios
- Testing early and often on target hardware to estimate unmodelled characteristics.
In this talk, we will address these and other common challenges by showing how MATLAB and Simulink can help to implement an integrated AMR design workflow.
Highlights
In this webinar, we will cover:
- Generating occupancy maps and simulation environments from images
- Leveraging out-of-the-box tools to rapidly prototype navigation algorithms
- Integrating external simulators through a ROS interface
About the Presenters
Ronal George is an application engineer for robotics and autonomous systems at MathWorks. Prior to joining MathWorks in April 2019, Ronal worked as an inside sales engineer at SPX Transformer Solutions and as an electrical design engineer at WindLabs. Ronal has a master’s degree in electrical engineering from North Carolina State University. As a part of his master’s, Ronal worked with the Advanced Diagnosis, Automation and Control (ADAC) Laboratory to develop planning and localization algorithms for multiagent systems.
Julia Antoniou is an application engineer for the aerospace and defense industry at MathWorks. She specializes in modeling and simulation of physical systems, with a focus on robotic and autonomous systems. Prior to joining MathWorks in 2017, Julia worked at companies such as iRobot and Johnson & Johnson in their mechanical engineering, systems engineering, and manufacturing engineering departments. Julia holds B.S. and M.S. degrees in mechanical engineering from Northeastern University.
YJ Lim is a Senior Technical Product Manager for robotics and autonomous systems at MathWorks. YJ has over 20 years of experience in robotics and autonomous systems, including prior experience at Vecna Robotics, Hstar Technologies, SimQuest, Energid Technologies, and GM Korea. Lim received his Ph.D. in mechanical engineering from Rensselaer Polytechnic Institute (RPI) and his master's degree from KAIST in South Korea.
Recorded: 26 May 2022
Hello, everyone. Welcome to this webinar. My name is YJ Lim, Technical Robotics Product Manager at MathWorks. Today, I will be presenting with my two colleagues, Ronal George and Julia Antoniou. We will be talking about how to model and simulate autonomous mobile robots.
Let me start by sharing a user story. We now live with COVID, and the pandemic has given new urgency to the development of robots to fight COVID-19. Weston Robot in Singapore designed such robots: a disinfection robot and a temperature-monitoring robot.
Dr. Yanliang, a managing director at Weston Robot, told us they used the Model-Based Design approach along with several tools, such as Stateflow, Robotics System Toolbox, and the Coder products.
They were able to build a robot to fight COVID-19 in 10 days, which was great to hear. Designing with MATLAB and Simulink allowed them to reduce time to market. The topic we'd like to discuss today is quite relevant to the Model-Based Design approach, where virtual models and simulation are at the center of the development process.
Here is what we want to discuss. I will start with some challenges of designing autonomous systems, including market trends, the complexity of autonomous systems, and the core components of autonomous system design. Then Ronal and Julia will discuss the details of how to model and simulate autonomous mobile robots.
They will show you why multi-fidelity simulation matters when designing mobile robots. Beyond virtual models, they will also discuss deploying autonomous mobile robot algorithms to the hardware for testing.
Let me begin with a few observations about the importance of integrated simulation frameworks. There are a number of megatrends influencing the direction of system development: electrification, connectivity, and autonomous systems, where a lot of attention is currently on autonomous vehicles.
But attention is now expanding to automation in factories, warehouses, and other systems. And there is artificial intelligence, which is a game changer for many industries, changing what systems can do and offering new capabilities that were not possible before.
No matter what technology you are working on, there is one trend that is universal: growing system complexity. Systems are exponentially more complex than they were just a decade ago. The autonomous car is probably the most complex system people have created, and software is at the crux of it. Jaguar Land Rover estimates that one billion lines of code will be needed for their autonomous vehicles.
Looking at mobile robots, which are today's topic: an Automated Guided Vehicle, or AGV, follows marked lines, wires, or QR codes on the floor. These robots move along a predefined path at a controlled speed.
An Autonomous Mobile Robot, or AMR, uses onboard sensors to move material autonomously without any need for physical guides or other markers on the floor. The AMR was developed earlier than the AGV, in 1951.
A few years later, the first AGV was developed: a tow truck that followed a wire on the floor. Starting with Kiva Systems and Seegrid in the early 2000s, many AGVs and AMRs have been developed for logistics in warehouses, hospitals, and many other industrial applications.
So these mobile robots, basically AMRs, are becoming more and more complex systems to develop, and the old design methods no longer work. The chart shown here compares when design errors are introduced and when those errors are detected in the traditional development process.
Errors are introduced at various stages of development. Studies show that most defects are introduced in the initial design. Unfortunately, errors are not detected until later in the process, when testing is done.
Some errors still remain in the software, and they may only be detected in the field. The cost to fix an error late in the process is high. So we should try to find errors during design, as close as possible to where they were introduced.
According to an embedded systems market study, on average only 6% of embedded system development time is spent on simulation. This is a lost opportunity and a cause of system failures.
With increased simulation, we can start finding errors at an early stage. As a design and development technique, simulation is intrinsic to the development process.
Autonomous systems continue to be developed and deployed at an accelerated pace. Here is a smart, autonomous package delivery concept. A manufacturer ships its products by self-driving truck to a smart warehouse facility, where mobile robots move and arrange them on the appropriate shelves.
When you order a product, an unmanned aerial vehicle, or drone, performs the last-mile delivery. The market is growing rapidly in this direction, and soon we will see smart warehouses extending into fully autonomous delivery chains.
So the core element of autonomous system design is that the robot needs to perceive the environment, keep track of moving objects, and then plan a course of movement for itself.
This perception, planning, and control workflow is critical for a wide range of autonomous systems, including autonomous mobile robots. Each of these areas has its own technology topics and challenges.
Perception relies on sensor data to acquire self and global awareness. Planning provides a safe, collision-free path toward the desired destination in the presence of obstacles. Finally, the control system gets the robot's actuators to execute a specific set of motions to closely follow the path to the goal in a realistic setting. MATLAB and Simulink offer various tools to make this easier.
When I'm talking about advanced robots, like autonomous mobile robots, there are several other related topics involved in the workflow. How do I model robotic platforms? How do I connect and deploy to the robot? How do I ensure that I meet the system requirements and customer needs?
In this webinar, in order to model and simulate AMRs, we will mainly focus on virtual simulation models and autonomous algorithm development.
Simulation is the proper tool for analyzing and optimizing your design, especially when your platform or system is complex. Simulation is needed at all stages of the development process, and across the entire product lifecycle, at different fidelities.
But there are pain points in doing that. Engineers may need to rebuild the simulation model and test scenario using different tools. So it would be great to integrate system modeling and simulation in one environment for all stages of the development process.
Looking at a complete workflow, a simulation solution needs to provide comprehensive tools for autonomous algorithms. To be part of continuous integration, it should work with multi-fidelity plant models and different levels of scenario simulation environments within an integrated platform. Providing an interface to the real hardware is key as well for validating your autonomous algorithms.
Of course, integration with various third-party simulators will also help users who are already using those simulators. OK, now I will turn it over to Ronal and Julia for the details of how to perform multi-fidelity simulation for autonomous mobile robots.
Thanks, YJ. And as YJ said, we'll be walking through the next portion of today's agenda. My name is Ronal George. I'm an Application Engineer who focuses on robotics and autonomous systems. I got my master's in electrical engineering from North Carolina State University. With me, I have Julia.
Hi, everyone. My name is Julia Antoniou. I'm also an application engineer at MathWorks. I've been here about five years, and I focus on robotics and also physical modeling. And my background is in mechanical engineering.
So the autonomous systems development workflow that YJ just described. How would we go about putting that into practice? That's what we want to show you guys today. And I always like to, wherever we can, use specific examples. So today, this is our specific example that we're going to show you. This platform, it's called the RACECAR/J. It's about the size you can see here, right?
It's got a few different sensors, like LiDAR, IMU, camera. It actually has a Jetson on it that's running all the algorithms and doing most of the work here to drive it.
It's a really fun platform.
Yeah. And our goal with this platform, in an environment that's easy for us to access, our office floor, is to be able to navigate from Ronal's office, where we are right now, all the way over to my office, and do that completely autonomously. So no given inputs. Just telling the robot: you're starting here, we want you to go there, and have it go there on its own.
So Ronal, I know you've worked with this hardware a lot. It looks pretty prototype-y, right? And it's fine. What's stopping me from-- maybe why should I not just start writing ROS nodes, deploying stuff on it, and testing it right away? Are there steps I should take first before I go and do that?
Now, you can absolutely start directly with the hardware if you'd like to. But I would recommend that you start with simulations. It's going to be a lot easier to debug your hardware if you start with simulations. And that's kind of what I want to talk about today.
I want to walk you through three different fidelities of simulations. You can see the three different simulations there. The top left is a keyboard simulation. It's really easy to get set up and get started with. The bottom left is actually an external simulator. That's Gazebo. That might take a little more work to get set up. And then finally, moving directly to hardware.
Now, you might ask, why wait this long to get to hardware? Hardware can be a little tricky to work with. If you think about it, you have to have your batteries charged. You have to make sure you have network coverage wherever you want to drive so you can stay connected.
Doing repeatable tests is also a lot harder. First you have to go grab the hardware, bring it back, and try to put it in the same place you started from. So it's a lot harder to jump directly onto hardware. If you've already fine-tuned your algorithms in simulation, it's a lot easier to debug why the hardware is not responding the way you expect it to.
So let's start simple. Let's see what we can do in MATLAB. Here, I actually want to introduce robotScenario. It's a cuboid simulation environment. You can see there are two robots there with paths that go directly through the center. They have LiDAR sensors on them, so as they start moving, they detect their obstacles and navigate around them.
This scenario is built with basic cuboid blocks. You can use cylinders. You can use polygons. You can also import actual SDFs, if you have them. If you have an STL, we can also get the vertices and faces and import that as a mesh too. So it's really easy to recreate scenarios. As I mentioned, there are LiDAR, IMU, and GPS sensor models, so a lot of sensors you can use to start building out your initial development.
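For reference, here is a minimal sketch of what assembling a scenario like this can look like with Robotics System Toolbox. The sizes, poses, update rate, and stop time below are placeholders for illustration, not the values used in the demo.

```matlab
% Minimal robotScenario sketch -- all dimensions and poses are placeholders.
scenario = robotScenario(UpdateRate=10, StopTime=10);

% Ground plane and a simple box obstacle built from basic primitives
addMesh(scenario, "Plane", Size=[20 20], Color=[0.8 0.8 0.8]);
addMesh(scenario, "Box", Size=[1 1 1], Position=[3 2 0.5], Color=[1 0.5 0.25]);

% A robot platform with a lidar point-cloud sensor attached
platform = robotPlatform("amr", scenario, InitialBasePosition=[0 0 0]);
lidar = robotSensor("lidar", platform, robotLidarPointCloudGenerator);

% Step the scenario and read simulated sensor data
setup(scenario);
while advance(scenario)
    updateSensors(scenario);
    [~, ~, ptCloud] = read(lidar);   % simulated point cloud at this time step
    show3D(scenario);
    drawnow limitrate
end
```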
That's kind of what I did here. So you can see that I found a map, a floor plan of our office. I converted that to an STL file, and I imported it. I also found an SDF of the RACECAR platform, so I could also go ahead and import that.
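Incidentally, a floor-plan image like that can also be converted directly into an occupancy map, which is the "occupancy maps from images" item in the highlights. Below is a rough sketch of that conversion; the file name, threshold, resolution, and inflation radius are assumptions, not the demo's values.

```matlab
% Sketch: build a binary occupancy map from a floor-plan image.
% File name, threshold, resolution, and robot radius are placeholders.
img  = imread("office_floorplan.png");
gray = im2gray(img);                     % collapse to grayscale
occupied = gray < 100;                   % dark pixels (walls) become occupied cells
map = binaryOccupancyMap(occupied, 20);  % 20 cells per meter
inflate(map, 0.2);                       % inflate obstacles by the robot radius (m)
show(map)
```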
Once I had the scenario set up, I was able to simulate the sensors and run some planning algorithms. There's a lot you can do just at this early prototyping stage.
So, talking about algorithms: since I was able to simulate the sensor data, I started by verifying my SLAM algorithm first. Here, you can see that I used that sensor data and started doing mapping and localization.
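Mapping from simulated lidar data is typically done with something like lidarSLAM from Navigation Toolbox. This is only a hedged sketch: the range, resolution, loop-closure settings, and the scans variable are assumptions rather than the settings used in the demo.

```matlab
% Sketch: offline lidar SLAM over simulated scans.
% "scans" is assumed to be a cell array of lidarScan objects.
maxRange = 8;                % meters, placeholder
mapResolution = 20;          % cells per meter, placeholder
slamAlg = lidarSLAM(mapResolution, maxRange);
slamAlg.LoopClosureThreshold = 200;
slamAlg.LoopClosureSearchRadius = 5;

for i = 1:numel(scans)
    addScan(slamAlg, scans{i});          % incrementally register each scan
end

[scansAtPoses, optimizedPoses] = scansAndPoses(slamAlg);
map = buildMap(scansAtPoses, optimizedPoses, mapResolution, maxRange);
show(map)
```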
I also built some fusion filters. I wanted to take both the INS and the camera and build a better pose estimate. And of course, you can test out all your own planning algorithms, or use one of ours. We have a ton of global planners. We also provide a local planner and a controller that handles kinematic constraints.
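As one concrete example of those out-of-the-box planners and controllers, here is a hedged sketch of Hybrid A* planning on an occupancy map followed by pure pursuit path following. The map is assumed to come from a step like the ones above, and the start pose, goal pose, turning radius, and velocities are invented for illustration.

```matlab
% Sketch: kinematically constrained global planning plus path following.
% Start/goal poses, turning radius, and velocities are placeholders.
ss = stateSpaceSE2;
ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];
sv = validatorOccupancyMap(ss);
sv.Map = map;                            % occupancy map from SLAM or the floor plan
sv.ValidationDistance = 0.1;

planner = plannerHybridAStar(sv, MinTurningRadius=0.5);
startPose = [1 1 0];                     % [x y theta], placeholder
goalPose  = [9 6 pi/2];                  % placeholder
refPath = plan(planner, startPose, goalPose);

controller = controllerPurePursuit( ...
    Waypoints=refPath.States(:,1:2), ...
    DesiredLinearVelocity=0.4, ...
    MaxAngularVelocity=1.5, ...
    LookaheadDistance=0.5);
[v, w] = controller(startPose);          % velocity commands for the current pose
```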
So you mentioned kinematic constraints there. The models that we've been working with so far here, they're all kinematic models, right? Or have we done dynamic modeling somewhere?
We have not. These are still kinematic, and I know you bring up dynamics because you want to talk about Simscape.
Yes, of course. I want to talk about Simscape. As a mechanical engineer, this is a tool that just makes a lot of sense to me. You can see it up on the screen here. Instead of having to sit down, derive equations, and figure out how to implement them in Simulink, we can bring in components, connect them, and build a schematic. And it's easy to communicate with other engineers around that, and see what's going on in your system.
So the example we have up here. You can see on the left, we've got our schematic. It's got a motor, gearbox. We've brought in a CAD part to represent the wheel. And we're even modeling the contact between the wheel and the floor, and you can see we've got a model that's similar to our race car on the right over here.
And that was just a simple example. But we can really get quite complex with these tools. So here's an example that we put out recently of a Mars Rover. So it's navigating this uneven terrain. We're modeling all the contact between the wheels and the terrain. It's even doing a maneuver to try and pick up a sample here and put it back onto the Rover.
And this integrates really nicely with some of the things we've been talking about. Like you would need to plan paths for this, and figure out what your trajectory and motion profiles are going to look like. And then you can dynamically simulate how it performs.
Perfect.
So we've covered some MATLAB based simulation options. Everything we've covered so far, everything was kind of within the MATLAB ecosystem. But I know there's some other tools out there for dynamics that you might want to use, depending on what your goals are, like Gazebo, right?
Right. You mentioned Gazebo. I actually use Gazebo here specifically because this is a ROS-enabled robot, which means I have ROS nodes for the drivers and the controllers. And beyond just testing my algorithms, I want to make sure that when I work with those ROS nodes, everything works the same way.
So let me take a little time to talk about Gazebo, and then I want to cover how we actually go through ROS. Gazebo has a large set of asset libraries, right? It provides a photorealistic scene and has multiple physics engines you can choose from. From MATLAB, you can connect to Gazebo either through ROS, or through an actual plugin we provide.
The benefit of using the plugin is that we get a time-synchronized simulation. So here, you can see I was able to pause the simulation, and the time steps in Gazebo and Simulink are the same. I'm controlling the robot using ROS, but I'm also using the Gazebo plugin, which I think is really powerful, because I get the ease of the publisher and subscriber interfaces, but I also get a deterministic, time-synchronized simulation, which is repeatable.
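On the MATLAB side, the plugin connection is opened with the Gazebo co-simulation functions; the time-synchronized stepping shown here is handled by the Gazebo pacer support in Simulink. A minimal, hedged sketch follows, where the IP address, port, and setup details are placeholders and assume the MATLAB Gazebo plugin is already running in the Gazebo world.

```matlab
% Sketch: connect MATLAB to a Gazebo instance running the co-simulation plugin.
% Host IP and port are placeholders.
gzinit("192.168.56.101", 14581);   % machine running Gazebo with the MATLAB plugin
modelList = gzmodel("list")        % list the models loaded in the Gazebo world
```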
So let me take a second to talk about how MATLAB and Simulink can help you when you're using ROS. There are three workflows that you see here. The first is data analysis and playback. If you have ROS bags with recorded data, we can bring those in, play them back, and use that data to build out your algorithms.
But of course, you can also do desktop prototyping. This is where you're connected live to an existing ROS network. It can be a virtual network, or it can be a network on actual hardware. And then finally, of course, you can do deployment, which I'll talk about a little later. This is where we generate C++ ROS nodes and actually build and deploy them on your platform.
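For the first two workflows, a rough sketch of what that can look like from MATLAB is below; the bag file name, topic names, and ROS master address are all placeholders.

```matlab
% Sketch: ROS bag playback and live desktop prototyping (ROS Toolbox).
% File name, topic names, and master address are placeholders.

% 1) Playback: read recorded lidar scans from a rosbag
bag = rosbag("office_run.bag");
scanSel  = select(bag, "Topic", "/scan");
scanMsgs = readMessages(scanSel, "DataFormat", "struct");

% 2) Desktop prototyping: connect live to an existing ROS network
rosinit("192.168.1.20");                                   % ROS master on the robot or VM
scanSub = rossubscriber("/scan", "sensor_msgs/LaserScan", "DataFormat", "struct");
scan = receive(scanSub, 10);                               % wait up to 10 s for a message
rosPlot(scan);                                             % quick look at the scan
rosshutdown
```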
Right, so let me get into the Gazebo simulation, and how I built that. Here, I was able to take that same SDF that I used in our cuboid scenario and launch it here. There are some ROS nodes that came with this robot that I was able to run. And then you can see, I'm able to visualize my camera sensor and my LiDAR sensor, and verify that the data I get from the ROS topics is accurate. I was also able to verify, this being a dynamic simulator, that my controller can actually navigate from one end to the other.
Nice. So the visuals you've got on the screen right now, especially like the camera sensor and the LiDAR sensor, it's making me think of other types of photorealistic simulators, like Unity and Unreal. Could we do something similar to what we're doing here, but with those simulators connecting with MATLAB and Simulink?
Yeah, so with Unreal, we have a really nice tie-in where we can connect with Unreal. We can launch an existing Unreal scene. We can deploy a platform. We can get sensor data like camera and LiDAR from Unreal.
Similarly with Unity, we don't have a direct tie-in. But if you have ROS publishers and subscribers set up in your Unity environment, then we can directly connect to those, and use them to control the objects in your Unity environment.
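A hedged sketch of that Unity pattern is below, assuming the Unity scene runs a ROS connector that listens on a standard /cmd_vel topic; the master address and velocity values are placeholders.

```matlab
% Sketch: drive an object in a Unity scene through its ROS interface.
% Assumes Unity subscribes to /cmd_vel; the master IP and values are placeholders.
rosinit("192.168.1.30");
cmdPub = rospublisher("/cmd_vel", "geometry_msgs/Twist", "DataFormat", "struct");
cmd = rosmessage(cmdPub);
cmd.Linear.X  = 0.5;      % forward velocity, m/s
cmd.Angular.Z = 0.2;      % yaw rate, rad/s
send(cmdPub, cmd);
rosshutdown
```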
Got it. So we've done lots of simulations so far, right? We looked at MATLAB-based options, whether it was robotScenario or integrating with dynamic models in Simscape. We've talked about moving on to tools like Gazebo, and even Unreal and Unity, and how we can connect those with the work we've been doing in MATLAB and Simulink. So are we finally ready to move to the hardware and start running this thing around? Or are there still steps you think we should be taking first?
I think you're close enough to go directly to hardware. But I do want to recommend one more step before you actually do. Everything we've done so far, we've done by running it as a simulation.
What I want to point out is that when we go to hardware, we're actually going to run ROS nodes, so C++ code. Maybe we should confirm that our code does the right thing before we go and generate code and test it out directly on the robot.
Sounds like a good idea.
Right. So that's the actual process I want to talk about here. We talked about simulation. This is where you can see that I have Simulink on one side, Gazebo on the other. We're using ROS topics, ROS publishers and subscribers to actually communicate and move this vehicle.
And this is simple. It's in simulation, so you get to control a lot of parameters. The final piece we'll talk about is deployment. But before we go to deployment, I want to talk about external mode.
Now, this is an intermediate step in the process, where Simulink is not going to run just as Simulink. We're going to generate code from Simulink, move it over to your platform, and actually execute that code on your platform. The benefit here is we keep a connection open with Simulink, so we can still debug, and we can still view the outputs.
Anything that's a tunable parameter in Simulink can now be tuned while the generated code is running. So you get to verify that even as code you get similar performance, or maybe tune it a little, just to confirm that your code does what you expect it to do.
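External mode runs can also be driven programmatically. The sketch below is only an illustration, with a made-up model name and block path, and it assumes the model's hardware settings are already configured for the ROS target; the exact sequence can vary by target and release.

```matlab
% Sketch: drive an external-mode (Monitor & Tune) run from the command line.
% "amr_nav_controller" and the gain block path are placeholders.
mdl = "amr_nav_controller";
set_param(mdl, "SimulationMode", "external");
set_param(mdl, "SimulationCommand", "start");    % build, deploy, connect, and run

% Tunable parameters can be changed while the generated code is running:
set_param(mdl + "/Linear Velocity Gain", "Gain", "0.4");   % illustrative block

set_param(mdl, "SimulationCommand", "stop");
```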
And then finally, once you've tuned the parameters, you can go ahead and generate that code. You can see I do a build and load. This is going to generate the code and move it directly onto my platform. And once it's done, I can go and actually run that ROS node and confirm it works the way it's expected to.
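Once the node is built and loaded onto the robot, it can also be started and stopped from MATLAB through a rosdevice connection; the address, credentials, ROS folder, and node name below are placeholders.

```matlab
% Sketch: manage a deployed ROS node on the robot from MATLAB (ROS Toolbox).
% IP address, credentials, ROS folder, and node name are placeholders.
device = rosdevice("192.168.1.10", "robot-user", "password");
device.ROSFolder = "/opt/ros/noetic";            % ROS install location on the target

runNode(device, "amr_nav_controller");           % start the generated node
isNodeRunning(device, "amr_nav_controller")      % check its status
stopNode(device, "amr_nav_controller");          % stop it when done
```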
So it seems like the theme throughout this whole session really is just continuing to build our confidence level in the hardware. Maybe when we started, when I first suggested, let's just try it on the hardware, it was like a 50-50 shot on whether something would actually work, right?
And debugging where the issue was would have been difficult. So as we've added different types of simulation, starting with the desktop, starting with simple methods, we've built that confidence level up to maybe 60 or 70%.
Maybe by the time we're at like the Gazebo level, it's like 80. And now we're getting even higher by being able to go through this process here, and really confirm that the code is working. It's in Gazebo in this case. But then we can move that over to hardware, and we're really confident it's going to be like that last 5% of things that we wouldn't be able to find unless we ran it on hardware.
Right, absolutely. So that's exactly what I'm going to do next. I'm going to keep running in external mode, so I'm generating code, but now I'm going to move it onto my hardware. So I'm going to make sure that the hardware executes the way I expect it to.
Here, you can see that I'm running my robot in the environment in external mode, and making sure that the localization does what it's expected to do. And then, finally, I can go ahead and actually run the entire scenario on my hardware. So here, you can see that I'm going to go ahead and launch my Simulink application.
And you can see that the robot starts. I'm able to visualize the data in Simulink. It's still in external mode. You do notice something here: my localization, because I was going through this narrow corner, got a little lost, and I can see exactly what is happening. But then once the robot turned the corner, it was able to re-localize and get its estimates correct.
So there are a lot of things we can do. This is still in external mode, right? And once we've confirmed that it works as it should, we can go ahead and actually deploy this as readable C++ code.
So speaking, I guess, of completely deploying, you did just say readable, and I want to ask about that. Because I talk to a lot of engineers who assume that when we generate code, it's just going to be this black box.
They can't really interface with it. They can't look at it. It's difficult to read and work with. Can you talk a little bit more about what that looks like, especially for ROS node generation?
Yeah. So MATLAB and Simulink support code generation for a lot of their functions. I mean, especially in our robotics toolchain, maybe 90-95% of everything can generate code.
Things you'd expect, right? Like maybe not visuals, but algorithms.
Other than visualization functions, you should be able to generate code. And this code is just C++ code. It's totally readable. It has comments in it. You can actually go through the code and make sure it's what you expect it to be. So it's very easy to look through your code and debug it, even when you're reading the generated C++ directly.
I also want to take a second to talk about actually creating custom apps. So here you can see, I've built this app. And why I did it was so that-- I've worked on this hardware the entire time, but I want other people to test it, and they might not necessarily know how to launch things, how to start the robot.
So what I did here is build an app in which they can either test the planning, or the actual execution of the path. So here, you can see they can load a map. They can select a start and goal position. They can select a specific planner. They can set some of the parameters of the planner and make sure that the plan that's generated is acceptable.
Similarly, going to the second portion, which is the execution portion, they can go ahead and actually run these plans on the robot. And you'll see that here. You can either do it in Gazebo, or you can do it on the actual hardware. You can choose the existing map, or you can choose an environment that's unknown with unplanned obstacles.
Nice. So an app like this, you could, say, pass it off to me, and I could just start running the robot. There's no more sitting in front of my Linux terminal like in the past, trying to remember which ROS network I should be launching, or even finding some of these parameters to tune. I think that'd be pretty difficult. So having something like this, I can easily make changes and visualize what's happening.
Right. So then you get to test out a ton of different scenarios, things that I might not have been able to test out. You can see, if there are obstacles in the way that were not actually planned for, how the robot will react. So you get to test out all those edge cases that I wasn't able to. And you don't really have to know much about the robot.
Nice. So we've talked about a lot so far, a lot of different phases, a lot of different steps, right? Let's summarize this a little bit and talk about what we've learned today.
Perfect.
So I think the biggest takeaway for me here is all those different phases of simulation, and how each one provides us something new and important on the way to being efficient with our hardware. We learned things that we might not have known previously, and it just really helps us build that confidence that both our algorithms are going to work, and that we made good choices in our hardware, our sensors, things like that.
And another big takeaway for me is how much MATLAB and Simulink and our different tools that we have within those tools can support all of this work. So being able to build some of those scenarios easily is really nice. Having the ability to hook into dynamic models, having built-in ways to hook up to third-party simulators that I might want to use.
And then also code generation, right? Being able to really easily take what I've made, click-- I saw that button in the video there where it was just like, deploy code. And that just generates it and puts it on the hardware. I think that's pretty invaluable.
I agree.
Or valuable. That's what I meant to say. All right. So let's turn this back over to YJ now. He's going to help walk us through some industry examples, talk about how this is being used today, and some resources on how we can get started.
Yeah.
Great. Thank you, Ronal and Julia. I'm back to conclude this webinar with some industrial application references and resources. Here are a couple of industrial application references.
Engineers at Clearpath Robotics use MATLAB to speed up the development of autonomous navigation algorithms for their OTTO line of material handling robots. The team found that performing this task with the Robot Operating System, or ROS, alone, or by writing programs in C++ or Python, was slow and inefficient.
Using MATLAB, they can complement ROS capabilities to analyze and visualize data and to prototype algorithms. Musashi Seimitsu Industry Corporation is another use case. They developed an industrial autonomous carrier robot in six months.
They adopted Model-Based Design, and the team used Simulink to model the complex vehicle dynamics and used simulation to optimize the motion controller parameters.
So we hope you found something similar to what you are working on. The good news is that we have a lot of technical content to help you get started, whatever in particular you are looking for.
I encourage you to visit our web pages for the related toolboxes, which I have shown here. We have white papers, getting started guides, and short videos to help you ramp up on the topics we discussed in this webinar.
In addition, we have a large number of examples published on our website to help accelerate your development effort. I would ask you to explore these resources and see if they can help with your application.
We will be happy to support your specific use cases as well. So feel free to reach out to us with any questions. Thank you for your attention.