RL-Based AMR Controller
Version 1.0.3 (254 KB) by 정호 윤
We implemented an AMR (autonomous mobile robot) controller with reinforcement learning in MATLAB/Simulink.
After unzipping the archive, open AIV_robot.slx and run the rlEnv.m script to train the AMR in conjunction with the Driving Scenario Designer app.
Use the R2024b pre-release so that no ultrasonic-sensor errors occur; in R2024a the ultrasonic sensor is unusable due to a measurement error.
Before training, update the path to the scenario.mat file in the Environment/Simulation/Scenario Reader block.
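The path update can also be scripted. A minimal sketch, assuming the block lives at the path described above and that the Scenario Reader mask parameter is named `ScenarioFileName` (verify both against your copy of the model):

```matlab
% Point the Scenario Reader block at the local copy of scenario.mat.
% Block path and parameter name are assumptions; adjust to your model.
mdl = 'AIV_robot';
open_system(mdl);
readerBlk = [mdl '/Environment/Simulation/Scenario Reader'];
set_param(readerBlk, 'ScenarioFileName', fullfile(pwd, 'scenario.mat'));
```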
To check the driving performance of the trained agent, visualize it with the Bird's-Eye Scope in the Simulink model.
If you do not want to train, simply load the pretrained agent and run the model in Simulink; the simulation completes in approximately 18 seconds.
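Running the pretrained agent can look like the sketch below. The MAT-file name and the variable name `agent` are hypothetical; use whatever agent file ships in the archive:

```matlab
% Load the pretrained agent and simulate without training.
% 'trainedAgent.mat' and the variable name 'agent' are assumptions;
% substitute the agent file provided in the archive.
load('trainedAgent.mat', 'agent');
mdl = 'AIV_robot';
open_system(mdl);
simOut = sim(mdl);   % run the model with the loaded agent
```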
The reward function can be freely modified, and the isdone (episode-termination) condition can be adjusted to make the model more dynamic.
Training takes 20 to 30 hours per 1,000 episodes, so it may be more efficient to fine-tune an existing model rather than train from scratch.
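Fine-tuning an existing agent with Reinforcement Learning Toolbox might be sketched as follows. The agent block path, the `obsInfo`/`actInfo` specs (defined in rlEnv.m), the MAT-file name, and the stop value are all assumptions, not the author's exact setup:

```matlab
% Resume training from a previously trained agent instead of
% starting from scratch. File names, block path, and specs are
% assumptions; obsInfo/actInfo come from rlEnv.m in the archive.
load('trainedAgent.mat', 'agent');            % previously trained agent
env = rlSimulinkEnv('AIV_robot', ...
    'AIV_robot/RL Agent', obsInfo, actInfo);  % hypothetical block path
opts = rlTrainingOptions( ...
    'MaxEpisodes', 1000, ...
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 500);                % tune to your reward scale
trainStats = train(agent, env, opts);         % resumes from current weights
```

Because `train` updates the agent's existing weights, far fewer episodes are typically needed than the 20 to 30 hour from-scratch run quoted above.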
Cite As
정호 윤 (2026). RL-Based AMR Controller (https://fr.mathworks.com/matlabcentral/fileexchange/170616-rl-based-amr-controller), MATLAB Central File Exchange. Retrieved .
MATLAB Release Compatibility
Created with R2024b
Compatible with R2024b
Platform Compatibility
Windows, macOS, Linux
| Version | Published | Release Notes |
|---|---|---|
| 1.0.3 | | update Description |
| 1.0.2 | | update image |
| 1.0.1 | | Add description |
| 1.0.0 | | |
