Speed up DDPG agent using GPU

(Note: maybe I wouldn't be able to speed up the training much in any case, because the visualization (online mapping) is itself computationally expensive.)
Hello, Matlab community!
I do research in mobile robotics, working on a mapping problem.
I am using a deep reinforcement learning method, the DDPG agent.
So far, I think my code (DDPG agent & custom environment) works okay.
I want to speed it up because training takes a long time: around 3 days to run 600 episodes.
Since then (about a month ago), I have been searching for a solution.
I was following this tutorial: https://www.mathworks.com/help/reinforcement-learning/ug/train-agents-using-parallel-computing-and-gpu.html
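As far as I understand the tutorial, the parallel-training part boils down to something like this (a sketch with illustrative values, not my exact settings; it requires the Parallel Computing Toolbox):

```matlab
% Sketch: distribute episode simulation across parallel workers,
% as in the linked tutorial. Values here are illustrative.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',600, ...
    'UseParallel',true);
% Off-policy agents such as DDPG are typically trained asynchronously.
trainOpts.ParallelizationOptions.Mode = "async";
```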
At first, I slightly changed the code, editing the training options and the actor/critic representation options:
opt = rlRepresentationOptions('UseDevice',"gpu");
I figured out very quickly that I can't use a local GPU because I don't have one, so I deleted that line.
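In hindsight, I suppose the option could have been guarded instead of deleted, so the same script runs on machines with and without a GPU (a sketch; `gpuDeviceCount` requires the Parallel Computing Toolbox):

```matlab
% Sketch: select the device at run time depending on GPU availability.
if gpuDeviceCount > 0
    opt = rlRepresentationOptions('UseDevice',"gpu");
else
    opt = rlRepresentationOptions('UseDevice',"cpu");
end
```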
Then I tried to run the MATLAB container (MATLAB BYOL) on Azure and Amazon Web Services. It was easy to create a virtual machine in both places, but they ran on CPU only. I requested quota increases to get GPU instances, and the support services approved my requests, but I still couldn't use them. Honestly, cloud services are a different world for me; it would take additional time to understand what is there.
I switched to real hardware. I found a Jetson AGX Xavier and began to play with it: installed an OS and ROS.
I followed the tutorial by Jon Zeosky and Sebastian Castro: https://www.youtube.com/watch?v=0FPPBGAKw8k&t=415s
I created the static library (.a file), but I didn't understand what they did in the catkin workspace.
I don't know what to do next. :(
Please give me advice or a tutorial on how I can speed up DDPG training.
I wrote the story above not for added drama; I would like to hear where my mistake is and what can really work for me.
I attached a video of my program. You can see the result of one episode (120 steps = iterations).
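One more thing I am wondering about, given the visualization cost mentioned above: would disabling the Episode Manager plot and command-line output during training help? Something like this (a sketch, not what I currently run):

```matlab
% Sketch: turn off the Episode Manager plot and verbose output,
% in case the online visualization itself is part of the slowdown.
trainOpts = rlTrainingOptions('Plots',"none",'Verbose',false);
```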