
Can the agent learn a policy through the external action port of the RL Agent block so that it mimics the output of a reference signal?

I created a DDPG agent that I want to learn from the output of an existing controller before training it further later. I feed the reference signal in through the external action port and set "use external action" to 1 during training; while training, the agent's output matches the reference signal. But after training, when I set "use external action" to 0 for verification, the agent's output is not the same as the reference signal, and the difference is fairly large. Does the external action port support this idea? What should I do to make it work?
The figure below shows that, with the external action set to 0, the trained agent's output (red curve) does not match the reference signal (green curve).
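For reference, a typical setup for this kind of imitation-style pretraining looks roughly like the sketch below. The model name, block path, signal dimensions, and training options are all illustrative assumptions, not taken from the question; adapt them to the actual model.

```matlab
% Sketch of a DDPG training setup where the Simulink model feeds the
% existing controller's output into the RL Agent block's "external
% action" port with "use external action" held at 1, so the agent
% learns from the controller's actions.
mdl = "myModel";                     % hypothetical Simulink model name
blk = mdl + "/RL Agent";             % hypothetical path to the RL Agent block

obsInfo = rlNumericSpec([1 1]);      % one observed measurement u (assumed scalar)
actInfo = rlNumericSpec([1 1]);      % one control action (assumed scalar)

env   = rlSimulinkEnv(mdl, blk, obsInfo, actInfo);
agent = rlDDPGAgent(obsInfo, actInfo);   % default actor/critic networks

% Illustrative training options; tune episode count and stopping
% criteria for the actual problem.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes = 500, ...
    StopTrainingCriteria = "AverageReward", ...
    StopTrainingValue = -0.1);

trainStats = train(agent, env, trainOpts);
```

After this phase, switching "use external action" back to 0 lets the actor run on its own; whether it tracks the reference then depends on how far the imitation training converged.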

Answers (1)

Emmanouil Tzorakoleftherakis
It seems the agent started learning how to imitate the existing controller but needs more time. What does the Episode Manager look like? What is your reward signal?
  2 comments
凡
26 Feb 2024
So the idea is feasible, but it may just need more training episodes?
凡
26 Feb 2024
This is the Episode Manager. My reward signal is -4*u^2 - du/dt, where u is an observed measurement, and my control goal is to drive u to 0. My project replaces a PID controller with an agent; in the PID loop, u is the input, so I want the agent to mimic the PID output at the start.
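That reward could be computed in MATLAB as a minimal sketch like the one below, assuming du/dt is approximated by a backward finite difference over the sample time Ts (the function name and the discretization are my assumptions, not from the thread):

```matlab
% Reward r = -4*u^2 - du/dt, with du/dt approximated by a backward
% difference over the sample time Ts (illustrative assumption).
function r = rewardSignal(u, uPrev, Ts)
    du = (u - uPrev) / Ts;   % finite-difference estimate of du/dt
    r  = -4*u^2 - du;        % penalize the magnitude of u and its growth
end
```

In a Simulink model this would typically be built from Gain, Product, and Derivative (or discrete-difference) blocks feeding the reward input of the RL Agent block rather than a MATLAB function.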


Products


Version

R2023a

