Invalid Settings: Conversion to string from embedded.fi is not possible, Reinforcement Learning Designer

I use Reinforcement Learning Designer in MATLAB R2021b to train a DDPG agent with the default actor/critic networks (25 neurons in the hidden layers). The environment is generated from a Simulink model and imported into RL Designer.
I have achieved good training results with the RL agent, but when I stop training, or when training finishes after reaching the maximum number of episodes, this error appears:
"Invalid Settings: Conversion to string from embedded.fi is not possible"
and the training results are immediately erased. I cannot find the cause of this error. What is the reason, and how can I fix it? Please help.
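The wording matches MATLAB's generic error when string() is called on an object that defines no string conversion, here a fixed-point embedded.fi value. A minimal sketch of that failure mode (assuming Fixed-Point Designer is installed; whether string() errors on fi may depend on the release):

```matlab
% Sketch: provoke the generic "conversion to string" error with a
% fixed-point value. Requires Fixed-Point Designer for fi().
x = fi(3.14);          % embedded.fi object (signed 16-bit by default)
try
    s = string(x);     % embedded.fi defines no string conversion
catch err
    disp(err.message)  % expected to resemble the error in the question
end
```

If this reproduces the message, the app is likely trying to display a fixed-point signal value as text when the training session ends.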
Code for environment is below.
obsInfo = rlNumericSpec([3 1], ...          % dimension of observation vector
    'LowerLimit', [0 -inf -inf]', ...       % position >= 0, others unbounded
    'UpperLimit', [inf inf inf]');
obsInfo.Name = 'observations';
obsInfo.Description = 'position, error, integral error';
numObservations = obsInfo.Dimension(1);
actInfo = rlNumericSpec([1 1]);
actInfo.Name = 'voltage';
numActions = actInfo.Dimension(1);
% Build the environment interface object.
env = rlSimulinkEnv('MY_MLS2EM_rl6_PID_RL', 'MY_MLS2EM_rl6_PID_RL/DDPG Agent', ...
    obsInfo, actInfo);
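Since the agent runs inside a Simulink model, one plausible source of an embedded.fi value is a fixed-point signal feeding the agent block or a logged signal. A hedged sketch of scanning the model for fixed-point output types (find_system with parameter filtering is standard; the regular expression over OutDataTypeStr is an assumption about how fixed-point type names appear, and not every block exposes that parameter):

```matlab
% Sketch: list blocks in the model whose output data type looks fixed-point.
% 'MY_MLS2EM_rl6_PID_RL' is the model from the question above.
mdl = 'MY_MLS2EM_rl6_PID_RL';
load_system(mdl);
fixptBlocks = find_system(mdl, 'RegExp', 'on', ...
    'OutDataTypeStr', '^fixdt|^sfix|^ufix');
disp(fixptBlocks)   % any hits are candidates to route through a
                    % Data Type Conversion block set to 'double'
```

Casting such signals to double before they reach the DDPG Agent block (or disabling fixed-point logging) is one way to test whether the fi conversion is what trips the app.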

1 comment

Here is the list of errors I receive in the command window.
> In Simulink.Simulation.internal.DesktopSimHelper
In Simulink.Simulation.internal/DesktopSimHelper/sim
In Simulink/SimulationInput/sim
In rl.env.SimulinkEnvWithAgent>localInnerSimFcn (line 588)
In rl.env.SimulinkEnvWithAgent>@(in_)localInnerSimFcn(simData,in_,[]) (line 251)
In MultiSim.internal.runSingleSim
In MultiSim.internal/SimulationRunnerSerial/executeImplSingle
In MultiSim.internal/SimulationRunnerSerial/executeImpl
In Simulink/SimulationManager/executeSims
In Simulink/SimulationManagerEngine/executeSims
In rl.env/SimulinkEnvWithAgent/executeSimsWrapper (line 229)
In rl.env/SimulinkEnvWithAgent/simWrapper (line 252)
In rl.env/SimulinkEnvWithAgent/simWithPolicyImpl (line 411)
In rl.env/AbstractEnv/simWithPolicy (line 83)
In rl.task/SeriesTrainTask/runImpl (line 33)
In rl.task/Task/run (line 21)
In rl.task/TaskSpec/internal_run (line 166)
In rl.task/TaskSpec/runDirect (line 170)
In rl.task/TaskSpec/runScalarTask (line 194)
In rl.task/TaskSpec/run (line 69)
In rl.train/SeriesTrainer/run (line 24)
In rl.train/TrainingManager/train (line 423)
In rl.train/TrainingManager/run (line 223)
In rl.agent.AbstractAgent/train (line 77)
In rl.internal.app.tool/TrainingSession/startTraining (line 89)
In rl.internal.app/ReinforcementLearningApp/openTrainingSession (line 367)
In rl.internal.app.tab/TrainTab/trainCB (line 224)
In rl.internal.app.tab.TrainTab>@(~,~)trainCB(obj) (line 175)
In internal/Callback/execute (line 128)
In matlab.ui.internal.toolstrip.base/Action/PeerEventCallback (line 828)
In matlab.ui.internal.toolstrip.base.ActionInterface>@(event,data)PeerEventCallback(this,event,data) (line 49)
In hgfeval (line 62)
In javaaddlistener>cbBridge (line 52)
In javaaddlistener>@(o,e)cbBridge(o,e,response) (line 47)


Answers (0)
