How do I solve this error?
Apoorv Pandey
on 24 Mar 2023
Commented: Cris LaPierre
on 27 Mar 2023
I am getting this error when I try to train a TD3 RL agent.
Thank you,
Apoorv Pandey
1 comment
Emmanouil Tzorakoleftherakis
on 24 Mar 2023
If you share a reproduction model, it would be easier to debug.
Accepted Answer
Cris LaPierre
on 24 Mar 2023
When defining your rlQValueFunction, include the ObservationInputNames and ActionInputNames name-value pairs.
See this example: https://www.mathworks.com/help/reinforcement-learning/ref/rl.function.rlqvaluefunction.html#mw_da4065e4-5b9a-41c6-b11b-6692d8698a76
% Observation path layers
obsPath = [featureInputLayer( ...
prod(obsInfo.Dimension), ...
Name="netObsInput")
fullyConnectedLayer(16)
reluLayer
fullyConnectedLayer(5,Name="obsout")];
% Action path layers
actPath = [featureInputLayer( ...
prod(actInfo.Dimension), ...
Name="netActInput")
fullyConnectedLayer(16)
reluLayer
fullyConnectedLayer(5,Name="actout")];
%<snip>
critic = rlQValueFunction(net,...
obsInfo,actInfo, ...
ObservationInputNames="netObsInput",...
ActionInputNames="netActInput")
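The %<snip> above elides how the two paths are joined into the network passed to rlQValueFunction. A minimal sketch of that step, assuming the same layer names as the snippet above and following the pattern in the linked example (the common-path layers and sizes here are illustrative, not the asker's actual network):
% Common path: concatenate the two 5-element feature vectors and output a scalar Q-value
commonPath = [concatenationLayer(1,2,Name="concat")
    reluLayer
    fullyConnectedLayer(1)];
% Assemble the critic network from the observation, action, and common paths
lgraph = layerGraph(obsPath);
lgraph = addLayers(lgraph,actPath);
lgraph = addLayers(lgraph,commonPath);
lgraph = connectLayers(lgraph,"obsout","concat/in1");
lgraph = connectLayers(lgraph,"actout","concat/in2");
net = dlnetwork(lgraph);
With the network assembled this way, the rlQValueFunction call below can map "netObsInput" and "netActInput" to the observation and action channels unambiguously.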
2 comments
Cris LaPierre
on 27 Mar 2023
Please share your data and your code. You can attach files using the paperclip icon. If it's easier, save your workspace variables to a MAT-file and attach that.
More Answers (0)