Reinforcement Learning Toolbox - Q table
10 views (in the last 30 days)
Xinpeng Wang
on 10 Jul 2019
Answered: Tuong Nguyen
on 7 Oct 2022
I'm a newbie to RL and the RL Toolbox. I played with a Q-learning agent on a model in Simulink. My question is: after training, how can I access the trained Q table? The qTable used to generate the agent is all zeros. I cannot figure out where the trained Q values and the policy are stored. Thank you!
0 comments
Accepted Answer
Emmanouil Tzorakoleftherakis
on 23 Jul 2019
Hi Xinpeng,
To see the trained table, all you have to do is extract the critic using 'getCritic'. Try:
critic = getCritic(agent);
The variable 'critic' has a field that contains the Q table after training.
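For example, a minimal sketch of pulling the values out (the exact accessor has changed across toolbox releases, so treat the indexing below as an assumption):
critic = getCritic(agent);                % extract the trained critic
params = getLearnableParameters(critic);  % learnable parameters of the critic
Q = params{1}                             % trained Q-table values (assumption: first cell entry)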
0 comments
More Answers (5)
Umut Can Akdag
on 18 May 2020
For those who are still looking for the Q table, I think this is the solution:
critic = getCritic(agent);
qtable = getLearnableParameters(critic);
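A quick sanity check on the extracted table (a hedged sketch; it assumes qtable{1} holds the states-by-actions matrix, which may differ by release):
Q = qtable{1};                   % states-by-actions matrix (assumed layout)
[~, greedyIdx] = max(Q, [], 2);  % index of the greedy action in each state
disp(greedyIdx(1:5))             % inspect the greedy choice for the first few states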
0 comments
RUBEN HERNANDEZ
on 19 Apr 2022
Hi everyone,
I want to simulate a Q-learning agent to control an inverted pendulum in Simulink (with a Q table), just as an illustrative example.
I've picked the rlSimplePendulumModel.slx predefined in MATLAB.
This is my code:
mdl = 'rlSimplePendulumModel';
open_system(mdl)
obsInfo = rlNumericSpec([3 1]); % vector of 3 observations: sin(theta), cos(theta), d(theta)/dt
actInfo = rlFiniteSetSpec([-2 0 2]); % 3 possible values for torque: -2 Nm, 0 Nm and 2 Nm
obsInfo.Name = 'observations';
actInfo.Name = 'torque';
agentBlk = [mdl '/RL Agent'];
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo);
env.ResetFcn = @(in)setVariable(in,'theta0',pi,'Workspace',mdl);
Ts = 0.05; % sample time
Tf = 20; % simulation duration
% Fix the random generator seed for reproducibility
rng(0)
%% To create a Q-learning agent:
%% 1 Create a critic using an rlQValueRepresentation object.
qTable = rlTable(obsInfo, actInfo);
qRepresentation = rlQValueRepresentation(qTable, obsInfo, actInfo);
qRepresentation.Options.LearnRate = 0.99;
%% 2 Specify agent options using an rlQAgentOptions object.
agentOpts = rlQAgentOptions;
agentOpts.DiscountFactor = 0.99;
agentOpts.EpsilonGreedyExploration.Epsilon = 0.9;
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 0.01;
%% 3 Create the agent using an rlQAgent object.
qAgent = rlQAgent(qRepresentation,agentOpts);
%% Training Algorithm
% Specify training options using an rlTrainingOptions object.
trainOpts = rlTrainingOptions;
trainOpts.MaxStepsPerEpisode = ceil(Tf/Ts);
trainOpts.MaxEpisodes = 2000;
trainOpts.StopTrainingCriteria = "AverageReward";
trainOpts.StopTrainingValue = -740;
trainOpts.ScoreAveragingWindowLength = 5;
trainingStats = train(qAgent,env,trainOpts);
And this is the error message:
Error using rlTable/validateInput (line 131)
Input must be a scalar rlFiniteSetSpec.
Error in rlTable (line 51)
validateInput(obj, ObservationInfo)
Error in qlearningpendulum (line 30)
qTable = rlTable(obsInfo, actInfo);
Any suggestions?
0 comments
Tuong Nguyen
on 7 Oct 2022
I think that to use tabular Q-learning, your observations have to be discrete and finite. That means your obsInfo has to be rlFiniteSetSpec(allStates), where "allStates" lists all the possible observations. See https://www.mathworks.com/help/reinforcement-learning/ref/rltable.html for rlTable and https://www.mathworks.com/help/reinforcement-learning/ref/rl.util.rlfinitesetspec.html for rlFiniteSetSpec.
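For example, a minimal sketch of a finite observation spec for the pendulum (the grid sizes below are assumptions, and the Simulink model would also have to be changed so it only ever emits exactly these vectors):
% Enumerate a finite grid of [sin(theta); cos(theta); dtheta] vectors so
% rlTable can create one row per state. Bin counts are assumptions.
thetaGrid  = linspace(-pi, pi, 9);      % 9 angle bins (assumption)
dthetaGrid = linspace(-8, 8, 9);        % 9 angular-rate bins (assumption)
allStates = cell(1, numel(thetaGrid)*numel(dthetaGrid));
k = 1;
for th = thetaGrid
    for dth = dthetaGrid
        allStates{k} = [sin(th); cos(th); dth];
        k = k + 1;
    end
end
obsInfo = rlFiniteSetSpec(allStates);   % discrete, finite observation spec
actInfo = rlFiniteSetSpec([-2 0 2]);    % same three torque values
qTable  = rlTable(obsInfo, actInfo);    % rlTable now accepts the spec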
0 comments