Reinforcement Learning Toolbox - Change Action Space

(I'm using a DQN agent in a custom template environment.)
Is there a way to change the action space from which an action is chosen, based on the current state, during an episode?
For example, say I have an agent that moves around a room by choosing a direction of motion. When it reaches the edge of the room in one direction, I would like it to no longer be able to choose the direction that would lead it out of the room, thus reducing the action space.
Basically, I want to reduce the action space to handle illegal moves.


1 Answer

 Accepted Answer

Hi Federico,
Unfortunately, the action space is fixed once created. To reduce the number of times an action is selected, you could penalize it in the reward signal when certain criteria are met.
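For illustration, here is a minimal sketch of what that penalty could look like in the step method of a custom template environment. The property names (State, RoomMin, RoomMax, Goal) and the reward values are assumptions for this example, not part of the toolbox:

    function [Observation, Reward, IsDone, LoggedSignals] = step(this, Action)
        % position the chosen move would lead to (State, RoomMin, RoomMax, Goal are illustrative)
        candidate = this.State + Action;
        illegal = any(candidate < this.RoomMin) || any(candidate > this.RoomMax);
        if illegal
            % penalize the move that would leave the room and keep the agent in place
            Reward = -10;
        else
            this.State = candidate;
            % small step cost until the goal is reached
            Reward = -0.1;
        end
        Observation = this.State;
        IsDone = isequal(this.State, this.Goal);
        LoggedSignals = [];
    end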
I hope this helps.

  3 Comments

Hi Emmanouil,
thank you for your answer. Unfortunately, every attempt so far to give a negative reward to the action I didn't want the agent to take hasn't worked. Eventually, after some initial time, the agent still chooses to perform that action. I don't really know how to explain this behaviour; I've tried changing the agent options, the training options, the reward function and the neural network architecture, but nothing worked. I suppose I should ask a separate question for that. Anyway, thanks again for the info.
In general, DQN tends to choose optimistically estimated values more frequently due to maximization bias. Some additional things that may help (see the sketch after this list):
1) Make sure you are using double DQN (check the DQN agent options) to reduce overestimation.
2) Play with the exploration settings. After exploration decays considerably, the agent tends to choose whatever is best according to its current value estimates, which may not have converged to the true values. Decreasing the decay rate may help.
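For reference, a minimal sketch of the corresponding agent options (property names from rlDQNAgentOptions; the specific values are illustrative only):

    agentOpts = rlDQNAgentOptions;
    agentOpts.UseDoubleDQN = true;                          % double DQN to reduce overestimation
    agentOpts.EpsilonGreedyExploration.Epsilon = 1;         % start fully exploratory
    agentOpts.EpsilonGreedyExploration.EpsilonMin = 0.01;   % keep a small amount of exploration
    agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-4; % a smaller decay keeps exploring longer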
Once again, thanks for the advice. Changing the decay factor seems to have improved the agent's behaviour, even though it still goes for the wrong action most of the time; for some unknown reason it must think that this is the optimal action. I don't really know which of these changes in settings was the most significant, or whether it is the sum of all of them, but I will write them here so that they may be helpful to someone else in the future:
GradientThreshold from 1 to the default (which is Inf, I believe)
GradientDecayFactor from the default to 0.8
SquaredGradientDecayFactor from the default to 0.99
L2RegularizationFactor from the default to 0.00005
DoubleDQN was already enabled.
This was just a first attempt at changing the RepresentationOptions; they can surely be improved further (a sketch of these settings follows below). I hope that better tuning of these parameters will result in the agent choosing its actions more wisely.
EDIT: I've just realized that you were probably talking about the AgentOptions and the EpsilonGreedyExploration, not the RepresentationOptions. Anyway, I changed that too, lowering the EpsilonDecay and a few other things, and it seems to be improving.
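For completeness, a minimal sketch of the representation options listed above (property names from rlRepresentationOptions; only the values mentioned in this comment are set):

    reprOpts = rlRepresentationOptions;
    reprOpts.GradientThreshold = Inf;                               % back to the default (no gradient clipping)
    reprOpts.OptimizerParameters.GradientDecayFactor = 0.8;         % Adam first-moment decay
    reprOpts.OptimizerParameters.SquaredGradientDecayFactor = 0.99; % Adam second-moment decay
    reprOpts.L2RegularizationFactor = 0.00005;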
