MATLAB Answers

How to bound DQN critic estimate or RL training progress y-axis

I'm training a DQN agent with the new Reinforcement Learning Toolbox. During training, the critic network generates a long-term reward estimate (Q0) for each episode; these are displayed in green on the training progress plot, with the episode reward in blue and the running average reward in red. As the figure shows, the actual rewards average around -1000, but the first few estimates were orders of magnitude larger, so they permanently skew the y-axis and make it impossible to discern the progress of the actual rewards during training.
[Figure: large_Q0_ex.PNG - training progress plot in which the initial Q0 estimates dominate the y-axis]
It seems I either need to bound the critic's estimate or set limits on the Reinforcement Learning Episode Manager's y-axis, but I haven't found a way to do either.


1 Answer

 Accepted Answer

Hello,
I believe the best approach here is to figure out why the critic estimate takes such large values. Even if you rescale the plot window, inaccurate critic estimates will still affect training. Bounding the estimate values is not ideal either, because you lose information (one action may be better than another, but this won't be reflected in the estimate). A few things to try:
1) Make sure the gradient threshold option in the representation options of the network is finite, e.g. set it to 1. This prevents the weights from changing too much during training.
2) Try reducing the number of layers/nodes
3) Try providing initial (small) values for the network weights (especially the last FC layer)
4) Adding a scaling layer towards the end of the network may also help (see the sketch after this list, which also illustrates point 1)
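For illustration, here is a minimal sketch of points 1 and 4. The layer names, the Scale/Bias values, and the learning rate are placeholders, not values taken from your setup:

% Critic representation options with a finite gradient threshold (point 1)
criticOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);

% Tail of the critic's common path, ending in a scaling layer (point 4).
% Pick Scale/Bias so the initial Q0 estimates land near your reward range.
commonPath = [
    fullyConnectedLayer(24,'Name','fc_common')
    reluLayer('Name','relu_common')
    fullyConnectedLayer(1,'Name','fc_out')
    scalingLayer('Name','q_scale','Scale',0.1,'Bias',-100)];

You would then pass criticOpts as the options argument when creating the critic representation (rlRepresentation or rlQValueRepresentation, depending on your release).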

  2 Comments

I agree that I should focus on getting better initial estimates.
  1. I tried changing my gradient threshold from the default (infinity) to 1, but that didn't seem to help.
  2. My current architecture already seems fairly small - 3 layers on the state path, 1 layer on the action path, and 2 layers on the common output path (24 nodes each, except for a single node on the last layer). Might this still be too big?
  3. There are many options for weight initializers, so I'm not sure which to choose for each layer. Do you have any tips for this or can you point me to further reading?
  4. I added a scaling layer just before the output and set the scale and bias according to what I was seeing in the original estimates. This had a great impact and brought the estimates much closer to reality! I would still like to avoid this reactive approach, however, so I think smarter weight initialization is preferable.
For weight initialization, see the answer here. You can also set weight values directly using the 'Weights' option of the FC layer - see, for instance, the 'createNetworks' script in the walking robot example.
The scaling layer is actually a better alternative than bounding, because it is a linear operation that is taken into account during training.
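As a rough sketch of the direct weight-setting approach mentioned above (the sizes and values are placeholders; 24 assumes the preceding layer has 24 nodes):

% Small initial weights and a bias near the expected return keep the first Q0 estimates reasonable
W0 = 1e-3*randn(1,24);   % OutputSize-by-InputSize: 1 output, 24 inputs from the previous layer
b0 = -1000;              % roughly the reward level observed in training

fcOut = fullyConnectedLayer(1,'Name','fc_out','Weights',W0,'Bias',b0);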
