How to Perform Gradient Descent for DQN Loss Function
I'm writing a DQN from scratch, and I'm confused about the procedure for updating the evalNet via gradient descent.
The standard DQN algorithm is to define two networks: an evaluation network Q(s, a; \theta) and a target network \hat{Q}(s, a; \theta^-). Train Q on minibatches sampled from replay memory, and update \theta with a gradient descent step on the loss (y_j - Q(s_j, a_j; \theta))^2.
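Written out (following the DQN paper, with \alpha the learning rate), the minibatch step I want to reproduce is:

    y_j = r_j                                                      if s_{j+1} is terminal
    y_j = r_j + \gamma \max_{a'} \hat{Q}(s_{j+1}, a'; \theta^-)    otherwise

    \theta \leftarrow \theta - \alpha \, \nabla_\theta ( y_j - Q(s_j, a_j; \theta) )^2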
I define the two networks as evalNet and targetNet. When updating evalNet, I first set the training target matrix equal to evalNet's own output for the minibatch states, and then overwrite only the entry corresponding to the action actually taken with y_j, which guarantees that the error on every other action is zero. Then I update targetNet by copying evalNet's weights every C steps. If I choose the feedforward train method as '…', does this procedure update the evalNet correctly via gradient descent?
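For concreteness, here is a minimal sketch of that update with the feedforwardnet/train API. The variable names are just for illustration (not my exact code), and I use 'traingd' as a stand-in for the training function in question:

    % Minimal sketch of the masked update. Assumptions: evalNet/targetNet are
    % feedforwardnet objects mapping a state column to one Q-value per action;
    % s, s2 hold the minibatch states (one sample per column); a, r, done are
    % row vectors; gamma is the discount factor.
    Qtarget = evalNet(s);                   % start from evalNet's own predictions
    Qnext   = targetNet(s2);                % bootstrap from the frozen target net
    for j = 1:size(s, 2)
        if done(j)
            y = r(j);                       % terminal transition: no bootstrap
        else
            y = r(j) + gamma * max(Qnext(:, j));
        end
        Qtarget(a(j), j) = y;               % overwrite only the taken action's entry
    end
    evalNet.trainFcn = 'traingd';           % e.g. plain batch gradient descent
    evalNet.trainParam.epochs = 1;          % one descent step per minibatch
    evalNet.trainParam.showWindow = false;  % suppress the training GUI
    evalNet.divideFcn = 'dividetrain';      % train on the whole minibatch, no val/test split
    evalNet = train(evalNet, s, Qtarget);   % one gradient step on the default mse
    % every C updates: targetNet = evalNet;

My understanding is that, because the non-taken actions have zero error, they also contribute zero gradient, so minimizing the mse over all outputs should match the per-action DQN loss up to a constant scale factor. Is that reasoning correct?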