
Deep Q-Network (DQN) Agent

The deep Q-network (DQN) algorithm is an off-policy reinforcement learning method for environments with discrete action spaces. A DQN agent trains a Q-value function to estimate the expected discounted cumulative long-term reward when following the optimal policy. DQN is a variant of Q-learning that features a target critic and an experience buffer. The DQN agent supports offline training (training from saved data, without an environment). For more information on Q-learning, see Q-Learning Agent. For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

In Reinforcement Learning Toolbox™, a DQN agent is implemented by an rlDQNAgent object.

DQN agents can be trained in environments with the following observation and action spaces.

Observation Space      | Action Space
Continuous or discrete | Discrete
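
For example, you can represent such spaces with specification objects. The following minimal sketch creates a continuous observation specification and a discrete action specification of the kind a DQN agent supports; the observation dimension and action values are arbitrary, assumed examples.

    % Continuous observation space: a 4-dimensional vector (example dimension).
    obsInfo = rlNumericSpec([4 1]);
    % Discrete action space: three possible scalar actions (example values).
    actInfo = rlFiniteSetSpec([-1 0 1]);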

DQN agents use the following critic.

Critic: Q-value function critic Q(S,A), which you create using rlQValueFunction or rlVectorQValueFunction

Actor: DQN agents do not use an actor.

During training, the agent:

  • Updates the critic learnable parameters at each time step during learning.

  • Explores the action space using epsilon-greedy exploration. During each control interval, the agent either selects a random action with probability ϵ or selects an action greedily with respect to the action-value function with probability 1-ϵ. The greedy action is the action for which the action-value function is greatest.

  • Stores past experiences using a circular experience buffer. The agent updates the critic based on a mini-batch of experiences randomly sampled from the buffer.
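
The exploration and experience buffer behaviors map to properties of the agent options object. The following minimal sketch shows how they are typically configured; all numeric values are assumed examples, not recommendations.

    % Configure epsilon-greedy exploration and the experience buffer
    % through rlDQNAgentOptions (example values).
    opt = rlDQNAgentOptions;
    opt.EpsilonGreedyExploration.Epsilon      = 1;      % initial exploration probability
    opt.EpsilonGreedyExploration.EpsilonMin   = 0.01;   % lower bound on epsilon
    opt.EpsilonGreedyExploration.EpsilonDecay = 0.005;  % decay applied during training
    opt.ExperienceBufferLength = 1e6;                   % capacity of the circular buffer
    opt.MiniBatchSize          = 64;                    % experiences sampled per update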

Critic Function Approximators

To estimate the value of the optimal policy, a DQN agent uses two parametrized action-value functions, each maintained by a corresponding critic.

  • Critic Q(S,A;ϕ) — Given observation S and action A, this critic stores the corresponding estimate of the expected discounted cumulative long-term reward when following the optimal policy (this is the value of the optimal policy).

  • Target critic Qt(S,A;ϕt) — To improve the stability of the optimization, the agent periodically updates the target critic learnable parameters ϕt using the latest critic parameter values.

Both Q(S,A;ϕ) and Qt(S,A;ϕt) are implemented by function approximator objects having the same structure and parameterization.

For more information on creating critics for value function approximation, see Create Policies and Value Functions.

During training, the agent tunes the parameter values in ϕ. After training, the parameters remain at their tuned values, and the trained value function approximator is stored in the critic Q(S,A).
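
As an illustration, after training you can extract the critic from the agent and inspect its tuned parameters. The sketch below assumes agent is an existing, trained rlDQNAgent object.

    % Extract the trained Q-value function approximator and its parameters.
    critic = getCritic(agent);               % trained critic Q(S,A;phi)
    params = getLearnableParameters(critic); % tuned parameter values phi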

Agent Creation

You can create and train DQN agents at the MATLAB® command line or using the Reinforcement Learning Designer app. For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.

At the command line, you can create a default DQN agent based on the observation and action specifications from the environment. A default DQN agent uses default function approximators that rely on a deep neural network model. To do so, perform the following steps; a code sketch follows the list.

  1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer (the default is 256 neurons) or whether to use an LSTM layer (by default no LSTM layer is used). To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. If needed, specify agent options using an rlDQNAgentOptions object.

  5. Create the agent using an rlDQNAgent object.
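
The following minimal sketch walks through these steps. It assumes env is an existing environment object with a discrete action space; the number of hidden units and the option values are arbitrary examples.

    obsInfo = getObservationInfo(env);                          % step 1
    actInfo = getActionInfo(env);                               % step 2
    initOpts = rlAgentInitializationOptions(NumHiddenUnit=128); % step 3 (optional)
    agentOpts = rlDQNAgentOptions(UseDoubleDQN=true);           % step 4 (optional)
    agent = rlDQNAgent(obsInfo,actInfo,initOpts,agentOpts);     % step 5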

Alternatively, you can create a critic and use it to create your agent; a code sketch follows the list. In this case, ensure that the dimensions of the observation and action layers in the critic match the corresponding observation and action specifications of the environment.

  1. Create a critic using an rlQValueFunction or rlVectorQValueFunction object.

  2. Specify agent options using an rlDQNAgentOptions object. Alternatively, you can create the agent first (step 3) and then, using dot notation, access its option object and modify the options.

  3. Create the agent using an rlDQNAgent object.
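
The following minimal sketch illustrates this workflow with a small custom network. The observation dimension, action set, layer sizes, and option values are assumed examples.

    % Example observation and action specifications.
    obsInfo = rlNumericSpec([4 1]);        % continuous observation with 4 elements
    actInfo = rlFiniteSetSpec([-1 0 1]);   % three discrete actions

    % Network mapping observations to one Q-value per action.
    net = [
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(numel(actInfo.Elements))
        ];
    net = dlnetwork(net);

    critic = rlVectorQValueFunction(net,obsInfo,actInfo);    % step 1
    agentOpts = rlDQNAgentOptions(TargetSmoothFactor=1e-3);  % step 2
    agent = rlDQNAgent(critic,agentOpts);                    % step 3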

DQN agents support critics that use recurrent deep neural networks as function approximators.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

Training Algorithm

DQN agents use the following training algorithm, in which they update their critic model at each time step. To configure the training algorithm, specify options using an rlDQNAgentOptions object; a configuration sketch follows the procedure.

  • Initialize the critic Q(S,A;ϕ) with random parameter values ϕ, and initialize the target critic parameters ϕt with the same values: ϕt = ϕ.

  • For each training time step:

    1. For the current observation S, select a random action A with probability ϵ. Otherwise, select the action for which the critic value function is greatest.

      A = argmax_A Q(S,A;ϕ)

      To specify ϵ and its decay rate, use the EpsilonGreedyExploration option.

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer. To specify the size of the experience buffer, use the ExperienceBufferLength option in the agent rlDQNAgentOptions object.

    4. Sample a random mini-batch of M experiences (Si,Ai,Ri,S'i) from the experience buffer. To specify M, use the MiniBatchSize option.

    5. For all experiences in the mini-batch, if S'i is a terminal state, set the value function target yi to Ri. Otherwise, set it to

      Amax = argmax_A' Q(Si',A';ϕ)
      yi = Ri + γ·Qt(Si',Amax;ϕt)          (double DQN)
      yi = Ri + γ·max_A' Qt(Si',A';ϕt)     (DQN)

      Here, the normal DQN algorithm selects the action that maximizes the action-value function maintained by the target critic, while the double DQN selects the action that maximizes the action-value function maintained by the base critic.

      To set the discount factor γ, use the DiscountFactor option. To use double DQN, set the UseDoubleDQN option to true.

      If you specify a value of NumStepsToLookAhead equal to N, then the N-step return (which adds the rewards of the following N steps and the discounted estimated value of the state that caused the N-th reward) is used to calculate the target yi.

    6. Update the critic parameters by one-step minimization of the loss L across all sampled experiences.

      L = (1/(2M)) ∑i=1…M (yi − Q(Si,Ai;ϕ))²

    7. Update the target critic parameters depending on the target update method. For more information, see Target Update Methods.

    8. Update the probability threshold ϵ for selecting a random action based on the decay rate you specify in the EpsilonGreedyExploration option.
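
The following minimal sketch shows how the quantities named in this algorithm map to agent options; all values are assumed examples.

    % Options controlling the target computation and mini-batch update.
    opt = rlDQNAgentOptions( ...
        UseDoubleDQN=true, ...      % use the double DQN target
        DiscountFactor=0.99, ...    % discount factor (gamma)
        MiniBatchSize=64, ...       % mini-batch size M
        NumStepsToLookAhead=3);     % use a 3-step return for the target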

Target Update Methods

DQN agents update their target critic parameters using one of the following target update methods.

  • Smoothing — Update the target parameters at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option.

    ϕt = τ·ϕ + (1 − τ)·ϕt

  • Periodic — Update the target parameters periodically without smoothing (TargetSmoothFactor = 1). To specify the update period, use the TargetUpdateFrequency parameter.

  • Periodic Smoothing — Update the target parameters periodically with smoothing.

To configure the target update method, create an rlDQNAgentOptions object, and set the TargetUpdateFrequency and TargetSmoothFactor parameters as shown in the following table; a configuration sketch follows the table.

Update Method       | TargetUpdateFrequency | TargetSmoothFactor
Smoothing (default) | 1                     | Less than 1
Periodic            | Greater than 1        | 1
Periodic smoothing  | Greater than 1        | Less than 1
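
The following minimal sketch shows one configuration for each method; the numeric values are assumed examples.

    % Smoothing (default): update every step with smoothing factor tau.
    optSmoothing = rlDQNAgentOptions( ...
        TargetSmoothFactor=1e-3,TargetUpdateFrequency=1);

    % Periodic: update every 4 steps without smoothing.
    optPeriodic = rlDQNAgentOptions( ...
        TargetSmoothFactor=1,TargetUpdateFrequency=4);

    % Periodic smoothing: update every 4 steps with smoothing.
    optPeriodicSmoothing = rlDQNAgentOptions( ...
        TargetSmoothFactor=1e-3,TargetUpdateFrequency=4);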

