
gruLayer

Gated recurrent unit (GRU) layer for recurrent neural network (RNN)

Since R2020a

Description

A GRU layer is an RNN layer that learns dependencies between time steps in time series and sequence data.

Creation

Description


layer = gruLayer(numHiddenUnits) creates a GRU layer and sets the NumHiddenUnits property.

layer = gruLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.

Properties


GRU

This property is read-only.

Number of hidden units (also known as the hidden size), specified as a positive integer.

The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data.

The hidden state does not limit the number of time steps that the layer processes in an iteration. To split your sequences into smaller sequences when you use the trainNetwork function, use the SequenceLength training option.

The layer outputs data with NumHiddenUnits channels.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
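For illustration, this minimal sketch pairs a GRU layer with a training option that splits long sequences, as described above. The hidden-unit count and training settings are arbitrary choices, not recommendations.

layer = gruLayer(200);                     % hidden state has 200 channels
options = trainingOptions('adam', ...
    'SequenceLength',128, ...              % split long sequences into chunks of 128 time steps
    'MaxEpochs',10);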

This property is read-only.

Output mode, specified as one of these values:

  • 'sequence' — Output the complete sequence.

  • 'last' — Output the last time step of the sequence.
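For example, this minimal sketch creates one layer for each output mode (the hidden-unit count is arbitrary):

seqLayer  = gruLayer(100,'OutputMode','sequence');   % sequence-to-sequence tasks
lastLayer = gruLayer(100,'OutputMode','last');       % sequence-to-one tasks, such as sequence classification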

Flag for state inputs to the layer, specified as 1 (true) or 0 (false).

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState property for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has two inputs with names 'in' and 'hidden', which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState property must be empty.

Flag for state outputs from the layer, specified as 1 (true) or 0 (false).

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has two outputs with names 'out' and 'hidden', which correspond to the output data and hidden state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
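As a sketch, assuming your release supports the HasStateInputs and HasStateOutputs name-value arguments of gruLayer, a layer that exposes its hidden state for manual state handling looks like this:

statefulLayer = gruLayer(100,'HasStateInputs',true,'HasStateOutputs',true);
statefulLayer.InputNames     % {'in','hidden'}
statefulLayer.OutputNames    % {'out','hidden'}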

Reset gate mode, specified as one of the following:

  • "after-multiplication" — Apply reset gate after matrix multiplication. This option is cuDNN compatible.

  • "before-multiplication" — Apply reset gate before matrix multiplication.

  • "recurrent-bias-after-multiplication" — Apply reset gate after matrix multiplication and use an additional set of bias terms for the recurrent weights.

For more information about the reset gate calculations, see Gated Recurrent Unit Layer.

Before R2023a: dlnetwork objects support GRU layers with the ResetGateMode set to "after-multiplication" only.
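For instance, this minimal sketch selects the formulation with a separate recurrent bias (the hidden-unit count is arbitrary):

layer = gruLayer(100,'ResetGateMode','recurrent-bias-after-multiplication');
% After initialization, the Bias property of this layer is a 6*NumHiddenUnits-by-1 vector.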

This property is read-only.

Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.

Data Types: double | char

Activations

Activation function to update the hidden state, specified as one of the following:

  • 'tanh' — Use the hyperbolic tangent function (tanh).

  • 'softsign' — Use the softsign function softsign(x) = x / (1 + |x|).

The layer uses this option as the function σs in the calculations to update the hidden state.

This property is read-only.

Activation function to apply to the gates, specified as one of these values:

  • 'sigmoid' — Use the sigmoid function σ(x) = (1 + e^(−x))^(−1).

  • 'hard-sigmoid' — Use the hard sigmoid function

    σ(x) = 0            if x < −2.5
    σ(x) = 0.2x + 0.5   if −2.5 ≤ x ≤ 2.5
    σ(x) = 1            if x > 2.5

The layer uses this option as the function σg in the calculations for the layer gates.
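As an informal check, not part of the layer API, the hard sigmoid can be written directly in MATLAB:

hardSigmoid = @(x) min(max(0.2*x + 0.5,0),1);   % clip 0.2x + 0.5 to the range [0,1]
hardSigmoid([-3 0 3])                           % returns [0 0.5 1]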

State

Hidden state to use in the layer operation, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.

After you set this property manually, calls to the resetState function set the hidden state to this value.

If HasStateInputs is 1 (true), then the HiddenState property must be empty.

Data Types: single | double
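A minimal sketch of setting the initial hidden state manually (the values are arbitrary):

numHiddenUnits = 100;
layer = gruLayer(numHiddenUnits);
layer.HiddenState = 0.1*ones(numHiddenUnits,1);   % NumHiddenUnits-by-1 initial state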

Parameters and Initialization

Function to initialize the input weights, specified as one of the following:

  • 'glorot' — Initialize the input weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 3*NumHiddenUnits.

  • 'he' — Initialize the input weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

  • 'orthogonal' — Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [4]

  • 'narrow-normal' — Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'zeros' — Initialize the input weights with zeros.

  • 'ones' — Initialize the input weights with ones.

  • Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.

The layer only initializes the input weights when the InputWeights property is empty.

Data Types: char | string | function_handle
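For example, this sketch passes a hypothetical custom initializer as a function handle; the layer supplies sz as the size of the input weight matrix, that is, [3*NumHiddenUnits InputSize]:

smallUniform = @(sz) 0.02*rand(sz) - 0.01;   % uniform values in [-0.01, 0.01]
layer = gruLayer(100,'InputWeightsInitializer',smallUniform);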

Function to initialize the recurrent weights, specified as one of the following:

  • 'orthogonal' — Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [4]

  • 'glorot' — Initialize the recurrent weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 3*NumHiddenUnits.

  • 'he' — Initialize the recurrent weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.

  • 'narrow-normal' — Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'zeros' — Initialize the recurrent weights with zeros.

  • 'ones' — Initialize the recurrent weights with ones.

  • Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.

The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

Data Types: char | string | function_handle

Function to initialize the bias, specified as one of the following:

  • 'zeros' — Initialize the bias with zeros.

  • 'narrow-normal' — Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'ones' — Initialize the bias with ones.

  • Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the three input weight matrices for the components in the GRU layer. The three matrices are concatenated vertically in the following order:

  1. Reset gate

  2. Update gate

  3. Candidate state

The input weights are learnable parameters. When you train a neural network using the trainNetwork function, if InputWeights is nonempty, then the software uses the InputWeights property as the initial value. If InputWeights is empty, then the software uses the initializer specified by InputWeightsInitializer.

At training time, InputWeights is a 3*NumHiddenUnits-by-InputSize matrix.
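As a sketch, you can recover the per-component blocks of a trained layer by indexing the concatenated matrix; this assumes layer is a GRULayer with nonempty InputWeights:

W  = layer.InputWeights;             % 3*NumHiddenUnits-by-InputSize
nh = layer.NumHiddenUnits;
Wr = W(1:nh,:);                      % reset gate weights
Wz = W(nh+1:2*nh,:);                 % update gate weights
Wc = W(2*nh+1:3*nh,:);               % candidate state weights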

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the three recurrent weight matrices for the components in the GRU layer. The three matrices are vertically concatenated in the following order:

  1. Reset gate

  2. Update gate

  3. Candidate state

The recurrent weights are learnable parameters. When you train an RNN using the trainNetwork function, if RecurrentWeights is nonempty, then the software uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then the software uses the initializer specified by RecurrentWeightsInitializer.

At training time, RecurrentWeights is a 3*NumHiddenUnits-by-NumHiddenUnits matrix.

Layer biases for the GRU layer, specified as a numeric vector.

If ResetGateMode is 'after-multiplication' or 'before-multiplication', then the bias vector is a concatenation of three bias vectors for the components in the GRU layer. The three vectors are concatenated vertically in the following order:

  1. Reset gate

  2. Update gate

  3. Candidate state

In this case, at training time, Bias is a 3*NumHiddenUnits-by-1 numeric vector.

If ResetGateMode is 'recurrent-bias-after-multiplication', then the bias vector is a concatenation of six bias vectors for the components in the GRU layer. The six vectors are concatenated vertically in the following order:

  1. Reset gate

  2. Update gate

  3. Candidate state

  4. Reset gate (recurrent bias)

  5. Update gate (recurrent bias)

  6. Candidate state (recurrent bias)

In this case, at training time, Bias is a 6*NumHiddenUnits-by-1 numeric vector.

The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

For more information about the reset gate calculations, see Gated Recurrent Unit Layer.

Learning Rate and Regularization

Learning rate factor for the input weights, specified as a numeric scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify with the trainingOptions function.

To control the value of the learning rate factor for the three individual matrices in InputWeights, specify a 1-by-3 vector. The entries of InputWeightsLearnRateFactor correspond to the learning rate factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]
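For example, this minimal sketch keeps the reset gate input weights fixed while the other components train at the global rate:

layer = gruLayer(100);
layer.InputWeightsLearnRateFactor = [0 1 1];   % [reset gate, update gate, candidate state]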

Learning rate factor for the recurrent weights, specified as a numeric scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

To control the value of the learning rate factor for the three individual matrices in RecurrentWeights, specify a 1-by-3 vector. The entries of RecurrentWeightsLearnRateFactor correspond to the learning rate factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

To control the value of the learning rate factor for the three individual vectors in Bias, specify a 1-by-3 vector. The entries of BiasLearnRateFactor correspond to the learning rate factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

If ResetGateMode is 'recurrent-bias-after-multiplication', then the software uses the same vector for the recurrent bias vectors.

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]

L2 regularization factor for the input weights, specified as a numeric scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the three individual matrices in InputWeights, specify a 1-by-3 vector. The entries of InputWeightsL2Factor correspond to the L2 regularization factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]

L2 regularization factor for the recurrent weights, specified as a numeric scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the three individual matrices in RecurrentWeights, specify a 1-by-3 vector. The entries of RecurrentWeightsL2Factor correspond to the L2 regularization factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the individual vectors in Bias, specify a 1-by-3 vector. The entries of BiasL2Factor correspond to the L2 regularization factor of the following:

  1. Reset gate

  2. Update gate

  3. Candidate state

If ResetGateMode is 'recurrent-bias-after-multiplication', then the software uses the same vector for the recurrent bias vectors.

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: 2

Example: [1 2 1]

Layer

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with the name ''.

Data Types: char | string

Number of inputs of the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState property for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has two inputs with names 'in' and 'hidden', which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState property must be empty.

Data Types: double

Input names of the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState property for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has two inputs with names 'in' and 'hidden', which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState property must be empty.

Number of outputs of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has two outputs with names 'out' and 'hidden', which correspond to the output data and hidden state, respectively. In this case, the layer also outputs the state values computed during the layer operation.

Data Types: double

Output names of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has two outputs with names 'out' and 'hidden', which correspond to the output data and hidden state, respectively. In this case, the layer also outputs the state values computed during the layer operation.

Examples


Create a GRU layer with the name 'gru1' and 100 hidden units.

layer = gruLayer(100,'Name','gru1')
layer = 
  GRULayer with properties:

                       Name: 'gru1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'
              ResetGateMode: 'after-multiplication'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []


Include a GRU layer in a Layer array.

inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    gruLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   GRU                     GRU with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex

Algorithms


References

[1] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).

[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf

[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Society, 2015. https://doi.org/10.1109/ICCV.2015.123

[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).

Extended Capabilities

Version History

Introduced in R2020a
