gruLayer
Description
A GRU layer is an RNN layer that learns dependencies between time steps in time series and sequence data.
Creation
Syntax

layer = gruLayer(numHiddenUnits)
layer = gruLayer(numHiddenUnits,Name,Value)

Description

layer = gruLayer(numHiddenUnits) creates a GRU layer and sets the NumHiddenUnits property.

layer = gruLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
Properties
GRU
NumHiddenUnits
— Number of hidden units
positive integer
This property is read-only.
Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data.
The hidden state does not limit the number of time steps that the layer processes in an iteration. To split your sequences into smaller sequences when you use the trainNetwork function, use the SequenceLength training option.
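For example, this sketch (the solver and option values are illustrative) splits long training sequences into segments of 200 time steps for each iteration:

options = trainingOptions('adam', ...
    'SequenceLength',200, ...   % split sequences into segments of 200 time steps
    'MaxEpochs',30);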
The layer outputs data with NumHiddenUnits
channels.
Data Types: single
| double
| int8
| int16
| int32
| int64
| uint8
| uint16
| uint32
| uint64
OutputMode
— Output mode
'sequence'
(default) | 'last'
This property is read-only.
Output mode, specified as one of these values:

- 'sequence' — Output the complete sequence.
- 'last' — Output the last time step of the sequence.
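For example, sequence-to-one tasks such as sequence classification typically output only the last time step. A minimal sketch (the number of hidden units is arbitrary):

layer = gruLayer(100,'OutputMode','last');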
HasStateInputs
— Flag for state inputs to layer
0
(false) (default) | 1
(true)
Flag for state inputs to the layer, specified as 1
(true) or
0
(false).
If the HasStateInputs
property is 0
(false), then the layer has one input with name 'in'
, which corresponds to the input data. In this case, the layer uses the HiddenState
property for the layer operation.
If the HasStateInputs
property is 1
(true), then the layer has two inputs with names 'in'
and 'hidden'
, which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs
is 1
(true), then the HiddenState
property must be empty.
HasStateOutputs
— Flag for state outputs from layer
0
(false) (default) | 1
(true)
Flag for state outputs from the layer, specified as 1 (true) or 0 (false).
If the HasStateOutputs
property is 0
(false), then the
layer has one output with name 'out'
, which corresponds to the output
data.
If the HasStateOutputs
property is 1
(true), then the
layer has two outputs with names 'out'
and
'hidden'
, which correspond to the output data and hidden
state, respectively. In this case, the layer also outputs the state values computed
during the layer operation.
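For example, this sketch creates a GRU layer that both accepts and returns state values, for use in stateful dlnetwork workflows (the number of hidden units is arbitrary):

layer = gruLayer(100,'HasStateInputs',true,'HasStateOutputs',true);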
ResetGateMode
— Reset gate mode
"after-multiplication"
(default) | "before-multiplication"
| "recurrent-bias-after-multiplication"
Reset gate mode, specified as one of the following:

- "after-multiplication" — Apply the reset gate after matrix multiplication. This option is cuDNN compatible.
- "before-multiplication" — Apply the reset gate before matrix multiplication.
- "recurrent-bias-after-multiplication" — Apply the reset gate after matrix multiplication and use an additional set of bias terms for the recurrent weights.
For more information about the reset gate calculations, see Gated Recurrent Unit Layer.
Before R2023a: dlnetwork
objects support
GRU layers with the ResetGateMode
set to
"after-multiplication"
only.
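For example, a sketch that selects a non-default mode (the number of hidden units is arbitrary):

layer = gruLayer(100,'ResetGateMode','before-multiplication');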
InputSize
— Input size
'auto'
(default) | positive integer
This property is read-only.
Input size, specified as a positive integer or 'auto'
. If
InputSize
is 'auto'
, then the software
automatically assigns the input size at training time.
Data Types: double
| char
Activations
StateActivationFunction
— Activation function to update the hidden state
'tanh'
(default) | 'softsign'
Activation function to update the hidden state, specified as one of the following:

- 'tanh' — Use the hyperbolic tangent function (tanh).
- 'softsign' — Use the softsign function $\mathrm{softsign}(x) = \frac{x}{1 + |x|}$.

The layer uses this option as the function $\sigma_s$ in the calculations to update the hidden state.
GateActivationFunction
— Activation function to apply to gates
'sigmoid'
(default) | 'hard-sigmoid'
This property is read-only.
Activation function to apply to the gates, specified as one of these values:

- 'sigmoid' — Use the sigmoid function, $\sigma(x) = (1 + e^{-x})^{-1}$.
- 'hard-sigmoid' — Use the hard sigmoid function

$$\sigma(x) = \begin{cases} 0 & \text{if } x < -2.5 \\ 0.2x + 0.5 & \text{if } -2.5 \le x \le 2.5 \\ 1 & \text{if } x > 2.5. \end{cases}$$

The layer uses this option as the function $\sigma_g$ in the calculations for the layer gates.
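For example, to swap in the non-default activations, a sketch (the number of hidden units is arbitrary):

layer = gruLayer(100, ...
    'StateActivationFunction','softsign', ...
    'GateActivationFunction','hard-sigmoid');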
State
HiddenState
— Hidden state
[]
(default) | numeric vector
Hidden state to use in the layer operation, specified as a
NumHiddenUnits
-by-1 numeric vector. This value corresponds to the
initial hidden state when data is passed to the layer.
After you set this property manually, calls to the resetState
function set the hidden state to this value.
If HasStateInputs
is 1
(true),
then the HiddenState
property must be empty.
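For example, this sketch sets the initial hidden state to small random values (the scale is illustrative):

layer = gruLayer(100);
layer.HiddenState = 0.01*randn(100,1);   % NumHiddenUnits-by-1 vector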
Data Types: single
| double
Parameters and Initialization
InputWeightsInitializer
— Function to initialize input weights
'glorot'
(default) | 'he'
| 'orthogonal'
| 'narrow-normal'
| 'zeros'
| 'ones'
| function handle
Function to initialize the input weights, specified as one of the following:

- 'glorot' — Initialize the input weights with the Glorot initializer [2] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 3*NumHiddenUnits.
- 'he' — Initialize the input weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.
- 'orthogonal' — Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].
- 'narrow-normal' — Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
- 'zeros' — Initialize the input weights with zeros.
- 'ones' — Initialize the input weights with ones.
- Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.
The layer only initializes the input weights when the
InputWeights
property is empty.
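For example, this sketch passes a hypothetical custom initializer that samples from a scaled normal distribution (the handle name and scale are illustrative):

myInit = @(sz) 0.02*randn(sz);   % sz is the size of the input weights
layer = gruLayer(100,'InputWeightsInitializer',myInit);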
Data Types: char
| string
| function_handle
RecurrentWeightsInitializer
— Function to initialize recurrent weights
'orthogonal'
(default) | 'glorot'
| 'he'
| 'narrow-normal'
| 'zeros'
| 'ones'
| function handle
Function to initialize the recurrent weights, specified as one of the following:

- 'orthogonal' — Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].
- 'glorot' — Initialize the recurrent weights with the Glorot initializer [2] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 3*NumHiddenUnits.
- 'he' — Initialize the recurrent weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.
- 'narrow-normal' — Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
- 'zeros' — Initialize the recurrent weights with zeros.
- 'ones' — Initialize the recurrent weights with ones.
- Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.
The layer only initializes the recurrent weights when the
RecurrentWeights
property is empty.
Data Types: char
| string
| function_handle
BiasInitializer
— Function to initialize bias
'zeros'
(default) | 'narrow-normal'
| 'ones'
| function handle
Function to initialize the bias, specified as one of the following:

- 'zeros' — Initialize the bias with zeros.
- 'narrow-normal' — Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
- 'ones' — Initialize the bias with ones.
- Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.
The layer only initializes the bias when the Bias
property is
empty.
Data Types: char
| string
| function_handle
InputWeights
— Input weights
[]
(default) | matrix
Input weights, specified as a matrix.
The input weight matrix is a concatenation of the three input weight matrices for the components in the GRU layer. The three matrices are concatenated vertically in the following order:
Reset gate
Update gate
Candidate state
The input weights are learnable parameters. When you train a
neural network using the trainNetwork
function, if InputWeights
is nonempty, then the software uses the InputWeights
property as the initial value. If InputWeights
is empty, then the software uses the initializer specified by
InputWeightsInitializer
.
At training time, InputWeights
is a
3*NumHiddenUnits
-by-InputSize
matrix.
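For example, given a trained layer, this sketch splits the concatenated input weight matrix into its three component blocks (the variable names are illustrative):

W = layer.InputWeights;        % 3*NumHiddenUnits-by-InputSize matrix
nh = layer.NumHiddenUnits;
Wr = W(1:nh,:);                % reset gate weights
Wz = W(nh+1:2*nh,:);           % update gate weights
Wh = W(2*nh+1:3*nh,:);         % candidate state weights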
RecurrentWeights
— Recurrent weights
[]
(default) | matrix
Recurrent weights, specified as a matrix.
The recurrent weight matrix is a concatenation of the three recurrent weight matrices for the components in the GRU layer. The three matrices are vertically concatenated in the following order:
Reset gate
Update gate
Candidate state
The recurrent weights are learnable parameters. When you train
an RNN using the trainNetwork
function, if RecurrentWeights
is nonempty, then the software uses the RecurrentWeights
property as the initial value. If RecurrentWeights
is empty, then the software uses the initializer specified by
RecurrentWeightsInitializer
.
At training time, RecurrentWeights
is a
3*NumHiddenUnits
-by-NumHiddenUnits
matrix.
Bias
— Layer biases
[]
(default) | numeric vector
Layer biases for the GRU layer, specified as a numeric vector.
If ResetGateMode
is
'after-multiplication'
or
'before-multiplication'
, then the bias vector is a concatenation
of three bias vectors for the components in the GRU layer. The three vectors are
concatenated vertically in the following order:
Reset gate
Update gate
Candidate state
In this case, at training time, Bias
is a
3*NumHiddenUnits
-by-1 numeric vector.
If ResetGateMode
is
'recurrent-bias-after-multiplication'
, then the bias vector is a
concatenation of six bias vectors for the components in the GRU layer. The six vectors
are concatenated vertically in the following order:
Reset gate
Update gate
Candidate state
Reset gate (recurrent bias)
Update gate (recurrent bias)
Candidate state (recurrent bias)
In this case, at training time, Bias
is a
6*NumHiddenUnits
-by-1 numeric vector.
The layer biases are learnable parameters. When you train a
neural network, if Bias
is nonempty, then trainNetwork
uses the Bias
property as the
initial value. If Bias
is empty, then
trainNetwork
uses the initializer specified by BiasInitializer
.
For more information about the reset gate calculations, see Gated Recurrent Unit Layer.
Learning Rate and Regularization
InputWeightsLearnRateFactor
— Learning rate factor for input weights
1 (default) | nonnegative scalar | 1-by-3 numeric vector
Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate
to determine the learning rate factor for the input weights of the layer. For example, if
InputWeightsLearnRateFactor
is 2
, then the learning
rate factor for the input weights of the layer is twice the current global learning rate. The
software determines the global learning rate based on the settings you specify with the
trainingOptions
function.
To control the value of the learning rate factor for the three individual matrices
in InputWeights
, specify a 1-by-3 vector. The entries of
InputWeightsLearnRateFactor
correspond to the learning rate
factor of the following:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example:
[1 2 1]
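For example, this sketch doubles the learning rate factor for the update gate input weights only (the values are illustrative):

layer = gruLayer(100,'InputWeightsLearnRateFactor',[1 2 1]);
% entries correspond to [reset gate, update gate, candidate state]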
RecurrentWeightsLearnRateFactor
— Learning rate factor for recurrent weights
1 (default) | nonnegative scalar | 1-by-3 numeric vector
Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate
to determine the learning rate for the recurrent weights of the layer. For example, if
RecurrentWeightsLearnRateFactor
is 2
, then the
learning rate for the recurrent weights of the layer is twice the current global learning rate.
The software determines the global learning rate based on the settings you specify using the
trainingOptions
function.
To control the value of the learning rate factor for the three individual matrices
in RecurrentWeights
, specify a 1-by-3 vector. The entries of
RecurrentWeightsLearnRateFactor
correspond to the learning rate
factor of the following:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example:
[1 2 1]
BiasLearnRateFactor
— Learning rate factor for biases
1 (default) | nonnegative scalar | 1-by-3 numeric vector
Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate
to determine the learning rate for the biases in this layer. For example, if
BiasLearnRateFactor
is 2
, then the learning rate for
the biases in the layer is twice the current global learning rate. The software determines the
global learning rate based on the settings you specify using the trainingOptions
function.
To control the value of the learning rate factor for the three individual vectors
in Bias
, specify a 1-by-3 vector. The entries of
BiasLearnRateFactor
correspond to the learning rate factor of
the following:
Reset gate
Update gate
Candidate state
If ResetGateMode
is
'recurrent-bias-after-multiplication'
, then the software uses the
same vector for the recurrent bias vectors.
To specify the same value for all the vectors, specify a nonnegative scalar.
Example:
2
Example:
[1 2 1]
InputWeightsL2Factor
— L2 regularization factor for input weights
1 (default) | nonnegative scalar | 1-by-3 numeric vector
L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global
L2 regularization factor to determine the
L2 regularization factor for the input weights
of the layer. For example, if InputWeightsL2Factor
is 2
,
then the L2 regularization factor for the input
weights of the layer is twice the current global L2
regularization factor. The software determines the L2
regularization factor based on the settings you specify using the trainingOptions
function.
To control the value of the L2 regularization factor for the three individual
matrices in InputWeights
, specify a 1-by-3 vector. The entries of
InputWeightsL2Factor
correspond to the L2 regularization factor
of the following:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example:
[1 2 1]
RecurrentWeightsL2Factor
— L2 regularization factor for recurrent weights
1 (default) | nonnegative scalar | 1-by-3 numeric vector
L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global
L2 regularization factor to determine the
L2 regularization factor for the recurrent
weights of the layer. For example, if RecurrentWeightsL2Factor
is
2
, then the L2 regularization
factor for the recurrent weights of the layer is twice the current global
L2 regularization factor. The software
determines the L2 regularization factor based on the
settings you specify using the trainingOptions
function.
To control the value of the L2 regularization factor for the three individual
matrices in RecurrentWeights
, specify a 1-by-3 vector. The
entries of RecurrentWeightsL2Factor
correspond to the L2
regularization factor of the following:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example:
[1 2 1]
BiasL2Factor
— L2 regularization factor for biases
0 (default) | nonnegative scalar | 1-by-3 numeric vector
L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global
L2 regularization factor to
determine the L2 regularization for the biases in
this layer. For example, if BiasL2Factor
is 2
, then
the L2 regularization for the biases in this layer
is twice the global L2 regularization factor. The
software determines the global L2 regularization
factor based on the settings you specify using the trainingOptions
function.
To control the value of the L2 regularization factor for the individual vectors in
Bias
, specify a 1-by-3 vector. The entries of
BiasL2Factor
correspond to the L2 regularization factor of the
following:
Reset gate
Update gate
Candidate state
If ResetGateMode
is
'recurrent-bias-after-multiplication'
, then the software uses the
same vector for the recurrent bias vectors.
To specify the same value for all the vectors, specify a nonnegative scalar.
Example:
2
Example:
[1 2 1]
Layer
Name
— Layer name
''
(default) | character vector | string scalar
Layer name, specified as a character vector or a string scalar.
For Layer
array input, the trainNetwork
, assembleNetwork
, layerGraph
, and
dlnetwork
functions automatically assign
names to layers with the name ''
.
Data Types: char
| string
NumInputs
— Number of inputs
1
| 2
Number of inputs of the layer.
If the HasStateInputs
property is 0
(false), then the layer has one input with name 'in'
, which corresponds to the input data. In this case, the layer uses the HiddenState
property for the layer operation.
If the HasStateInputs
property is 1
(true), then the layer has two inputs with names 'in'
and 'hidden'
, which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs
is 1
(true), then the HiddenState
property must be empty.
Data Types: double
InputNames
— Input names
{'in'}
| {'in','hidden'}
Input names of the layer.
If the HasStateInputs
property is 0
(false), then the layer has one input with name 'in'
, which corresponds to the input data. In this case, the layer uses the HiddenState
property for the layer operation.
If the HasStateInputs
property is 1
(true), then the layer has two inputs with names 'in'
and 'hidden'
, which correspond to the input data and hidden state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs
is 1
(true), then the HiddenState
property must be empty.
NumOutputs
— Number of outputs
1
| 2
Number of outputs of the layer.
If the HasStateOutputs
property is 0
(false), then the
layer has one output with name 'out'
, which corresponds to the output
data.
If the HasStateOutputs
property is 1
(true), then the
layer has two outputs with names 'out'
and
'hidden'
, which correspond to the output data and hidden
state, respectively. In this case, the layer also outputs the state values computed
during the layer operation.
Data Types: double
OutputNames
— Output names
{'out'}
| {'out','hidden'}
Output names of the layer.
If the HasStateOutputs
property is 0
(false), then the
layer has one output with name 'out'
, which corresponds to the output
data.
If the HasStateOutputs
property is 1
(true), then the
layer has two outputs with names 'out'
and
'hidden'
, which correspond to the output data and hidden
state, respectively. In this case, the layer also outputs the state values computed
during the layer operation.
Examples
Create GRU Layer
Create a GRU layer with the name 'gru1'
and 100 hidden units.
layer = gruLayer(100,'Name','gru1')
layer = 
  GRULayer with properties:

                       Name: 'gru1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'
              ResetGateMode: 'after-multiplication'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
Include a GRU layer in a Layer
array.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
sequenceInputLayer(inputSize)
gruLayer(numHiddenUnits)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   GRU                     GRU with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
Algorithms
Gated Recurrent Unit Layer
A GRU layer is an RNN layer that learns dependencies between time steps in time series and sequence data.
The hidden state of the layer at time step t contains the output of the GRU layer for this time step. At each time step, the layer adds information to or removes information from the state. The layer controls these updates using gates.
The following components control the hidden state of the layer.
| Component | Purpose |
| --- | --- |
| Reset gate (r) | Control level of state reset |
| Update gate (z) | Control level of state update |
| Candidate state ($\tilde{h}$) | Control level of update added to hidden state |
The learnable weights of a GRU layer are the input weights W
(InputWeights
), the recurrent weights R
(RecurrentWeights
), and the bias b
(Bias
). If the ResetGateMode
property is 'recurrent-bias-after-multiplication'
, then the gate and
state calculations require two sets of bias values. The matrices W and
R are concatenations of the input weights and the recurrent weights of
each component, respectively. These matrices are concatenated as follows:

$$W = \begin{bmatrix} W_r \\ W_z \\ W_{\tilde{h}} \end{bmatrix}, \qquad R = \begin{bmatrix} R_r \\ R_z \\ R_{\tilde{h}} \end{bmatrix},$$

where r, z, and $\tilde{h}$ denote the reset gate, update gate, and candidate state, respectively.
The bias vector depends on the ResetGateMode
property. If ResetGateMode
is
'after-multiplication'
or 'before-multiplication'
,
then the bias vector is a concatenation of three vectors:

$$b = \begin{bmatrix} b_{Wr} \\ b_{Wz} \\ b_{W\tilde{h}} \end{bmatrix},$$

where the subscript W indicates that this is the bias corresponding to the input weights multiplication.
If ResetGateMode
is
'recurrent-bias-after-multiplication'
, then the bias vector is a
concatenation of six vectors:

$$b = \begin{bmatrix} b_{Wr} \\ b_{Wz} \\ b_{W\tilde{h}} \\ b_{Rr} \\ b_{Rz} \\ b_{R\tilde{h}} \end{bmatrix},$$

where the subscript R indicates that this is the bias corresponding to the recurrent weights multiplication.
The hidden state at time step t is given by

$$h_t = (1 - z_t) \odot \tilde{h}_t + z_t \odot h_{t-1}.$$
The following formulas describe the components at time step t.
| Component | ResetGateMode | Formula |
| --- | --- | --- |
| Reset gate | 'after-multiplication' | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1})$ |
| | 'before-multiplication' | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1})$ |
| | 'recurrent-bias-after-multiplication' | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1} + b_{Rr})$ |
| Update gate | 'after-multiplication' | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1})$ |
| | 'before-multiplication' | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1})$ |
| | 'recurrent-bias-after-multiplication' | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1} + b_{Rz})$ |
| Candidate state | 'after-multiplication' | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + r_t \odot (R_{\tilde{h}} h_{t-1}))$ |
| | 'before-multiplication' | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + R_{\tilde{h}} (r_t \odot h_{t-1}))$ |
| | 'recurrent-bias-after-multiplication' | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + r_t \odot (R_{\tilde{h}} h_{t-1} + b_{R\tilde{h}}))$ |
In these calculations, $\sigma_g$ and $\sigma_s$ denote the gate and state activation functions, respectively. The gruLayer function, by default, uses the sigmoid function given by $\sigma(x) = (1 + e^{-x})^{-1}$ to compute the gate activation function and the hyperbolic tangent function (tanh) to compute the state activation function. To specify the state and gate activation functions, use the StateActivationFunction and GateActivationFunction properties, respectively.
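These formulas can be checked numerically. The following sketch implements one GRU time step for the default 'after-multiplication' mode in plain MATLAB; the variable names and toy sizes are illustrative and not part of the layer API:

numHiddenUnits = 3;
inputSize = 2;
W = 0.1*randn(3*numHiddenUnits,inputSize);       % input weights [Wr; Wz; Wh]
R = 0.1*randn(3*numHiddenUnits,numHiddenUnits);  % recurrent weights [Rr; Rz; Rh]
b = zeros(3*numHiddenUnits,1);                   % bias [bWr; bWz; bWh]
x = randn(inputSize,1);                          % input at time step t
hPrev = zeros(numHiddenUnits,1);                 % hidden state at time step t-1

rows = @(k) (k-1)*numHiddenUnits + (1:numHiddenUnits);  % rows of component k
sigmoid = @(v) 1./(1 + exp(-v));                        % default gate activation

r = sigmoid(W(rows(1),:)*x + b(rows(1)) + R(rows(1),:)*hPrev);        % reset gate
z = sigmoid(W(rows(2),:)*x + b(rows(2)) + R(rows(2),:)*hPrev);        % update gate
hTilde = tanh(W(rows(3),:)*x + b(rows(3)) + r.*(R(rows(3),:)*hPrev)); % candidate state
h = (1 - z).*hTilde + z.*hPrev;                  % new hidden state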
Layer Input and Output Formats
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray
objects. The format of a dlarray
object is a string of characters, in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:

- "S" — Spatial
- "C" — Channel
- "B" — Batch
- "T" — Time
- "U" — Unspecified
For example, 2-D image data represented as a 4-D array, where the first two dimensions
correspond to the spatial dimensions of the images, the third dimension corresponds to the
channels of the images, and the fourth dimension corresponds to the batch dimension, can be
described as having the format "SSCB"
(spatial, spatial, channel,
batch).
You can interact with these dlarray
objects in automatic differentiation workflows such as developing a custom layer, using a functionLayer
object, or using the forward
and predict
functions with dlnetwork
objects.
This table shows the supported input formats of GRULayer
objects and the corresponding output format. If the output of the layer is passed to a custom layer that does not inherit from the nnet.layer.Formattable
class, or a FunctionLayer
object with the Formattable
property set to 0
(false), then the layer receives an unformatted dlarray
object with dimensions ordered corresponding to the formats in this table.
| Input Format | OutputMode | Output Format |
| --- | --- | --- |
| "CBT" (channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
| | "last" | "CB" (channel, batch) |
In dlnetwork objects, GRULayer objects also support additional input and output format combinations.
To use these input formats in trainNetwork
workflows, convert the
data to "CB"
(channel, batch) or "CBT"
(channel,
batch, time) format using flattenLayer
.
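For example, a sketch of a network that uses flattenLayer to convert spatial image-sequence data to "CBT" format before a GRU layer (the layer sizes are illustrative, and depending on your release this pattern may require a dlnetwork-based workflow):

layers = [
    sequenceInputLayer([28 28 1])               % sequences of 28-by-28 grayscale images
    convolution2dLayer(3,16,'Padding','same')
    reluLayer
    flattenLayer                                % "SSCBT" data becomes "CBT"
    gruLayer(100,'OutputMode','last')
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];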
If the HasStateInputs property is 1 (true), then the layer has one additional input with the name "hidden", which corresponds to the hidden state. This additional input expects input format "CB" (channel, batch).

If the HasStateOutputs property is 1 (true), then the layer has one additional output with the name "hidden", which corresponds to the hidden state. This additional output has output format "CB" (channel, batch).
References
[1] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation." arXiv preprint arXiv:1406.1078 (2014).
[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Society, 2015. https://doi.org/10.1109/ICCV.2015.123
[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." arXiv preprint arXiv:1312.6120 (2013).
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
For code generation in general, the HasStateInputs and HasStateOutputs properties must be set to 0 (false).

When generating code with Intel® MKL-DNN or ARM® Compute Library:

- The StateActivationFunction property must be set to 'tanh'.
- The GateActivationFunction property must be set to 'sigmoid'.
- The ResetGateMode property must be set to 'after-multiplication' or 'recurrent-bias-after-multiplication'.

When generating generic C/C++ code:

- The ResetGateMode property can be set to 'after-multiplication', 'before-multiplication', or 'recurrent-bias-after-multiplication'.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
- The StateActivationFunction property must be set to 'tanh'.
- The GateActivationFunction property must be set to 'sigmoid'.
- The ResetGateMode property must be set to 'after-multiplication' or 'recurrent-bias-after-multiplication'.
- The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
Version History
Introduced in R2020a

R2023a: Specify reset gate mode for GRU layers in dlnetwork objects

For GRU layers in dlnetwork objects, specify the reset gate mode using the ResetGateMode property.
See Also
trainingOptions
| trainNetwork
| sequenceInputLayer
| lstmLayer
| bilstmLayer
| convolution1dLayer
| maxPooling1dLayer
| averagePooling1dLayer
| globalMaxPooling1dLayer
| globalAveragePooling1dLayer
Topics
- Sequence Classification Using Deep Learning
- Sequence Classification Using 1-D Convolutions
- Time Series Forecasting Using Deep Learning
- Sequence-to-Sequence Classification Using Deep Learning
- Sequence-to-Sequence Regression Using Deep Learning
- Classify Videos Using Deep Learning
- Long Short-Term Memory Neural Networks
- List of Deep Learning Layers
- Deep Learning Tips and Tricks