gruLayer
Gated recurrent unit (GRU) layer for recurrent neural network (RNN)
Description
A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data.
Creation
Description
layer = gruLayer(numHiddenUnits) creates a GRU layer and sets the NumHiddenUnits property.
layer = gruLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. Enclose each property name in quotes.
Properties
GRU
Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer can overfit to the training data. The hidden state does not limit the number of time steps that the layer processes in an iteration.
The layer outputs data with NumHiddenUnits channels.
To set this property, use the numHiddenUnits argument when you
create the GRULayer object. After you create a
GRULayer object, this property is read-only.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Output mode, specified as one of these values:
"sequence"— Output the complete sequence."last"— Output the last time step of the sequence.
The GRULayer object stores this property as a character vector.
To set this property, use the corresponding name-value argument when you create the GRULayer object. After you create a GRULayer object, this property is read-only.
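For example, for sequence-to-label classification you typically output only the final time step. A minimal sketch (the hidden unit count is arbitrary):
layer = gruLayer(128,OutputMode="last");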
Flag for state inputs to the layer, specified as 0
(false) or 1
(true).
If the HasStateInputs property is 0
(false), then the layer has one input with the name
"in", which corresponds to the input data. In this case, the layer
uses the HiddenState property for the layer
operation.
If the HasStateInputs property is 1
(true), then the layer has two inputs with the names
"in" and "hidden", which correspond to the input
data and hidden state, respectively. In this case, the layer uses the values that the
network passes to these inputs for the layer operation. If HasStateInputs is 1 (true), then the
HiddenState property must be empty.
To set this property, use the corresponding name-value argument when you create the GRULayer object. After you create a GRULayer object, this property is read-only.
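For example, a brief sketch that creates a layer exposing its hidden state as an explicit input and output, then inspects the resulting port names:
layer = gruLayer(100,HasStateInputs=true,HasStateOutputs=true);
layer.InputNames    % {'in'}  {'hidden'}
layer.OutputNames   % {'out'}  {'hidden'}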
Flag for state outputs from the layer, specified as 0
(false) or 1 (true).
If the HasStateOutputs property is 0
(false), then the layer has one output with the name
"out", which corresponds to the output data.
If the HasStateOutputs property is 1
(true), then the layer has two outputs with the names
"out" and "hidden", which correspond
to the output data and hidden state, respectively. In this case, the layer also
outputs the state values computed during the layer operation.
To set this property, use the corresponding name-value argument when you create the GRULayer object. After you create a GRULayer object, this property is read-only.
Reset gate mode, specified as one of these values:
"after-multiplication"— Apply the reset gate after matrix multiplication. This option is cuDNN compatible."before-multiplication"— Apply the reset gate before matrix multiplication."recurrent-bias-after-multiplication"— Apply the reset gate after matrix multiplication and use an additional set of bias terms for the recurrent weights.
For more information about the reset gate calculations, see Gated Recurrent Unit Layer.
Before R2023a: dlnetwork objects support GRU layers with ResetGateMode set to "after-multiplication" only.
This property is read-only.
Input size, specified as a positive integer or "auto". If
InputSize is "auto", then the software
automatically assigns the input size at training time.
If InputSize is "auto", then the
GRULayer object stores this property as a character
vector.
Data Types: double | char | string
Activations
Activation function to update the hidden state, specified as one of these values:
"tanh"— Use the hyperbolic tangent function (tanh)."softsign"— Use the softsign function, ."relu"(since R2024b) — Use the rectified linear unit (ReLU) function .
The software uses this option as the state activation function $\sigma_s$ in the calculations to update the hidden state.
The GRULayer object stores this property as a character vector.
To set this property, use the corresponding name-value argument when you create the GRULayer object. After you create a GRULayer object, this property is read-only.
Activation function to apply to the gates, specified as one of these values:
"sigmoid"— Use the sigmoid function, ."hard-sigmoid"— Use the hard sigmoid function,
The software uses this option as the gate activation function $\sigma_g$ in the calculations for the layer gates.
The GRULayer object stores this property as a character vector.
To set this property, use the corresponding name-value argument when you create the GRULayer object. After you create a GRULayer object, this property is read-only.
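For example, a minimal sketch that combines nondefault state and gate activations:
layer = gruLayer(100,StateActivationFunction="softsign",GateActivationFunction="hard-sigmoid");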
State
Hidden state to use in the layer operation, specified as a
NumHiddenUnits-by-1 numeric vector. This value corresponds to the
initial hidden state when data is passed to the layer.
After you set this property manually, calls to the resetState
function set the hidden state to this value.
If HasStateInputs is 1
(true), then the HiddenState
property must be empty.
Data Types: single | double
Parameters and Initialization
Function to initialize the input weights, specified as one of the following:
"glorot"— Initialize the input weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with a mean of zero and a variance of2/(InputSize + numOut), wherenumOut = 3*NumHiddenUnits."he"— Initialize the input weights with the He initializer [3]. The He initializer samples from a normal distribution with a mean of zero and a variance of2/InputSize."orthogonal"— Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [4]"narrow-normal"— Initialize the input weights by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01."zeros"— Initialize the input weights with zeros."ones"— Initialize the input weights with ones.Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form
weights = func(sz), whereszis the size of the input weights.
The layer only initializes the input weights when the
InputWeights property is empty.
The GRULayer object stores this property as a character vector or a
function handle.
Data Types: char | string | function_handle
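A minimal sketch of a custom initializer, assuming you want zero-mean normal samples with a nondefault standard deviation (the value 0.05 is arbitrary):
% Custom initializer: zero-mean normal samples with standard deviation 0.05.
customInit = @(sz) 0.05*randn(sz);
layer = gruLayer(100,InputWeightsInitializer=customInit);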
Function to initialize the recurrent weights, specified as one of the following:
"orthogonal"— Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [4]"glorot"— Initialize the recurrent weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with a mean of zero and a variance of2/(numIn + numOut), wherenumIn = NumHiddenUnitsandnumOut = 3*NumHiddenUnits."he"— Initialize the recurrent weights with the He initializer [3]. The He initializer samples from a normal distribution with a mean of zero and a variance of2/NumHiddenUnits."narrow-normal"— Initialize the recurrent weights by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01."zeros"— Initialize the recurrent weights with zeros."ones"— Initialize the recurrent weights with ones.Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form
weights = func(sz), whereszis the size of the recurrent weights.
The layer only initializes the recurrent weights when the
RecurrentWeights property is empty.
The GRULayer object stores this property as a character vector or a
function handle.
Data Types: char | string | function_handle
Function to initialize the bias, specified as one of these values:
"zeros"— Initialize the bias with zeros."narrow-normal"— Initialize the bias by independently sampling from a normal distribution with a mean of zero and standard deviation 0.01."ones"— Initialize the bias with ones.Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must have the form
bias = func(sz), whereszis the size of the bias.
The layer initializes the bias only when the Bias property is
empty.
The GRULayer object stores this property as a character vector or a
function handle.
Data Types: char | string | function_handle
Input weights, specified as a matrix.
The input weight matrix is a concatenation of the three input weight matrices for the components in the GRU layer. The three matrices are concatenated vertically in the following order:
Reset gate
Update gate
Candidate state
The input weights are learnable parameters. When you train a
neural network using the trainnet function,
if InputWeights is nonempty, then the software uses the
InputWeights property as the initial value. If InputWeights is empty, then the software uses the initializer
specified by InputWeightsInitializer.
At training time, InputWeights is a
3*NumHiddenUnits-by-InputSize
matrix.
Recurrent weights, specified as a matrix.
The recurrent weight matrix is a concatenation of the three recurrent weight matrices for the components in the GRU layer. The three matrices are vertically concatenated in the following order:
Reset gate
Update gate
Candidate state
The recurrent weights are learnable parameters. When you train
an RNN using the trainnet function,
if RecurrentWeights is nonempty, then the software uses the
RecurrentWeights property as the initial value. If
RecurrentWeights is empty, then the software uses the
initializer specified by RecurrentWeightsInitializer.
At training time, RecurrentWeights is a 3*NumHiddenUnits-by-NumHiddenUnits matrix.
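For example, a sketch that verifies these dimensions by initializing a small network (the input size 12 and 100 hidden units are arbitrary):
net = dlnetwork([sequenceInputLayer(12) gruLayer(100)]);
size(net.Layers(2).InputWeights)       % [300 12]  = 3*NumHiddenUnits-by-InputSize
size(net.Layers(2).RecurrentWeights)   % [300 100] = 3*NumHiddenUnits-by-NumHiddenUnits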
Layer biases, specified as a numeric vector.
If ResetGateMode is
"after-multiplication" or
"before-multiplication", then the bias vector is a concatenation
of three bias vectors for the components in the layer operation. The layer concatenates
the vectors vertically in this order:
Reset gate
Update gate
Candidate state
In this case, at training time, Bias is a 3*NumHiddenUnits-by-1 numeric vector.
If ResetGateMode is
"recurrent-bias-after-multiplication", then the bias vector is a
concatenation of six bias vectors for the components in the GRU layer. The layer
concatenates the vectors vertically in this order:
Reset gate
Update gate
Candidate state
Reset gate (recurrent bias)
Update gate (recurrent bias)
Candidate state (recurrent bias)
In this case, at training time, Bias is a 6*NumHiddenUnits-by-1 numeric vector.
The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.
For more information about the reset gate calculations, see Gated Recurrent Unit Layer.
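For example, a sketch that checks the bias length under two reset gate modes (layer sizes are arbitrary):
netA = dlnetwork([sequenceInputLayer(12) gruLayer(100)]);
numel(netA.Layers(2).Bias)   % 300 = 3*NumHiddenUnits
netB = dlnetwork([sequenceInputLayer(12) ...
    gruLayer(100,ResetGateMode="recurrent-bias-after-multiplication")]);
numel(netB.Layers(2).Bias)   % 600 = 6*NumHiddenUnits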
Learning Rate and Regularization
Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate
to determine the learning rate factor for the input weights of the layer. For example, if
InputWeightsLearnRateFactor is 2, then the learning
rate factor for the input weights of the layer is twice the current global learning rate. The
software determines the global learning rate based on the settings you specify with the
trainingOptions function.
To control the value of the learning rate factor for the three individual matrices in
InputWeights, specify a 1-by-3 vector. The entries of
InputWeightsLearnRateFactor correspond to the learning rate
factor of these values:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
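For example, a sketch that doubles the learning rate factor for the update gate input weights only:
layer = gruLayer(100,InputWeightsLearnRateFactor=[1 2 1]);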
Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate
to determine the learning rate for the recurrent weights of the layer. For example, if
RecurrentWeightsLearnRateFactor is 2, then the
learning rate for the recurrent weights of the layer is twice the current global learning rate.
The software determines the global learning rate based on the settings you specify using the
trainingOptions function.
To control the value of the learning rate factor for the three individual matrices in
RecurrentWeights, specify a 1-by-3 vector. The entries of
RecurrentWeightsLearnRateFactor correspond to the learning rate
factor of these values:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
To control the value of the learning rate factor for the three individual vectors in
Bias, specify a 1-by-3 vector. The entries of
BiasLearnRateFactor correspond to the learning rate factor of
these values:
Reset gate
Update gate
Candidate state
If ResetGateMode is
"recurrent-bias-after-multiplication", then the software uses the
same vector for the recurrent bias vectors.
To specify the same value for all the vectors, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global
L2 regularization factor to determine the
L2 regularization factor for the input weights
of the layer. For example, if InputWeightsL2Factor is 2,
then the L2 regularization factor for the input
weights of the layer is twice the current global L2
regularization factor. The software determines the L2
regularization factor based on the settings you specify using the trainingOptions function.
To control the value of the L2
regularization factor for the three individual matrices in
InputWeights, specify a 1-by-3 vector. The entries of
InputWeightsL2Factor correspond to the
L2 regularization factor of these
values:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global
L2 regularization factor to determine the
L2 regularization factor for the recurrent
weights of the layer. For example, if RecurrentWeightsL2Factor is
2, then the L2 regularization
factor for the recurrent weights of the layer is twice the current global
L2 regularization factor. The software
determines the L2 regularization factor based on the
settings you specify using the trainingOptions function.
To control the value of the L2
regularization factor for the three individual matrices in
RecurrentWeights, specify a 1-by-3 vector. The entries of
RecurrentWeightsL2Factor correspond to the
L2 regularization factor of these
values:
Reset gate
Update gate
Candidate state
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.
To control the value of the L2
regularization factor for the individual vectors in Bias, specify a
1-by-3 vector. The entries of BiasL2Factor correspond to the
L2 regularization factor of these
values:
Reset gate
Update gate
Candidate state
If ResetGateMode is
"recurrent-bias-after-multiplication", then the software uses the
same vector for the recurrent bias vectors.
To specify the same value for all the vectors, specify a nonnegative scalar.
Example: 2
Example: [1 2 1]
Layer
This property is read-only.
Number of inputs to the layer.
If the HasStateInputs property is 0
(false), then the layer has one input with the name
"in", which corresponds to the input data. In this case, the layer
uses the HiddenState property for the layer
operation.
If the HasStateInputs property is 1
(true), then the layer has two inputs with the names
"in" and "hidden", which correspond to the input
data and hidden state, respectively. In this case, the layer uses the values that the
network passes to these inputs for the layer operation. If HasStateInputs is 1 (true), then the
HiddenState property must be empty.
Data Types: double
This property is read-only.
Layer input names.
If the HasStateInputs property is 0
(false), then the layer has one input with the name
"in", which corresponds to the input data. In this case, the layer
uses the HiddenState property for the layer
operation.
If the HasStateInputs property is 1
(true), then the layer has two inputs with the names
"in" and "hidden", which correspond to the input
data and hidden state, respectively. In this case, the layer uses the values that the
network passes to these inputs for the layer operation. If HasStateInputs is 1 (true), then the
HiddenState property must be empty.
The GRULayer object stores this property as a cell array of character
vectors.
This property is read-only.
Number of outputs from the layer.
If the HasStateOutputs property is 0
(false), then the layer has one output with the name
"out", which corresponds to the output data.
If the HasStateOutputs property is 1
(true), then the layer has two outputs with the names
"out" and "hidden", which correspond
to the output data and hidden state, respectively. In this case, the layer also
outputs the state values computed during the layer operation.
Data Types: double
This property is read-only.
Layer output names.
If the HasStateOutputs property is 0
(false), then the layer has one output with the name
"out", which corresponds to the output data.
If the HasStateOutputs property is 1
(true), then the layer has two outputs with the names
"out" and "hidden", which correspond
to the output data and hidden state, respectively. In this case, the layer also
outputs the state values computed during the layer operation.
The GRULayer object stores this property as a cell array of character
vectors.
Examples
Create a GRU layer with the name gru1 and 100 hidden units.
layer = gruLayer(100,Name="gru1")

layer = 
  GRULayer with properties:
Name: 'gru1'
InputNames: {'in'}
OutputNames: {'out'}
NumInputs: 1
NumOutputs: 1
HasStateInputs: 0
HasStateOutputs: 0
Hyperparameters
InputSize: 'auto'
NumHiddenUnits: 100
OutputMode: 'sequence'
StateActivationFunction: 'tanh'
GateActivationFunction: 'sigmoid'
ResetGateMode: 'after-multiplication'
Learnable Parameters
InputWeights: []
RecurrentWeights: []
Bias: []
State Parameters
HiddenState: []
Show all properties
Include a GRU layer in a Layer array.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
sequenceInputLayer(inputSize)
gruLayer(numHiddenUnits)
fullyConnectedLayer(numClasses)
softmaxLayer]

layers = 
4×1 Layer array with layers:
1 '' Sequence Input Sequence input with 12 dimensions
2 '' GRU GRU with 100 hidden units
3 '' Fully Connected 9 fully connected layer
4 '' Softmax softmax
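To train a network with this architecture, pass the layer array to trainnet. A hedged sketch, assuming XTrain is a cell array of 12-by-numTimeSteps sequences and TTrain contains the corresponding categorical targets (both hypothetical variables); for sequence-to-label classification, also set OutputMode="last" in the GRU layer:
options = trainingOptions("adam",MaxEpochs=10);
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);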
Algorithms
A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data.
The hidden state of the layer at time step t contains the output of the GRU layer for this time step. At each time step, the layer adds information to or removes information from the state. The layer controls these updates using gates.
These components control the hidden state of the layer.
| Component | Purpose |
|---|---|
| Reset gate (r) | Control level of state reset |
| Update gate (z) | Control level of state update |
| Candidate state ($\tilde{h}$) | Control level of update added to hidden state |
The learnable weights of a GRU layer are the input weights W
(InputWeights), the recurrent weights R
(RecurrentWeights), and the bias b
(Bias). If the ResetGateMode
property is "recurrent-bias-after-multiplication", then the gate and
state calculations require two sets of bias values. The matrices W and
R are concatenations of the input weights and the recurrent weights
of each component, respectively. The layer concatenates the matrices in this order:
$$W = \begin{bmatrix} W_r \\ W_z \\ W_{\tilde{h}} \end{bmatrix}, \qquad R = \begin{bmatrix} R_r \\ R_z \\ R_{\tilde{h}} \end{bmatrix}$$
where the subscripts r, z, and $\tilde{h}$ denote the reset gate, update gate, and candidate state, respectively.
The bias vector depends on the ResetGateMode property. If
ResetGateMode is
"after-multiplication" or "before-multiplication",
then the bias vector is a concatenation of three vectors:
$$b = \begin{bmatrix} b_{Wr} \\ b_{Wz} \\ b_{W\tilde{h}} \end{bmatrix}$$
where the subscript W indicates that the bias corresponds to the input weights multiplication.
If ResetGateMode is
"recurrent-bias-after-multiplication", then the bias vector is a
concatenation of six vectors:
$$b = \begin{bmatrix} b_{Wr} \\ b_{Wz} \\ b_{W\tilde{h}} \\ b_{Rr} \\ b_{Rz} \\ b_{R\tilde{h}} \end{bmatrix}$$
where the subscript R indicates that the bias corresponds to the recurrent weights multiplication.
The hidden state at time step t is given by this equation:
$$h_t = (1 - z_t) \odot \tilde{h}_t + z_t \odot h_{t-1}$$
where $\odot$ denotes the Hadamard product (elementwise multiplication of vectors).
These formulas describe the components at time step t.
| Component | ResetGateMode | Formula |
|---|---|---|
| Reset gate | "after-multiplication" | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1})$ |
| | "before-multiplication" | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1})$ |
| | "recurrent-bias-after-multiplication" | $r_t = \sigma_g(W_r x_t + b_{Wr} + R_r h_{t-1} + b_{Rr})$ |
| Update gate | "after-multiplication" | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1})$ |
| | "before-multiplication" | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1})$ |
| | "recurrent-bias-after-multiplication" | $z_t = \sigma_g(W_z x_t + b_{Wz} + R_z h_{t-1} + b_{Rz})$ |
| Candidate state | "after-multiplication" | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + r_t \odot (R_{\tilde{h}} h_{t-1}))$ |
| | "before-multiplication" | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + R_{\tilde{h}} (r_t \odot h_{t-1}))$ |
| | "recurrent-bias-after-multiplication" | $\tilde{h}_t = \sigma_s(W_{\tilde{h}} x_t + b_{W\tilde{h}} + r_t \odot (R_{\tilde{h}} h_{t-1} + b_{R\tilde{h}}))$ |
In these calculations, $\sigma_g$ and $\sigma_s$ denote the gate and state activation functions, respectively. By default, the gruLayer function uses the sigmoid function, given by $\sigma(x) = (1 + e^{-x})^{-1}$, to compute the gate activation function and the hyperbolic tangent function (tanh) to compute the state activation function. To specify the state and gate activation functions, use the StateActivationFunction and GateActivationFunction properties, respectively.
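The following sketch implements one time step of these equations in plain MATLAB for the default "after-multiplication" mode with the default activations. It is illustrative only; the function and variable names are assumptions, not the layer's internals.
function h = gruStep(x,hPrev,W,R,b,numHiddenUnits)
% One GRU time step, "after-multiplication" mode, default activations.
% W is 3*numHiddenUnits-by-inputSize, R is 3*numHiddenUnits-by-numHiddenUnits,
% and b is 3*numHiddenUnits-by-1, ordered as [reset; update; candidate].
sigmoid = @(v) 1./(1 + exp(-v));
idx = @(k) (k-1)*numHiddenUnits + (1:numHiddenUnits);

Wx = W*x + b;    % input contribution for all three components
Rh = R*hPrev;    % recurrent contribution for all three components

r = sigmoid(Wx(idx(1)) + Rh(idx(1)));        % reset gate
z = sigmoid(Wx(idx(2)) + Rh(idx(2)));        % update gate
hCand = tanh(Wx(idx(3)) + r.*Rh(idx(3)));    % candidate state (reset gate applied after multiplication)
h = (1 - z).*hCand + z.*hPrev;               % hidden state update
end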
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects.
The format of a dlarray object is a string of characters in which each
character describes the corresponding dimension of the data. The format consists of one or
more of these characters:
"S"— Spatial"C"— Channel"B"— Batch"T"— Time"U"— Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the
first two dimensions correspond to the spatial dimensions of the images, the third
dimension corresponds to the channels of the images, and the fourth dimension
corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation
workflows, such as those for developing a custom layer, using a functionLayer
object, or using the forward and predict functions with
dlnetwork objects.
This table shows the supported input formats of GRULayer objects and the
corresponding output format. If the software passes the output of the layer to a custom
layer that does not inherit from the nnet.layer.Formattable class, or a
FunctionLayer object with the Formattable property
set to 0 (false), then the layer receives an
unformatted dlarray object with dimensions ordered according to the formats
in this table. The formats listed here are only a subset. The layer may support additional
formats such as formats with additional "S" (spatial) or
"U" (unspecified) dimensions.
| Input Format | OutputMode | Output Format |
|---|---|---|
| "CBT" (channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
| | "last" | "CB" (channel, batch) |
| "SCBT" (spatial, channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
| | "last" | "CB" (channel, batch) |
| "SSCBT" (spatial, spatial, channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
| | "last" | "CB" (channel, batch) |
In dlnetwork objects, GRULayer objects also support these input and output format combinations.
[The rows of this table are not recoverable from the source; they follow the same Input Format / OutputMode / Output Format structure as the preceding table.]
If the HasStateInputs property is 1 (true), then the layer has an additional input with the name "hidden", which corresponds to the hidden state. This additional input expects input format "CB" (channel, batch).
If the HasStateOutputs property is 1 (true), then the layer has an additional output with the name "hidden", which corresponds to the hidden state. This additional output has output format "CB" (channel, batch).
References
[1] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning Phrase Representations Using RNN Encoder–Decoder for Statistical Machine Translation." arXiv preprint arXiv:1406.1078 (2014). https://arxiv.org/abs/1406.1078.
[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–34. Santiago, Chile: IEEE, 2015. https://doi.org/10.1109/ICCV.2015.123
[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." Preprint, submitted February 19, 2014. https://arxiv.org/abs/1312.6120.
Extended Capabilities
C/C++ Code Generation — Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
- For code generation in general, the HasStateInputs and HasStateOutputs properties must be set to 0 (false).
- Code generation does not support passing dlarray objects with unspecified (U) dimensions to this layer.
- For code generation, you must pass a dlarray object with a channel (C) dimension as the input to this layer. For example, code generation supports data formats such as "SSC" or "SSCBT".
- When generating code with the Intel® MKL-DNN or ARM® Compute Library:
  - The StateActivationFunction property must be set to "tanh".
  - The GateActivationFunction property must be set to "sigmoid".
  - The ResetGateMode property must be set to "after-multiplication" or "recurrent-bias-after-multiplication".
GPU Code Generation — Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
- The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
- Code generation does not support passing dlarray objects with unspecified (U) dimensions to this layer.
- For code generation, you must pass a dlarray object with a channel (C) dimension as the input to this layer. For example, code generation supports data formats such as "SSC" or "SSCBT".
- When generating code with the NVIDIA TensorRT or CUDA deep neural network (cuDNN) library:
  - The StateActivationFunction property must be set to "tanh".
  - The GateActivationFunction property must be set to "sigmoid".
  - The ResetGateMode property must be set to "after-multiplication" or "recurrent-bias-after-multiplication".
Version History
Introduced in R2020a

R2024b: To specify the ReLU state activation function, set the StateActivationFunction property to "relu".

R2023a: For GRU layers in dlnetwork objects, specify the reset gate mode using the ResetGateMode property.
See Also
trainnet | trainingOptions | dlnetwork | sequenceInputLayer | lstmLayer | bilstmLayer | convolution1dLayer | maxPooling1dLayer | averagePooling1dLayer | globalMaxPooling1dLayer | globalAveragePooling1dLayer
Topics
- Sequence Classification Using Deep Learning
- Sequence Classification Using 1-D Convolutions
- Time Series Forecasting Using Deep Learning
- Sequence-to-Sequence Classification Using Deep Learning
- Sequence-to-Sequence Regression Using Deep Learning
- Classify Videos Using Deep Learning
- Long Short-Term Memory Neural Networks
- List of Deep Learning Layers
- Deep Learning Tips and Tricks