# bilstmLayer

Bidirectional long short-term memory (BiLSTM) layer

## Description

A bidirectional LSTM (BiLSTM) layer learns bidirectional long-term dependencies between time steps of time series or sequence data. These dependencies can be useful when you want the network to learn from the complete time series at each time step.

## Creation

### Syntax

```matlab
layer = bilstmLayer(numHiddenUnits)
layer = bilstmLayer(numHiddenUnits,Name,Value)
```

### Description

`layer = bilstmLayer(numHiddenUnits)` creates a bidirectional LSTM layer and sets the `NumHiddenUnits` property.


`layer = bilstmLayer(numHiddenUnits,Name,Value)` sets additional `OutputMode`, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and `Name` properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.

## Properties


### BiLSTM

#### NumHiddenUnits

Number of hidden units (also known as the hidden size), specified as a positive integer.

The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.

The hidden state does not limit the number of time steps that are processed in an iteration. To split your sequences into smaller sequences for training, use the `'SequenceLength'` option in `trainingOptions`.

Example: `200`
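For long training sequences, you can reduce memory use by splitting sequences with the `'SequenceLength'` training option. A minimal sketch, assuming the rest of the training setup is already defined (the solver and epoch count are arbitrary illustration values):

```matlab
% Split training sequences into segments of at most 100 time steps.
options = trainingOptions('adam', ...
    'SequenceLength',100, ...
    'MaxEpochs',10);
```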

#### OutputMode

Output mode, specified as one of the following:

• `'sequence'` – Output the complete sequence.

• `'last'` – Output the last time step of the sequence.
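For example, sequence-to-label classification networks typically use `'last'` so that the layer emits one vector per sequence. A minimal sketch (the hidden unit count is arbitrary):

```matlab
% Output only the final time step of each sequence.
layer = bilstmLayer(100,'OutputMode','last');
```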

#### HasStateInputs

Flag for state inputs to the layer, specified as `0` (false) or `1` (true).

If the `HasStateInputs` property is `0` (false), then the layer has one input with name `'in'`, which corresponds to the input data. In this case, the layer uses the `HiddenState` and `CellState` properties for the layer operation.

If the `HasStateInputs` property is `1` (true), then the layer has three inputs with names `'in'`, `'hidden'`, and `'cell'`, which correspond to the input data, hidden state, and cell state respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If `HasStateInputs` is `1` (true), then the `HiddenState` and `CellState` properties must be empty.

#### HasStateOutputs

Flag for state outputs from the layer, specified as `0` (false) or `1` (true).

If the `HasStateOutputs` property is `0` (false), then the layer has one output with name `'out'`, which corresponds to the output data.

If the `HasStateOutputs` property is `1` (true), then the layer has three outputs with names `'out'`, `'hidden'`, and `'cell'`, which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
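A minimal sketch of a layer with explicit state inputs and outputs, as you might use inside a `dlnetwork` (the hidden unit count is arbitrary):

```matlab
% The layer now has inputs 'in', 'hidden', and 'cell' and
% outputs 'out', 'hidden', and 'cell'.
layer = bilstmLayer(100,'HasStateInputs',true,'HasStateOutputs',true);
```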

#### InputSize

Input size, specified as a positive integer or `'auto'`. If `InputSize` is `'auto'`, then the software automatically assigns the input size at training time.

Example: `100`

### Activations

#### StateActivationFunction

Activation function to update the cell and hidden state, specified as one of the following:

• `'tanh'` – Use the hyperbolic tangent function (tanh).

• `'softsign'` – Use the softsign function $\text{softsign}\left(x\right)=\frac{x}{1+|x|}$.

The layer uses this option as the function ${\sigma }_{c}$ in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.

#### GateActivationFunction

Activation function to apply to the gates, specified as one of the following:

• `'sigmoid'` – Use the sigmoid function $\sigma \left(x\right)={\left(1+{e}^{-x}\right)}^{-1}$.

• `'hard-sigmoid'` – Use the hard sigmoid function $\sigma \left(x\right)=\left\{\begin{array}{ll}0& \text{if }x<-2.5\\ 0.2x+0.5& \text{if }-2.5\le x\le 2.5\\ 1& \text{if }x>2.5\end{array}\right.$

The layer uses this option as the function ${\sigma }_{g}$ in the calculations for the layer gates.
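A minimal sketch combining both activation options with nondefault values:

```matlab
% Use softsign for the state update and hard sigmoid for the gates.
layer = bilstmLayer(100, ...
    'StateActivationFunction','softsign', ...
    'GateActivationFunction','hard-sigmoid');
```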

### State

#### CellState

Cell state to use in the layer operation, specified as a `2*NumHiddenUnits`-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.

After setting this property manually, calls to the `resetState` function set the cell state to this value.

If `HasStateInputs` is `true`, then the `CellState` property must be empty.

#### HiddenState

Hidden state to use in the layer operation, specified as a `2*NumHiddenUnits`-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.

After setting this property manually, calls to the `resetState` function set the hidden state to this value.

If `HasStateInputs` is `true`, then the `HiddenState` property must be empty.
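A minimal sketch of setting the initial states by hand. The zero vectors are only placeholders; any `2*NumHiddenUnits`-by-1 numeric vectors work, with the forward and backward states stacked:

```matlab
numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);
layer.CellState   = zeros(2*numHiddenUnits,1);
layer.HiddenState = zeros(2*numHiddenUnits,1);
```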

### Parameters and Initialization

#### InputWeightsInitializer

Function to initialize the input weights, specified as one of the following:

• `'glorot'` – Initialize the input weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance `2/(InputSize + numOut)`, where `numOut = 8*NumHiddenUnits`.

• `'he'` – Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance `2/InputSize`.

• `'orthogonal'` – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

• `'narrow-normal'` – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'zeros'` – Initialize the input weights with zeros.

• `'ones'` – Initialize the input weights with ones.

• Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form `weights = func(sz)`, where `sz` is the size of the input weights.

The layer only initializes the input weights when the `InputWeights` property is empty.

Data Types: `char` | `string` | `function_handle`
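For example, a sketch of a custom initializer supplied as a function handle (the 0.02 scale is an arbitrary illustration value):

```matlab
% Hypothetical initializer: zero-mean normal with a custom scale.
initFcn = @(sz) 0.02*randn(sz);
layer = bilstmLayer(100,'InputWeightsInitializer',initFcn);
```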

#### RecurrentWeightsInitializer

Function to initialize the recurrent weights, specified as one of the following:

• `'orthogonal'` – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

• `'glorot'` – Initialize the recurrent weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance `2/(numIn + numOut)`, where `numIn = NumHiddenUnits` and `numOut = 8*NumHiddenUnits`.

• `'he'` – Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance `2/NumHiddenUnits`.

• `'narrow-normal'` – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'zeros'` – Initialize the recurrent weights with zeros.

• `'ones'` – Initialize the recurrent weights with ones.

• Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form `weights = func(sz)`, where `sz` is the size of the recurrent weights.

The layer only initializes the recurrent weights when the `RecurrentWeights` property is empty.

Data Types: `char` | `string` | `function_handle`

#### BiasInitializer

Function to initialize the bias, specified as one of the following:

• `'unit-forget-gate'` – Initialize the forget gate bias with ones and the remaining biases with zeros.

• `'narrow-normal'` – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'ones'` – Initialize the bias with ones.

• Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form `bias = func(sz)`, where `sz` is the size of the bias.

The layer only initializes the bias when the `Bias` property is empty.

Data Types: `char` | `string` | `function_handle`

#### InputWeights

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the eight input weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

The input weights are learnable parameters. When training a network, if `InputWeights` is nonempty, then `trainNetwork` uses the `InputWeights` property as the initial value. If `InputWeights` is empty, then `trainNetwork` uses the initializer specified by `InputWeightsInitializer`.

At training time, `InputWeights` is an `8*NumHiddenUnits`-by-`InputSize` matrix.

#### RecurrentWeights

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the eight recurrent weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

The recurrent weights are learnable parameters. When training a network, if `RecurrentWeights` is nonempty, then `trainNetwork` uses the `RecurrentWeights` property as the initial value. If `RecurrentWeights` is empty, then `trainNetwork` uses the initializer specified by `RecurrentWeightsInitializer`.

At training time, `RecurrentWeights` is an `8*NumHiddenUnits`-by-`NumHiddenUnits` matrix.

#### Bias

Layer biases, specified as a numeric vector.

The bias vector is a concatenation of the eight bias vectors for the components (gates) in the bidirectional LSTM layer. The eight vectors are concatenated vertically in the following order:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

The layer biases are learnable parameters. When you train a network, if `Bias` is nonempty, then `trainNetwork` uses the `Bias` property as the initial value. If `Bias` is empty, then `trainNetwork` uses the initializer specified by `BiasInitializer`.

At training time, `Bias` is an `8*NumHiddenUnits`-by-1 numeric vector.
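A minimal sketch of assigning the learnable parameters manually with the sizes above. The random values and input size are placeholders, and size consistency is checked when the network is assembled or trained:

```matlab
numHiddenUnits = 100;
inputSize = 12;
layer = bilstmLayer(numHiddenUnits);
% Eight stacked blocks: four forward gates, then four backward gates.
layer.InputWeights     = 0.01*randn(8*numHiddenUnits,inputSize);
layer.RecurrentWeights = 0.01*randn(8*numHiddenUnits,numHiddenUnits);
layer.Bias             = zeros(8*numHiddenUnits,1);
```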

### Learning Rate and Regularization

#### InputWeightsLearnRateFactor

Learning rate factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if `InputWeightsLearnRateFactor` is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the eight individual matrices in `InputWeights`, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `0.1`
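For example, a sketch that doubles the learning rate factor for the forward and backward forget gates (entries 2 and 6) while leaving the others unchanged:

```matlab
layer = bilstmLayer(100);
layer.InputWeightsLearnRateFactor = [1 2 1 1 1 2 1 1];
```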

#### RecurrentWeightsLearnRateFactor

Learning rate factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if `RecurrentWeightsLearnRateFactor` is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the eight individual matrices in `RecurrentWeights`, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `0.1`

Example: `[1 2 1 1 1 2 1 1]`

#### BiasLearnRateFactor

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if `BiasLearnRateFactor` is `2`, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the `trainingOptions` function.

To control the value of the learning rate factor for the eight individual vectors in `Bias`, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1 1 2 1 1]`

#### InputWeightsL2Factor

L2 regularization factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if `InputWeightsL2Factor` is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the `trainingOptions` function.

To control the value of the L2 regularization factor for the eight individual matrices in `InputWeights`, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `0.1`

Example: `[1 2 1 1 1 2 1 1]`
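For example, a sketch that regularizes the forget-gate input weights (entries 2 and 6) twice as strongly as the other components:

```matlab
layer = bilstmLayer(100);
layer.InputWeightsL2Factor = [1 2 1 1 1 2 1 1];
```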

#### RecurrentWeightsL2Factor

L2 regularization factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if `RecurrentWeightsL2Factor` is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the `trainingOptions` function.

To control the value of the L2 regularization factor for the eight individual matrices in `RecurrentWeights`, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `0.1`

Example: `[1 2 1 1 1 2 1 1]`

#### BiasL2Factor

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if `BiasL2Factor` is `2`, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

To control the value of the L2 regularization factor for the eight individual vectors in `Bias`, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

1. Input gate (Forward)

2. Forget gate (Forward)

3. Cell candidate (Forward)

4. Output gate (Forward)

5. Input gate (Backward)

6. Forget gate (Backward)

7. Cell candidate (Backward)

8. Output gate (Backward)

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1 1 2 1 1]`

### Layer

#### Name

Layer name, specified as a character vector or a string scalar. For `Layer` array input, the `trainNetwork`, `assembleNetwork`, `layerGraph`, and `dlnetwork` functions automatically assign names to layers with `Name` set to `''`.

Data Types: `char` | `string`

#### NumInputs

Number of inputs of the layer.

If the `HasStateInputs` property is `0` (false), then the layer has one input with name `'in'`, which corresponds to the input data. In this case, the layer uses the `HiddenState` and `CellState` properties for the layer operation.

If the `HasStateInputs` property is `1` (true), then the layer has three inputs with names `'in'`, `'hidden'`, and `'cell'`, which correspond to the input data, hidden state, and cell state respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If `HasStateInputs` is `1` (true), then the `HiddenState` and `CellState` properties must be empty.

Data Types: `double`

#### InputNames

Input names of the layer.

If the `HasStateInputs` property is `0` (false), then the layer has one input with name `'in'`, which corresponds to the input data. In this case, the layer uses the `HiddenState` and `CellState` properties for the layer operation.

If the `HasStateInputs` property is `1` (true), then the layer has three inputs with names `'in'`, `'hidden'`, and `'cell'`, which correspond to the input data, hidden state, and cell state respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If `HasStateInputs` is `1` (true), then the `HiddenState` and `CellState` properties must be empty.

#### NumOutputs

Number of outputs of the layer.

If the `HasStateOutputs` property is `0` (false), then the layer has one output with name `'out'`, which corresponds to the output data.

If the `HasStateOutputs` property is `1` (true), then the layer has three outputs with names `'out'`, `'hidden'`, and `'cell'`, which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.

Data Types: `double`

#### OutputNames

Output names of the layer.

If the `HasStateOutputs` property is `0` (false), then the layer has one output with name `'out'`, which corresponds to the output data.

If the `HasStateOutputs` property is `1` (true), then the layer has three outputs with names `'out'`, `'hidden'`, and `'cell'`, which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.

## Examples


Create a bidirectional LSTM layer with the name `'bilstm1'` and 100 hidden units.

```matlab
layer = bilstmLayer(100,'Name','bilstm1')
```
```
layer = 
  BiLSTMLayer with properties:

                       Name: 'bilstm1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []
```

Include a bidirectional LSTM layer in a `Layer` array.

```matlab
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
```
```
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   BiLSTM                  BiLSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
```


## Compatibility Considerations

Behavior changed in R2019a

## References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010.

[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Vision Society, 2015.

[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." arXiv preprint arXiv:1312.6120 (2013).

Introduced in R2018a