ProjectedLayer
Description
A projected layer is a compressed neural network layer resulting from projection.
Creation
To compress a neural network using projection, use the compressNetworkUsingProjection
function. This feature requires the Deep Learning Toolbox™ Model Compression
Library support package. This support package is a free add-on that you can download using
the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Compression Library.
Properties
Projected
OriginalClass
This property is read-only.
Class of the original layer, returned as a character vector.
Example: 'nnet.cnn.layer.LSTMLayer'
Data Types: char
LearnablesReduction
This property is read-only.
Proportion of learnables removed in the layer, returned as a scalar in the interval [0,1].
Data Types: double
Network
This property is read-only.
Neural network that represents the projection, returned as a dlnetwork object.
The neural network that represents the projection depends on the type of layer:

| Original Layer | Network |
|---|---|
| convolution1dLayer | Network containing two or three convolution1dLayer objects |
| convolution2dLayer | Network containing two or three convolution2dLayer objects |
| fullyConnectedLayer | Network containing two fullyConnectedLayer objects |
| lstmLayer | Network containing a single lstmProjectedLayer object |
| gruLayer | Network containing a single gruProjectedLayer object |

For more information, see Projected Layer. To replace the ProjectedLayer objects in a neural network with the equivalent network that represents the projection, use the unpackProjectedLayers function.
InputSize
This property is read-only.
Number of input channels, returned as a positive integer.
Data Types: double
OutputSize
This property is read-only.
Number of output channels, returned as a positive integer.
Data Types: double
InputProjectorSize
This property is read-only.
Number of columns of the input projector, returned as a positive integer.
The input projector is the matrix that the layer uses to project the layer input. For more information, see Projected Layer.
Data Types: double
OutputProjectorSize
This property is read-only.
Number of columns of the output projector, returned as a positive integer.
The output projector is the matrix that the layer uses to project the layer output. For more information, see Projected Layer.
Data Types: double
Layer
Name
Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign new unique names to layers that have the name "".
When you compress a neural network using the compressNetworkUsingProjection function, the function replaces projectable layers with ProjectedLayer objects that have the same name.
The ProjectedLayer object stores this property as a character vector.
Data Types: char | string
NumInputs
This property is read-only.
Number of inputs to the layer, returned as a positive integer.
Data Types: double
InputNames
This property is read-only.
Input names, returned as a cell array of character vectors.
Data Types: cell
NumOutputs
This property is read-only.
Number of outputs from the layer, returned as a positive integer.
Data Types: double
OutputNames
This property is read-only.
Output names, returned as a cell array of character vectors.
Data Types: cell
Examples
Load the pretrained network in dlnetJapaneseVowels and the training data in JapaneseVowelsTrainData.

load dlnetJapaneseVowels
load JapaneseVowelsTrainData
Create a mini-batch queue containing the training data. To create a mini-batch queue from in-memory data, convert the sequences to an array datastore.
adsXTrain = arrayDatastore(XTrain,OutputType="same");
Create the minibatchqueue object.

- Specify a mini-batch size of 16.
- Preprocess the mini-batches using the preprocessMiniBatchPredictors function, listed in the Mini-Batch Predictors Preprocessing Function section of the example.
- Specify that the output data has format "CTB" (channel, time, batch).

mbq = minibatchqueue(adsXTrain, ...
    MiniBatchSize=16, ...
    MiniBatchFcn=@preprocessMiniBatchPredictors, ...
    MiniBatchFormat="CTB");
Compress the network.
[netProjected,info] = compressNetworkUsingProjection(net,mbq);
Compressed network has 83.4% fewer learnable parameters.
Projection compressed 2 layers: "lstm","fc"
View the network layers.
netProjected.Layers
ans = 
  4×1 Layer array with layers:

     1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
     2   'lstm'            Projected Layer   Projected LSTM with 100 hidden units
     3   'fc'              Projected Layer   Projected fully connected layer with output size 9
     4   'softmax'         Softmax           softmax
View the projected LSTM layer. The LearnablesReduction
property shows the proportion of learnables removed in the layer. The Network
property contains the neural network that represents the projection.
netProjected.Layers(2)
ans = 
  ProjectedLayer with properties:

                   Name: 'lstm'
          OriginalClass: 'nnet.cnn.layer.LSTMLayer'
    LearnablesReduction: 0.8408
              InputSize: 12
             OutputSize: 100

   Hyperparameters
     InputProjectorSize: 8
    OutputProjectorSize: 7

   Learnable Parameters
                Network: [1×1 dlnetwork]

   State Parameters
                Network: [1×1 dlnetwork]

   Network Learnable Parameters
    Network/lstm/InputWeights        400×8   dlarray
    Network/lstm/RecurrentWeights    400×7   dlarray
    Network/lstm/Bias                400×1   dlarray
    Network/lstm/InputProjector       12×8   dlarray
    Network/lstm/OutputProjector     100×7   dlarray

   Network State Parameters
    Network/lstm/HiddenState         100×1   dlarray
    Network/lstm/CellState           100×1   dlarray
Mini-Batch Predictors Preprocessing Function
The preprocessMiniBatchPredictors
function preprocesses a mini-batch of predictors by extracting the sequence data from the input cell array and truncating them along the second dimension so that they have the same length.
Note: Do not pad sequence data when performing the PCA step for projection, as padding can negatively impact the analysis. Instead, truncate mini-batches of data so that they have the same length, or use mini-batches of size 1.
function X = preprocessMiniBatchPredictors(dataX)
    X = padsequences(dataX,2,Length="shortest");
end
Load the pretrained network in dlnetProjectedJapaneseVowels.

load dlnetProjectedJapaneseVowels
View the network properties.
net
net = 
  dlnetwork with properties:

         Layers: [4×1 nnet.cnn.layer.Layer]
    Connections: [3×2 table]
     Learnables: [9×3 table]
          State: [2×3 table]
     InputNames: {'sequenceinput'}
    OutputNames: {'softmax'}
    Initialized: 1

  View summary with summary.
View the network layers. The network has two projected layers.
net.Layers
ans = 
  4×1 Layer array with layers:

     1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
     2   'lstm'            Projected Layer   Projected LSTM with 100 hidden units
     3   'fc'              Projected Layer   Projected fully connected layer with output size 9
     4   'softmax'         Softmax           softmax
Unpack the projected layers.
netUnpacked = unpackProjectedLayers(net)
netUnpacked = 
  dlnetwork with properties:

         Layers: [5×1 nnet.cnn.layer.Layer]
    Connections: [4×2 table]
     Learnables: [9×3 table]
          State: [2×3 table]
     InputNames: {'sequenceinput'}
    OutputNames: {'softmax'}
    Initialized: 1

  View summary with summary.
View the unpacked network layers. The unpacked network has a projected LSTM layer and two fully connected layers in place of the projected layers.
netUnpacked.Layers
ans = 
  5×1 Layer array with layers:

     1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
     2   'lstm'            Projected LSTM    Projected LSTM layer with 100 hidden units, an output projector size of 7, and an input projector size of 8
     3   'fc_proj_in'      Fully Connected   4 fully connected layer
     4   'fc_proj_out'     Fully Connected   9 fully connected layer
     5   'softmax'         Softmax           softmax
Tips
Code generation does not support ProjectedLayer objects. To replace ProjectedLayer objects in a neural network with the equivalent neural network that represents the projection, use the unpackProjectedLayers function or set the UnpackProjectedLayers option of the compressNetworkUsingProjection function to 1 (true).
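For example, assuming a compressed network netProjected, an uncompressed network net, and a mini-batch queue mbq as in the examples on this page, either approach produces a network with no ProjectedLayer objects (a brief sketch, not a complete workflow):

```matlab
% Unpack an already-compressed network:
netUnpacked = unpackProjectedLayers(netProjected);

% Or unpack directly during compression:
netUnpacked2 = compressNetworkUsingProjection(net,mbq, ...
    UnpackProjectedLayers=true);
```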
Algorithms
To compress a deep learning network, you can use projected layers. A projected layer is a type of deep learning layer that enables compression by reducing the number of stored learnable parameters. The layer introduces learnable projector matrices Q, replaces multiplications of the form Wx, where W is a learnable matrix, with the multiplication WQQ⊤x, and stores Q and W′ = WQ instead of storing W. Projecting x into a lower-dimensional space using Q typically requires less memory to store the learnable parameters and can have similarly strong prediction accuracy.
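As an illustration, consider projecting a single fully connected multiplication. This is a minimal sketch, not the toolbox implementation: the sizes are assumptions, and the orthonormal projector here is random, whereas in practice the function computes the projectors from a principal component analysis of layer activations.

```matlab
n = 12;  m = 100;  k = 8;      % input size, output size, projector size (assumed)
W = randn(m,n);                % original learnable matrix
[Q,~] = qr(randn(n,k),"econ"); % example orthonormal projector (random here)
Wp = W*Q;                      % W' = W*Q, stored in place of W

x = randn(n,1);
y = Wp*(Q'*x);                 % replaces W*x

% Stored learnables: m*k + n*k = 896 values instead of m*n = 1200.
```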
For some types of layers, you can represent a projected layer as a neural network
containing two or more layers with fewer learnable parameters. For example, you can
represent a projected convolution layer as three convolution layers that perform the input
projection, convolution, and the output projection operations independently. When you
compress a network using the compressNetworkUsingProjection
function, the software replaces layers that
support projection with ProjectedLayer
objects that contain the equivalent neural network. To replace ProjectedLayer
objects in a neural network with the equivalent neural network that represents the projection, use the unpackProjectedLayers
function or set the UnpackProjectedLayers
option of the compressNetworkUsingProjection
function to 1
(true
).
The compressNetworkUsingProjection function supports projecting these layers:

- convolution1dLayer
- convolution2dLayer
- fullyConnectedLayer
- lstmLayer
- gruLayer
The compressNetworkUsingProjection function replaces projectable layers with ProjectedLayer objects. A ProjectedLayer object contains information about the projection operation and contains the neural network that represents the projection.
The neural network that represents the projection depends on the type of layer:

| Original Layer | Network |
|---|---|
| convolution1dLayer | Network containing two or three convolution1dLayer objects |
| convolution2dLayer | Network containing two or three convolution2dLayer objects |
| fullyConnectedLayer | Network containing two fullyConnectedLayer objects |
| lstmLayer | Network containing a single lstmProjectedLayer object |
| gruLayer | Network containing a single gruProjectedLayer object |
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:

- "S" — Spatial
- "C" — Channel
- "B" — Batch
- "T" — Time
- "U" — Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB" (spatial, spatial, channel, batch).
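For instance, you can create such a formatted dlarray object directly (a brief sketch; the array sizes are assumed, random data stands in for images):

```matlab
X = rand(28,28,3,16);      % height-by-width-by-channels-by-batch (assumed sizes)
dlX = dlarray(X,"SSCB");   % label dimensions as spatial, spatial, channel, batch
dims(dlX)                  % returns 'SSCB'
```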
You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
To learn more about the supported input and output formats of a ProjectedLayer object, see the documentation for the layer given by the OriginalClass property.
Version History
Introduced in R2023b