predict
Compute deep learning network output for inference
Syntax
Y = predict(net,X)
Y = predict(net,X1,...,XM)
[Y1,...,YN] = predict(___)
[___,state] = predict(___)
___ = predict(___,Name=Value)
Description
Some deep learning layers behave differently during training and inference (prediction). For example, during training, dropout layers randomly set input elements to zero to help prevent overfitting, but during inference, dropout layers do not change the input.
___ = predict(___,Name=Value) specifies additional options using one or more name-value arguments.
Examples
Load a pretrained SqueezeNet neural network into the workspace.
[net,classNames] = imagePretrainedNetwork;
Read an image from a PNG file and classify it. To classify the image, first convert it to the data type single.
im = imread("peppers.png");
figure
imshow(im)
X = single(im);
scores = predict(net,X);
[label,score] = scores2label(scores,classNames);
Display the image with the predicted label and corresponding score.
figure
imshow(im)
title(string(label) + " (Score: " + score + ")")
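To classify several images in one call, one approach, shown here as a minimal sketch that reuses net, classNames, and im from above and assumes the network returns per-class scores along the channel dimension, is to batch the inputs as a formatted dlarray:
% Stack two copies of the image along the fourth (batch) dimension; in
% practice these would be different images of the same size.
Xbatch = dlarray(cat(4,single(im),single(im)),"SSCB");
scoresBatch = predict(net,Xbatch);
probs = squeeze(extractdata(scoresBatch));   % classes-by-observations matrix
[~,idx] = max(probs,[],1);                   % index of the highest score per image
labels = classNames(idx);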
Input Arguments
Neural network, specified as one of these values:
dlnetwork object — Neural network for custom training loop.
TaylorPrunableNetwork object — Neural network for custom pruning loop.
Pruning a deep neural network requires the Deep Learning Toolbox™ Model Compression Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Compression Library.
Input data, each specified as one of these values:
Formatted dlarray object
Unformatted dlarray object (since R2023b)
Numeric array (since R2023b)
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than the network expects, then indicate the layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For more information, see Deep Learning Data Formats.
To create a neural network that receives unformatted data, use an inputLayer object and do not specify a format. To input unformatted data into a network directly, do not specify the InputDataFormats argument. (since R2025a)
Before R2025a: For neural networks that do not have input layers, you must specify a format using the InputDataFormats argument or use formatted dlarray objects as input.
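For instance, here is a minimal sketch of passing a single sequence stored as a channels-by-time numeric array; seqNet is a placeholder for a vector-sequence network and the array sizes are illustrative only:
XSeq = rand(3,100,"single");                       % 3 channels, 100 time steps (illustrative)
YSeq = predict(seqNet,XSeq,InputDataFormats="CT"); % declare the channel and time dimensions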
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: Y = predict(net,X,InputDataFormats="CBT") makes predictions with sequence data that has the format "CBT" (channel, batch, time).
Neural network outputs, specified as a string array or a cell array of character vectors of layer names or layer output paths. Specify the output using one of these forms:
"layerName"
, wherelayerName
is the name of a layer with a single output."layerName/outputName"
, wherelayerName
is the name of a layer andoutputName
is the name of the layer output."layerName1/.../layerNameK/layerName"
, wherelayerName1
, …,layerNameK
is a series of layers that contain subnetworks,layerName
is the name of a layer with a single output."layerName1/.../layerNameK/layerName/outputName"
, wherelayerName1
, …,layerNameK
is a series of layers that contain subnetworks,layerName
is the name of a layer, andoutputName
is the name of the layer output.
For layers with a single output, specify the layer name or the path of the layer output. For example, for a ReLU layer with the name "relu1" and an output with the name "out", specify the layer name "relu1" or the layer output path "relu1/out".
For layers with multiple outputs, specify the path of the layer output. For example, for an LSTM layer with the name "lstm1" and outputs with the names "out", "hidden", and "cell", to specify the output with the name "out", use the layer output path "lstm1/out".
For layers inside a nested network, specify the path of the layer output through the nested layers.
If you do not specify the layers to extract outputs from, then, by default, the software uses the outputs specified by net.Outputs.
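For example, to extract an intermediate activation instead of the network output (the layer name "relu_1" is a placeholder; inspect net.Layers for the names in your network):
act = predict(net,X,Outputs="relu_1");   % activations of the hypothetical layer "relu_1"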
Since R2023b
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array that represents a batch of sequences, where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe this data as having the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once each. The software ignores singleton trailing "U" dimensions after the second dimension.
For a neural network net with multiple inputs, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
For more information, see Deep Learning Data Formats.
To create a neural network that receives unformatted data, use an inputLayer object and do not specify a format. To input unformatted data into a network directly, do not specify the InputDataFormats argument. (since R2025a)
Before R2025a: For neural networks that do not have input layers, you must specify a format using the InputDataFormats argument or use formatted dlarray objects as input.
Data Types: char | string | cell
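For example, a hedged sketch for a network with two inputs; multiInputNet, the input sizes, and the formats are placeholders that must match your network and the order of multiInputNet.InputNames:
% Placeholder data: an image batch (spatial, spatial, channel, batch)
% and a feature batch (channel, batch).
Ximg  = rand(224,224,3,8,"single");
Xfeat = rand(10,8,"single");
Y = predict(multiInputNet,Ximg,Xfeat,InputDataFormats=["SSCB","CB"]);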
Since R2023b
Description of the output data dimensions, specified as one of these values:
"auto"
— If the output data has the same number of dimensions as the input data, then thepredict
function uses the format specified byInputDataFormats
. If the output data has a different number of dimensions than the input data, then thepredict
function automatically permutes the dimensions of the output data so that they are consistent with the network input layers or theInputDataFormats
value.String, character vector, or cell array of character vectors — The
predict
function uses the specified data formats.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array that represents a batch of sequences, where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe this data as having the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once each. The software ignores singleton trailing "U" dimensions after the second dimension.
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
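For example, assuming the network output has channel and batch dimensions, you can request a batch-by-channel layout; net and X are placeholders for a network and suitably formatted input data:
Y = predict(net,X,OutputDataFormats="BC");   % rows correspond to observations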
Performance optimization, specified as one of these values:
"auto"
— Automatically apply a number of optimizations suitable for the input network and hardware resources."mex"
— Compile and execute a MEX function. This option is available only when using a GPU. You must store the input data or the network learnable parameters asgpuArray
objects. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error."none"
— Disable all acceleration.
When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.
When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model. When Acceleration is "auto", the software does not generate a MEX function.
The "mex"
option is available only when you use a GPU. You must have a
C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in
MATLAB®. For setup instructions, see Set Up Compiler (GPU Coder). GPU Coder is not required.
The "mex"
option has these limitations:
The
state
output argument is not supported.Only
single
precision is supported. The input data or the network learnable parameters must have underlying typesingle
.Networks with inputs that are not connected to an input layer are not supported.
Traced
dlarray
objects are not supported. This means that the"mex"
option is not supported inside a call todlfeval
.Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).
MATLAB Compiler™ does not support deploying your network when using the
"mex"
option.
For quantized networks, the "mex"
option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
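As a hedged sketch, assuming a supported GPU, Parallel Computing Toolbox, and the GPU Coder Interface for Deep Learning support package are available, and reusing net and X from the earlier example:
XGpu = gpuArray(single(X));                      % move the input data to the GPU
scores = predict(net,XGpu,Acceleration="mex");   % first call compiles a MEX function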
Output Arguments
Output data of network with multiple outputs, returned as one of these values:
Formatted dlarray object
Unformatted dlarray object (since R2023b)
Numeric array (since R2023b)
The data type matches the data type of the input data.
The order of the outputs Y1, …, YN matches the order of the outputs specified by the Outputs argument.
For a classification neural network, the elements of the output correspond to the scores for each class. The order of the scores matches the order of the categories in the training data. For example, if you train the neural network using the categorical labels TTrain, then the order of the scores matches the order of the categories given by categories(TTrain).
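A minimal sketch of recovering class labels from the scores, assuming the network was trained on a placeholder categorical label vector TTrain and that scores is a numObservations-by-numClasses matrix:
classNames = categories(TTrain);     % class order used by the network output
[~,idx] = max(scores,[],2);          % column index of the highest score per observation
predictedLabels = classNames(idx);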
Updated network state, returned as a table.
The network state is a table with three columns:
Layer – Layer name, specified as a string scalar.
Parameter – State parameter name, specified as a string scalar.
Value – Value of state parameter, specified as a dlarray object.
Layer states retain information calculated during the layer operation for use in subsequent forward passes of the layer. For example, LSTM layers contain cell states and hidden states, and batch normalization layers calculate running statistics.
For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of the layer.
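When a network has state (for example, LSTM or batch normalization layers), a common pattern, shown here as a minimal sketch with net and X as placeholders, is to write the updated state back to the network between calls:
[Y,state] = predict(net,X);   % capture the updated state
net.State = state;            % the next call continues from this state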
Algorithms
To provide the best performance, deep learning using a GPU in MATLAB is not guaranteed to be deterministic. Depending on your network architecture, under some conditions you might get different results when using a GPU to train two identical networks or make two predictions using the same network and data. If you require determinism when performing deep learning operations using a GPU, use the deep.gpu.deterministicAlgorithms function (since R2024b).
If you use the rng function to set the same random number generator and seed, then predictions made using the CPU are reproducible.
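A minimal sketch of enabling deterministic GPU algorithms around a prediction; the exact call pattern shown here (passing true and capturing the previous setting so it can be restored) is an assumption based on the function named above:
previousState = deep.gpu.deterministicAlgorithms(true);   % assumed to return the prior setting
scores = predict(net,gpuArray(X));
deep.gpu.deterministicAlgorithms(previousState);          % restore the original setting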
Extended Capabilities
Usage notes and limitations:
C++ code generation supports the following syntaxes:
Y = predict(net,X)
Y = predict(net,X1,...,XM)
[Y1,...,YN] = predict(__)
[Y1,...,YK] = predict(__,'Outputs',layerNames)
You can generate generic C/C++ code that does not depend on any third-party libraries for the syntax [__,state] = predict(__).
Code generation supports tuning the variable Value of the State property. Code generation does not support modifying the variables Layer and Parameter of the State property.
Code generation supports these functions for the State property:
For Simulink simulation, code generation does not support extracting and updating the State of a dlnetwork in a MATLAB Function block. Instead, use a Stateful Predict or a Stateful Classify block.
The input data X can only have a variable size on the time ("T") dimension. Other data dimensions for the input data X must not have variable size. The size must be fixed at code generation time.
Code generation does not support passing complex-valued input to the predict method of a dlnetwork object.
The dlarray input to the predict method must be a single data type.
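As a hedged sketch of a typical code generation workflow (the MAT-file name "mynet.mat", the variable name net inside it, and the input layout are assumptions), you generate code for an entry-point function that loads the network once and calls predict:
function Y = mynet_predict(X) %#codegen
% Entry-point function for code generation. "mynet.mat" is assumed to
% contain a dlnetwork variable named net, and X is assumed to be a
% single-precision h-by-w-by-c image.
persistent net
if isempty(net)
    net = coder.loadDeepLearningNetwork("mynet.mat");
end
dlX = dlarray(X,"SSC");   % label the dimensions before prediction
Y = predict(net,dlX);
end
You could then generate library code with, for example, cfg = coder.config("lib") followed by codegen -config cfg mynet_predict -args {exampleInput}, where exampleInput is a sample array with the expected size and type (placeholder values).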
Usage notes and limitations:
GPU code generation supports the following syntaxes:
Y = predict(net,X)
Y = predict(net,X1,...,XM)
[Y1,...,YN] = predict(__)
[Y1,...,YK] = predict(__,'Outputs',layerNames)
You can generate plain CUDA code that is independent of deep learning libraries for the syntax [__,state] = predict(__).
Code generation supports tuning the variable Value of the State property. Code generation does not support modifying the variables Layer and Parameter of the State property.
Code generation supports these functions for the State property:
For Simulink simulation, code generation does not support extracting and updating the State of a dlnetwork in a MATLAB Function block. Instead, use a Stateful Predict or a Stateful Classify block.
The input data X can only have a variable size on the time ("T") dimension. Other data dimensions for the input data X must not have variable size. The size must be fixed at code generation time.
Code generation for the TensorRT library does not support marking an input layer as an output by using the [Y1,...,YK] = predict(__,'Outputs',layerNames) syntax.
Code generation does not support passing complex-valued input to the predict method of a dlnetwork object.
The dlarray input to the predict method must be a single data type.
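For GPU targets, a hedged configuration sketch, reusing the hypothetical mynet_predict entry-point above (GPU Coder is required, and cuDNN is only one of the possible deep learning targets):
cfg = coder.gpuConfig("mex");                               % generate a CUDA MEX target
cfg.DeepLearningConfig = coder.DeepLearningConfig("cudnn");
exampleInput = ones(224,224,3,"single");                    % placeholder input size and type
codegen -config cfg mynet_predict -args {exampleInput}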
The predict function supports GPU array input with these usage notes and limitations:
This function runs on the GPU if you meet at least one of these conditions:
Any of the values of the network learnable parameters inside net.Learnables.Value are dlarray objects with underlying data of type gpuArray.
The input argument X is a dlarray with underlying data of type gpuArray.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
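For example, a minimal sketch of running the prediction on the GPU by converting the input (reusing net and im from the example above, and assuming a supported GPU and Parallel Computing Toolbox are available):
XGpu = gpuArray(single(im));   % underlying data of type gpuArray triggers GPU execution
scoresGpu = predict(net,XGpu);
scores = gather(scoresGpu);    % bring the result back to the CPU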
Version History
Introduced in R2019b
If you specify unformatted data as input to the neural network and do not specify the InputDataFormats argument, then the function passes the unformatted data to the network directly.
To create a neural network that receives unformatted data, use an inputLayer object and do not specify a format.
Make predictions using numeric arrays and unformatted dlarray objects. Specify the input and output data formats using the InputDataFormats and OutputDataFormats options, respectively.
For dlnetwork objects, the state output argument returned by the predict function is a table containing the state parameter names and values for each layer in the network. Starting in R2021a, the state values are dlarray objects.
This change enables better support when using AcceleratedFunction objects. To accelerate deep learning functions that have frequently changing input values, for example, an input containing the network state, the frequently changing values must be specified as dlarray objects.
In previous releases, the state values were numeric arrays.
In most cases, you will not need to update your code. If you have code that requires the state values to be numeric arrays, then to reproduce the previous behavior, extract the data from the state values manually using the extractdata function with the dlupdate function.
state = dlupdate(@extractdata,net.State);