
predict

Compute deep learning network output for inference

Description

Some deep learning layers behave differently during training and inference (prediction). For example, during training, dropout layers randomly set input elements to zero to help prevent overfitting, but during inference, dropout layers do not change the input.

[Y1,...,YN] = predict(net,X1,...,XM) returns the network outputs Y1, …, YN for inference given the input data X1, …, XM and the neural network net.


[Y1,...,YN,state] = predict(___) also returns the updated network state.

___ = predict(___,Name=Value) specifies additional options using one or more name-value arguments.
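
For example, for a dlnetwork with two inputs and two outputs, a call takes this form. This is a minimal sketch: X1, X2, Y1, and Y2 are placeholder names, and the second call also returns the updated state and turns off acceleration.

[Y1,Y2] = predict(net,X1,X2);
[Y1,Y2,state] = predict(net,X1,X2,Acceleration="none");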

Examples


Load a pretrained SqueezeNet neural network into the workspace.

[net,classNames] = imagePretrainedNetwork;

Read an image from a PNG file and classify it. To classify the image, first convert it to the data type single.

im = imread("peppers.png");
figure
imshow(im)


X = single(im);
scores = predict(net,X);
[label,score] = scores2label(scores,classNames);

Display the image with the predicted label and corresponding score.

figure
imshow(im)
title(string(label) + " (Score: " + score + ")")

The figure shows the image with the title "bell pepper (Score: 0.89394)".

Input Arguments


Neural network, specified as one of these values:

  • dlnetwork object

  • TaylorPrunableNetwork object

Pruning a deep neural network requires the Deep Learning Toolbox™ Model Compression Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Compression Library.

Input data, each specified as one of these values:

  • Formatted dlarray object

  • Unformatted dlarray object (since R2023b)

  • Numeric array (since R2023b)

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.

For more information, see Deep Learning Data Formats.

To create a neural network that receives unformatted data, use an inputLayer object and do not specify a format. To input unformatted data into a network directly, do not specify the InputDataFormats argument. (since R2025a)

Before R2025a: For neural networks that do not have input layers, you must specify a format using the InputDataFormats argument or use formatted dlarray objects as input.
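
For example, this sketch passes the image from the example above to the network as a formatted dlarray instead of a numeric array. The "SSCB" format (spatial, spatial, channel, batch) is an assumption that matches a typical image classification input; use the format your network expects.

im = imread("peppers.png");
X = dlarray(single(im),"SSCB");   % label the dimensions: spatial, spatial, channel, batch
scores = predict(net,X);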

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: Y = predict(net,X,InputDataFormats="CBT") makes predictions with sequence data that has format "CBT" (channel, batch, time).

Neural network outputs, specified as a string array or a cell array of character vectors of layer names or layer output paths. Specify the output using one of these forms:

  • "layerName", where layerName is the name of a layer with a single output.

  • "layerName/outputName", where layerName is the name of a layer and outputName is the name of the layer output.

  • "layerName1/.../layerNameK/layerName", where layerName1, …, layerNameK is a series of layers that contain subnetworks, and layerName is the name of a layer with a single output.

  • "layerName1/.../layerNameK/layerName/outputName", where layerName1, …, layerNameK is a series of layers that contain subnetworks, layerName is the name of a layer, and outputName is the name of the layer output.

  • For layers with a single output, specify the layer name or path of the layer output. For example, for a ReLU layer with the name "relu1" and an output with the name "out", specify the layer name "relu1" or the layer output path "relu1/out".

  • For layers with multiple outputs, specify the path of the layer output. For example, for an LSTM layer with the name "lstm1" and outputs with the names "out", "hidden", and "cell", to specify the output with the name "out", specify the layer output path "lstm1/out".

  • For layers inside a nested network, specify the path of the layer output through the nested layers.

If you do not specify the layers to extract outputs from, then, by default, the software uses the outputs specified by net.Outputs.
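
For example, to return the activations of intermediate layers rather than the network output, pass their names in the Outputs argument. The layer names "conv_1" and "relu_1" here are hypothetical; use names from your own network, for example those shown by net.Layers.

act = predict(net,X,Outputs="relu_1");
[actConv,actRelu] = predict(net,X,Outputs=["conv_1","relu_1"]);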

Since R2023b

Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

  • "S" — Spatial

  • "C" — Channel

  • "B" — Batch

  • "T" — Time

  • "U" — Unspecified

For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For a neural network net with multiple inputs, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).

For more information, see Deep Learning Data Formats.

To create a neural network that receives unformatted data, use an inputLayer object and do not specify a format. To input unformatted data into a network directly, do not specify the InputDataFormats argument. (since R2025a)

Before R2025a: For neural networks that do not have input layers, you must specify a format using the InputDataFormats argument or use formatted dlarray objects as input.

Data Types: char | string | cell
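
For example, for a hypothetical network with two inputs, where the first input is sequence data and the second is feature data, you might specify one format per input (a sketch only; match the formats to your own inputs):

Y = predict(net,X1,X2,InputDataFormats=["CBT","CB"]);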

Since R2023b

Description of the output data dimensions, specified as one of these values:

  • "auto" — If the output data has the same number of dimensions as the input data, then the predict function uses the format specified by InputDataFormats. If the output data has a different number of dimensions than the input data, then the predict function automatically permutes the dimensions of the output data so that they are consistent with the network input layers or the InputDataFormats value.

  • String, character vector, or cell array of character vectors — The predict function uses the specified data formats.

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

  • "S" — Spatial

  • "C" — Channel

  • "B" — Batch

  • "T" — Time

  • "U" — Unspecified

For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell
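
For example, to request output with observations in the first dimension and channels in the second, you can set OutputDataFormats (a sketch; whether this layout suits your network depends on its outputs):

Y = predict(net,X,OutputDataFormats="BC");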

Performance optimization, specified as one of these values:

  • "auto" — Automatically apply a number of optimizations suitable for the input network and hardware resources.

  • "mex" — Compile and execute a MEX function. This option is available only when using a GPU. You must store the input data or the network learnable parameters as gpuArray objects. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

  • "none" — Disable all acceleration.

When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.

When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.

When Acceleration is "auto", the software does not generate a MEX function.

The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see Set Up Compiler (GPU Coder). GPU Coder is not required.

The "mex" option has these limitations:

  • The state output argument is not supported.

  • Only single precision is supported. The input data or the network learnable parameters must have underlying type single.

  • Networks with inputs that are not connected to an input layer are not supported.

  • Traced dlarray objects are not supported. This means that the "mex" option is not supported inside a call to dlfeval.

  • Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).

  • MATLAB Compiler™ does not support deploying your network when using the "mex" option.

For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
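
A minimal sketch of MEX acceleration, assuming a supported GPU, Parallel Computing Toolbox, and the GPU Coder Interface for Deep Learning support package are installed. The data is moved to the GPU in single precision to satisfy the limitations above.

XGPU = gpuArray(single(X));               % store the input data on the GPU in single precision
Y = predict(net,XGPU,Acceleration="mex");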

Output Arguments


Output data of the network, returned as one of these values:

  • Formatted dlarray object

  • Unformatted dlarray object (since R2023b)

  • Numeric array (since R2023b)

The data type matches the data type of the input data.

The order of the outputs Y1, …, YN matches the order of the outputs specified by the Outputs argument.

For a classification neural network, the elements of the output correspond to the scores for each class. The order of the scores matches the order of the categories in the training data. For example, if you train the neural network using the categorical labels TTrain, then the order of the scores matches the order of the categories given by categories(TTrain).
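
For example, assuming TTrain contains the categorical training labels, you can recover the class names in the matching order and convert the scores to a label, as in the example above:

classNames = categories(TTrain);                   % class order used by the scores
[label,score] = scores2label(scores,classNames);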

Updated network state, returned as a table.

The network state is a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – State parameter name, specified as a string scalar.

  • Value – Value of state parameter, specified as a dlarray object.

Layer states retain information calculated during the layer operation for use in subsequent forward passes of the layer. For example, LSTM layers contain cell states and hidden states, and batch normalization layers calculate running statistics.

For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of the layer.

Update the state of a dlnetwork using the State property.
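
For example, a minimal sketch of carrying the state forward between calls for a stateful network, such as one containing LSTM or batch normalization layers:

[Y,state] = predict(net,X);
net.State = state;    % use the updated state in subsequent predictions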

Algorithms


Extended Capabilities


Version History

Introduced in R2019b
