dljacobian

Jacobian matrix deep learning operation

Since R2024b

    Description

    The Jacobian deep learning operation returns the Jacobian matrix of neural network or model function outputs with respect to the specified input data, computed for the specified operation dimension.

    jac = dljacobian(u,x,dim) returns the Jacobian matrix for the neural network or model function outputs u with respect to the data x for the specified operation dimension.

    jac = dljacobian(u,x,dim,EnableHigherDerivatives=tf) also specifies whether to enable higher-order derivatives by tracing the backward pass.
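
    For example, this minimal sketch (not part of the reference example; the elementwise function exp and the sizes are illustrative assumptions) evaluates dljacobian inside dlfeval so that the input data is traced:

    % Sketch: u = exp(x) is computed inside the function that dlfeval evaluates,
    % so x is a traced dlarray when dljacobian runs.
    x = dlarray(rand(4,10));
    jac = dlfeval(@(x) dljacobian(exp(x),x,1),x);
    
    size(jac)   % 4 4 10, that is, [size(u,1) size(x)]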

    Examples

    Create a neural network.

    inputSize = [16 16 3];
    numOutputChannels = 5;
    
    layers = [
        imageInputLayer(inputSize)
        convolution2dLayer(3,64)
        reluLayer
        fullyConnectedLayer(numOutputChannels)
        softmaxLayer];
    
    net = dlnetwork(layers);

    Load the training data. For the purposes of this example, generate some random data.

    numObservations = 128;
    X = rand([inputSize numObservations]);
    X = dlarray(X,"SSCB");
    
    T = rand([numOutputChannels numObservations]);
    T = dlarray(T,"CB");

    Define a model loss function that takes the network and data as input and returns the loss, gradients of the loss with respect to the learnable parameters, and the Jacobian of the predictions with respect to the input data.

    function [loss,gradients,jac] = modelLoss(net,X,T)
    
    % Forward pass through the network and L1 loss against the targets.
    Y = forward(net,X);
    loss = l1loss(Y,T);
    
    % Remove the data formats so that dljacobian works with dimension positions.
    X = stripdims(X);
    Y = stripdims(Y);
    
    % Jacobian of the predictions with respect to the input data, and gradients
    % of the loss with respect to the learnable parameters.
    jac = dljacobian(Y,X,1);
    gradients = dlgradient(loss,net.Learnables);
    
    end

    Evaluate the model loss function using the dlfeval function.

    [loss,gradients,jac] = dlfeval(@modelLoss,net,X,T);

    View the size of the Jacobian.

    size(jac)
    ans = 1×5
    
         5    16    16     3   128
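
    The leading dimension of jac is size(Y,1), which dim = 1 selects, and the remaining dimensions match the size of X. As a quick check, this sketch (reusing the variables defined above) compares the sizes directly:

    % Expected layout: [size(u,dim) size(x)], here [5 16 16 3 128].
    expectedSize = [numOutputChannels inputSize numObservations];
    isequal(size(jac),expectedSize)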
    
    

    Input Arguments

    Neural network or model function outputs u, specified as a traced dlarray matrix.

    When evaluating a function with automatic differentiation enabled, the software traces the input dlarray objects. Contexts in which the software traces dlarray include:

    • Inside loss functions that the trainnet function evaluates

    • Inside forward functions that custom layers evaluate

    • Inside model and model loss functions that the dlfeval function evaluates

    The sizes of the dimensions not specified by the dim argument must match.

    Input data x, specified as a traced dlarray object.

    When evaluating a function with automatic differentiation enabled, the software traces the input dlarray objects. Contexts in which the software traces dlarray include:

    • Inside loss functions that the trainnet function evaluates

    • Inside forward functions that custom layers evaluate

    • Inside model and model loss functions that the dlfeval function evaluates

    The sizes of the dimensions not specified by the dim argument must match.

    Operation dimension of u, specified as a positive integer.

    The dljacobian function treats the remaining dimensions of the data as independent batch dimensions.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
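
    For example, in this sketch (not from the reference example; the elementwise function exp and the sizes are illustrative assumptions), the second dimension of the data acts as a batch dimension, so each slice jac(:,:,k) matches the Jacobian computed for observation k alone:

    x = dlarray(rand(4,10));
    
    jacBatch = dlfeval(@(x) dljacobian(exp(x),x,1),x);
    jacSingle = dlfeval(@(x) dljacobian(exp(x),x,1),x(:,3));
    
    % The difference is numerically zero because the observations are independent.
    max(abs(extractdata(jacBatch(:,:,3) - jacSingle)),[],"all")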

    Flag to enable higher-order derivatives, specified as one of these values:

    • Numeric or logical 1 (true) — Enable higher-order derivatives. Trace the backward pass so that the returned values can be used in further computations for subsequent calls to functions that compute derivatives using automatic differentiation (for example, dlgradient, dljacobian, dldivergence, and dllaplacian).

    • Numeric or logical 0 (false) — Disable higher-order derivatives. Do not trace the backward pass. When you want to compute only first-order derivatives, this option is usually quicker and requires less memory.
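
    For example, this sketch (the function name modelSecondOrder is hypothetical) traces the backward pass so that the Jacobian can be differentiated again with dlgradient:

    function grad = modelSecondOrder(x)
    
    u = sin(x);
    jac = dljacobian(u,x,1,EnableHigherDerivatives=true);
    
    % Differentiating through jac requires the traced backward pass.
    grad = dlgradient(sum(jac,"all"),x);
    
    end
    
    x = dlarray(rand(3,8));
    grad = dlfeval(@modelSecondOrder,x);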

    Output Arguments

    Jacobian, returned as an unformatted dlarray object.

    The layout of jac depends on dim and the sizes of u and x.

    The output jac is an (N+1)-D array, where N is the number of dimensions of x. The size of the output jac is [szU,szX1,szX2,...,szXN], where szU corresponds to size(u,dim) and [szX1,szX2,...,szXN] is the size of x.

    Each element of jac represents the partial derivative of an element of u with respect to an element of x:

    • When dim is 1, jac(i,j1,j2,...,jN) corresponds to the partial derivative of u(i,jk) with respect to x(j1,j2,...,jN), where jk indexes the batch dimension of u and corresponds to the batch index of x.

    • When dim is 2, jac(i,j1,j2,...,jN) corresponds to the partial derivative of u(jk,i) with respect to x(j1,j2,...,jN), where jk indexes the batch dimension of u and corresponds to the batch index of x.
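
    For instance, in this sketch (not from the reference example; the linear map A and the sizes are illustrative assumptions), u = A*x is linear in x, so every batch slice of jac equals A:

    A = rand(5,4);
    x = dlarray(rand(4,10));
    
    jac = dlfeval(@(x) dljacobian(A*x,x,1),x);
    
    size(jac)                    % 5 4 10
    extractdata(jac(:,:,1))      % equals A, because jac(i,j,k) = du(i,k)/dx(j,k) = A(i,j)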

    Version History

    Introduced in R2024b