coder.regenerateDeepLearningParameters

Regenerate files containing network learnables and states parameters

    Description

    networkFileNames = coder.regenerateDeepLearningParameters(net,parameterFile) returns a cell array of the file names containing the regenerated network learnables and states parameters. coder.regenerateDeepLearningParameters regenerates these files based on the learnables and states of the input SeriesNetwork or DAGNetwork network object.

    networkFileNames = coder.regenerateDeepLearningParameters(dlnet,parameterFile) returns a cell array of the file names containing the regenerated network learnables and states parameters. coder.regenerateDeepLearningParameters regenerates these files based on the learnables and states of the input dlnetwork object.

    networkFileNames = coder.regenerateDeepLearningParameters(___,Name,Value) returns a cell array of the file names containing the regenerated network learnables and states parameters, using additional options specified by one or more name-value arguments.

    Examples

    This example shows how to update learnable and state parameters of deep learning networks without regenerating code for the network.

    Write an entry-point function in MATLAB® that:

    1. Uses the coder.loadDeepLearningNetwork function to load a deep learning model. For more information, see Load Pretrained Networks for Code Generation.

    2. Calls predict (Deep Learning Toolbox) to predict the responses.

    function out = mLayer(in,matFile)
    % Load the network from the specified MAT-file and predict responses.
    myNet = coder.loadDeepLearningNetwork(coder.const(matFile));
    out = predict(myNet,in);
    end
    

    Create a simple network that requires input images of size 4-by-5-by-3.

    inputSize = [4 5 3];
    im = dlarray(rand(inputSize, 'single'), 'SSCB');
    
    outSize = 6;
    layers = [
        imageInputLayer(inputSize,'Name','input','Normalization','none')
        convolution2dLayer([3 3], 5, 'Name', 'conv-1')
        batchNormalizationLayer('Name', 'batchNorm')
        reluLayer('Name','relu1')
        transposedConv2dLayer([2 2], 5, 'Name', 'transconv')
        convolution2dLayer([2 2], 5, 'Name', 'conv2')
        reluLayer('Name','relu2')
        fullyConnectedLayer(outSize, 'Name', 'fc3')
        ];
    

    Create an initialized dlnetwork object from the layer array.

    rng(0);
    dlnet1 = dlnetwork(layers);
    save('trainedNet.mat', 'dlnet1');
    

    To specify code generation parameters for cuDNN, set the DeepLearningConfig property to a coder.CuDNNConfig object that you create by using coder.DeepLearningConfig.

    cfg = coder.gpuConfig('mex');
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('TargetLibrary', 'cudnn');
    cfg.DeepLearningConfig.AutoTuning = true;
    cfg.DeepLearningConfig.DataType = 'fp32';
    

    Run the codegen command to generate CUDA® code from the mLayer.m MATLAB entry-point function.

    cnnMatFile = fullfile(pwd, 'trainedNet.mat');
    inputArgs = {im, coder.Constant(cnnMatFile)};
    
    codegen -config cfg mLayer -args inputArgs -report
    

    Call predict on the input image and compare the results with MATLAB.

    out = mLayer_mex(im,cnnMatFile)
    out_MATLAB = mLayer(im,cnnMatFile)
    
    out = 
    
      6(C) x 1(B) single dlarray
    
       -0.0064
       -0.1422
       -0.0897
        0.2223
        0.0329
        0.0365
    
    
    out_MATLAB = 
    
      6(C) x 1(B) single dlarray
    
       -0.0064
       -0.1422
       -0.0897
        0.2223
        0.0329
        0.0365
    

    Reinitialize the dlnetwork object to update the learnables to different values.

    rng(10);
    dlnet2 = dlnetwork(layers);
    save('trainedNet.mat', 'dlnet2');
    

    Use the coder.regenerateDeepLearningParameters function to regenerate the network parameter files based on the new learnables and states of the network.

    codegenDir = fullfile(pwd, 'codegen/mex/mLayer');
    networkFileNames = (coder.regenerateDeepLearningParameters(dlnet2, codegenDir))'
    
    networkFileNames = 
    
      8×1 cell array
    
        {'cnn_trainedNet0_0_conv-1_b.bin'   }
        {'cnn_trainedNet0_0_conv-1_w.bin'   }
        {'cnn_trainedNet0_0_conv2_b.bin'    }
        {'cnn_trainedNet0_0_conv2_w.bin'    }
        {'cnn_trainedNet0_0_fc3_b.bin'      }
        {'cnn_trainedNet0_0_fc3_w.bin'      }
        {'cnn_trainedNet0_0_transconv_b.bin'}
        {'cnn_trainedNet0_0_transconv_w.bin'}
    

    Call predict on the input image and compare the results with MATLAB.

    clear mLayer_mex;
    outNew = mLayer_mex(im,cnnMatFile)
    outNew_MATLAB = mLayer(im,cnnMatFile)
    
    outNew = 
    
      6(C) x 1(B) single dlarray
    
        0.1408
       -0.0080
        0.0342
       -0.0065
        0.1843
        0.0799
    
    
    outNew_MATLAB = 
    
      6(C) x 1(B) single dlarray
    
        0.1408
       -0.0080
        0.0342
       -0.0065
        0.1843
        0.0799

    Input Arguments

    Trained network used during code generation, specified as a SeriesNetwork (Deep Learning Toolbox) or a DAGNetwork (Deep Learning Toolbox) object. You can use a pretrained network (for example, by using the googlenet function) or train your own network by using trainNetwork (Deep Learning Toolbox).

    Network for custom training loops used during code generation, specified as a dlnetwork (Deep Learning Toolbox) object.

    Path to the folder containing the generated network parameter information file. During code generation, the software creates the networkParamsInfo_*.bin binary file that contains the network parameter information. By default, the code generator creates this file in the codegen folder.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: networkFileNames = coder.regenerateDeepLearningParameters(dlnet,"codegen",NetworkName="fooDL");

    Name of the C++ class for the network in the generated code, specified as a character vector or string.

    For MEX workflows, if the generated MEX function and its associated codegen folder are moved from their original location, coder.regenerateDeepLearningParameters cannot regenerate the files containing the network learnables and states parameters in the new location. Set OverrideParameterFiles to true to allow coder.regenerateDeepLearningParameters to regenerate these files in the original codegen location.

    This parameter has no effect for non-MEX workflows.
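    For illustration, a minimal sketch of forcing regeneration in the original codegen location after the generated MEX file has been moved. This reuses dlnet2 and the folder layout from the example above; adjust the path for your own entry-point function and build type.

    ```matlab
    % Sketch: the generated MEX file was relocated after code generation, so
    % allow regeneration of the parameter files in the original codegen folder.
    % dlnet2 and the folder layout follow the earlier example.
    codegenDir = fullfile(pwd,'codegen','mex','mLayer');
    networkFileNames = coder.regenerateDeepLearningParameters(dlnet2,codegenDir, ...
        OverrideParameterFiles=true);
    ```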

    Output Arguments

    File names of the regenerated network learnables and states parameters, returned as a cell array.

    Limitations

    Only the network learnables and states can be updated by using the coder.regenerateDeepLearningParameters function. The function throws an error for modifications that the code generator does not support. For example, calling coder.regenerateDeepLearningParameters after changing the scale factor of a leaky ReLU layer produces the following error, because the scale factor is not a network learnable.

    Network architecture has been modified since the last code generation. Unable 
    to accommodate the provided network in the generated code. Regenerate code 
    for the provided network to reflect changes in the network. For more 
    information, see Limitations to Regenerating Network Parameters After Code Generation.
    
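    As a hypothetical illustration (not part of the example above), a change that triggers this error: the Scale property of a leaky ReLU layer is part of the network architecture, not a learnable, so it cannot be updated by regenerating parameter files.

    ```matlab
    % Hypothetical sketch: modifying a non-learnable layer property and then
    % attempting to regenerate parameters raises the error shown above.
    layersLR = [
        imageInputLayer([4 5 3],'Name','input','Normalization','none')
        convolution2dLayer([3 3],5,'Name','conv')
        leakyReluLayer(0.01,'Name','leaky')];   % original scale factor
    dlnetA = dlnetwork(layersLR);
    % ... generate code for dlnetA, as in the example above ...

    layersLR(3) = leakyReluLayer(0.5,'Name','leaky');  % new scale factor
    dlnetB = dlnetwork(layersLR);
    % Errors: the changed scale factor alters the network architecture.
    % coder.regenerateDeepLearningParameters(dlnetB,codegenDir)
    ```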

    Version History

    Introduced in R2021b