loss

Loss for regression neural network

    Description

    L = loss(Mdl,Tbl,ResponseVarName) returns the regression loss for the trained regression neural network Mdl using the predictor data in table Tbl and the response values in the ResponseVarName table variable.

    L is returned as a scalar value that represents the mean squared error (MSE) by default.

    L = loss(Mdl,Tbl,Y) returns the regression loss for the model Mdl using the predictor data in table Tbl and the response values in vector Y.

    L = loss(Mdl,X,Y) returns the regression loss for the trained regression neural network Mdl using the predictor data X and the corresponding response values in Y.

    L = loss(___,Name,Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify that columns in the predictor data correspond to observations, specify the loss function, or supply observation weights.
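With the default 'mse' loss and unit weights, L reduces to the mean of the squared residuals. As a sketch (assuming a trained model Mdl and a test table tblTest whose response variable is Systolic, as in the example below), you can verify this by comparing against predict:

```matlab
% Compute the test set MSE manually and compare with loss.
YFit = predict(Mdl,tblTest);                       % predicted responses
mseManual = mean((tblTest.Systolic - YFit).^2);    % mean squared residual
testMSE = loss(Mdl,tblTest,"Systolic");            % equals mseManual with default unit weights
```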

    Examples

    Calculate the test set mean squared error (MSE) of a regression neural network model.

    Load the patients data set. Create a table from the data set. Each row corresponds to one patient, and each column corresponds to a diagnostic variable. Use the Systolic variable as the response variable, and the rest of the variables as predictors.

    load patients
    tbl = table(Age,Diastolic,Gender,Height,Smoker,Weight,Systolic);

    Separate the data into a training set tblTrain and a test set tblTest by using a nonstratified holdout partition. The software reserves approximately 30% of the observations for the test data set and uses the rest of the observations for the training data set.

    rng("default") % For reproducibility of the partition
    c = cvpartition(size(tbl,1),"Holdout",0.30);
    trainingIndices = training(c);
    testIndices = test(c);
    tblTrain = tbl(trainingIndices,:);
    tblTest = tbl(testIndices,:);

    Train a regression neural network model using the training set. Specify the Systolic column of tblTrain as the response variable. Specify to standardize the numeric predictors.

    Mdl = fitrnet(tblTrain,"Systolic", ...
        "Standardize",true);

    Calculate the test set MSE. Smaller MSE values indicate better performance.

    testMSE = loss(Mdl,tblTest,"Systolic")
    testMSE = 49.9595
    

    Perform feature selection by comparing test set losses and predictions. Compare the test set metrics for a regression neural network model trained using all the predictors to the test set metrics for a model trained using only a subset of the predictors.

    Load the sample file fisheriris.csv, which contains iris data including sepal length, sepal width, petal length, petal width, and species type. Read the file into a table.

fishertable = readtable("fisheriris.csv");

    Separate the data into a training set trainTbl and a test set testTbl by using a nonstratified holdout partition. The software reserves approximately 30% of the observations for the test data set and uses the rest of the observations for the training data set.

    rng("default")
    c = cvpartition(size(fishertable,1),"Holdout",0.3);
    trainTbl = fishertable(training(c),:);
    testTbl = fishertable(test(c),:);

    Train one regression neural network model using all the predictors in the training set, and train another model using all the predictors except PetalWidth. For both models, specify PetalLength as the response variable, and standardize the predictors.

    allMdl = fitrnet(trainTbl,"PetalLength","Standardize",true);
    subsetMdl = fitrnet(trainTbl,"PetalLength ~ SepalLength + SepalWidth + Species", ...
        "Standardize",true);

    Compare the test set mean squared error (MSE) of the two models. Smaller MSE values indicate better performance.

    allMSE = loss(allMdl,testTbl)
    allMSE = 0.0856
    
    subsetMSE = loss(subsetMdl,testTbl)
    subsetMSE = 0.0881
    

    For each model, compare the test set predicted petal lengths to the true petal lengths. Plot the predicted petal lengths along the vertical axis and the true petal lengths along the horizontal axis. Points on the reference line indicate correct predictions.

    tiledlayout(2,1)
    
    % Top axes
    ax1 = nexttile;
    allPredictedY = predict(allMdl,testTbl);
    plot(ax1,testTbl.PetalLength,allPredictedY,".")
    hold on
    plot(ax1,testTbl.PetalLength,testTbl.PetalLength)
    hold off
    xlabel(ax1,"True Petal Length")
    ylabel(ax1,"Predicted Petal Length")
    title(ax1,"All Predictors")
    
    % Bottom axes
    ax2 = nexttile;
    subsetPredictedY = predict(subsetMdl,testTbl);
    plot(ax2,testTbl.PetalLength,subsetPredictedY,".")
    hold on
    plot(ax2,testTbl.PetalLength,testTbl.PetalLength)
    hold off
    xlabel(ax2,"True Petal Length")
    ylabel(ax2,"Predicted Petal Length")
    title(ax2,"Subset of Predictors")

Figure: two stacked scatter plots of predicted versus true petal length, titled "All Predictors" and "Subset of Predictors". Each panel contains the prediction scatter and a reference line.

    Because both models seem to perform well, with predictions scattered near the reference line, consider using the simpler model trained using all the predictors except PetalWidth.

    Input Arguments

    Mdl — Trained regression neural network
    RegressionNeuralNetwork model object | CompactRegressionNeuralNetwork model object

    Trained regression neural network, specified as a RegressionNeuralNetwork model object or CompactRegressionNeuralNetwork model object returned by fitrnet or compact, respectively.

    Tbl — Sample data
    table

    Sample data, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain an additional column for the response variable. Tbl must contain all of the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    • If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.

    • If you trained Mdl using sample data contained in a table, then the input data for loss must also be in a table.

    • If you set 'Standardize',true in fitrnet when training Mdl, then the software standardizes the numeric columns of the predictor data using the corresponding means and standard deviations.

    Data Types: table

    ResponseVarName — Response variable name
    name of variable in Tbl

    Response variable name, specified as the name of a variable in Tbl. The response variable must be a numeric vector.

    If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.

    Data Types: char | string

    Y — Response data
    numeric vector

    Response data, specified as a numeric vector. The length of Y must be equal to the number of observations in X or Tbl.

    Data Types: single | double

    X — Predictor data
    numeric matrix

    Predictor data, specified as a numeric matrix. By default, loss assumes that each row of X corresponds to one observation, and each column corresponds to one predictor variable.

    Note

    If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time.

    The length of Y and the number of observations in X must be equal.

    If you set 'Standardize',true in fitrnet when training Mdl, then the software standardizes the numeric columns of the predictor data using the corresponding means and standard deviations.

    Data Types: single | double

    Name-Value Arguments

    Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

    Example: loss(Mdl,Tbl,"Response","Weights","W") specifies to use the Response and W variables in the table Tbl as the response values and observation weights, respectively.

    LossFun — Loss function
    'mse' (default) | function handle

    Loss function, specified as 'mse' or a function handle.

    • 'mse' — Weighted mean squared error.

    • Function handle — To specify a custom loss function, use a function handle. The function must have this form:

      lossval = lossfun(Y,YFit,W)

      • The output argument lossval is a floating-point scalar.

      • You specify the function name (lossfun).

      • Y is a length n numeric vector of observed responses, where n is the number of observations in Tbl or X.

      • YFit is a length n numeric vector of corresponding predicted responses.

      • W is an n-by-1 numeric vector of observation weights.

    Example: 'LossFun',@lossfun

    Data Types: char | string | function_handle
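As a sketch of the required signature, the following hypothetical custom loss function computes the weighted mean absolute error. The function name meanAbsErr is illustrative, not part of the API; only the three-argument form lossval = lossfun(Y,YFit,W) is required.

```matlab
function lossval = meanAbsErr(Y,YFit,W)
% Custom loss: weighted mean absolute error.
%   Y    - length n vector of observed responses
%   YFit - length n vector of predicted responses
%   W    - n-by-1 vector of observation weights (normalized to sum to 1 by loss)
lossval = sum(W.*abs(Y - YFit));
end
```

You could then pass it to loss as, for example, loss(Mdl,tblTest,"Systolic","LossFun",@meanAbsErr).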

    ObservationsIn — Predictor data observation dimension
    'rows' (default) | 'columns'

    Predictor data observation dimension, specified as 'rows' or 'columns'.

    Note

    If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time. You cannot specify 'ObservationsIn','columns' for predictor data in a table.

    Data Types: char | string
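The two orientations produce the same loss; the columns orientation can be faster. A sketch, assuming a trained model Mdl with numeric predictor matrix X (observations in rows) and response vector Y:

```matlab
% Equivalent calls; X' places observations in columns.
Lrows = loss(Mdl,X,Y);                                 % default orientation
Lcols = loss(Mdl,X',Y,"ObservationsIn","columns");     % same value, potentially faster
```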

    Weights — Observation weights
    nonnegative numeric vector | name of variable in Tbl

    Observation weights, specified as a nonnegative numeric vector or the name of a variable in Tbl. The software weights each observation in X or Tbl with the corresponding value in Weights. The length of Weights must equal the number of observations in X or Tbl.

    If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector or string scalar. For example, if the weights vector W is stored as Tbl.W, then specify it as 'W'.

    By default, Weights is ones(n,1), where n is the number of observations in X or Tbl.

    If you supply weights, then loss computes the weighted regression loss and normalizes weights to sum to 1.

    Data Types: single | double | char | string
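Because the weights are normalized to sum to 1, scaling all weights by a constant does not change the reported loss. A sketch, assuming a trained model Mdl and a test table tblTest with response variable Systolic:

```matlab
n = height(tblTest);
w = ones(n,1);
w(1:10) = 5;                                          % up-weight the first 10 observations
L1 = loss(Mdl,tblTest,"Systolic","Weights",w);
L2 = loss(Mdl,tblTest,"Systolic","Weights",2*w);      % same value: weights are renormalized
```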

    Introduced in R2021a