
resubLoss

Resubstitution classification loss for discriminant analysis classifier

Description

L = resubLoss(Mdl) returns the resubstitution classification loss L for the trained discriminant analysis classifier Mdl, using the training data stored in Mdl.X and the corresponding true class labels stored in Mdl.Y. Resubstitution means that the loss is computed for the same data that fitcdiscr used to create Mdl.


L = resubLoss(___,LossFun=lossf) returns the resubstitution loss using a built-in or custom loss function.

The classification loss (L) is a resubstitution quality measure, and is returned as a numeric scalar. Its interpretation depends on the loss function (lossf), but in general, better classifiers yield smaller classification loss values.

Examples


Compute the resubstituted classification error for the Fisher iris data.

Create a classification model for the Fisher iris data.

load fisheriris
mdl = fitcdiscr(meas,species);

Compute the resubstituted classification error.

L = resubLoss(mdl)
L =
    0.0200
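
You can also request a different built-in loss for the same model. The following is a minimal sketch continuing with the mdl object created above; the particular loss choices are illustrative.

Llogit = resubLoss(mdl,LossFun="logit");       % logistic loss
Lcost = resubLoss(mdl,LossFun="classifcost");  % observed misclassification cost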

Input Arguments


Discriminant analysis classifier (Mdl), specified as a ClassificationDiscriminant model object trained with fitcdiscr.

Loss function (lossf), specified as a built-in loss function name or a function handle.

The following table describes the values for the built-in loss functions. Specify one using the corresponding character vector or string scalar.

Value             Description
"binodeviance"    Binomial deviance
"classifcost"     Observed misclassification cost
"classiferror"    Misclassified rate in decimal
"exponential"     Exponential loss
"hinge"           Hinge loss
"logit"           Logistic loss
"mincost"         Minimal expected misclassification cost (for classification scores that are posterior probabilities)
"quadratic"       Quadratic loss

"mincost" is appropriate for classification scores that are posterior probabilities. Discriminant analysis classifiers return posterior probabilities as classification scores by default (see predict).

Specify your own function using function handle notation; a sketch of such a function appears after the list below. Suppose that n is the number of observations in X and K is the number of distinct classes (numel(Mdl.ClassNames)). Your function must have the signature

lossvalue = lossfun(C,S,W,Cost)

where:

  • The output argument lossvalue is a scalar.

  • You specify the function name (lossfun).

  • C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames.

    Create C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.

  • S is an n-by-K numeric matrix of classification scores, similar to the score output of predict. The column order corresponds to the class order in Mdl.ClassNames.

  • W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.

  • Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.
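
As referenced above, here is a minimal sketch of a function with this signature. The function name mylossf and its logic (average misclassification cost of the highest-scoring class) are illustrative assumptions, not part of the documented interface.

function lossvalue = mylossf(C,S,W,Cost)
% Average cost of predicting the highest-scoring class for each observation.
% Cost(i,j) is treated as the cost of predicting class j when the true class is i.
[~,trueClass] = max(C,[],2);    % true class index per observation (C is logical)
[~,predicted] = max(S,[],2);    % predicted class index per observation
idx = sub2ind(size(Cost),trueClass,predicted);
lossvalue = sum(W .* Cost(idx));
end

You could then pass the handle as, for example, L = resubLoss(Mdl,LossFun=@mylossf).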

Example: LossFun="binodeviance"

Example: LossFun=@lossf

Data Types: char | string | function_handle


Version History

Introduced in R2011b
