
fitSVMPosterior

Fit posterior probabilities

Description

ScoreSVMModel = fitSVMPosterior(SVMModel) returns ScoreSVMModel, which is a trained support vector machine (SVM) classifier containing the optimal score-to-posterior-probability transformation function for two-class learning.

The software fits the appropriate score-to-posterior-probability transformation function by cross validation, using the SVM classifier SVMModel and the stored predictor data (SVMModel.X) and class labels (SVMModel.Y). The transformation function computes the posterior probability that an observation is classified into the positive class (SVMModel.ClassNames(2)).

  • If the classes are inseparable, then the transformation function is the sigmoid function.

  • If the classes are perfectly separable, the transformation function is the step function.

  • In two-class learning, if one of the two classes has a relative frequency of 0, then the transformation function is the constant function. fitSVMPosterior is not appropriate for one-class learning.

  • If SVMModel is a ClassificationSVM classifier, then the software estimates the optimal transformation function by 10-fold cross validation as outlined in [1]. Otherwise, SVMModel must be a ClassificationPartitionedModel classifier. SVMModel specifies the cross-validation method.

  • The software stores the optimal transformation function in ScoreSVMModel.ScoreTransform.


ScoreSVMModel = fitSVMPosterior(SVMModel,Tbl,ResponseVarName) returns a trained support vector classifier containing the transformation function from the trained, compact SVM classifier SVMModel. The software estimates the score transformation function using predictor data in the table Tbl and class labels Tbl.ResponseVarName.

ScoreSVMModel = fitSVMPosterior(SVMModel,Tbl,Y) returns a trained support vector classifier containing the transformation function from the trained, compact SVM classifier SVMModel. The software estimates the score transformation function using predictor data in the table Tbl and class labels Y.
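
For example, here is a minimal sketch of the table syntax, assuming a compact classifier trained on a table built from Fisher's iris data with the setosa species removed (the variable names are illustrative):

load fisheriris
idx = ~strcmp(species,'setosa');        % keep two classes for binary SVM learning
Tbl = array2table(meas(idx,:),'VariableNames',{'SL','SW','PL','PW'});
Tbl.Species = species(idx);
SVMModel = fitcsvm(Tbl,'Species');      % train on the table
CompactSVMModel = compact(SVMModel);    % compact model does not store Tbl
ScoreSVMModel = fitSVMPosterior(CompactSVMModel,Tbl,'Species');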

ScoreSVMModel = fitSVMPosterior(SVMModel,X,Y) returns a trained support vector classifier containing the transformation function from the trained, compact SVM classifier SVMModel. The software estimates the score transformation function using predictor data X and class labels Y.
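
Similarly, a minimal sketch of the matrix syntax, assuming a compact classifier trained on the ionosphere data set:

load ionosphere                                  % predictor matrix X, labels Y
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'});
CompactSVMModel = compact(SVMModel);             % compact model does not store X and Y
ScoreSVMModel = fitSVMPosterior(CompactSVMModel,X,Y);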


ScoreSVMModel = fitSVMPosterior(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments provided SVMModel is a ClassificationSVM classifier. For example, you can specify the number of folds to use in k-fold cross validation.
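
For instance, a minimal sketch of specifying 5-fold cross validation, assuming a full ClassificationSVM classifier trained on the ionosphere data set:

load ionosphere
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'});       % ClassificationSVM classifier
ScoreSVMModel = fitSVMPosterior(SVMModel,'KFold',5);  % 5 folds instead of the default 10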


[ScoreSVMModel,ScoreTransform] = fitSVMPosterior(___) additionally returns the transformation function parameters (ScoreTransform) using any of the input arguments in the previous syntaxes.


Examples


Load Fisher's iris data set. Remove the virginica species from the data, and use the petal lengths and widths as the predictors.

load fisheriris
classKeep = ~strcmp(species,'virginica');
X = meas(classKeep,3:4);
y = species(classKeep);

gscatter(X(:,1),X(:,2),y);
title('Scatter Diagram of Iris Measurements')
xlabel('Petal length')
ylabel('Petal width')
legend('Setosa','Versicolor')

Figure: Scatter Diagram of Iris Measurements, showing petal width versus petal length for the setosa and versicolor classes.

The classes are perfectly separable. Therefore, the score transformation function is a step function.

Train an SVM classifier using the data. Cross validate the classifier using 10-fold cross validation (the default).

rng(1);
CVSVMModel = fitcsvm(X,y,'CrossVal','on');

CVSVMModel is a trained ClassificationPartitionedModel SVM classifier.

Estimate the step function that transforms scores to posterior probabilities.

[ScoreCVSVMModel,ScoreParameters] = fitSVMPosterior(CVSVMModel);
Warning: Classes are perfectly separated. The optimal score-to-posterior transformation is a step function.

fitSVMPosterior does the following:

  • Uses the data that the software stored in CVSVMModel to fit the transformation function

  • Warns whenever the classes are separable

  • Stores the step function in ScoreCVSVMModel.ScoreTransform

Display the score function type and its parameter values.

ScoreParameters
ScoreParameters = struct with fields:
                        Type: 'step'
                  LowerBound: -0.8431
                  UpperBound: 0.6897
    PositiveClassProbability: 0.5000

ScoreParameters is a structure array with four fields:

  • The score transformation function type (Type)

  • The score corresponding to the negative class boundary (LowerBound)

  • The score corresponding to the positive class boundary (UpperBound)

  • The positive class probability (PositiveClassProbability)

Because the classes are separable, the step function transforms scores below LowerBound to a posterior probability of 0, scores above UpperBound to 1, and scores within the interval to PositiveClassProbability (0.5). The posterior probability is the probability that an observation is a versicolor iris.

Load the ionosphere data set.

load ionosphere

The classes of this data set are not separable.

Train an SVM classifier. Cross validate using 10-fold cross validation (the default). It is good practice to standardize the predictors and specify the class order.

rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true,...
    'CrossVal','on');
ScoreTransform = CVSVMModel.ScoreTransform
ScoreTransform = 
'none'

CVSVMModel is a trained ClassificationPartitionedModel SVM classifier. The positive class is 'g'. The ScoreTransform property is 'none'.

Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as 'g'.

[ScoreCVSVMModel,ScoreParameters] = fitSVMPosterior(CVSVMModel);
ScoreTransform = ScoreCVSVMModel.ScoreTransform
ScoreTransform = 
'@(S)sigmoid(S,-9.481373e-01,-1.218931e-01)'
ScoreParameters
ScoreParameters = struct with fields:
         Type: 'sigmoid'
        Slope: -0.9481
    Intercept: -0.1219

ScoreTransform is the optimal score transformation function. ScoreParameters contains the score transformation function type, the slope estimate, and the intercept estimate.

You can estimate test-sample posterior probabilities by passing ScoreCVSVMModel to kfoldPredict.

Estimate positive class posterior probabilities for the test set of an SVM algorithm.

Load the ionosphere data set.

load ionosphere

Train an SVM classifier. Specify a 20% holdout sample. It is good practice to standardize the predictors and specify the class order.

rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Holdout',0.2,'Standardize',true,...
    'ClassNames',{'b','g'});

CVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier.

Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as 'g'.

ScoreCVSVMModel = fitSVMPosterior(CVSVMModel);

ScoreCVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier containing the optimal score transformation function estimated from the training data.

Estimate the out-of-sample positive class posterior probabilities. Display the results for the first 10 out-of-sample observations.

[~,OOSPostProbs] = kfoldPredict(ScoreCVSVMModel);
indx = ~isnan(OOSPostProbs(:,2));
hoObs = find(indx); % Holdout observation numbers
OOSPostProbs = [hoObs, OOSPostProbs(indx,2)];
table(OOSPostProbs(1:10,1),OOSPostProbs(1:10,2),...
    'VariableNames',{'ObservationIndex','PosteriorProbability'})
ans=10×2 table
    ObservationIndex    PosteriorProbability
    ________________    ____________________

            6                   0.17375     
            7                   0.89638     
            8                 0.0076573     
            9                   0.91602     
           16                  0.026709     
           22                4.6069e-06     
           23                   0.90241     
           24                2.4119e-06     
           38                0.00042666     
           41                   0.86429     

Input Arguments


SVMModel

Trained SVM classifier, specified as a ClassificationSVM, CompactClassificationSVM, or ClassificationPartitionedModel classifier.

If SVMModel is a ClassificationSVM classifier, then you can set optional name-value pair arguments.

If SVMModel is a CompactClassificationSVM classifier, then you must input predictor data X and class labels Y.

Tbl

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all of the predictors used to train SVMModel. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable used to train SVMModel, then you do not need to specify ResponseVarName or Y.

If you trained SVMModel using sample data contained in a table, then the input data for fitSVMPosterior must also be in a table.

If you set 'Standardize',true in fitcsvm when training SVMModel, then the software standardizes the columns of the predictor data using the corresponding means in SVMModel.Mu and the standard deviations in SVMModel.Sigma.

Data Types: table

X

Predictor data used to estimate the score-to-posterior-probability transformation function, specified as a matrix.

Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature).

The length of Y and the number of rows in X must be equal.

If you set 'Standardize',true in fitcsvm when training SVMModel, then the software fits the transformation function parameter estimates using standardized data.

Data Types: double | single

ResponseVarName

Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train SVMModel, then you do not need to specify ResponseVarName.

If you specify ResponseVarName, then you must do so as a character vector or string scalar. For example, if the response variable is stored as Tbl.Response, then specify ResponseVarName as 'Response'. Otherwise, the software treats all columns of Tbl, including Tbl.Response, as predictors.

The response variable must be a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string

Y

Class labels used to estimate the score-to-posterior-probability transformation function, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors.

If Y is a character array, then each element must correspond to one class label.

The length of Y and the number of rows in X must be equal.

Data Types: categorical | char | string | logical | single | double | cell

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'KFold',8 performs 8-fold cross validation when SVMModel is a ClassificationSVM classifier.

Cross-validation partition used to compute the transformation function, specified as the comma-separated pair consisting of 'CVPartition' and a cvpartition partition object as created by cvpartition. You can use only one of these four options at a time for creating a cross-validated model: 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'.

The crossval name-value pair argument of fitcsvm splits the data into subsets using cvpartition.

Example: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'KFold',5). Then, you can specify the cross-validated model by using 'CVPartition',cvp.
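
A minimal sketch of passing an existing partition to fitSVMPosterior, assuming the ionosphere data set:

load ionosphere
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'});
cvp = cvpartition(Y,'KFold',5);                               % stratified 5-fold partition
ScoreSVMModel = fitSVMPosterior(SVMModel,'CVPartition',cvp);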

Fraction of the data for holdout validation used to compute the transformation function, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range (0,1). Holdout validation tests the specified fraction of the data and uses the remaining data for training.

You can use only one of these four options at a time for creating a cross-validated model: 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'.

Example: 'Holdout',0.1

Data Types: double | single

Number of folds to use when computing the transformation function, specified as the comma-separated pair consisting of 'KFold' and a positive integer value greater than 1.

You can use only one of these four options at a time for creating a cross-validated model: 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'.

Example: 'KFold',8

Data Types: single | double

Leave-one-out cross-validation flag indicating whether to use leave-one-out cross-validation to compute the transformation function, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. Use leave-one-out cross-validation by specifying 'Leaveout','on'.

You can use only one of these four options at a time for creating a cross-validated model: 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'.

Example: 'Leaveout','on'

Output Arguments


ScoreSVMModel

Trained SVM classifier containing the estimated score transformation function, returned as a ClassificationSVM, CompactClassificationSVM, or ClassificationPartitionedModel classifier.

The ScoreSVMModel classifier type is the same as the SVMModel classifier type.

To estimate posterior probabilities, pass ScoreSVMModel and predictor data to predict. If you set 'Standardize',true in fitcsvm to train SVMModel, then predict standardizes the columns of X using the corresponding means in SVMModel.Mu and standard deviations in SVMModel.Sigma.

ScoreTransform

Optimal score-to-posterior-probability transformation function parameters, returned as a structure array. If the field Type is:

  • sigmoid, then ScoreTransform has these fields:

    • Slope — The value of A in the sigmoid function

    • Intercept — The value of B in the sigmoid function

  • step, then ScoreTransform has these fields:

    • PositiveClassProbability: the value of π in the step function. π represents:

      • The probability that an observation is in the positive class.

      • The posterior probability that a score is in the interval (LowerBound,UpperBound).

    • LowerBound: the value $\max_{y_n=-1} s_n$ in the step function. It represents the lower bound of the score interval to which the software assigns the positive class posterior probability PositiveClassProbability. Any observation with a score less than LowerBound has a positive class posterior probability of 0.

    • UpperBound: the value $\min_{y_n=+1} s_n$ in the step function. It represents the upper bound of the score interval to which the software assigns the positive class posterior probability PositiveClassProbability. Any observation with a score greater than UpperBound has a positive class posterior probability of 1.

  • constant, then ScoreTransform.PredictedClass contains the name of the class prediction.

    This result is the same as SVMModel.ClassNames. The posterior probability of an observation being in ScoreTransform.PredictedClass is always 1.

More About


Sigmoid Function

The sigmoid function that maps the score $s_j$ corresponding to observation j to the positive class posterior probability is

$$P(s_j) = \frac{1}{1 + \exp(A s_j + B)}.$$

If the value of the Type field of ScoreTransform is sigmoid, then the parameters A and B correspond to the fields Slope and Intercept of ScoreTransform, respectively.
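
For illustration, here is a minimal sketch of evaluating the fitted sigmoid by hand, assuming ScoreParameters is the structure returned in the ionosphere example above and s is an arbitrary score value:

A = ScoreParameters.Slope;          % slope estimate, for example -0.9481
B = ScoreParameters.Intercept;      % intercept estimate, for example -0.1219
s = 1.5;                            % an arbitrary raw SVM score (illustrative)
posterior = 1./(1 + exp(A*s + B))   % positive class posterior probability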

Step Function

The step function that maps the score $s_j$ corresponding to observation j to the positive class posterior probability is

$$P(s_j) = \begin{cases} 0; & s_j < \max\limits_{y_k=-1} s_k \\ \pi; & \max\limits_{y_k=-1} s_k \le s_j \le \min\limits_{y_k=+1} s_k \\ 1; & s_j > \min\limits_{y_k=+1} s_k, \end{cases}$$

where:

  • $s_j$ is the score of observation j.

  • +1 and –1 denote the positive and negative classes, respectively.

  • π is the prior probability that an observation is in the positive class.

If the value of the Type field of ScoreTransform is step, then the quantities $\max_{y_k=-1} s_k$ and $\min_{y_k=+1} s_k$ correspond to the fields LowerBound and UpperBound of ScoreTransform, respectively.
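
Likewise, a minimal sketch of evaluating the step transformation by hand, assuming ScoreParameters is the structure returned in the separable (iris) example above and s is an arbitrary score value:

lb = ScoreParameters.LowerBound;                 % maximum negative class score
ub = ScoreParameters.UpperBound;                 % minimum positive class score
p  = ScoreParameters.PositiveClassProbability;
s  = 0.25;                                       % an arbitrary raw SVM score (illustrative)
posterior = (s > ub) + p*(s >= lb & s <= ub)     % 0 below lb, p on [lb,ub], 1 above ub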

Constant Function

The constant function maps all scores in a sample to posterior probabilities 1 or 0.

If all observations have posterior probability 1, then they are expected to come from the positive class.

If all observations have posterior probability 0, then they are not expected to come from the positive class.

Tips

  • The following process describes one way to predict positive class posterior probabilities.

    1. Train an SVM classifier by passing the data to fitcsvm. The result is a trained SVM classifier, such as SVMModel, that stores the data. The software sets the score transformation function property (SVMModel.ScoreTransform) to 'none'.

    2. Pass the trained SVM classifier SVMModel to fitSVMPosterior or fitPosterior. The result, such as ScoreSVMModel, is the same trained SVM classifier as SVMModel, except that the software sets ScoreSVMModel.ScoreTransform to the optimal score transformation function.

    3. Pass the predictor data matrix and the trained SVM classifier containing the optimal score transformation function (ScoreSVMModel) to predict. The second column of the second output argument of predict stores the positive class posterior probabilities corresponding to each row of the predictor data matrix (see the sketch after this list).

      If you skip step 2, then predict returns the positive class score rather than the positive class posterior probability.

  • After fitting posterior probabilities, you can generate C/C++ code that predicts labels for new data. Generating C/C++ code requires MATLAB® Coder™. For details, see Introduction to Code Generation.
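
A minimal sketch of this three-step workflow, assuming the ionosphere data set:

load ionosphere
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'});  % step 1: ScoreTransform is 'none'
ScoreSVMModel = fitSVMPosterior(SVMModel);       % step 2: fit the optimal transformation
[~,postProbs] = predict(ScoreSVMModel,X);        % step 3: second output contains posteriors
postProbs(1:5,2)                                 % positive class ('g') posterior probabilities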

Algorithms

If you re-estimate the score-to-posterior-probability transformation function, that is, if you pass an SVM classifier to fitPosterior or fitSVMPosterior and its ScoreTransform property is not none, then the software:

  • Displays a warning

  • Resets the original transformation function to 'none' before estimating the new one

References

[1] Platt, J. “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods”. In: Advances in Large Margin Classifiers. Cambridge, MA: The MIT Press, 2000, pp. 61–74.

Version History

Introduced in R2014a