
# predict

Classify observations using multiclass error-correcting output codes (ECOC) model

## Syntax

`label = predict(Mdl,X)`

`label = predict(Mdl,X,Name,Value)`

`[label,NegLoss,PBScore] = predict(___)`

`[label,NegLoss,PBScore,Posterior] = predict(___)`

## Description


`label = predict(Mdl,X)` returns a vector of predicted class labels (`label`) for the predictor data in the table or matrix `X`, based on the trained multiclass error-correcting output codes (ECOC) model `Mdl`. The trained ECOC model can be either full or compact.


`label = predict(Mdl,X,Name,Value)` uses additional options specified by one or more name-value pair arguments. For example, you can specify the posterior probability estimation method, decoding scheme, and verbosity level.


`[label,NegLoss,PBScore] = predict(___)` uses any of the input argument combinations in the previous syntaxes and additionally returns:

• An array of negated average binary losses (`NegLoss`). For each observation in `X`, `predict` assigns the label of the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

• An array of positive-class scores (`PBScore`) for the observations classified by each binary learner.


`[label,NegLoss,PBScore,Posterior] = predict(___)` additionally returns posterior class probability estimates for the observations (`Posterior`). To obtain posterior class probabilities, you must set `'FitPosterior',true` when training the ECOC model using `fitcecoc`. Otherwise, `predict` throws an error.
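As a minimal sketch (the data set and variable names are illustrative choices, not part of the syntax), training with `'FitPosterior',true` makes the fourth output available:

```
load fisheriris                                   % illustrative data set
Mdl = fitcecoc(meas,species,'FitPosterior',true); % required for Posterior
[label,NegLoss,PBScore,Posterior] = predict(Mdl,meas(1:3,:));
```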

## Examples


### Predict Test-Sample Labels Using ECOC Model

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y);
rng(1); % For reproducibility
```

Train an ECOC model using SVM binary classifiers. Specify a 30% holdout sample, standardize the predictors using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',true);
PMdl = fitcecoc(X,Y,'Holdout',0.30,'Learners',t,'ClassNames',classOrder);
Mdl = PMdl.Trained{1}; % Extract trained, compact classifier
```

`PMdl` is a `ClassificationPartitionedECOC` model. It has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` model that the software trained using the training set.

Predict the test-sample labels. Print a random subset of true and predicted labels.

```
testInds = test(PMdl.Partition); % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
labels = predict(Mdl,XTest);
idx = randsample(sum(testInds),10);
table(YTest(idx),labels(idx),...
    'VariableNames',{'TrueLabels','PredictedLabels'})
```
```
ans=10×2 table
    TrueLabels    PredictedLabels
    __________    _______________

    setosa        setosa
    versicolor    virginica
    setosa        setosa
    virginica     virginica
    versicolor    versicolor
    setosa        setosa
    virginica     virginica
    virginica     virginica
    setosa        setosa
    setosa        setosa
```

`Mdl` correctly labels all except one of the test-sample observations with indices `idx`.

### Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order
rng(1); % For reproducibility
```

Train an ECOC model using SVM binary classifiers and specify a 30% holdout sample. Standardize the predictors using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',true);
PMdl = fitcecoc(X,Y,'Holdout',0.30,'Learners',t,'ClassNames',classOrder);
Mdl = PMdl.Trained{1}; % Extract trained, compact classifier
```

`PMdl` is a `ClassificationPartitionedECOC` model. It has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` model that the software trained using the training set.

SVM scores are signed distances from the observation to the decision boundary. Therefore, the score domain is $(-\infty,\infty)$. Create a custom binary loss function that does the following:

• Map the coding design matrix (M) and positive-class classification scores (s) for each learner to the binary loss for each observation.

• Use linear loss.

• Aggregate the binary learner loss using the median.

You can create a separate function for the binary loss function, and then save it on the MATLAB® path. Or, you can specify an anonymous binary loss function. In this case, create a function handle (`customBL`) to an anonymous binary loss function.

`customBL = @(M,s)nanmedian(1 - bsxfun(@times,M,s),2)/2;`
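On R2016b or later (an assumption about your release), an equivalent definition can use implicit expansion and `median` with `'omitnan'` in place of `bsxfun` and `nanmedian`:

```
% Equivalent on R2016b or later: implicit expansion replaces bsxfun,
% and median(...,'omitnan') replaces nanmedian.
customBL = @(M,s) median(1 - M.*s,2,'omitnan')/2;
```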

Predict the test-sample labels and estimate the median binary loss per class. Print the median negated binary losses per class for a random set of 10 test-sample observations.

```
testInds = test(PMdl.Partition); % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
[label,NegLoss] = predict(Mdl,XTest,'BinaryLoss',customBL);
idx = randsample(sum(testInds),10);
classOrder
```
```
classOrder = 3x1 categorical array
     setosa
     versicolor
     virginica
```
```
table(YTest(idx),label(idx),NegLoss(idx,:),'VariableNames',...
    {'TrueLabel','PredictedLabel','NegLoss'})
```
```
ans=10×3 table
    TrueLabel     PredictedLabel                NegLoss
    __________    ______________    _________________________________

    setosa        versicolor         0.18578      1.9878      -3.6736
    versicolor    virginica          -1.3316    -0.12355    -0.044843
    setosa        versicolor         0.13896       1.926       -3.565
    virginica     virginica          -1.5132    -0.38271      0.39588
    versicolor    versicolor        -0.87218     0.74736      -1.3752
    setosa        versicolor         0.48406      1.9977      -3.9818
    virginica     virginica          -1.9362    -0.67541       1.1117
    virginica     virginica          -1.5788    -0.83318      0.91194
    setosa        versicolor         0.51021      2.1212      -4.1314
    setosa        versicolor         0.36128      2.0596      -3.9209
```

The order of the columns corresponds to the elements of `classOrder`. The software predicts the label based on the maximum negated loss. The results indicate that the median of the linear losses might not perform as well as other losses.

### Estimate Posterior Probabilities Using ECOC Classifier

Train an ECOC classifier using SVM binary learners. First predict the training-sample labels and class posterior probabilities. Then predict the maximum class posterior probability at each point in a grid. Visualize the results.

Load Fisher's iris data set. Specify the petal dimensions as the predictors and the species names as the response.

```
load fisheriris
X = meas(:,3:4);
Y = species;
rng(1); % For reproducibility
```

Create an SVM template. Standardize the predictors, and specify the Gaussian kernel.

`t = templateSVM('Standardize',true,'KernelFunction','gaussian');`

`t` is an SVM template. Most of its properties are empty. When the software trains the ECOC classifier, it sets the applicable properties to their default values.

Train the ECOC classifier using the SVM template. Transform classification scores to class posterior probabilities (which are returned by `predict` or `resubPredict`) using the `'FitPosterior'` name-value pair argument. Specify the class order using the `'ClassNames'` name-value pair argument. Display diagnostic messages during training by using the `'Verbose'` name-value pair argument.

```
Mdl = fitcecoc(X,Y,'Learners',t,'FitPosterior',true,...
    'ClassNames',{'setosa','versicolor','virginica'},...
    'Verbose',2);
```
```
Training binary learner 1 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 2
Positive class indices: 1

Fitting posterior probabilities for learner 1 (SVM).
Training binary learner 2 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 3
Positive class indices: 1

Fitting posterior probabilities for learner 2 (SVM).
Training binary learner 3 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 3
Positive class indices: 2

Fitting posterior probabilities for learner 3 (SVM).
```

`Mdl` is a `ClassificationECOC` model. The same SVM template applies to each binary learner, but you can adjust options for each binary learner by passing in a cell vector of templates.

Predict the training-sample labels and class posterior probabilities. Display diagnostic messages during the computation of labels and class posterior probabilities by using the `'Verbose'` name-value pair argument.

`[label,~,~,Posterior] = resubPredict(Mdl,'Verbose',1);`
```
Predictions from all learners have been computed.
Loss for all observations has been computed.
Computing posterior probabilities...
```
`Mdl.BinaryLoss`
```
ans =
    'quadratic'
```

The software assigns an observation to the class that yields the smallest average binary loss. Because all binary learners are computing posterior probabilities, the binary loss function is `quadratic`.

Display a random set of results.

```
idx = randsample(size(X,1),10,1);
Mdl.ClassNames
```
```
ans = 3x1 cell array
    {'setosa'    }
    {'versicolor'}
    {'virginica' }
```
```
table(Y(idx),label(idx),Posterior(idx,:),...
    'VariableNames',{'TrueLabel','PredLabel','Posterior'})
```
```
ans=10×3 table
      TrueLabel        PredLabel                    Posterior
    ______________    ______________    ______________________________________

    {'virginica' }    {'virginica' }     0.0039316     0.0039864       0.99208
    {'virginica' }    {'virginica' }      0.017065      0.018261       0.96467
    {'virginica' }    {'virginica' }      0.014946      0.015854        0.9692
    {'versicolor'}    {'versicolor'}    2.2197e-14       0.87318       0.12682
    {'setosa'    }    {'setosa'    }         0.999    0.00025091     0.0007464
    {'versicolor'}    {'virginica' }    2.2195e-14      0.059423       0.94058
    {'versicolor'}    {'versicolor'}    2.2194e-14       0.97002      0.029983
    {'setosa'    }    {'setosa'    }         0.999    0.00024989    0.00074741
    {'versicolor'}    {'versicolor'}     0.0085637       0.98259     0.0088481
    {'setosa'    }    {'setosa'    }         0.999    0.00025012    0.00074719
```

The columns of `Posterior` correspond to the class order of `Mdl.ClassNames`.

Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.

```
xMax = max(X);
xMin = min(X);
x1Pts = linspace(xMin(1),xMax(1));
x2Pts = linspace(xMin(2),xMax(2));
[x1Grid,x2Grid] = meshgrid(x1Pts,x2Pts);
[~,~,~,PosteriorRegion] = predict(Mdl,[x1Grid(:),x2Grid(:)]);
```

For each coordinate on the grid, plot the maximum class posterior probability among all classes.

```
contourf(x1Grid,x2Grid,...
    reshape(max(PosteriorRegion,[],2),size(x1Grid,1),size(x1Grid,2)));
h = colorbar;
h.YLabel.String = 'Maximum posterior';
h.YLabel.FontSize = 15;
hold on
gh = gscatter(X(:,1),X(:,2),Y,'krk','*xd',8);
gh(2).LineWidth = 2;
gh(3).LineWidth = 2;
title('Iris Petal Measurements and Maximum Posterior')
xlabel('Petal length (cm)')
ylabel('Petal width (cm)')
axis tight
legend(gh,'Location','NorthWest')
hold off
```

### Estimate Posterior Probabilities Using Parallel Computing

Train a multiclass ECOC model and estimate posterior probabilities using parallel computing.

Load the `arrhythmia` data set. Examine the response data `Y`, and determine the number of classes.

```
load arrhythmia
Y = categorical(Y);
tabulate(Y)
```
```
  Value    Count    Percent
      1      245     54.20%
      2       44      9.73%
      3       15      3.32%
      4       15      3.32%
      5       13      2.88%
      6       25      5.53%
      7        3      0.66%
      8        2      0.44%
      9        9      1.99%
     10       50     11.06%
     14        4      0.88%
     15        5      1.11%
     16       22      4.87%
```
`K = numel(unique(Y));`

Several classes are not represented in the data, and many of the other classes have low relative frequencies.

Specify an ensemble learning template that uses the GentleBoost method and 50 weak classification tree learners.

`t = templateEnsemble('GentleBoost',50,'Tree');`

`t` is a template object. Most of its properties are empty (`[]`). The software uses default values for all empty properties during training.

Because the response variable contains many classes, specify a sparse random coding design.

```
rng(1); % For reproducibility
Coding = designecoc(K,'sparserandom');
```

Train an ECOC model using parallel computing. Specify a 15% holdout sample, and fit posterior probabilities.

`pool = parpool; % Invokes workers`
```
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 6).
```
```
options = statset('UseParallel',true);
PMdl = fitcecoc(X,Y,'Learners',t,'Options',options,'Coding',Coding,...
    'FitPosterior',true,'Holdout',0.15);
Mdl = PMdl.Trained{1}; % Extract trained, compact classifier
```

`PMdl` is a `ClassificationPartitionedECOC` model. It has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` model that the software trained using the training set.

The pool invokes six workers, although the number of workers might vary among systems.

Estimate posterior probabilities, and display the posterior probability of being classified as not having arrhythmia (class 1) given the data for a random set of test-sample observations.

```
testInds = test(PMdl.Partition); % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
[~,~,~,posterior] = predict(Mdl,XTest,'Options',options);
idx = randsample(sum(testInds),10);
table(idx,YTest(idx),posterior(idx,1),...
    'VariableNames',{'TestSampleIndex','TrueLabel','PosteriorNoArrhythmia'})
```
```
ans=10×3 table
    TestSampleIndex    TrueLabel    PosteriorNoArrhythmia
    _______________    _________    _____________________

          11               6               0.60631
          41               4               0.23674
          51               2               0.13802
          33              10               0.43831
          12               1               0.94332
           8               1               0.97278
          37               1               0.62807
          24              10               0.96876
          56              16               0.29375
          30               1               0.64512
```

## Input Arguments


`Mdl` — Full or compact multiclass ECOC model, specified as a `ClassificationECOC` or `CompactClassificationECOC` model object.

To create a full or compact ECOC model, see `ClassificationECOC` or `CompactClassificationECOC`.

`X` — Predictor data to be classified, specified as a numeric matrix or table.

Each row of `X` corresponds to one observation, and each column corresponds to one variable.

• For a numeric matrix:

• The variables that constitute the columns of `X` must have the same order as the predictor variables that train `Mdl`.

• If you train `Mdl` using a table (for example, `Tbl`), then `X` can be a numeric matrix if `Tbl` contains all numeric predictor variables. To treat numeric predictors in `Tbl` as categorical during training, identify categorical predictors using the `CategoricalPredictors` name-value pair argument of `fitcecoc`. If `Tbl` contains heterogeneous predictor variables (for example, numeric and categorical data types) and `X` is a numeric matrix, then `predict` throws an error.

• For a table:

• `predict` does not support multicolumn variables and cell arrays other than cell arrays of character vectors.

• If you train `Mdl` using a table (for example, `Tbl`), then all predictor variables in `X` must have the same variable names and data types as the predictor variables that train `Mdl` (stored in `Mdl.PredictorNames`). However, the column order of `X` does not need to correspond to the column order of `Tbl`. Both `Tbl` and `X` can contain additional variables (response variables, observation weights, and so on), but `predict` ignores them. (See the sketch after this list.)

• If you train `Mdl` using a numeric matrix, then the predictor names in `Mdl.PredictorNames` and the corresponding predictor variable names in `X` must be the same. To specify predictor names during training, see the `PredictorNames` name-value pair argument of `fitcecoc`. All predictor variables in `X` must be numeric vectors. `X` can contain additional variables (response variables, observation weights, and so on), but `predict` ignores them.
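For illustration, a minimal sketch of table-based prediction; the data set and the variable names `SL`, `SW`, `PL`, and `PW` are assumptions chosen for the example. The point is that the new table's column order can differ from the training table's:

```
load fisheriris
Tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
Tbl.Species = categorical(species);
Mdl = fitcecoc(Tbl,'Species');
% The column order of the new table does not need to match Tbl:
labelNew = predict(Mdl,Tbl(1:5,{'PW','PL','SL','SW'}));
```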

### Note

If `Mdl.BinaryLearners` contains linear or kernel classification models (`ClassificationLinear` or `ClassificationKernel` model objects), then you cannot specify sample data in a table. Instead, pass a matrix of predictor data.

If you set `'Standardize',true` for a template object in the `'Learners'` name-value pair argument of `fitcecoc` when training `Mdl`, then for the corresponding binary learner `j`, the software standardizes the columns of the new predictor data using the corresponding means in `Mdl.BinaryLearners{j}.Mu` and standard deviations in `Mdl.BinaryLearners{j}.Sigma`.
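A sketch of that standardization (not the internal implementation), assuming a trained model `Mdl` and new predictor data `X` in scope:

```
% Standardization applied for binary learner j when the template
% set 'Standardize',true (sketch; assumes Mdl and X exist):
j = 1;                                % one binary learner, for illustration
mu = Mdl.BinaryLearners{j}.Mu;        % training-set means
sigma = Mdl.BinaryLearners{j}.Sigma;  % training-set standard deviations
XStd = (X - mu)./sigma;               % implicit expansion (R2016b or later)
```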

Data Types: `table` | `double` | `single`

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `predict(Mdl,X,'BinaryLoss','quadratic','Decoding','lossbased')` specifies a quadratic binary learner loss function and a loss-based decoding scheme for aggregating the binary losses.

Binary learner loss function, specified as the comma-separated pair consisting of `'BinaryLoss'` and a built-in loss function name or function handle.

• This table describes the built-in functions, where $y_j$ is a class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation $j$, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `'binodeviance'` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| `'exponential'` | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| `'hamming'` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| `'hinge'` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| `'linear'` | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| `'logit'` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |
| `'quadratic'` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, for example `customFunction`, specify its function handle `'BinaryLoss',@customFunction`.

`customFunction` has this form:

`bLoss = customFunction(M,s)`
where:

• `M` is the K-by-L coding matrix stored in `Mdl.CodingMatrix`.

• `s` is the 1-by-L row vector of classification scores.

• `bLoss` is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.

• K is the number of classes.

• L is the number of binary learners.

For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
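To make the aggregation concrete, the following sketch recomputes the negated average binary loss by hand for one observation. It assumes an SVM-based ECOC model `Mdl` (so the default binary loss is hinge), a test matrix `XTest`, and the default loss-weighted decoding; it is illustrative, not the internal implementation:

```
% Sketch: recompute NegLoss for one observation (assumes Mdl and XTest
% exist, and that each binary learner's positive-class score is the
% second score column).
x = XTest(1,:);
M = Mdl.CodingMatrix;                % K-by-L coding matrix
L = numel(Mdl.BinaryLearners);
s = zeros(1,L);
for j = 1:L
    [~,sc] = predict(Mdl.BinaryLearners{j},x);
    s(j) = sc(2);                    % positive-class score of learner j
end
g = max(0,1 - M.*s)/2;               % hinge loss g(m_kj,s_j), K-by-L
negLoss = -sum(abs(M).*g,2)./sum(abs(M),2) % one value per class
```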

The default `BinaryLoss` value depends on the score ranges returned by the binary learners. This table describes some default `BinaryLoss` values based on the given assumptions.

| Assumption | Default Value |
| --- | --- |
| All binary learners are SVMs, or linear or kernel classification models of SVM learners. | `'hinge'` |
| All binary learners are ensembles trained by `AdaboostM1` or `GentleBoost`. | `'exponential'` |
| All binary learners are ensembles trained by `LogitBoost`. | `'binodeviance'` |
| All binary learners are linear or kernel classification models of logistic regression learners, or you specify to predict class posterior probabilities by setting `'FitPosterior',true` in `fitcecoc`. | `'quadratic'` |

To check the default value, use dot notation to display the `BinaryLoss` property of the trained model at the command line.

Example: `'BinaryLoss','binodeviance'`

Data Types: `char` | `string` | `function_handle`

Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of `'Decoding'` and `'lossweighted'` or `'lossbased'`. For more information, see Binary Loss.

Example: `'Decoding','lossbased'`

Number of random initial values for fitting posterior probabilities by Kullback-Leibler divergence minimization, specified as the comma-separated pair consisting of `'NumKLInitializations'` and a nonnegative integer scalar.

The software uses `NumKLInitializations` only if you request the fourth output argument (`Posterior`) and set `'PosteriorMethod','kl'` (the default). Otherwise, the software ignores the value of `NumKLInitializations`.

For more details, see Posterior Estimation Using Kullback-Leibler Divergence.

Example: `'NumKLInitializations',5`

Data Types: `single` | `double`

Predictor data observation dimension, specified as the comma-separated pair consisting of `'ObservationsIn'` and `'columns'` or `'rows'`. This argument applies only when `Mdl.BinaryLearners` contains `ClassificationLinear` models.

### Note

If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, you can experience a significant reduction in execution time.
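For instance, a minimal sketch, assuming `MdlLinear` is an ECOC model whose binary learners are `ClassificationLinear` models (for example, trained with `'Learners','linear'`):

```
% Sketch: pass observations as columns to reduce execution time
% (assumes MdlLinear and a numeric predictor matrix X exist).
XTrans = X'; % p-by-n matrix: observations as columns
label = predict(MdlLinear,XTrans,'ObservationsIn','columns');
```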

Estimation options, specified as the comma-separated pair consisting of `'Options'` and a structure array returned by `statset`.

To invoke parallel computing:

• You need a Parallel Computing Toolbox™ license.

• Specify `'Options',statset('UseParallel',true)`.

Posterior probability estimation method, specified as the comma-separated pair consisting of `'PosteriorMethod'` and `'kl'` or `'qp'`.

• If `PosteriorMethod` is `'kl'`, then the software estimates multiclass posterior probabilities by minimizing the Kullback-Leibler divergence between the predicted and expected posterior probabilities returned by binary learners. For details, see Posterior Estimation Using Kullback-Leibler Divergence.

• If `PosteriorMethod` is `'qp'`, then the software estimates multiclass posterior probabilities by solving a least-squares problem using quadratic programming. You need an Optimization Toolbox™ license to use this option. For details, see Posterior Estimation Using Quadratic Programming.

• If you do not request the fourth output argument (`Posterior`), then the software ignores the value of `PosteriorMethod`.

Example: `'PosteriorMethod','qp'`

Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and `0` or `1`. `Verbose` controls the number of diagnostic messages that the software displays in the Command Window.

If `Verbose` is `0`, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages.

Example: `'Verbose',1`

Data Types: `single` | `double`

## Output Arguments

collapse all

`label` — Predicted class labels, returned as a categorical, character, logical, or numeric array, or a cell array of character vectors. The software predicts the classification of an observation by assigning the observation to the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

`label` has the same data type as the class labels used to train `Mdl` and has the same number of rows as `X`. (The software treats string arrays as cell arrays of character vectors.)

If `Mdl.BinaryLearners` contains `ClassificationLinear` models, then `label` is an m-by-L matrix, where m is the number of observations in `X`, and L is the number of regularization strengths in the linear classification models (`numel(Mdl.BinaryLearners{1}.Lambda)`). The value `label(i,j)` is the predicted label of observation `i` for the model trained using regularization strength `Mdl.BinaryLearners{1}.Lambda(j)`.

Otherwise, `label` is a column vector of length m.

`NegLoss` — Negated average binary losses, returned as a numeric matrix or array.

• If `Mdl.BinaryLearners` contains `ClassificationLinear` models, then `NegLoss` is an m-by-K-by-L array.

• m is the number of observations in `X`.

• K is the number of distinct classes in the training data (`numel(Mdl.ClassNames)`).

• L is the number of regularization strengths in the linear classification models (`numel(Mdl.BinaryLearners{1}.Lambda)`).

`NegLoss(i,k,j)` is the negated average binary loss for observation `i`, corresponding to class `Mdl.ClassNames(k)`, for the model trained using regularization strength `Mdl.BinaryLearners{1}.Lambda(j)`.

• Otherwise, `NegLoss` is an m-by-K matrix.

`PBScore` — Positive-class scores for each binary learner, returned as a numeric matrix or array.

• If `Mdl.BinaryLearners` contains `ClassificationLinear` models, then `PBScore` is an m-by-B-by-L array.

• m is the number of observations in `X`.

• B is the number of binary learners (`numel(Mdl.BinaryLearners)`).

• L is the number of regularization strengths in the linear classification models (`numel(Mdl.BinaryLearners{1}.Lambda)`).

`PBScore(i,b,j)` is the positive-class score for observation `i`, using binary learner `b`, for the model trained using regularization strength `Mdl.BinaryLearners{1}.Lambda(j)`.

• Otherwise, `PBScore` is an m-by-B matrix.

`Posterior` — Posterior class probabilities, returned as a numeric matrix or array.

• If `Mdl.BinaryLearners` contains `ClassificationLinear` models, then `Posterior` is an m-by-K-by-L array. For dimension definitions, see `NegLoss`. `Posterior(i,k,j)` is the posterior probability that observation `i` comes from class `Mdl.ClassNames(k)`, for the model trained using regularization strength `Mdl.BinaryLearners{1}.Lambda(j)`.

• Otherwise, `Posterior` is an m-by-K matrix.

## More About

### Binary Loss

A binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class.

Suppose the following:

• $m_{kj}$ is element $(k,j)$ of the coding design matrix $M$ (that is, the code corresponding to class $k$ of binary learner $j$).

• $s_j$ is the score of binary learner $j$ for an observation.

• $g$ is the binary loss function.

• $\hat{k}$ is the predicted class for the observation.

In loss-based decoding [Escalera et al.], the class producing the minimum sum of the binary losses over binary learners determines the predicted class of an observation, that is,

`$\hat{k} = \underset{k}{\operatorname{argmin}} \sum_{j=1}^{L} |m_{kj}|\, g(m_{kj},s_j).$`

In loss-weighted decoding [Escalera et al.], the class producing the minimum average of the binary losses over binary learners determines the predicted class of an observation, that is,

`$\hat{k} = \underset{k}{\operatorname{argmin}} \frac{\sum_{j=1}^{L} |m_{kj}|\, g(m_{kj},s_j)}{\sum_{j=1}^{L} |m_{kj}|}.$`

Allwein et al. suggest that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.
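As an illustration, a sketch contrasting the two decoding schemes for one observation; the coding matrix `M` (K-by-L), score vector `s` (1-by-L), and the choice of hinge loss are assumptions for the example:

```
% Sketch: loss-based vs. loss-weighted decoding (assumes M and s exist).
g = max(0,1 - M.*s)/2;                   % g(m_kj,s_j) for every class, K-by-L
weighted = abs(M).*g;                    % entries with m_kj = 0 drop out
[~,kLossBased] = min(sum(weighted,2));   % loss-based decoding
[~,kLossWeighted] = min(sum(weighted,2)./sum(abs(M),2)); % loss-weighted decoding
```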

This table summarizes the supported loss functions, where $y_j$ is a class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation $j$, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `'binodeviance'` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| `'exponential'` | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| `'hamming'` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| `'hinge'` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| `'linear'` | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| `'logit'` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |
| `'quadratic'` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses such that the loss is 0.5 when $y_j = 0$, and aggregates using the average of the binary learners [Allwein et al.].

Do not confuse the binary loss with the overall classification loss (specified by the `'LossFun'` name-value pair argument of the `loss` and `predict` object functions), which measures how well an ECOC classifier performs as a whole.

## Algorithms


The software can estimate class posterior probabilities by minimizing the Kullback-Leibler divergence or by using quadratic programming. For the following descriptions of the posterior estimation algorithms, assume that:

• $m_{kj}$ is the element $(k,j)$ of the coding design matrix $M$.

• $I$ is the indicator function.

• $\hat{p}_k$ is the class posterior probability estimate for class $k$ of an observation, $k = 1,\dots,K$.

• $r_j$ is the positive-class posterior probability for binary learner $j$. That is, $r_j$ is the probability that binary learner $j$ classifies an observation into the positive class, given the training data.

### Posterior Estimation Using Kullback-Leibler Divergence

By default, the software minimizes the Kullback-Leibler divergence to estimate class posterior probabilities. The Kullback-Leibler divergence between the expected and observed positive-class posterior probabilities is

`$\Delta(r,\hat{r}) = \sum_{j=1}^{L} w_j \left[ r_j \log\frac{r_j}{\hat{r}_j} + (1 - r_j)\log\frac{1 - r_j}{1 - \hat{r}_j} \right],$`

where $w_j = \sum_{S_j} w_i^{\ast}$ is the weight for binary learner $j$.

• $S_j$ is the set of observation indices on which binary learner $j$ is trained.

• $w_i^{\ast}$ is the weight of observation $i$.

The software minimizes the divergence iteratively. The first step is to choose initial values $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ for the class posterior probabilities.

• If you do not specify `'NumKLInitializations'`, then the software tries both sets of deterministic initial values described next, and selects the set that minimizes $\Delta$.

• $\hat{p}_k^{(0)} = 1/K,\ k = 1,\dots,K.$

• $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ is the solution of the system

`$M_{01}\hat{p}^{(0)} = r,$`

where $M_{01}$ is $M$ with all $m_{kj} = -1$ replaced with 0, and $r$ is a vector of positive-class posterior probabilities returned by the $L$ binary learners [Dietterich et al.]. The software uses `lsqnonneg` to solve the system.

• If you specify `'NumKLInitializations',c`, where `c` is a natural number, then the software does the following to choose the set $\hat{p}_k^{(0)},\ k = 1,\dots,K,$ and selects the set that minimizes $\Delta$.

• The software tries both sets of deterministic initial values as described previously.

• The software randomly generates `c` vectors of length K using `rand`, and then normalizes each vector to sum to 1.

At iteration t, the software completes these steps:

1. Compute

`$\hat{r}_j^{(t)} = \frac{\sum_{k=1}^{K} \hat{p}_k^{(t)} I(m_{kj} = +1)}{\sum_{k=1}^{K} \hat{p}_k^{(t)} I(m_{kj} = +1 \cup m_{kj} = -1)}.$`

2. Estimate the next class posterior probability using

`$\hat{p}_k^{(t+1)} = \hat{p}_k^{(t)} \frac{\sum_{j=1}^{L} w_j \left[ r_j I(m_{kj} = +1) + (1 - r_j) I(m_{kj} = -1) \right]}{\sum_{j=1}^{L} w_j \left[ \hat{r}_j^{(t)} I(m_{kj} = +1) + (1 - \hat{r}_j^{(t)}) I(m_{kj} = -1) \right]}.$`

3. Normalize $\hat{p}_k^{(t+1)},\ k = 1,\dots,K,$ so that the estimates sum to 1.

4. Check for convergence.

For more details, see [Hastie et al.] and [Zadrozny].
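A compact sketch of the iteration for one observation, assuming `M` (K-by-L coding matrix), `r` (1-by-L positive-class posteriors), and `w` (1-by-L binary learner weights) are in scope; it runs a fixed number of iterations instead of a formal convergence check:

```
% Sketch of the iterative KL minimization (assumes M, r, and w exist).
K = size(M,1);
p = ones(K,1)/K;                        % uniform initial estimate
Ipos = double(M == 1);                  % indicator of m_kj = +1
Ineg = double(M == -1);                 % indicator of m_kj = -1
for t = 1:100
    rhat = (p'*Ipos)./max(p'*(Ipos + Ineg),eps);      % step 1
    num = Ipos*(w.*r)' + Ineg*(w.*(1 - r))';          % step 2 numerator
    den = Ipos*(w.*rhat)' + Ineg*(w.*(1 - rhat))';    % step 2 denominator
    p = p.*(num./max(den,eps));
    p = p/sum(p);                       % step 3: renormalize
end
```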

### Posterior Estimation Using Quadratic Programming

Posterior probability estimation using quadratic programming requires an Optimization Toolbox license. To estimate posterior probabilities for an observation using this method, the software completes these steps:

1. Estimate the positive-class posterior probabilities, $r_j$, for binary learners $j = 1,\dots,L$.

2. Using the relationship between $r_j$ and $\hat{p}_k$ [Wu et al.], minimize

`$\sum_{j=1}^{L} \left[ -r_j \sum_{k=1}^{K} \hat{p}_k I(m_{kj} = -1) + (1 - r_j) \sum_{k=1}^{K} \hat{p}_k I(m_{kj} = +1) \right]^2$`

with respect to $\hat{p}_k$, subject to the constraints

`$0 \le \hat{p}_k \le 1, \qquad \sum_k \hat{p}_k = 1.$`

The software performs minimization using `quadprog`.
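A sketch of setting up and solving that program with `quadprog`, assuming `M` (K-by-L), `r` (1-by-L), and an Optimization Toolbox license; each row of `C` encodes one squared term, so the objective equals `p'*(C'*C)*p`:

```
% Sketch of the quadratic program for one observation (assumes M and r
% exist). quadprog minimizes 0.5*p'*H*p subject to the constraints.
K = size(M,1); L = size(M,2);
Ipos = double(M == 1); Ineg = double(M == -1);
C = zeros(L,K);
for j = 1:L
    C(j,:) = (1 - r(j))*Ipos(:,j)' - r(j)*Ineg(:,j)'; % j-th squared term
end
H = 2*(C'*C);
p = quadprog(H,zeros(K,1),[],[],ones(1,K),1,zeros(K,1),ones(K,1));
```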

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Dietterich, T., and G. Bakiri. “Solving Multiclass Learning Problems Via Error-Correcting Output Codes.” Journal of Artificial Intelligence Research. Vol. 2, 1995, pp. 263–286.

[3] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[4] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition. Vol. 30, Issue 3, 2009, pp. 285–297.

[5] Hastie, T., and R. Tibshirani. “Classification by Pairwise Coupling.” Annals of Statistics. Vol. 26, Issue 2, 1998, pp. 451–471.

[6] Wu, T. F., C. J. Lin, and R. Weng. “Probability Estimates for Multi-Class Classification by Pairwise Coupling.” Journal of Machine Learning Research. Vol. 5, 2004, pp. 975–1005.

[7] Zadrozny, B. “Reducing Multiclass to Binary by Coupling Probability Estimates.” NIPS 2001: Proceedings of Advances in Neural Information Processing Systems 14, 2001, pp. 1041–1048.