predict

Predict labels using k-nearest neighbor classification model

Description

label = predict(mdl,X) returns a vector of predicted class labels for the predictor data in the table or matrix X, based on the trained k-nearest neighbor classification model mdl. See Predicted Class Label.

[label,score,cost] = predict(mdl,X) also returns:

  • A matrix of classification scores (score) indicating the likelihood that a label comes from a particular class. For k-nearest neighbor, scores are posterior probabilities. See Posterior Probability.

  • A matrix of expected classification costs (cost). For each observation in X, the predicted class label corresponds to the minimum expected classification cost among all classes. See Expected Cost.

Examples

Create a k-nearest neighbor classifier for Fisher's iris data, where k = 5. Evaluate some model predictions on new data.

Load the Fisher iris data set.

load fisheriris
X = meas;
Y = species;

Create a classifier for five nearest neighbors. Standardize the noncategorical predictor data.

mdl = fitcknn(X,Y,'NumNeighbors',5,'Standardize',1);

Predict the classifications for flowers with minimum, mean, and maximum characteristics.

Xnew = [min(X);mean(X);max(X)];
[label,score,cost] = predict(mdl,Xnew)
label = 3x1 cell
    {'versicolor'}
    {'versicolor'}
    {'virginica' }

score = 3×3

    0.4000    0.6000         0
         0    1.0000         0
         0         0    1.0000

cost = 3×3

    0.6000    0.4000    1.0000
    1.0000         0    1.0000
    1.0000    1.0000         0

The second and third rows of the score and cost matrices have binary values, which means all five nearest neighbors of the mean and maximum flower measurements have identical classifications.
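
To see this directly, you can inspect the classes of the nearest neighbors. The following is a minimal sketch using knnsearch on manually standardized data; it assumes the model was trained with 'Standardize',1, so that mdl.Mu and mdl.Sigma hold the training means and standard deviations:

Z = (X - mdl.Mu)./mdl.Sigma;           % standardize the training data
znew = (mean(X) - mdl.Mu)./mdl.Sigma;  % standardize the mean query point
idx = knnsearch(Z,znew,'K',5);         % indices of the 5 nearest neighbors
Y(idx)                                 % expect five 'versicolor' labels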

Train k-nearest neighbor classifiers for various k values, and compare the decision boundaries of the classifiers.

Load the fisheriris data set.

load fisheriris

The data set contains length and width measurements from the sepals and petals of three species of iris flowers. Remove the sepal lengths and widths, and all observed setosa irises.

inds = ~strcmp(species,'setosa');
X = meas(inds,3:4);
species = species(inds); 

Create a binary label variable y. The label is 1 for a virginica iris and 0 for versicolor.

y = strcmp(species,'virginica');

Train the k-nearest neighbor classifier. Specify 5 as the number of nearest neighbors to find, and standardize the predictor data.

EstMdl = fitcknn(X,y,'NumNeighbors',5,'Standardize',1)
EstMdl = 
  ClassificationKNN
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: [0 1]
           ScoreTransform: 'none'
          NumObservations: 100
                 Distance: 'euclidean'
             NumNeighbors: 5


EstMdl is a trained ClassificationKNN classifier. Some of its properties appear in the Command Window.

Plot the decision boundary, which is the line that distinguishes between the two iris species based on their features.

x1 = min(X(:,1)):0.01:max(X(:,1));
x2 = min(X(:,2)):0.01:max(X(:,2));
[x1G,x2G] = meshgrid(x1,x2);
XGrid = [x1G(:),x2G(:)];
pred = predict(EstMdl,XGrid);

figure
gscatter(XGrid(:,1),XGrid(:,2),pred,[1,0,0;0,0.5,1])
hold on
plot(X(y == 0,1),X(y == 0,2),'ko', ...
    X(y == 1,1),X(y == 1,2),'kx')
xlabel('Petal length (cm)')
ylabel('Petal width (cm)')
title('{\bf 5-Nearest Neighbor Classifier Decision Boundary}')
legend('Versicolor Region','Virginica Region', ...
    'Sampled Versicolor','Sampled Virginica', ...
    'Location','best')
axis tight
hold off

Figure: 5-nearest neighbor classifier decision boundary, plotted over petal length (cm) and petal width (cm). The versicolor and virginica regions are shown with the sampled versicolor and virginica observations overlaid.

The partition between the red and blue regions is the decision boundary. If you change the number of neighbors k, then the boundary changes.

Retrain the classifier using k = 1 (default value for NumNeighbors of fitcknn) and k = 20.

EstMdl1 = fitcknn(X,y);
pred1 = predict(EstMdl1,XGrid);

EstMdl20 = fitcknn(X,y,'NumNeighbors',20);
pred20 = predict(EstMdl20,XGrid);

figure
gscatter(XGrid(:,1),XGrid(:,2),pred1,[1,0,0;0,0.5,1])
hold on
plot(X(y == 0,1),X(y == 0,2),'ko', ...
    X(y == 1,1),X(y == 1,2),'kx')
xlabel('Petal length (cm)')
ylabel('Petal width (cm)')
title('{\bf 1-Nearest Neighbor Classifier Decision Boundary}')
legend('Versicolor Region','Virginica Region', ...
    'Sampled Versicolor','Sampled Virginica', ...
    'Location','best')
axis tight
hold off

Figure: 1-nearest neighbor classifier decision boundary, plotted over petal length (cm) and petal width (cm). The versicolor and virginica regions are shown with the sampled versicolor and virginica observations overlaid.

figure
gscatter(XGrid(:,1),XGrid(:,2),pred20,[1,0,0;0,0.5,1])
hold on
plot(X(y == 0,1),X(y == 0,2),'ko', ...
    X(y == 1,1),X(y == 1,2),'kx')
xlabel('Petal length (cm)')
ylabel('Petal width (cm)')
title('{\bf 20-Nearest Neighbor Classifier Decision Boundary}')
legend('Versicolor Region','Virginica Region', ...
    'Sampled Versicolor','Sampled Virginica', ...
    'Location','best')
axis tight
hold off

Figure: 20-nearest neighbor classifier decision boundary, plotted over petal length (cm) and petal width (cm). The versicolor and virginica regions are shown with the sampled versicolor and virginica observations overlaid.

The decision boundary appears to linearize as k increases, because averaging over more neighbors smooths the prediction and reduces the influence of any single training observation. When k = 1, the algorithm correctly predicts the species of almost all training samples. When k = 20, the algorithm has a higher misclassification rate within the training set. You can find an optimal value of k by using the OptimizeHyperparameters name-value argument of fitcknn, as in the sketch below. For an example, see Optimize Fitted KNN Classifier.
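
A minimal sketch of that optimization, assuming the default Bayesian optimization settings with the diagnostic plots suppressed:

rng(1) % For reproducibility
MdlOpt = fitcknn(X,y,'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions',struct('ShowPlots',false));
MdlOpt.NumNeighbors   % the optimized number of neighbors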

Input Arguments

mdl: k-nearest neighbor classifier model, specified as a ClassificationKNN object.

X: Predictor data to be classified, specified as a numeric matrix or table.

Each row of X corresponds to one observation, and each column corresponds to one variable.

  • For a numeric matrix:

    • The variables that make up the columns of X must have the same order as the predictor variables used to train mdl.

    • If you train mdl using a table (for example, Tbl), then X can be a numeric matrix if Tbl contains all numeric predictor variables. k-nearest neighbor classification requires homogeneous predictors. Therefore, to treat all numeric predictors in Tbl as categorical during training, set 'CategoricalPredictors','all' when you train using fitcknn. If Tbl contains heterogeneous predictors (for example, numeric and categorical data types) and X is a numeric matrix, then predict throws an error.

  • For a table:

    • predict does not support multicolumn variables and cell arrays other than cell arrays of character vectors.

    • If you train mdl using a table (for example, Tbl), then all predictor variables in X must have the same variable names and data types as those used to train mdl (stored in mdl.PredictorNames). However, the column order of X does not need to correspond to the column order of Tbl. Both Tbl and X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.

    • If you train mdl using a numeric matrix, then the predictor names in mdl.PredictorNames and corresponding predictor variable names in X must be the same. To specify predictor names during training, see the PredictorNames name-value pair argument of fitcknn. All predictor variables in X must be numeric vectors. X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.

If you set 'Standardize',true in fitcknn to train mdl, then the software standardizes the columns of X using the corresponding means in mdl.Mu and standard deviations in mdl.Sigma.

Data Types: double | single | table
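
As an illustration of table-based prediction, here is a hedged sketch that trains on a table built from the Fisher iris data; the variable names PetalLength, PetalWidth, and Species are hypothetical choices for this example:

load fisheriris
Tbl = table(meas(:,3),meas(:,4),species, ...
    'VariableNames',{'PetalLength','PetalWidth','Species'});
mdlT = fitcknn(Tbl,'Species','NumNeighbors',5);
Tnew = table([4.5;5.8],[1.4;2.0], ...
    'VariableNames',{'PetalLength','PetalWidth'});
labelT = predict(mdlT,Tnew)   % variable names must match the training table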

Output Arguments

label: Predicted class labels for the observations (rows) in X, returned as a categorical array, character array, logical vector, vector of numeric values, or cell array of character vectors. label has length equal to the number of rows in X.

For each observation, the label is the class with minimal expected cost. For an observation with NaN scores, the function classifies the observation into the majority class, which makes up the largest proportion of the training labels.

See Predicted Class Label.

score: Predicted class scores or posterior probabilities, returned as a numeric matrix of size n-by-K. n is the number of observations (rows) in X, and K is the number of classes (in mdl.ClassNames). score(i,j) is the posterior probability that observation i in X is of class j in mdl.ClassNames. See Posterior Probability.

Data Types: single | double

cost: Expected classification costs, returned as a numeric matrix of size n-by-K. n is the number of observations (rows) in X, and K is the number of classes (in mdl.ClassNames). cost(i,j) is the cost of classifying row i of X as class j in mdl.ClassNames. See Expected Cost.

Data Types: single | double

Algorithms

Predicted Class Label

predict classifies by minimizing the expected misclassification cost:

\hat{y} = \arg\min_{y=1,\dots,K} \sum_{j=1}^{K} \hat{P}(j \mid x)\, C(y \mid j),

where:

  • \hat{y} is the predicted classification.

  • K is the number of classes.

  • \hat{P}(j \mid x) is the posterior probability of class j for observation x.

  • C(y \mid j) is the cost of classifying an observation as y when its true class is j.
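
As a concrete check, this minimal sketch applies the rule to the score matrix from the first example above, using the default 0/1 cost matrix; P and C are placeholder names:

P = [0.4 0.6 0; 0 1 0; 0 0 1];  % posterior probabilities (n-by-K)
C = ones(3) - eye(3);           % default cost matrix: 0 diagonal, 1 elsewhere
expCost = P*C;                  % expCost(n,y) = sum over j of P(n,j)*C(j,y)
[~,yhat] = min(expCost,[],2)    % minimum-cost class index for each observation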

Posterior Probability

Consider a vector (single query point) xnew and a model mdl.

  • k is the number of nearest neighbors used in prediction, mdl.NumNeighbors.

  • nbd(mdl,xnew) specifies the k nearest neighbors to xnew in mdl.X.

  • Y(nbd) specifies the classifications of the points in nbd(mdl,xnew), namely mdl.Y(nbd).

  • W(nbd) specifies the weights of the points in nbd(mdl,xnew).

  • prior specifies the priors of the classes in mdl.Y.

If the model contains a vector of prior probabilities, then the observation weights W are normalized by class to sum to the priors. This process might involve a calculation for the point xnew, because weights can depend on the distance from xnew to the points in mdl.X.
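
A hedged sketch of this by-class normalization, assuming raw weights w, training labels Y, a class list classNames, and a vector of priors prior (all placeholder names):

for j = 1:numel(classNames)
    inClass = strcmp(Y,classNames{j});                % observations in class j
    w(inClass) = prior(j)*w(inClass)/sum(w(inClass)); % weights sum to prior(j)
end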

The posterior probability p(j \mid x_{new}) is

p(j \mid x_{new}) = \frac{\sum_{i \in \mathrm{nbd}} W(i)\, \mathbf{1}_{\{Y(X(i)) = j\}}}{\sum_{i \in \mathrm{nbd}} W(i)}.

Here, \mathbf{1}_{\{Y(X(i)) = j\}} is 1 when mdl.Y(i) = j, and 0 otherwise.
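
A hedged sketch of this calculation for a single query point, assuming equal observation weights; nbdIdx (indices of the k nearest neighbors) and classNames are hypothetical variables:

w = ones(numel(nbdIdx),1);                     % equal weights W(i)
post = zeros(1,numel(classNames));
for j = 1:numel(classNames)
    inClass = strcmp(Y(nbdIdx),classNames{j}); % indicator for Y(X(i)) = j
    post(j) = sum(w(inClass))/sum(w);          % weighted fraction of votes
end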

True Misclassification Cost

Two costs are associated with KNN classification: the true misclassification cost per class and the expected misclassification cost per observation.

You can set the true misclassification cost per class by using the 'Cost' name-value pair argument when you run fitcknn. The value Cost(i,j) is the cost of classifying an observation into class j if its true class is i. By default, Cost(i,j) = 1 if i ~= j, and Cost(i,j) = 0 if i = j. In other words, the cost is 0 for correct classification and 1 for incorrect classification.
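
For example, a minimal sketch of the default cost matrix for three classes, along with a hypothetical custom alternative:

K = 3;
defaultCost = ones(K) - eye(K)   % 0 on the diagonal, 1 off the diagonal
% Hypothetical custom costs that double the penalty for classifying an
% observation into class 1 when its true class is 2 or 3:
customCost = [0 1 1; 2 0 1; 2 1 0];
% mdl = fitcknn(X,Y,'Cost',customCost);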

Expected Cost

The third output of predict is the expected misclassification cost per observation.

Suppose you have Nobs observations that you want to classify with a trained classifier mdl, and you have K classes. You place the observations into a matrix Xnew with one observation per row. The command

[label,score,cost] = predict(mdl,Xnew)

returns a matrix cost of size Nobs-by-K, among other outputs. Each row of the cost matrix contains the expected (average) cost of classifying the observation into each of the K classes. cost(n,j) is

\sum_{i=1}^{K} \hat{P}(i \mid X_{new}(n))\, C(j \mid i),

where

  • K is the number of classes.

  • \hat{P}(i \mid X_{new}(n)) is the posterior probability of class i for observation X_{new}(n).

  • C(j \mid i) is the true misclassification cost of classifying an observation as j when its true class is i.
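
Because C(j|i) corresponds to the stored cost matrix entry mdl.Cost(i,j), you can reproduce the cost output from the score output. A hedged sketch of this check, reusing mdl and Xnew from the first example:

[~,score,cost] = predict(mdl,Xnew);
costCheck = score*mdl.Cost;        % cost(n,j) = sum over i of score(n,i)*Cost(i,j)
max(abs(cost(:) - costCheck(:)))   % expect a value of (or near) 0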

Alternative Functionality

Simulink Block

To integrate the prediction of a nearest neighbor classification model into Simulink®, you can use the ClassificationKNN Predict block in the Statistics and Machine Learning Toolbox™ library or a MATLAB® Function block with the predict function. For examples, see Predict Class Labels Using ClassificationKNN Predict Block and Predict Class Labels Using MATLAB Function Block.

When deciding which approach to use, consider the following:

  • If you use the Statistics and Machine Learning Toolbox library block, you can use the Fixed-Point Tool (Fixed-Point Designer) to convert a floating-point model to fixed point.

  • Support for variable-size arrays must be enabled for a MATLAB Function block with the predict function.

  • If you use a MATLAB Function block, you can use MATLAB functions for preprocessing or post-processing before or after predictions in the same MATLAB Function block.

Version History

Introduced in R2012a