predictorImportance

Estimates of predictor importance for classification ensemble of decision trees

Description

imp = predictorImportance(ens) computes estimates of predictor importance for ens by summing the estimates over all weak learners in the ensemble. imp has one element for each input predictor in the data used to train the ensemble. A high value indicates that the predictor is important for ens.

[imp,ma] = predictorImportance(ens) additionally returns a P-by-P matrix with predictive measures of association ma for P predictors, when the learners in ens contain surrogate splits. For more information, see Predictor Importance.

Note

You can compute predictor importance for ensembles of decision trees only.

Examples

Estimate Predictor Importance

Estimate the predictor importance for all variables in the Fisher iris data.

Load Fisher"s iris data set.

load fisheriris

Train a classification ensemble using AdaBoostM2. Specify tree stumps as the weak learners.

t = templateTree(MaxNumSplits=1);
ens = fitcensemble(meas,species,Method="AdaBoostM2",Learners=t);

Estimate the predictor importance for all predictor variables.

imp = predictorImportance(ens)
imp = 1×4

    0.0004    0.0016    0.1266    0.0324

The first two predictors are not very important in the ensemble.
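
A bar chart makes the comparison easy to see:

bar(imp)
xlabel("Predictor index")
ylabel("Importance estimate")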

Estimate Predictive Measures of Association

Estimate the predictor importance for all variables in the Fisher iris data for an ensemble whose trees contain surrogate splits.

Load Fisher's iris data set.

load fisheriris

Grow an ensemble of 100 classification trees using AdaBoostM2. Specify tree stumps as the weak learners, and also identify surrogate splits.

t = templateTree(MaxNumSplits=1,Surrogate="on");
ens = fitcensemble(meas,species,Method="AdaBoostM2",Learners=t);

Estimate the predictor importance and predictive measures of association for all predictor variables.

[imp,ma] = predictorImportance(ens)
imp = 1×4

    0.0674    0.0417    0.1582    0.1537

ma = 4×4

    1.0000         0         0         0
    0.0115    1.0000    0.0022    0.0054
    0.3186    0.2137    1.0000    0.6391
    0.0392    0.0073    0.1137    1.0000

The importance estimates for the first two predictors are much larger than in Estimate Predictor Importance, because the sums of risk changes now also include the surrogate splits.

Input Arguments

ens — Classification ensemble model
ClassificationEnsemble model object | ClassificationBaggedEnsemble model object | CompactClassificationEnsemble model object

Classification ensemble model, specified as a ClassificationEnsemble or ClassificationBaggedEnsemble model object trained with fitcensemble, or a CompactClassificationEnsemble model object created with compact.

You cannot compute predictor importance if any of the entries in ens.LearnerNames are 'knn' or 'discriminant'.

Output Arguments

imp — Predictor importance estimates
numeric row vector

Predictor importance estimates, returned as a numeric row vector with the same number of elements as the number of predictors (columns) in ens.X. The entries are the estimates of Predictor Importance, with 0 representing the smallest possible importance.

ma — Predictive measures of association
P-by-P matrix

Predictive measures of association, returned as a P-by-P matrix of Predictive Measure of Association values for P predictors. Element ma(i,j) is the predictive measure of association averaged over surrogate splits on predictor j for which predictor i is the optimal split predictor. predictorImportance averages this predictive measure of association over all weak learners in the ensemble.

More About

Predictor Importance

predictorImportance estimates predictor importance for each learner in the ensemble ens and returns the weighted average imp computed using ens.TrainedWeights. The output imp has one element for each predictor.
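
A minimal sketch of this weighted average, assuming ens is the full ensemble trained in the examples above (the built-in normalization may differ):

T = numel(ens.Trained);
impPerLearner = zeros(T,size(ens.X,2));
for t = 1:T
    % Importance estimates for one compact weak learner
    impPerLearner(t,:) = predictorImportance(ens.Trained{t});
end
w = ens.TrainedWeights;
impManual = (w(:)'/sum(w))*impPerLearner   % compare with predictorImportance(ens)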

predictorImportance computes importance measures of the predictors in a tree by summing changes in the node risk due to splits on every predictor, and then dividing the sum by the total number of branch nodes. The change in the node risk is the difference between the risk for the parent node and the total risk for the two children. For example, if a tree splits a parent node (for example, node 1) into two child nodes (for example, nodes 2 and 3), then predictorImportance increases the importance of the split predictor by

$(R_1 - R_2 - R_3)/N_{\mathrm{branch}},$

where $R_i$ is the node risk of node i, and $N_{\mathrm{branch}}$ is the total number of branch nodes. A node risk is defined as a node error or node impurity weighted by the node probability:

$R_i = P_i E_i,$

where $P_i$ is the node probability of node i, and $E_i$ is either the node error (for a tree grown by minimizing the twoing criterion) or node impurity (for a tree grown by minimizing an impurity criterion, such as the Gini index or deviance) of node i.
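
For a single tree, you can reproduce this rule directly from the stored node risks. The following sketch assumes the Fisher iris data is loaded; the tree is illustrative and is trained without surrogate splits:

load fisheriris
tree = fitctree(meas,species);         % illustrative single tree
kids = tree.Children;                  % NumNodes-by-2, zeros for leaf nodes
branch = find(kids(:,1) > 0)';         % indices of the branch nodes
Nbranch = numel(branch);
imp = zeros(1,numel(tree.PredictorNames));
for n = branch
    % Risk change: parent risk minus the total risk of the two children
    dR = tree.NodeRisk(n) - sum(tree.NodeRisk(kids(n,:)));
    p = strcmp(tree.CutPredictor{n},tree.PredictorNames);
    imp(p) = imp(p) + dR;              % credit the split predictor
end
imp = imp/Nbranch                      % compare with predictorImportance(tree)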

The estimates of predictor importance depend on whether you use surrogate splits for training.

  • If you use surrogate splits, predictorImportance sums the changes in the node risk over all splits at each branch node, including surrogate splits. If you do not use surrogate splits, then the function takes the sum over the best splits found at each branch node.

  • Estimates of predictor importance do not depend on the order of predictors if you use surrogate splits, but do depend on the order if you do not use surrogate splits.

Impurity and Node Error

A decision tree splits nodes based on either impurity or node error.

Impurity means one of several things, depending on your choice of the SplitCriterion name-value argument; a short numeric sketch after this list evaluates each measure:

  • Gini's Diversity Index ("gdi") — The Gini index of a node is

    $1 - \sum_i p^2(i),$

    where the sum is over the classes i at the node, and p(i) is the observed fraction of observations with class i that reach the node. A node with just one class (a pure node) has Gini index 0; otherwise, the Gini index is positive. So the Gini index is a measure of node impurity.

  • Deviance ("deviance") — With p(i) defined the same as for the Gini index, the deviance of a node is

    $-\sum_i p(i) \log_2 p(i).$

    A pure node has deviance 0; otherwise, the deviance is positive.

  • Twoing rule ("twoing") — Twoing is not a purity measure of a node, but is a different measure for deciding how to split a node. Let L(i) denote the fraction of members of class i in the left child node after a split, and R(i) denote the fraction of members of class i in the right child node after a split. Choose the split criterion to maximize

    $P(L)P(R)\left(\sum_i \left|L(i) - R(i)\right|\right)^2,$

    where P(L) and P(R) are the fractions of observations that split to the left and right, respectively. If the expression is large, the split made each child node purer. Similarly, if the expression is small, the split made each child node similar to each other and, therefore, similar to the parent node. The split did not increase node purity.

  • Node error — The node error is the fraction of misclassified observations at a node. If j is the class with the largest number of training samples at a node, the node error is

    $1 - p(j).$
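
As a minimal numeric sketch, the following evaluates each measure for one node with hypothetical class fractions:

p = [0.5 0.3 0.2];                 % hypothetical class fractions at a node
gini     = 1 - sum(p.^2)           % Gini's diversity index
deviance = -sum(p.*log2(p))        % deviance
nodeErr  = 1 - max(p)              % node error, 1 - p(j)

% Twoing criterion for a hypothetical split of this node
L = [0.8 0.1 0.1];  R = [0.2 0.5 0.3];   % child-node class fractions
PL = 0.5;  PR = 0.5;                     % fractions sent left and right
twoing = PL*PR*sum(abs(L - R))^2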

Predictive Measure of Association

The predictive measure of association is a value that indicates the similarity between decision rules that split observations. Among all possible decision splits that are compared to the optimal split (found by growing the tree), the best surrogate decision split yields the maximum predictive measure of association. The second-best surrogate split has the second-largest predictive measure of association.

Suppose $x_j$ and $x_k$ are predictor variables j and k, respectively, and $j \ne k$. At node t, the predictive measure of association between the optimal split $x_j < u$ and a surrogate split $x_k < v$ is

$\lambda_{jk} = \frac{\min(P_L,\,P_R) - \left(1 - P_{L_j L_k} - P_{R_j R_k}\right)}{\min(P_L,\,P_R)}.$

  • $P_L$ is the proportion of observations in node t such that $x_j < u$. The subscript L stands for the left child of node t.

  • $P_R$ is the proportion of observations in node t such that $x_j \ge u$. The subscript R stands for the right child of node t.

  • $P_{L_j L_k}$ is the proportion of observations at node t such that $x_j < u$ and $x_k < v$.

  • $P_{R_j R_k}$ is the proportion of observations at node t such that $x_j \ge u$ and $x_k \ge v$.

  • Observations with missing values for $x_j$ or $x_k$ do not contribute to the proportion calculations.

$\lambda_{jk}$ is a value in $(-\infty, 1]$. If $\lambda_{jk} > 0$, then $x_k < v$ is a worthwhile surrogate split for $x_j < u$.
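
A minimal sketch of this formula on the iris predictors, with hypothetical cut points u and v:

load fisheriris
xj = meas(:,3);  xk = meas(:,4);    % petal length and petal width
u = 2.45;  v = 0.8;                 % hypothetical cut points
PL  = mean(xj <  u);                % sent left by the optimal split
PR  = mean(xj >= u);                % sent right by the optimal split
PLL = mean(xj <  u & xk <  v);      % both splits send the observation left
PRR = mean(xj >= u & xk >= v);      % both splits send it right
lambda = (min(PL,PR) - (1 - PLL - PRR))/min(PL,PR)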

Algorithms

Element ma(i,j) is the predictive measure of association averaged over surrogate splits on predictor j for which predictor i is the optimal split predictor. This average is computed by summing positive values of the predictive measure of association over optimal splits on predictor i and surrogate splits on predictor j, and dividing by the total number of optimal splits on predictor i, including splits for which the predictive measure of association between predictors i and j is negative.
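
A sketch of this averaging for a single surrogate-split learner, assuming ens is the ensemble from the second example and that the learner exposes the documented SurrogateCutPredictor and SurrogatePredictorAssociation tree properties:

t1 = ens.Trained{1};                      % one surrogate-split tree
names = t1.PredictorNames;  P = numel(names);
sums = zeros(P);  counts = zeros(P,1);
for n = 1:t1.NumNodes
    i = find(strcmp(t1.CutPredictor{n},names));
    if isempty(i), continue, end          % skip leaf nodes
    counts(i) = counts(i) + 1;            % one more optimal split on i
    surr  = t1.SurrogateCutPredictor{n};  % surrogate predictors at node n
    assoc = t1.SurrogatePredictorAssociation{n};
    for s = 1:numel(surr)
        j = strcmp(surr{s},names);
        sums(i,j) = sums(i,j) + max(assoc(s),0);  % positive values only
    end
end
ma1 = sums./max(counts,1);                % average over optimal splits on i
ma1(1:P+1:end) = 1                        % diagonal elements equal 1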

Version History

Introduced in R2011a