
precisionRecall

Get precision recall metrics of instance segmentation results

Since R2024b

Description

[precision,recall,scores] = precisionRecall(metrics) gets the precision, recall, and prediction score values for all classes in the ClassNames property and all overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics.

[precision,recall,scores] = precisionRecall(metrics,Name=Value) specifies precision and recall evaluation options using one or more name-value arguments. For example, ClassNames=["cars" "people"] specifies to get the precision recall metrics for the cars and people classes.
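For example, a minimal workflow might look like the following sketch. The datastore names dsResults and dsTruth are placeholders for your prediction and ground truth data, and the metrics object is assumed to come from the evaluateInstanceSegmentation function.

% Compute instance segmentation metrics from prediction and ground truth
% datastores (hypothetical variable names), then get the per-class
% precision and recall data.
metrics = evaluateInstanceSegmentation(dsResults,dsTruth);
[precision,recall,scores] = precisionRecall(metrics);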

Input Arguments


Instance segmentation performance metrics, specified as an instanceSegmentationMetrics object.

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: [precision,recall,scores]=precisionRecall(metrics,ClassNames=["cars" "people"]) specifies to get the precision and recall metrics for the cars and people classes.

Class names of detected objects, specified as a string array or a cell array of character vectors. By default, the precisionRecall function returns the precision and recall metrics for all classes specified by the ClassNames property of the instanceSegmentationMetrics object.

Overlap threshold, or intersection over union (IoU) threshold, for which to get the precision and recall metrics, specified as a numeric scalar or numeric vector of mask overlap threshold values. By default, the precisionRecall object function returns the precision and recall metrics for all overlap thresholds specified by the OverlapThreshold property of the instanceSegmentationMetrics object. To learn more about how to use overlap thresholds to evaluate instance segmentation results, see Evaluate Network Performance Using Precision Recall at Multiple Overlap Thresholds.
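For example, the following sketch restricts the returned metrics to two placeholder classes at a single overlap threshold of 0.5, assuming 0.5 is among the thresholds stored in the metrics object.

% Get precision and recall for only the cars and people classes,
% evaluated at a mask IoU threshold of 0.5.
[precision,recall,scores] = precisionRecall(metrics, ...
    ClassNames=["cars" "people"],OverlapThreshold=0.5);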

Output Arguments


Precision, returned as an M-by-N cell array. M is the number of classes in the ClassNames property, and N is the number of overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics. Each element of the cell array contains a (numPredictions+1)-element numeric vector of precision values, sorted in descending order of prediction confidence scores. numPredictions is the number of predicted object masks.

Precision is the ratio of the number of true positives (TP) to the total number of predicted positives, which is the sum of true positives and false positives (FP). Larger precision scores indicate that more predicted object masks match ground truth objects. To learn more about the precision metric, see Compute Precision and Recall.
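Expressed as a formula, precision = TP / (TP + FP).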

Recall, returned as an M-by-N cell array. M is the number of classes in the ClassNames property, and N is the number of overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics. Each element of the cell array contains a (numPredictions+1)-element numeric vector of recall values, sorted in descending order of prediction confidence scores. numPredictions is the number of predicted object masks.

Recall is the ratio of true positives (TP) to the sum of true positives and false negatives (FN), where false negatives represent instances in the image that the model incorrectly classified as background or missed entirely. Larger recall scores indicate that the instance segmentation model is effectively identifying most of the actual instances within the images. To learn more about the recall metric, see Compute Precision and Recall.
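Expressed as a formula, recall = TP / (TP + FN).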

Confidence score for each instance segmentation, returned as an M-by-1 cell array. M is the number of classes in the ClassNames property. Each element of the array is a (numPredictions+1)-element numeric vector. numPredictions is the number of predicted object masks.
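For example, this sketch plots the precision-recall curve for the first class at the first overlap threshold, using the cell array layout described above.

% Extract the precision and recall vectors for class 1 at the first
% overlap threshold, then plot precision against recall.
p = precision{1,1};
r = recall{1,1};
plot(r,p)
xlabel("Recall")
ylabel("Precision")
title("Precision-Recall Curve")
grid on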

More About


Algorithms


Version History

Introduced in R2024b