What is a good performance metric for pixel classification tasks?

Hi everyone,
I recently developed a machine-learning-based algorithm to identify specific regions of interest in a series of images. All images have the same resolution, but the size of the target regions of interest varies across the stack. After performing this classification task, I tried to validate the algorithm's performance using ROC curves.
The problem is that these curves cannot be used to compare classification quality across images: in images where the target region is small, the huge number of true-negative pixels spuriously inflates the AUC, regardless of how well the algorithm identifies the true-positive pixels. As a result, an image with a small region of interest and a terrible classification result can end up with a higher AUC than an image with a large region of interest and a very good result.
Does anyone know how we can overcome this limitation of ROC curves?
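
To make the issue concrete, here is a minimal numeric sketch (the counts are made up for illustration) showing how a very small region of interest keeps the false-positive rate near zero even when most ROI pixels are missed:

% Hypothetical confusion counts for a 1000x1000 image with a 1,000-pixel ROI
TP = 300;                       % only 30% of the ROI pixels were found
FN = 700;                       % missed ROI pixels
FP = 2000;                      % background pixels wrongly labelled as ROI
TN = 1e6 - TP - FN - FP;        % the overwhelming majority of pixels

TPR = TP / (TP + FN)            % sensitivity = 0.30 (poor detection)
FPR = FP / (FP + TN)            % ~0.002, so the ROC point hugs the left edge
% The enormous TN count keeps the FPR tiny, so the ROC curve (and its AUC)
% looks optimistic even though most of the region of interest was missed.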

Accepted Answer

Constantino Carlos Reyes-Aldasoro
Edited: Constantino Carlos Reyes-Aldasoro on 7 Dec 2021
One good metric is the Jaccard index, also called the Intersection over Union. It divides the true positives by the sum of true positives, false positives and false negatives, so it ignores the true negatives entirely. The Dice index is essentially the same idea with a slightly different formula: 2·TP / (2·TP + FP + FN).
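
As a minimal sketch (the masks below are made up for illustration), both indices can be computed directly from the predicted and ground-truth binary masks:

% Made-up binary masks: gt is the ground-truth ROI, pred is the classifier output
gt   = false(100);  gt(40:60, 40:60)   = true;
pred = false(100);  pred(45:65, 42:62) = true;

TP = nnz(pred & gt);        % ROI pixels correctly detected
FP = nnz(pred & ~gt);       % background pixels labelled as ROI
FN = nnz(~pred & gt);       % ROI pixels that were missed

jaccardIdx = TP / (TP + FP + FN)        % intersection over union
diceIdx    = 2*TP / (2*TP + FP + FN)    % Dice = 2*Jaccard / (1 + Jaccard)

If you have the Image Processing Toolbox, the built-in jaccard and dice functions should give the same values for logical masks.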
  2 Comments
Memo Remo on 7 Dec 2021
Dear Constantino,
Thank you very much for your help.
Kind regards


