Recommendation for Machine Learning Interpretability options for a SeriesNetwork object?
Hello –
I have a trained network (an LSTM) for time-series regression that is a SeriesNetwork object:
SeriesNetwork with properties:
Layers: [6×1 nnet.cnn.layer.Layer]
InputNames: {'sequenceinput'}
OutputNames: {'regressionoutput'}
I have used some canned routines for machine learning interpretability (e.g., shapley, lime, plotPartialDependence) that work great with some object types (e.g., RegressionSVM) but not with SeriesNetwork objects. The relevant functions I have read about appear to be intended for image classification rather than time-series regression.
My question is thus: Can you recommend a machine learning interpretability function for use with a SeriesNetwork object built for regression? I am confident such a function exists, but I can’t seem to find it. Any and all help would be greatly appreciated.
Thank you in advance.
Answers (1)
Shivansh
on 8 Nov 2023
Edited: Shivansh on 8 Nov 2023
Hi Bart,
I understand that you want to find a machine learning interpretability function for use with a SeriesNetwork object built for regression.
You can use the gradCAM function for time-series models; the Deep Learning Toolbox documentation includes an example of applying Grad-CAM to a time-series classification model.
Note that the method is designed specifically for convolutional networks, so it may not give good results for LSTMs.
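As a rough sketch of how this could look for a regression network: gradCAM accepts a reduction-function handle for nonclassification tasks instead of a class label. The variable names net and XTest below are assumptions standing in for your trained SeriesNetwork and one test sequence (numFeatures-by-numTimeSteps).

```matlab
% Hedged sketch: Grad-CAM on a sequence regression network.
% Assumes "net" is your trained SeriesNetwork and "XTest" is a single
% observation (numFeatures-by-numTimeSteps). For regression tasks,
% pass a function handle that reduces the network output to a scalar.
reductionFcn = @(Y) sum(Y);
scoreMap = gradCAM(net, XTest, reductionFcn);

% Visualize which time steps most influenced the prediction.
figure
plot(scoreMap)
xlabel("Time Step")
ylabel("Grad-CAM Importance")
```

Since Grad-CAM relies on spatial/temporal feature maps from convolutional layers, treat the resulting map from an LSTM-only network as a rough indication at best.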
Hope it helps!