An incremental learning model object completely specifies how functions implement incremental fitting and model performance evaluation. To configure (or prepare) an incremental learning model, create one by calling the object directly, either `incrementalClassificationLinear` or `incrementalRegressionLinear`, or by converting a traditionally trained model to one of these objects. The following table lists the available objects, creation methods, and supported learners for each machine learning problem.

Problem | Creation Method | Function | Supported Learners
---|---|---|---
Binary classification | Call object | `incrementalClassificationLinear` | Linear SVM, logistic regression
Binary classification | Convert model | `incrementalLearner` | Linear SVM, logistic regression
Regression | Call object | `incrementalRegressionLinear` | Linear SVM regression, least squares
Regression | Convert model | `incrementalLearner` | Linear SVM regression, least squares

The creation method you choose depends on the information you have and your preferences.

**Convert Model**: Convert a traditionally trained model to an incremental learner to initialize a model for incremental learning. `incrementalLearner` passes along the information the traditionally trained model learned from the data, including optimized coefficient estimates, data characteristics, and applicable hyperparameter values. However, to convert a traditionally trained model, you must have a set of labeled data to fit the model to. When you use `incrementalLearner`, you can specify all performance evaluation options, but only those training, model, and data options that are unknown at conversion time. For more details, see Convert Traditionally Trained Model.

**Call Object**: Create an incremental model to your specifications by calling the object directly. This method is flexible: you can specify most options to your preferences, and the models provide reasonable default values. However, depending on your specifications, an estimation period might be required. For more details, see Call Object Directly.

Regardless of the incremental model creation method you decide to use, configurations you should consider include:

- The linear model type, such as an SVM model
- Initial values for the linear model coefficients
- The objective function solver, such as standard SGD
- Solver hyperparameter values, such as the learning rate of SGD solvers
- Model performance evaluation settings, such as the performance metric to measure

Unlike other machine learning model objects, you can create an incremental learning model by calling the corresponding object directly without any knowledge about the data. The only information required to create a model directly is the machine learning problem, either classification or regression. For example, the following code creates a default incremental model for linear regression.

```matlab
Mdl = incrementalRegressionLinear();
```

If you have information about the data to specify, or you want to configure the linear model, hyperparameters, solver, or performance evaluation settings, use name-value pair arguments when you call the object (all model properties are read-only; you cannot adjust them using dot notation). For example, the following pseudocode creates an incremental logistic regression model for binary classification, initializes the linear model coefficients `Beta` and bias `Bias` (obtained from prior knowledge of the problem), and sets the performance metrics warm-up period to `500` observations.

```matlab
Mdl = incrementalClassificationLinear('Learner','logistic',...
    'Beta',Beta,'Bias',Bias,'MetricsWarmupPeriod',500);
```

The following tables briefly describe notable options for the major aspects of incremental learning. For more detail on all options, see `incrementalRegressionLinear` or `incrementalClassificationLinear`.

The following table contains notable model options and data characteristics.

Name | Description |
---|---|

`'Beta'` | Linear coefficients also serving as initial values for incremental fitting |

`'Bias'` | Model intercept also serving as an initial value for incremental fitting |

`'ClassNames'` | For classification, the expected class names in the observation labels |

`'Learner'` | Model type, such as linear SVM or least squares |

To generate predictions and compute model loss, the model coefficients stored in the `Beta` and `Bias` properties must be nonempty. Furthermore, for classification problems, both class names stored in the `ClassNames` property must be specified.
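For instance, the following sketch configures a binary classification model that can generate predictions immediately, before any incremental fitting. The coefficient and class values are hypothetical, chosen only for illustration:

```matlab
% Hypothetical coefficients for a three-predictor problem
Beta = [0.1; -0.2; 0.5];
Mdl = incrementalClassificationLinear('Beta',Beta,'Bias',0.3,...
    'ClassNames',[-1 1]);       % both class names must be specified
label = predict(Mdl,[1 2 3]);   % prediction works before any fitting
```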

The following table contains notable training and solver options and properties.

Name | Description |
---|---|

`'EstimationPeriod'` | Pre-training estimation period |

`'Solver'` | Objective function optimization algorithm |

`'Standardize'` | Flag to standardize predictor data |

`'Lambda'` | Ridge penalty, a model hyperparameter that requires tuning for SGD optimization |

`'BatchSize'` | Mini-batch size, an SGD hyperparameter |

`'LearnRate'` | Learning rate, an SGD hyperparameter |

`'Mu'` | Read-only property containing predictor variable means |

`'Sigma'` | Read-only property containing predictor variable standard deviations |

The estimation period, specified by the number of observations in `EstimationPeriod`, occurs before training begins (see incremental learning periods). During the estimation period, the incremental fitting functions `fit` and `updateMetricsAndFit` compute quantities required for training when they are unknown. For example, if you set `'Standardize',true`, incremental learning functions require predictor means and standard deviations to standardize the predictor data. Consequently, the incremental model requires a positive estimation period (the default is `1000`).
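As a minimal sketch, the following call requests standardization with a shorter estimation period (the value `500` is illustrative, not a recommendation):

```matlab
% Standardization requires estimates of predictor means (Mu) and
% standard deviations (Sigma), so a positive estimation period is
% needed; here the first 500 observations are used to estimate them.
Mdl = incrementalRegressionLinear('Standardize',true,...
    'EstimationPeriod',500);
```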

The default solver is the adaptive scale-invariant solver `'scale-invariant'` [2], which is hyperparameter-free and insensitive to the predictor variable scales, so predictor data standardization is not required. You can specify standard or average SGD instead (`'sgd'` or `'asgd'`), but SGD is sensitive to predictor variable scales and requires hyperparameter tuning, which can be difficult or impossible to do during incremental learning. If you plan to use an SGD solver, follow these steps:

1. Obtain labeled data.
2. Traditionally train a linear classification or regression model by calling `fitclinear` or `fitrlinear`, respectively. Specify the SGD solver you plan to use for incremental learning, cross-validate to determine an appropriate set of hyperparameters, and standardize the predictor data.
3. Train the model on the entire sample using the determined hyperparameter set.
4. Convert the resulting model to an incremental learner by using `incrementalLearner`.
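The steps above can be sketched as follows, assuming a labeled batch `X`, `Y` and a small, illustrative grid of ridge penalties (the variable names and grid values are assumptions, not prescribed):

```matlab
% Steps 1-2: Traditionally train with SGD, tuning the ridge penalty
% by 5-fold cross-validation over a small grid.
lambdas = [1e-4 1e-3 1e-2];
CVMdl = fitrlinear(X,Y,'Solver','sgd','Lambda',lambdas,'KFold',5);
[~,idx] = min(kfoldLoss(CVMdl));   % penalty with lowest held-out loss

% Step 3: Retrain on the entire sample with the chosen penalty.
Mdl = fitrlinear(X,Y,'Solver','sgd','Lambda',lambdas(idx));

% Step 4: Convert to an incremental learner; the SGD solver and its
% tuned hyperparameter values transfer to IncrementalMdl.
IncrementalMdl = incrementalLearner(Mdl);
```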

Performance evaluation properties and options enable you to configure how and when model performance is measured by the incremental functions `updateMetrics` and `updateMetricsAndFit`. Regardless of the options you choose, first familiarize yourself with the incremental learning periods.

The following table contains all performance evaluation options and properties.

Name | Description |
---|---|

`'Metrics'` | List of performance metrics or loss functions to measure incrementally |

`'MetricsWarmupPeriod'` | Number of observations the incremental model must be fit to before it tracks performance metrics |

`'MetricsWindowSize'` | Number of observations in the window used to compute window performance metrics |

`'IsWarm'` | Read-only property indicating whether model is warm (measures performance metrics) |

`'Metrics'` | Read-only property containing a table of tracked window and cumulative metrics |

The metrics specified by the `'Metrics'` name-value pair argument form a table stored in the `Metrics` property of the model. For example, if you specify `'Metrics',["Metric1" "Metric2"]` when you create an incremental model `Mdl`, the `Metrics` property is

```
>> Mdl.Metrics

ans =

  2×2 table

               Cumulative    Window
               __________    ______

    Metric1       NaN         NaN
    Metric2       NaN         NaN
```

Specify a positive metrics warm-up period when you believe the model is of low quality and needs to be trained before the functions `updateMetrics` or `updateMetricsAndFit` track performance metrics in the `Metrics` property. During this period, the `IsWarm` property is `false`, and you must pass the incoming data and model to the incremental fitting functions `fit` or `updateMetricsAndFit`.

When the incremental fitting functions process enough data to satisfy the estimation and metrics warm-up periods, the `IsWarm` property becomes `true`, and you can measure the model performance on incoming data and optionally train the model.

When the model is warm, `updateMetrics` and `updateMetricsAndFit` track all specified metrics cumulatively (from the start of evaluation) and within a window of observations specified by the `'MetricsWindowSize'` name-value pair argument. Cumulative metrics reflect the model performance over the entire incremental learning history; after Performance Evaluation Period 1 starts, they are independent of the evaluation period. Window metrics reflect the model performance only over the specified window size within each performance evaluation period.
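For example, a typical streaming loop calls `updateMetricsAndFit` on each incoming chunk; once the model is warm, `Mdl.Metrics` holds both cumulative and window values. The chunk size and the `Xstream`/`Ystream` buffers below are placeholders for your own stream:

```matlab
% Process a stream in chunks of 50 observations (illustrative size).
Mdl = incrementalClassificationLinear('Metrics','classiferror');
numChunks = floor(size(Xstream,1)/50);
for j = 1:numChunks
    idx = (j-1)*50 + (1:50);
    % Measure performance (once warm), then fit the chunk
    Mdl = updateMetricsAndFit(Mdl,Xstream(idx,:),Ystream(idx));
end
Mdl.Metrics   % table of cumulative and window classification error
```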

`incrementalLearner` enables you to initialize an incremental model using information learned from a traditionally trained model. The converted model can generate predictions, and it is warm, which means that incremental learning functions can measure model performance metrics from the start of the data stream. In other words, estimation and performance metrics warm-up periods are not required for incremental learning.

To convert a traditionally trained model to an incremental learner, pass the model and any other options, specified by name-value pair arguments, to `incrementalLearner`. For example, the following pseudocode initializes an incremental classification model with all the information that a linear SVM model for binary classification has learned from a batch of data.

```matlab
Mdl = fitcsvm(X,Y);
IncrementalMdl = incrementalLearner(Mdl,Name,Value);
```

`IncrementalMdl` is an incremental learner object associated with the machine learning problem.

Ease of incremental model creation and initialization is balanced by decreased flexibility: the software assumes that estimated coefficients, hyperparameter values, and data characteristics learned during traditional training are appropriate for incremental learning. Therefore, you cannot set the corresponding learned or tuned options when you call `incrementalLearner`. The following table lists notable read-only properties of `IncrementalMdl` that `incrementalLearner` transfers from `Mdl` or infers from other values.

Property | Description |
---|---|

`Beta` | Linear model coefficients |

`Bias` | Model intercept |

`ClassNames` | For classification problems, class labels for binary classification |

`Epsilon` | For an SVM regression learner, half the width of the epsilon-insensitive band |

`Learner` | Linear model type |

`Mu` | For an SVM model object, predictor variable means |

`NumPredictors` | Number of predictor variables. For models that dummy-code categorical predictor variables, `NumPredictors` is `numel(Mdl.ExpandedPredictorNames)` and predictor variables expected during incremental learning correspond to the names. For more details, see Dummy Variables. |

`Prior` | For classification problems, prior class label distribution |

`ResponseTransform` | For regression problems, a function to apply to predicted responses |

`ScoreTransform` | For classification problems, a function to apply to classification scores. For example, if you configure an SVM model to compute posterior class probabilities, `ScoreTransform` contains the function that maps scores to posterior probabilities |

`Sigma` | For an SVM model object, predictor variable standard deviations |

**Note**

- The `NumTrainingObservations` property of `IncrementalMdl` does not include the observations used to train `Mdl`.
- If you specify `'Standardize',true` when you train `Mdl`, `IncrementalMdl` is configured to standardize predictors during incremental learning by default.

The following conditions apply when you convert a linear classification or regression model (`ClassificationLinear` or `RegressionLinear`, respectively):

- Incremental fitting functions support ridge (L2) regularization only.
- Incremental fitting functions support the specification of only one regularization value. Therefore, if you specify a regularization path (vector of regularization values) when you call `fitclinear` or `fitrlinear`, choose the model associated with one penalty by passing it to `selectModels`.
- If you solve the objective function by using standard or average SGD (`'sgd'` or `'asgd'` for the `'Solver'` name-value pair argument), these conditions apply when you call `incrementalLearner`:
  - `incrementalLearner` transfers the solver used to optimize `Mdl` to `IncrementalMdl`.
  - You can specify the adaptive scale-invariant solver `'scale-invariant'` instead, but you cannot specify a different SGD solver.
  - If you do not specify the adaptive scale-invariant solver, `incrementalLearner` transfers model and solver hyperparameter values to the incremental model object, such as the learning rate `LearnRate`, mini-batch size `BatchSize`, and ridge penalty `Lambda`. The transferred properties are unmodifiable.
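For example, if a traditionally trained linear model was fit over a regularization path, you can select a single-penalty model before converting. This sketch uses an illustrative penalty grid and selection index:

```matlab
% Fit over a path of ridge penalties, then keep one model to convert.
Mdl = fitrlinear(X,Y,'Lambda',[1e-4 1e-2 1],'Regularization','ridge');
MdlFinal = selectModels(Mdl,2);   % keep only the second penalty
IncrementalMdl = incrementalLearner(MdlFinal);
```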

If you require more flexibility when you create an incremental model, you can call the object directly and initialize the model by setting learned information individually using name-value pair arguments. The following pseudocode shows two examples:

- Initialize an incremental classification model from the coefficients and class names learned by fitting a linear SVM model for binary classification to a batch of data `Xc` and `Yc`.
- Initialize an incremental regression model from the coefficients learned by fitting a linear model to a batch of data `Xr` and `Yr`.

```matlab
% Classification
Mdl = fitcsvm(Xc,Yc);
IncrementalMdl = incrementalClassificationLinear('Beta',Mdl.Beta,...
    'Bias',Mdl.Bias,'ClassNames',Mdl.ClassNames);

% Regression
Mdl = fitlm(Xr,Yr);
Bias = Mdl.Coefficients.Estimate(1);
Beta = Mdl.Coefficients.Estimate(2:end);
IncrementalMdl = incrementalRegressionLinear('Learner','leastsquares',...
    'Bias',Bias,'Beta',Beta);
```

`fit` | `loss` | `predict` | `updateMetrics` | `updateMetricsAndFit`