

resume

Resume training ensemble


ens1 = resume(ens,nlearn)
ens1 = resume(ens,nlearn,Name,Value)


ens1 = resume(ens,nlearn) trains ens in every fold for nlearn more cycles. resume uses the same training options fitrensemble used to create ens, except for parallel training options. If you want to resume training in parallel, pass the 'Options' name-value pair.

ens1 = resume(ens,nlearn,Name,Value) trains ens with additional options specified by one or more Name,Value pair arguments.
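As a minimal sketch of both call forms, assuming cvens is a cross-validated regression ensemble you created earlier (for example, by applying crossval to a fitrensemble model):

```matlab
% Sketch: resume training a cross-validated regression ensemble.
% Assumes cvens already exists, e.g. cvens = crossval(fitrensemble(X,Y)).
cvens1 = resume(cvens,10);             % train each fold for 10 more cycles
cvens2 = resume(cvens,10,'NPrint',5);  % same, with a progress message every 5 folds
```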

Input Arguments


ens

A cross-validated regression ensemble. ens is the result of either:

  • The fitrensemble function with a cross-validation name-value pair. The names are 'crossval', 'kfold', 'holdout', 'leaveout', or 'cvpartition'.

  • The crossval method applied to a regression ensemble.


nlearn

A positive integer specifying the number of additional training cycles for ens.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.


Printout frequency, specified as a positive integer scalar or 'off' (no printouts). When NPrint is a positive integer, resume displays a message to the command line after training every NPrint folds.


For fastest training of some boosted decision trees, set NPrint to the default value 'off'. This tip holds when the classification Method is 'AdaBoostM1', 'AdaBoostM2', 'GentleBoost', or 'LogitBoost', or when the regression Method is 'LSBoost'.

Default: 'off'
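For example, assuming cvens is an existing cross-validated regression ensemble, the following sketch resumes training while printing progress after every 2 folds:

```matlab
% Sketch: resume training with progress printouts.
% Assumes cvens is an existing cross-validated regression ensemble.
cvens = resume(cvens,25,'NPrint',2);  % message after every 2 folds trained
```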


Options for computing in parallel and setting random numbers, specified as a structure. Create the Options structure with statset.


You need Parallel Computing Toolbox™ to compute in parallel.

You can use the same parallel options for resume as you used for the original training. However, you can change the parallel options as needed. This table lists the option fields and their values.

  • UseParallel — Set this value to true to compute in parallel. Parallel ensemble training requires you to set the 'Method' name-value argument to 'Bag'. Parallel training is available only for tree learners, the default type for 'Bag'. Default: false

  • UseSubstreams — Set this value to true to run computations in parallel in a reproducible manner. To compute reproducibly, set Streams to a type that allows substreams: 'mlfg6331_64' or 'mrg32k3a'. Default: false

  • Streams — Specify this value as a RandStream object or cell array of such objects. Use a single object except when the UseParallel value is true and the UseSubstreams value is false. In that case, use a cell array that has the same size as the parallel pool. If you do not specify Streams, then resume uses the default stream or streams.

For dual-core systems and above, resume parallelizes training using Intel® Threading Building Blocks (TBB). Therefore, specifying the UseParallel option as true might not provide a significant speedup on a single computer. For details, see the Intel Threading Building Blocks documentation.

Example: 'Options',statset('UseParallel',true)
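As a sketch of reproducible parallel resumption, assuming cvens is a cross-validated bagged regression ensemble ('Method','Bag' with tree learners) and Parallel Computing Toolbox is available:

```matlab
% Sketch: resume training in parallel, reproducibly.
% Assumes cvens was created with 'Method','Bag' and that
% Parallel Computing Toolbox is installed.
s = RandStream('mlfg6331_64');  % stream type that supports substreams
opts = statset('UseParallel',true,'UseSubstreams',true,'Streams',s);
cvens = resume(cvens,50,'Options',opts);
```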

Output Arguments


ens1

The cross-validated regression ensemble ens, augmented with additional training.


Examples

Examine the cross-validation error after training a regression ensemble for more cycles.

Load the carsmall data set and select displacement, horsepower, and vehicle weight as predictors.

load carsmall
X = [Displacement Horsepower Weight];

Train a regression ensemble for 50 cycles.

ens = fitrensemble(X,MPG,'NumLearningCycles',50); 

Cross-validate the ensemble and examine the cross-validation error.

rng(10,'twister') % For reproducibility
cvens = crossval(ens);
L = kfoldLoss(cvens)
L = 27.9435

Train for 50 more cycles and examine the new cross-validation error.

cvens = resume(cvens,50);
L = kfoldLoss(cvens)
L = 28.7114

The additional training did not improve the cross-validation error.

Extended Capabilities