Gradient descent with adaptive learning rate backpropagation
net.trainFcn = 'traingda' sets the network trainFcn property.
traingda is a network training function that updates weight and bias
values according to gradient descent with adaptive learning rate.
Training occurs according to
traingda training parameters, shown here
with their default values:
net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.
net.trainParam.goal — Performance goal. The default value is 0.
net.trainParam.lr — Learning rate. The default value is 0.01.
net.trainParam.lr_inc — Ratio to increase learning rate. The default value is 1.05.
net.trainParam.lr_dec — Ratio to decrease learning rate. The default value is 0.7.
net.trainParam.max_fail — Maximum validation failures. The default value is 6.
net.trainParam.max_perf_inc — Maximum performance increase. The default value is 1.04.
net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-5.
net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.
net.trainParam.showCommandLine — Generate command-line output. The default value is false.
net.trainParam.showWindow — Show training GUI. The default value is true.
net.trainParam.time — Maximum time to train in seconds. The default value is inf.
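For example, you can tighten the stopping-related parameters before training. The following sketch uses illustrative values, not recommendations:

net = feedforwardnet(10);        % example network; the layer size is arbitrary
net.trainFcn = 'traingda';       % select adaptive learning rate training
net.trainParam.epochs = 500;     % stop after at most 500 epochs
net.trainParam.goal = 1e-4;      % stop once performance reaches 1e-4
net.trainParam.time = 60;        % stop after 60 seconds of training
net.trainParam.max_fail = 10;    % tolerate 10 validation failures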
trainedNet — Trained network
Trained network, returned as a network object.
tr — Training record
Training record (epoch and perf), returned as a structure whose fields depend on the network training function (net.NET.trainFcn). It can include fields such as:
Training, data division, and performance functions and parameters
Data division indices for training, validation, and test sets
Data division masks for training, validation, and test sets
Number of epochs (num_epochs) and the best epoch (best_epoch)
A list of training state names (states)
Fields for each state name recording its value throughout training
Performances of the best network (best_perf, best_vperf, best_tperf)
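As an illustration, after training you can inspect this record. The field names below (tr.states, tr.epoch, tr.perf, tr.best_epoch) are the ones traingda typically records, but check tr.states in your release to confirm:

[net,tr] = train(net,p,t);       % p and t as in the example further below
tr.states                        % names of the recorded training states
plot(tr.epoch,tr.perf)           % training performance at each epoch
xlabel('Epoch'); ylabel('Performance')
tr.best_epoch                    % epoch with the best recorded performance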
You can create a standard network that uses traingda with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with traingda:
Set net.trainFcn to 'traingda'. This sets net.trainParam to traingda's default parameters.
Set net.trainParam properties to desired values.
In either case, calling train with the resulting network trains the network with traingda. See help feedforwardnet and help cascadeforwardnet for examples. A short sketch of these two steps appears below.
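For example, the two preparation steps above might look like this for a network net (the parameter values are illustrative):

net.trainFcn = 'traingda';       % step 1: select traingda; this also resets
                                 % net.trainParam to traingda's defaults
net.trainParam.lr = 0.05;        % step 2: override any defaults you want
net.trainParam.lr_dec = 0.6;     % illustrative value, not a recommendation
[net,tr] = train(net,p,t);       % train with traingda on inputs p, targets t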
Gradient Descent with Adaptive Learning Rate Backpropagation
With standard steepest descent, the learning rate is held constant throughout training. The performance of the algorithm is very sensitive to the proper setting of the learning rate. If the learning rate is set too high, the algorithm can oscillate and become unstable. If the learning rate is too small, the algorithm takes too long to converge. It is not practical to determine the optimal setting for the learning rate before training, and, in fact, the optimal learning rate changes during the training process, as the algorithm moves across the performance surface.
You can improve the performance of the steepest descent algorithm if you allow the learning rate to change during the training process. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. The learning rate is made responsive to the complexity of the local error surface.
An adaptive learning rate requires some changes in the training procedure used by
traingd. First, the initial network output and error are calculated. At
each epoch, new weights and biases are calculated using the current learning rate. New
outputs and errors are then calculated.
As with momentum, if the new error exceeds the old error by more than a predefined ratio, max_perf_inc (typically 1.04), the new weights and biases are discarded. In addition, the learning rate is decreased (typically by multiplying by lr_dec = 0.7). Otherwise, the new weights, etc., are kept. If the new error is less than the old error, the learning rate is increased (typically by multiplying by lr_inc = 1.05).
This procedure increases the learning rate, but only to the extent that the network can learn without large error increases. Thus, a near-optimal learning rate is obtained for the local terrain. When a larger learning rate could result in stable learning, the learning rate is increased. When the learning rate is too high to guarantee a decrease in error, it is decreased until stable learning resumes.
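The following MATLAB sketch illustrates this adaptation rule on a toy linear model. It is a simplified illustration of the logic only, not the actual traingda implementation; the model, the sum-squared-error measure, and the constants are stand-ins:

% Simplified sketch of the adaptive learning rate rule (illustration only)
lr = 0.01;  lr_inc = 1.05;  lr_dec = 0.7;  max_perf_inc = 1.04;
W = [0 0];  b = 0;                      % toy linear model
p = [-1 -1 2 2; 0 5 0 5];  t = [-1 -1 1 1];
perf = sum((t - (W*p + b)).^2);         % initial sum-squared error
for epoch = 1:100
    e  = t - (W*p + b);                 % errors with current weights
    gW = -2*e*p';  gb = -2*sum(e);      % gradient of SSE w.r.t. W and b
    W_new = W - lr*gW;  b_new = b - lr*gb;
    perf_new = sum((t - (W_new*p + b_new)).^2);
    if perf_new > max_perf_inc*perf     % error grew too much:
        lr = lr*lr_dec;                 % shrink the rate, discard the step
    else
        if perf_new < perf              % error decreased:
            lr = lr*lr_inc;             % grow the rate
        end
        W = W_new;  b = b_new;          % keep the step
        perf = perf_new;
    end
end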
Backpropagation training with an adaptive learning rate is implemented with the function traingda, which is called just like traingd, except for the additional training parameters max_perf_inc, lr_dec, and lr_inc. Here is how it is called to train the previous two-layer network:
p = [-1 -1 2 2; 0 5 0 5];           % input vectors
t = [-1 -1 1 1];                    % target vectors
net = feedforwardnet(3,'traingda'); % two-layer network, 3 hidden neurons
net.trainParam.lr = 0.05;           % initial learning rate
net.trainParam.lr_inc = 1.05;       % ratio to increase the learning rate
net = train(net,p,t);               % train with traingda
y = net(p)                          % simulate the trained network
traingda can train any network as long as its weight, net input, and
transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted
according to gradient descent:
dX = lr*dperf/dX
At each epoch, if performance decreases toward the goal, then the learning rate is
increased by the factor
lr_inc. If performance increases by more than the factor max_perf_inc, the learning rate is adjusted by the factor lr_dec and the change that increased the performance is not made.
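If you capture the training record when training ([net,tr] = train(net,p,t)), and assuming your release records the learning rate as tr.lr (check tr.states), you can plot its trajectory to watch this adaptation:

plot(tr.epoch,tr.lr)             % adapted learning rate at each epoch
xlabel('Epoch'); ylabel('Learning rate')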
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
Introduced before R2006a