Configure Shallow Neural Network Inputs and Outputs
This topic is part of the design workflow described in Workflow for Neural Network Design.
After a neural network has been created, it must be configured. The configuration step consists of examining input and target data, setting the network's input and output sizes to match the data, and choosing settings for processing inputs and outputs that will enable best network performance. The configuration step is normally done automatically when the training function is called. However, it can be done manually by using the configure function. For example, to configure the network you created previously to approximate a sine function, issue the following commands:
p = -2:.1:2;
t = sin(pi*p/2);
net1 = configure(net,p,t);
You have provided the network with an example set of inputs and targets (desired network outputs). With this information, the configure function can set the network input and output sizes to match the data.
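If you want to confirm what configure inferred from the sample data, you can inspect the input and output sizes directly. The sketch below assumes net was created earlier in the workflow (for example, with feedforwardnet(10)), so that layer 2 is the output layer:

net1.inputs{1}.size     % number of elements in the input (1 for this data)
net1.outputs{2}.size    % number of elements in the output (1, because t is a scalar)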
After the configuration, if you look again at the layer weight going to layer 2 from layer 1, you can see that its dimension is 1-by-10. This is because the target for this network is a scalar.
net1.layerWeights{2,1}

    Neural Network Weight

          delays: 0
         initFcn: (none)
      initConfig: .inputSize
           learn: true
        learnFcn: 'learngdm'
      learnParam: .lr, .mc
            size: [1 10]
       weightFcn: 'dotprod'
     weightParam: (none)
        userdata: (your custom info)
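The description above shows the weight's properties; the weight values themselves are stored in the network's LW property. A quick check, using the same net1, confirms the 1-by-10 size:

net1.LW{2,1}          % layer weight matrix going to layer 2 from layer 1
size(net1.LW{2,1})    % returns [1 10]: one output element, ten hidden neurons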
In addition to setting the appropriate dimensions for the weights, the configuration step also defines the settings for the processing of inputs and outputs. The input processing can be located in the inputs subobject:
net1.inputs{1}

    Neural Network Input

              name: 'Input'
    feedbackOutput: []
       processFcns: {'removeconstantrows', 'mapminmax'}
     processParams: {1x2 cell array of 2 params}
   processSettings: {1x2 cell array of 2 settings}
    processedRange: [1x2 double]
     processedSize: 1
             range: [1x2 double]
              size: 1
          userdata: (your custom info)
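Each field of this subobject can also be accessed individually. For example, with the default processing functions and the sample data above, the raw input range is [-2 2] while the processed range is [-1 1]:

net1.inputs{1}.processFcns      % processing functions applied to the input
net1.inputs{1}.range            % range of the raw sample data, [-2 2] here
net1.inputs{1}.processedRange   % range after processing, [-1 1] with the defaults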
Before the input is applied to the network, it will be processed by two functions: removeconstantrows and mapminmax. These functions are discussed fully in Multilayer Shallow Neural Networks and Backpropagation Training, so we won't address the particulars here. The processing functions may have processing parameters, which are contained in the subobject net1.inputs{1}.processParams. These have default values that you can override.

The processing functions can also have configuration settings that depend on the sample data. These are contained in net1.inputs{1}.processSettings and are set during the configuration process. For example, the mapminmax processing function normalizes the data so that all inputs fall in the range [−1, 1]. Its configuration settings include the minimum and maximum values in the sample data, which it needs to perform the correct normalization. This is discussed in much more depth in Multilayer Shallow Neural Networks and Backpropagation Training.
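To see what the configuration step stored, you can look at the mapminmax entries in processParams and processSettings, or call mapminmax directly on the sample inputs. The index 2 below assumes mapminmax is the second processing function, as in the listing above; xmin, xmax, ymin, and ymax are the standard mapminmax parameter and setting names:

net1.inputs{1}.processParams{2}     % default mapminmax parameters (ymin, ymax)
net1.inputs{1}.processSettings{2}   % settings computed from the data (xmin, xmax, ...)

[pn,ps] = mapminmax(p);   % pn is the normalized data, ps the settings
ps.xmin                   % minimum of the sample data, -2 here
ps.xmax                   % maximum of the sample data, 2 here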
As a general rule, we use the term “parameter,” as in process parameters, training parameters, etc., to denote constants that have default values that are assigned by the software when the network is created (and which you can override). We use the term “configuration setting,” as in process configuration setting, to denote constants that are assigned by the software from an analysis of sample data. These settings do not have default values, and should not generally be overridden.
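As a concrete illustration of the distinction, a process parameter such as mapminmax's output range can be overridden before configuration, whereas the data-derived settings are recomputed by configure and are best left alone. The parameter names (ymin, ymax) and the index of mapminmax in the sketch below follow the defaults shown earlier and are assumptions for this example:

params = net.inputs{1}.processParams;   % get the current processing parameters
params{2}.ymin = 0;                     % map inputs to [0,1] instead of [-1,1]
params{2}.ymax = 1;
net.inputs{1}.processParams = params;   % override the parameter defaults
net1 = configure(net,p,t);              % settings (data min/max) are recomputed here
net1.inputs{1}.processedRange           % now [0 1]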
For more information, see also Understanding Shallow Network Data Structures.