I am working on my thesis, in which I have two objectives:
1. Create a neural network and make predictions;
2. Control that network (impose a constant output and find out which input values make that possible).
To that end I created a NARX network. In open loop the MSE is low, but in closed loop it is around 0.05. I would like to know whether it is possible to reduce the closed-loop error further, because for what I want it is not good enough.
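One idea I had, but have not actually verified, is to keep training the network after closing the loop, so that the weights are adjusted for the case where the network feeds back its own predictions instead of the measured targets. This is only a rough sketch of what I mean (netc, X and T are the same variables that appear in my code further down):
% Sketch only: continue training the already-trained network with the loop closed
[xc,xic,aic,tc] = preparets(netc,X,{},T); % prepare the data for the closed-loop net
[netc,trc] = train(netc,xc,tc,xic,aic);   % retrain with the feedback loop closed
yc2 = netc(xc,xic,aic);
closedLoopPerformance2 = perform(netc,tc,yc2)
I do not know if this is the right approach, so any comment on it is welcome.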
The second question: if I give the network one input value and make a prediction, and then replace that input with a different value, the output is always the same. In other words, no matter how I change the input, the prediction does not change. I am doing this to find the input values that bring the output closest to the reference value I want, but I cannot work out why the output never changes.
The code I used was this:
% Convert the data to the cell-array time-series format used by the toolbox
X = tonndata(input,true,false);
T = tonndata(target,true,false);
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation.
inputDelays = 1:4;    % tapped delays on the external input
feedbackDelays = 1:4; % tapped delays on the fed-back output
hiddenLayerSize = 25; % neurons in the hidden layer
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize,'open',trainFcn);
% Prepare the Data for Training and Simulation
[x,xi,ai,t] = preparets(net,X,{},T);
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,x,t,xi,ai);
% Test the Network
y = net(x,xi,ai);
e = gsubtract(t,y);
performance = perform(net,t,y)
% Closed Loop Network
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
[xc,xic,aic,tc] = preparets(netc,X,{},T);
yc = netc(xc,xic,aic);
closedLoopPerformance = perform(netc,tc,yc)
% New input data (entrada = input, saida = target)
entrada = tonndata(input3,true,false);
saida = tonndata(target3,true,false);
% Prediction
% Note: no initial delay states (Xi,Ai) are passed here, so they default to zero
next_value = sim(netc, entrada);
entrada = entrada(1:numel(next_value)); % trim to the prediction length for plotting
% Plot results
plot(1:numel(entrada), cell2mat(saida), 'b', 1:numel(entrada), cell2mat(next_value), 'r')
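Regarding the second question, my guess (which I have not been able to confirm) is that it has to do with the delay states: since I do not pass any initial states (Xi, Ai) to sim, the four input and feedback delays start at zero no matter what new data I give it, and if I understand the delays correctly the network only ever sees delayed values, not the current input itself. So maybe I need to prepare the initial states from the new series first, something like this (same entrada, saida and netc as above):
% Sketch only: build the initial delay states from the new data before simulating
[xn,xin,ain,tn] = preparets(netc,entrada,{},saida); % the first samples become the delay states
yn = netc(xn,xin,ain);                              % closed-loop prediction for the new data
newPerformance = perform(netc,tn,yn)
plot(cell2mat(tn),'b'); hold on; plot(cell2mat(yn),'r'); hold off
I am not sure whether this is the actual cause of the constant output or whether something else is wrong.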
I appreciate any help.