![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514579/image.png)
Tune PI Controller Using Reinforcement Learning
嘻嘻
on 18 Oct 2023
Answered: Emmanouil Tzorakoleftherakis
on 23 Oct 2023
How is the initial value of the weights of this neural network determined? If I want to change my PI controller to a PID controller, do I just add another weight to the line initialGain = single([1e-3 2])?
This code is from the demo "Tune PI Controller Using Reinforcement Learning."
% Initial values of the layer weights (the PI gains before training)
initialGain = single([1e-3 2]);
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(initialGain,'ActOutLyr')   % custom layer provided with the example
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
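For context, in that example the two weights of fullyConnectedPILayer act as the PI gains applied to the observation (the error and its integral), so initialGain is simply a starting guess for [Ki Kp] that training then adjusts. A minimal illustration of that interpretation, with made-up error values (not code from the example; check fullyConnectedPILayer.m for the exact forward pass):
% Hypothetical illustration of how the two weights act as PI gains on the
% observation [integral of error; error]; all numeric values below are made up.
Ki = 1e-3;                       % first entry of initialGain
Kp = 2;                          % second entry of initialGain
e = 0.5;                         % current tracking error (made-up value)
intE = 1.2;                      % integral of the error (made-up value)
u = abs([Ki Kp]) * [intE; e];    % control action, analogous to the layer output
% (abs() mirrors how the example's layer keeps the gains positive)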
Can my network be changed to look like the following?
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(randi([-60,60],1,3), 'Action')
    ];
Accepted Answer
Emmanouil Tzorakoleftherakis
on 23 Oct 2023
I also replied to the other thread. The fullyConnectedPILayer is a custom layer provided in the example; you can open it and see how it's implemented. So you can certainly add a third weight for the D term, but you will most likely run into other issues (e.g., how to approximate the error derivative).
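For illustration only, here is a rough sketch of what the PID variant could look like, assuming the observation is extended to three elements (integral of error, error, and an estimate of the error derivative) and that the example's custom layer accepts a three-element gain vector. The third gain value, the sample time, and the finite-difference derivative approximation are all assumptions, not part of the shipped example:
% Hypothetical PID-style actor; assumes the environment now supplies a
% three-element observation [integral of error; error; error derivative].
numObs = 3;
obsInfo = rlNumericSpec([numObs 1]);
actInfo = rlNumericSpec([1 1]);

initialGain = single([1e-3 2 0.1]);   % [Ki Kp Kd]; the Kd guess is made up

actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(initialGain,'ActOutLyr')   % custom layer from the example
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);

% One simple way to approximate the error derivative in the environment or
% Simulink model: a finite difference over the sample time, optionally
% low-pass filtered to limit noise amplification.
Ts = 0.1;                     % sample time (assumption)
eNow = 0.4; ePrev = 0.5;      % made-up consecutive error samples
dedt = (eNow - ePrev)/Ts;     % derivative estimate fed as the third observation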