![photo](/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/17352228_1579772908417_DEF.jpg)
PB75
Followers: 0 Following: 0
PhD Researcher in IC control and simulation
Statistics
RANK
21,730
of 292,677
REPUTATION
2
CONTRIBUTIONS
24 Questions
8 Answers
ANSWER ACCEPTANCE
33.33%
VOTES RECEIVED
1
RANK
of 19,931
REPUTATION
N/A
AVERAGE RATING
0.00
CONTRIBUTIONS
0 Files
DOWNLOADS
0
ALL TIME DOWNLOADS
0
RANK
of 147,821
CONTRIBUTIONS
0 Problems
0 Solutions
SCORE
0
NUMBER OF BADGES
0
CONTRIBUTIONS
0 Posts
CONTRIBUTIONS
0 Public Channels
AVERAGE RATING
CONTRIBUTIONS
0 Highlights
AVERAGE NUMBER OF LIKES
Feeds
Reinforcement Learning Toolbox Error in R2022a
Hi, my RL code was originally written in R2021a, but it flags an error when I run it in R2022a; any help would be great. ...
about 1 year ago | 0
Question
Calculating Work done by a Closed Cylinder in Simulink and Simscape
Hi All, I am attempting to calculate the work done by a cylinder in an internal combustion engine over a single cycle (V1->...
more than 1 year ago | 1 answer | 0
1
answer
Simscape Pneumatic Chamber with Control Signal for Initial Pressure
Hi All, can anyone help with how to modify the pneumatic network to allow control of the initial pressure in the network, rather...
more than 1 year ago | 0
Simulink Logic - Capturing a Signal Value at a Specific Point for Control
Hi Fangjun, As in the following? Using a DataStoreMemory block in the triggered subsystem?
more than 1 year ago | 0
| Accepted
Question
Simulink Logic - Capturing a Signal Value at a Specific Point for Control
Hi, I am creating a controls model in Simulink and need to capture a simulink signal at a specific point in the simulation, and...
more than 1 year ago | 2 answers | 0
2
answers
Question
Simscape Pneumatic Chamber with Control Signal for Initial Pressure
Hi, I am simulating a pneumatic chamber using a single Translational Mechanical Converter. I am happy with the behaviour of the cham...
more than 1 year ago | 1 answer | 0
1
answer
Question
Deep Network Designer - LSTM Training Data Post Processing and Creating a Datastore
Hi, I have been using the shipping example LSTM ROM as the base for my code to create a ROM using an LSTM. Now that I have the...
almost 2 years ago | 1 answer | 0
1
answer
Question
Deep Learning Toolbox - Normalising a Cell prior to LSTM Training
Hi All, I am attempting to normalise my training data to improve the performance of the LSTM network. The data is collected in...
almost 2 years ago | 1 answer | 0
1
answer
Question
Deep learning Toolbox - LSTM Training
Hi All, I am building an LSTM ROM to integrate into my Simscape model, which is using training and test data captured in ANSYS ...
almost 2 years ago | 1 answer | 0
1
answer
Question
Signal Analyser - Time Values Query for Re-Sampling
Hi, I am attempting to re-sample a signal (and an array of signals if possible) in the Signal Analyser app. The captured data is i...
almost 2 years ago | 1 answer | 0
1
answer
Question
Resampling a Data Array for LSTM Training
Hi, I am preparing data for training an LSTM network with a sequenceInputLayer. The data is from ANSYS which uses a variable st...
almost 2 years ago | 1 answer | 0
1
answer
Question
Deep Learning Toolbox - Structuring the Training Data from Imported Data
Hi, I am attempting to create a ROM of a gas exchange process by training a LSTM network. I am using the ROM example LSTM ROM a...
almost 2 years ago | 1 answer | 0
1
answer
Reinforcement Learning with Parallel Computing Query
Hi Joss, I have done as you recommended; it seems the issue may still be there when running the .m script too, as alongside the error ...
almost 2 years ago | 0
Reinforcement Learning with Parallel Computing Query
Hi Joss, thanks for taking the time to answer my question. I have un-checked the idle timeout in preferences; however, I enco...
almost 2 years ago | 0
Question
Reinforcement Learning with Parallel Computing Query
Hi All, I am attempting to get parallel computing enabled when I train my RL agent in R2022a. Forgive the basic question regard...
almost 2 years ago | 2 answers | 0
2
answers
Question
Reinforcement Learning Episode Manager not stopping training in R2022a
Hi, I have updated my install from R2021a to R2022a. Using the RL toolbox when running the episode manager with the following c...
almost 2 years ago | 1 answer | 0
0
answers
Reinforcement Learning Toolbox Error in R2022a
Hi, can I get some help on the issue in this post? I cannot run in R2022a my RL code that was created and runs in R2021a. The ...
about 2 years ago | 0
Question
Reinforcement Learning Toolbox Error in R2022a
Hi, I have been using the RL toolbox within R2021a, using a TD3 agent, with a fully connect network (NON LSTM) to control a PMS...
about 2 years ago | 3 answers | 0
3
answers
Question
Simulated Signals in Parameter Estimator App for PMLSM Model Validation
Hi, I have a quick question: I have created a PMLSM model to validate experimental data. I am currently attempting to estimate some of t...
almost 3 years ago | 1 answer | 0
0
answers
Question
SimScape Pneumatics 3 way Connector Query
Hi All, I am building a Simscape model to create a digital twin of an experiment. It is a dual-acting pneumatic cylinder attach...
almost 3 years ago | 1 answer | 0
0
answers
Question
Reinforcement Learning with Parallel Computing
Hi All, I have been training a TD3 RNN agent on my local PC for months now; due to the long training period caused by the perform...
almost 3 years ago | 1 answer | 0
1
answer
Question
Deep Reinforcement Learning Toolbox with LSTM RNN
Hi, currently I am using an LSTM RNN network in my TD3 agent model. I am using an LSTM network architecture based on the DDPG TD...
about 3 years ago | 1 answer | 0
0
answers
Linear PMSM Motor and Generator Model in Simulink and Simscape
Hi Hassan, the PMLSM Simscape code permits bi-directional operation, so you can motor the machine by applying a voltage to t...
about 3 years ago | 1
Question
Compatible GPU's for RL Toolbox
Hi Everyone, I am researching upgrading my faculty PC to permit GPU parallel computing to speed up my Reinforcement Learning tr...
about 3 years ago | 1 answer | 0
1
answer
Question
Saving Trained RL Agent after Training
Hi All, I trained an RL agent and the environment output was acceptable; my plan was to initially validate the agent in the simulat...
about 3 years ago | 1 answer | 0
1
answer
Question
Deep Reinforcement Learning Reward Function for Reference Tracking
Hi All, I would like some advice on training an RL agent reward function for good reference tracking. My environment is a PMLSM, ...
about 3 years ago | 1 answer | 0
0
answers
Question
Visualisation of Simulink and Simscape Model Workflows
Hi All, I would like to add a visualisation to my Simulink-Simscape model during simulations, to aid debugging and for my upcom...
more than 3 years ago | 1 answer | 0
0
answers
Optimisation Tool GA Custom Plot
Hi Alan, thanks for your reply. The error "Too many output arguments" was due to me putting the "aplotchange" function in the outpu...
more than 3 years ago | 0
Question
Optimisation Tool GA Custom Plot
Hi All, I have searched through the previous questions posted on custom plots and output functions in the optimisation toolbox usin...
more than 3 years ago | 2 answers | 0
2
answers
Question
Plotting SimScape simlog Simulation data via Script
Hi, I am using Simscape for my pneumatic actuator model, logging data via simlog and then looking to export and plot the simulation...
almost 4 years ago | 1 answer | 0