ali farid
Followers: 0 Following: 0
Statistics
12 Questions
0 Answers

MATLAB Answers
RANK: 223,448 of 295,448
REPUTATION: 0
CONTRIBUTIONS: 12 Questions, 0 Answers
ANSWER ACCEPTANCE: 25.0%
VOTES RECEIVED: 1

File Exchange
RANK: of 20,227
REPUTATION: N/A
AVERAGE RATING: 0.00
CONTRIBUTIONS: 0 Files
DOWNLOADS: 0
ALL-TIME DOWNLOADS: 0

Cody
RANK: of 153,872
CONTRIBUTIONS: 0 Problems, 0 Solutions
SCORE: 0
NUMBER OF BADGES: 0

CONTRIBUTIONS: 0 Posts

CONTRIBUTIONS: 0 Public Channels
AVERAGE RATING

CONTRIBUTIONS: 0 Highlights
AVERAGE NUMBER OF LIKES
Feeds
Question
Problem with single agent Simulink using RL toolbox
I am using RL toolbox to train a single agent with the following specifications: for type=1 % obsMat = [1 1];...
4 months ago | 1 answer | 0
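For reference, a minimal single-agent Simulink training setup with Reinforcement Learning Toolbox looks roughly like the sketch below; the model name, block path, and observation/action sizes are placeholders, not the ones from this question.

% Sketch of a single-agent Simulink training setup (all names and sizes hypothetical).
mdl = "rlSingleAgentModel";                 % placeholder Simulink model
agentBlk = mdl + "/RL Agent";               % placeholder RL Agent block path

obsInfo = rlNumericSpec([4 1]);             % 4-element observation vector
actInfo = rlFiniteSetSpec([1 2 3 4]);       % 4 discrete actions

env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
agent = rlPPOAgent(obsInfo, actInfo);       % default PPO agent built from the specs

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes",500, ...
    "MaxStepsPerEpisode",200, ...
    "StopTrainingCriteria","AverageReward", ...
    "StopTrainingValue",100);
trainingStats = train(agent, env, trainOpts);
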
Question
How to setup a multi-agent DDPG
Hi, I am trying to simulate a number of agents that are collaboratively doing mapping. I designed the actor-critic networks, but I...
4 months ago | 1 answer | 0
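A rough sketch of a two-agent DDPG setup in Simulink follows; the model name, block paths, and signal sizes are placeholders, and newer releases may expect rlMultiAgentTrainingOptions instead of plain rlTrainingOptions.

% Sketch of a two-agent DDPG setup (model, block paths, and sizes are hypothetical).
mdl = "rlMappingModel";
agentBlks = mdl + ["/Agent A", "/Agent B"];

obsInfoA = rlNumericSpec([6 1]);   actInfoA = rlNumericSpec([2 1]);
obsInfoB = rlNumericSpec([6 1]);   actInfoB = rlNumericSpec([2 1]);

env = rlSimulinkEnv(mdl, agentBlks, {obsInfoA, obsInfoB}, {actInfoA, actInfoB});

agentA = rlDDPGAgent(obsInfoA, actInfoA);   % default DDPG agents built from the specs
agentB = rlDDPGAgent(obsInfoB, actInfoB);

trainOpts = rlTrainingOptions("MaxEpisodes",1000, "MaxStepsPerEpisode",300);
stats = train([agentA, agentB], env, trainOpts);   % one agent per block, same order as agentBlks
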
Question
Reinforcement Learning: competitive or collaborative options in MARL Matlab
Hello, I am trying to set up three explorer agents to explore the unknown area in collaborative or competitive manners. I am won...
5 months ago | 1 answer | 0
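Collaborative versus competitive behaviour is mostly decided by the reward signals wired to each RL Agent block (a shared team reward versus opposing or zero-sum rewards). If the installed release provides rlMultiAgentTrainingOptions, agents can additionally be grouped for training; a hedged sketch, assuming three already-created agents and an existing multi-agent environment:

% Sketch only: agent1..agent3 and env are assumed to come from an existing setup;
% rlMultiAgentTrainingOptions may not exist in older toolbox releases.
trainOpts = rlMultiAgentTrainingOptions( ...
    "AgentGroups",{[1 2 3]}, ...              % collaborative: all explorers in one group
    "LearningStrategy","decentralized", ...
    "MaxEpisodes",2000, ...
    "MaxStepsPerEpisode",300);
% A competitive variant keeps the agents in separate groups, e.g. "AgentGroups",{1,2,3},
% and gives them opposing rewards in the Simulink model.
% trainingStats = train([agent1, agent2, agent3], env, trainOpts);
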
Question
Problem with bus input of RL agent
I used a block diagram of an RL agent in Simulink that was used in a Matlab example, but I modified the inputs of the RL agent and I...
9 months ago | 1 answer | 0
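When the observation port of the RL Agent block carries a bus, the corresponding specification has to be built with bus2RLSpec rather than a plain rlNumericSpec. A minimal sketch, assuming a bus object named "obsBus" already exists in the base workspace and a hypothetical model/block path:

% Sketch: build observation specs from an existing bus object (names hypothetical).
mdl = "rlBusModel";
agentBlk = mdl + "/RL Agent";

obsInfo = bus2RLSpec("obsBus");        % one spec per element of the bus "obsBus"
actInfo = rlNumericSpec([2 1]);        % placeholder continuous action

env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
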
Question
Cannot propagate non-bus signal to block because the block has a bus object specified.
I have a Simulink model whose observation was only an image, and I added two other vectors to the observation in the RL Toolbox. Since...
9 months ago | 1 answer | 0
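This error typically means the block's observation port now expects a bus object while the wired signal is not a bus. One way to line the two up is to define a bus object whose elements match what is fed into a Bus Creator in the model; a sketch with hypothetical names and dimensions:

% Sketch: define a bus object with an image element plus two vector elements,
% so the Bus Creator output and the bus object handed to bus2RLSpec agree.
elems(1) = Simulink.BusElement;
elems(1).Name = "image";
elems(1).Dimensions = [50 50 1];

elems(2) = Simulink.BusElement;
elems(2).Name = "position";
elems(2).Dimensions = [2 1];

elems(3) = Simulink.BusElement;
elems(3).Name = "heading";
elems(3).Dimensions = [1 1];

obsBus = Simulink.Bus;
obsBus.Elements = elems;
assignin("base", "obsBus", obsBus);    % the bus object must be visible to the model

obsInfo = bus2RLSpec("obsBus");        % specs that match the bus, element by element
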
Question
Observation specification must be scalar if not created by bus2RLSpec.
I am using an RL system that was initially designed for one type of observation, which is an image. Recently I added two scalar observ...
10 months ago | 1 answer | 1
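The message points at the Simulink-specific rule that multi-channel observations for the RL Agent block have to come from bus2RLSpec. If a bus is not wanted, another option is to flatten everything into a single vector signal so that one rlNumericSpec is enough; a sketch with placeholder sizes:

% Workaround sketch: reshape the image and append the two scalars in Simulink
% (Reshape + Vector Concatenate blocks), then describe the result with one spec.
imgSize = [50 50];                                 % hypothetical image size
obsInfo = rlNumericSpec([prod(imgSize) + 2, 1]);   % flattened image + 2 scalars
actInfo = rlNumericSpec([2 1]);                    % placeholder action

mdl = "rlFlatObsModel";                            % hypothetical model
env = rlSimulinkEnv(mdl, mdl + "/RL Agent", obsInfo, actInfo);

Whether that trade-off is acceptable depends on whether the network still needs the image in 2-D form; the bus2RLSpec route keeps the channels separate.
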
Question
A problem with RL toolbox: wrong size of inputs of actor network.
I have a problem with getSize, which shows a wrong size: my input is a scalar with size [1 1], but getSize returns 2. I am usi...
10 months ago | 1 answer | 0
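Without the full code it is hard to say where the 2 comes from; one quick sanity check is to inspect the specification objects directly, since each network input has to match the number of elements of its observation channel. A sketch, assuming rlNumericSpec observations:

% Sketch: check how many elements each observation channel really has.
obsInfo = rlNumericSpec([1 1]);            % a scalar observation channel
disp(obsInfo.Dimension)                    % prints [1 1]
disp(prod(obsInfo.Dimension))              % 1 element feeds the input layer

% With several channels, check each one; a "2" may be the number of channels
% rather than the number of elements in one channel.
allObs = [rlNumericSpec([1 1]), rlNumericSpec([1 1])];
disp(numel(allObs))                               % 2 channels
disp(arrayfun(@(s) prod(s.Dimension), allObs))    % elements per channel: [1 1]
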
Question
Reinforcement Learning Error with two scalar inputs
I have a strange error from a critic network that has 3 inputs: an image and two scalars. But I see the following error: Error ...
10 months ago | 1 answer | 0
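Errors like this are often about the mapping between network input layers and observation channels. Below is a sketch of a three-input value-function critic (image plus two scalars) where that mapping is made explicit; the image size, layer sizes, and names are placeholders:

% Sketch of a critic with three observation inputs (all sizes and names hypothetical).
obsInfo = [rlNumericSpec([50 50 1]), rlNumericSpec([1 1]), rlNumericSpec([1 1])];

imgPath = [
    imageInputLayer([50 50 1], "Name","image", "Normalization","none")
    convolution2dLayer(8, 16, "Stride",4, "Name","conv")
    reluLayer("Name","relu_img")
    fullyConnectedLayer(64, "Name","fc_img")];

common = [
    concatenationLayer(1, 3, "Name","concat")   % image features + 2 scalars
    fullyConnectedLayer(64, "Name","fc1")
    reluLayer("Name","relu1")
    fullyConnectedLayer(1, "Name","value")];

lg = layerGraph(imgPath);
lg = addLayers(lg, featureInputLayer(1, "Name","scalar1"));
lg = addLayers(lg, featureInputLayer(1, "Name","scalar2"));
lg = addLayers(lg, common);
lg = connectLayers(lg, "fc_img",  "concat/in1");
lg = connectLayers(lg, "scalar1", "concat/in2");
lg = connectLayers(lg, "scalar2", "concat/in3");

% Map each network input name to the matching observation channel, in order.
critic = rlValueFunction(dlnetwork(lg), obsInfo, ...
    "ObservationInputNames", ["image","scalar1","scalar2"]);
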
Question
Add scalar inputs to the actor network
I have a CNN-based PPO actor-critic, and it is working fine, but now I am trying to add three scalar values to the actor network...
10 months ago | 1 answer | 0
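One way to add scalar inputs is to graft a featureInputLayer branch onto the actor's layer graph and concatenate it with the CNN features before the output layers. The sketch below builds a small stand-in CNN trunk first; all sizes, layer names, and the action set are hypothetical placeholders for the real network:

% Sketch: graft a 3-element scalar branch onto a small CNN actor.
obsInfo = [rlNumericSpec([50 50 1]), rlNumericSpec([3 1])];   % image + 3 scalars
actInfo = rlFiniteSetSpec(1:5);                               % placeholder discrete action set
numAct  = numel(actInfo.Elements);

% Stand-in for the existing CNN trunk of the actor.
cnnTrunk = [
    imageInputLayer([50 50 1], "Name","image", "Normalization","none")
    convolution2dLayer(8, 16, "Stride",4, "Name","conv")
    reluLayer("Name","relu_img")
    fullyConnectedLayer(64, "Name","fc_cnn")
    fullyConnectedLayer(numAct, "Name","action")];
lg = layerGraph(cnnTrunk);

% Add the scalar branch and re-route it through a concatenation layer.
lg = addLayers(lg, featureInputLayer(3, "Name","extra"));
lg = addLayers(lg, concatenationLayer(1, 2, "Name","concat"));
lg = disconnectLayers(lg, "fc_cnn", "action");
lg = connectLayers(lg, "fc_cnn", "concat/in1");
lg = connectLayers(lg, "extra",  "concat/in2");
lg = connectLayers(lg, "concat", "action");

% Map network input names to the observation channels, in the same order.
actor = rlDiscreteCategoricalActor(dlnetwork(lg), obsInfo, actInfo, ...
    "ObservationInputNames", ["image","extra"]);
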
Question
Design an actor critic network for non-image inputs
I have a robot with 3 inputs: the wind, the current location, and the current action. I use these three inputs to predict the...
11 months ago | 1 answer | 0
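For purely vector-valued inputs, the actor and critic can be built from featureInputLayer paths. The sketch below assumes a 2-element observation (wind and location) and a 1-element continuous action in a DDPG-style setup; these sizes are placeholders for the real signals:

% Sketch of a vector-input actor/critic pair (sizes hypothetical).
obsInfo = rlNumericSpec([2 1]);       % e.g. wind and current location
actInfo = rlNumericSpec([1 1]);       % 1-D continuous action

% Actor: observation -> action
actorNet = dlnetwork([
    featureInputLayer(2, "Name","obs")
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1, "Name","action")]);
actor = rlContinuousDeterministicActor(actorNet, obsInfo, actInfo);

% Critic: (observation, action) -> Q-value
obsPath = [featureInputLayer(2, "Name","obs") fullyConnectedLayer(32, "Name","fc_obs")];
actPath = [featureInputLayer(1, "Name","act") fullyConnectedLayer(32, "Name","fc_act")];
common  = [
    concatenationLayer(1, 2, "Name","concat")
    reluLayer("Name","relu")
    fullyConnectedLayer(1, "Name","qvalue")];

lg = layerGraph();
lg = addLayers(lg, obsPath);
lg = addLayers(lg, actPath);
lg = addLayers(lg, common);
lg = connectLayers(lg, "fc_obs", "concat/in1");
lg = connectLayers(lg, "fc_act", "concat/in2");
critic = rlQValueFunction(dlnetwork(lg), obsInfo, actInfo, ...
    "ObservationInputNames","obs", "ActionInputNames","act");

agent = rlDDPGAgent(actor, critic);
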
Question
I see a zero mean reward for the first agent in multi-agent RL Toolbox
Hello, I have extended the Matlab PPO coverage path planning example to 5 agents. I can see now that always, I...
about a year ago | 1 answer | 0
Question
Replace RL type (PPO with DDPG) in a Matlab example
There is a Matlab example about coverage path planning using PPO reinforcement learning in the following link: https://www.math...
more than a year ago | 1 answer | 0
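Swapping PPO for DDPG is mostly a matter of creating a different agent object from the same specifications, with the caveat that DDPG only supports continuous actions, so a discrete action specification used with PPO would first have to be redefined as an rlNumericSpec. A hedged sketch with placeholder sizes; the environment itself would come from the linked example:

% Sketch: create a default DDPG agent in place of the PPO agent.
obsInfo = rlNumericSpec([8 1]);                                     % placeholder observation
actInfo = rlNumericSpec([2 1], "LowerLimit",-1, "UpperLimit",1);    % placeholder continuous action

agent = rlDDPGAgent(obsInfo, actInfo);        % default actor/critic built from the specs
agent.AgentOptions.SampleTime = 1;            % placeholder sample time
agent.AgentOptions.DiscountFactor = 0.99;
agent.AgentOptions.MiniBatchSize = 64;

% "env" would be the environment created in the example's setup code.
trainOpts = rlTrainingOptions("MaxEpisodes",1000, "MaxStepsPerEpisode",250);
% trainingStats = train(agent, env, trainOpts);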