5G Handover with Reinforcement Learning: mismatch of input channels and observations in the reinforcement learning representation

Hello,
This is my final year project, and I had zero coding experience with RL before this.
I am trying to create a custom RL environment in MATLAB. In this environment, I have defined my observation space as rlNumericSpec([numUE*2 1]) because I have numUE user equipments, each with 2 coordinates (x, y). My actions are to perform or not perform the handover.
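Roughly, this is how I defined the specifications (numUE is set earlier in my script):
numUE = 5;  % for example
% Each UE contributes its (x, y) coordinates, stacked into one column
obsInfo = rlNumericSpec([numUE*2 1]);
% Handover decision: do it or not
actInfo = rlFiniteSetSpec({[0 1]});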
This is what I get when I run the code:
Error using rl.representation.rlAbstractRepresentation/validateModelInputDimension
Model input sizes must match the dimensions specified in the corresponding
observation and action info specifications.
Error in rl.representation.rlQValueRepresentation (line 47)
validateModelInputDimension(this)
Error in rlQValueRepresentation (line 130)
Rep = rl.representation.rlQValueRepresentation(Model, ...
Error in train2test (line 53)
critic = rlQValueRepresentation(criticNetwork,env.getObservationInfo(),env.getActionInfo(),'Observation',{'state'},'Action',{'action'},criticOpts);
  1 Comment
Lee Xing Wei on 12 Jul 2023
I have tried changing my observation many times and still get the same error. Actually, I want to use UEpositions, UEvelocity, and UEBSconnections as my observations.
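For example, if each UE contributed its position (x, y), its velocity (vx, vy), and the index of the base station it is connected to, I think I could stack everything into one column vector like this:
% 2 position + 2 velocity + 1 connection value per UE
obsInfo = rlNumericSpec([numUE*5 1]);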


Answers (1)

Emmanouil Tzorakoleftherakis
I suspect you did not set up your critic network properly. If you share that code snippet, we can take a closer look. An alternative would be to use the default agent feature and let the software create a critic for you automatically, based on the provided observation and action specifications. Here is an example that assumes you want to create a DQN agent:
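Something along these lines, assuming your environment variable is named env as in your error trace:
% Pull the specs from your custom environment
obsInfo = env.getObservationInfo();
actInfo = env.getActionInfo();
% Let the software build a default critic matching those specs
agent = rlDQNAgent(obsInfo, actInfo);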
  3 Comments
Emmanouil Tzorakoleftherakis
As I mentioned, you can create the agent using the default agent feature like this:
agent = rlDQNAgent(obsInfo,actInfo)
If you run this line, it won't throw any errors. You can then check what the neural network looks like by doing:
critic = getCritic(agent);
criticNet = getModel(critic);
plot(criticNet);
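If you would rather build the critic network yourself, each input layer must match the dimensions of the corresponding spec. A rough sketch of a single-output Q(s,a) critic like the one in your script (the layer sizes are just placeholders):
% Observation path: input size must equal prod(obsInfo.Dimension)
statePath = [
    featureInputLayer(numUE*2, 'Name', 'state')
    fullyConnectedLayer(24, 'Name', 'fcState')];
% Action path: one scalar handover decision per step
actionPath = [
    featureInputLayer(1, 'Name', 'action')
    fullyConnectedLayer(24, 'Name', 'fcAction')];
% Common path: merge both streams into a single Q(s,a) output
commonPath = [
    additionLayer(2, 'Name', 'add')
    reluLayer('Name', 'relu')
    fullyConnectedLayer(1, 'Name', 'QValue')];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork, 'fcState', 'add/in1');
criticNetwork = connectLayers(criticNetwork, 'fcAction', 'add/in2');
critic = rlQValueRepresentation(criticNetwork, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'action'});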
That said, take another look at how you defined your action space. If the only valid actions are 0 and 1, your action space is not defined correctly; it is currently a cell array containing the single element [0 1].
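For two discrete scalar actions, the spec should enumerate them individually:
% Incorrect: a set with one element that happens to be the vector [0 1]
actInfo = rlFiniteSetSpec({[0 1]});
% Correct: a set with two scalar elements, 0 (no handover) and 1 (handover)
actInfo = rlFiniteSetSpec([0 1]);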


Version

R2023a
