How to find the optimal path to catch a moving target using Q-learning

2 views (in the last 30 days)
Hello, I would like to find the optimal action for an agent when the target it wants to catch is moving, using Q-learning. The problem I am facing is that I only want to find the optimal next action, not the optimal path to a fixed location, because the target is moving.

I have already built a simpler scenario where the agent knows the target's full path: for each move of the target it computes the optimal path and chooses the one with the fewest steps. But that is unrealistic. In reality, the agent will only know the target's next move and tries to get as close as it can at each step. The agent must be trained to predict the target's next move so that its own next action brings it closer. The actions I use are up, down, left, right.

If you have any idea how to make this possible, please let me know.
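The greedy baseline described above (move to whichever neighboring cell is closest to the target) can be sketched as follows; this is an illustrative Python sketch, not code from the thread, and the grid size and function names are assumptions:

```python
# Hypothetical sketch: a grid world where the agent moves one cell per turn
# and greedily picks the action that most reduces its distance to the target.
ACTIONS = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

def manhattan(a, b):
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_step(agent, target, grid_size=10):
    """Return the agent's position after taking the distance-minimizing action."""
    best, best_d = agent, manhattan(agent, target)
    for dr, dc in ACTIONS.values():
        nxt = (agent[0] + dr, agent[1] + dc)
        if 0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size:
            d = manhattan(nxt, target)
            if d < best_d:
                best, best_d = nxt, d
    return best
```

As the question notes, this greedy rule only looks one step ahead; it does not anticipate where the target will be next.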
3 comments
Aristi Christoforou on 6 Jul 2022
@Sam Chak I am using a Q-value matrix, which holds a value per state-action pair. I update it with the reward of each action and the temporal difference at every iteration:
temporal_difference = rewardofaction + (gamma*maxofq) - old_q_value;
new_q_value = old_q_value + (alpha * temporal_difference);
q_values(ocs,action_index) = new_q_value;
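For illustration, the same tabular update written as a runnable Python sketch (the table size and the learning-rate/discount values here are placeholders, not taken from the original script):

```python
import numpy as np

def q_update(q_values, state, action, reward, next_state, alpha=0.9, gamma=0.9):
    """Standard tabular Q-learning update, mirroring the MATLAB lines above."""
    old_q = q_values[state, action]
    # temporal difference: reward + discounted best future value - old estimate
    td = reward + gamma * q_values[next_state].max() - old_q
    q_values[state, action] = old_q + alpha * td
    return q_values[state, action]

q = np.zeros((4, 4))          # hypothetical 4-state, 4-action table
q_update(q, 0, 1, 1.0, 2)     # reward 1 for action 1 taken in state 0
```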
Aristi Christoforou on 6 Jul 2022
I thought I could make the agent try each of the 4 possible actions in each state, measure the distance between the resulting state and the target's current state, take the action with the minimum distance, and put it in the Q-value table.
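That action-scoring idea could be sketched like this (a hedged illustration, not the original code; the action ordering and helper name are assumptions):

```python
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def best_action(agent, target):
    """Index of the action whose resulting cell is closest to the target."""
    dists = []
    for dr, dc in ACTIONS:
        nxt = (agent[0] + dr, agent[1] + dc)
        dists.append(abs(nxt[0] - target[0]) + abs(nxt[1] - target[1]))
    return dists.index(min(dists))
```

The resulting distance (or its negative) could then serve as the reward when filling the Q-table, though it only reflects the target's current position, not its next move.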


Answers (3)

Yatharth on 6 Jul 2022
You can try treating regions, rather than single cells, as states: one state for the region your target is currently in, and other states for the regions where it isn't. Say your target is at [4,5]; you can assign the region
{ [3,4] , [3,5] , [3,6]
  [4,4] , [4,5] , [4,6]
  [5,4] , [5,5] , [5,6] }
as State = 0, and similarly other regions as different states, and train your agent to reach that particular region/state. Once you are in that region, you can take a greedy approach to reach the target.
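The region-as-state idea could be sketched as a coarse tiling of the grid (an assumed 9x9 grid split into 3x3 blocks; a hypothetical illustration, not code from the answer):

```python
def region_state(row, col, block=3, blocks_per_row=3):
    """Map a grid cell to the index of the block x block region containing it.

    The region index, rather than the exact cell, becomes the Q-learning state,
    shrinking the state space and tolerating small moves of the target.
    """
    return (row // block) * blocks_per_row + (col // block)
```

With this tiling, the target at [4,5] and every cell of the surrounding 3x3 block map to the same state, so the agent is trained to reach the block rather than the exact (moving) cell.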
1 comment
Aristi Christoforou on 7 Jul 2022
@Yatharth, that's a good idea, but I want the agent and the target to move at the same speed: when the target moves one step, the agent can move only one step. This sometimes will not work if the agent and the target are far apart, because the agent won't be able to catch the target before the target reaches its goal. That's why I need to somehow train the agent to find the optimal path and predict the target's next step in order to catch it. And if the agent is too far from the target, a new iteration should start in which the agent's starting position (which is chosen randomly) is re-chosen closer to the target this time. But I don't know yet how to code this, or how to predict the target's next action in order to get closer to it.
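One common way to make Q-learning cope with a moving target (a suggestion, not something from the thread) is to define the state as the target's position *relative* to the agent. The learned table then applies anywhere on the grid, and both agent and target can move one step per turn. A minimal sketch, assuming a 5x5 grid and a randomly moving target as a stand-in for the target's real policy:

```python
import random
import numpy as np

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
N = 5  # grid size (assumption)

def state_index(agent, target):
    """Encode the relative offset (target - agent) as a single table index."""
    dr = target[0] - agent[0] + N - 1   # shift offsets into [0, 2N-2]
    dc = target[1] - agent[1] + N - 1
    return dr * (2 * N - 1) + dc

q = np.zeros(((2 * N - 1) ** 2, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = random.Random(0)

def clamp(pos):
    """Keep a position inside the grid."""
    return (min(max(pos[0], 0), N - 1), min(max(pos[1], 0), N - 1))

for episode in range(2000):
    agent = (rng.randrange(N), rng.randrange(N))
    target = (rng.randrange(N), rng.randrange(N))
    for _ in range(30):
        if agent == target:
            break
        s = state_index(agent, target)
        # epsilon-greedy action selection
        a = rng.randrange(4) if rng.random() < eps else int(q[s].argmax())
        agent = clamp((agent[0] + ACTIONS[a][0], agent[1] + ACTIONS[a][1]))
        # target takes a random one-step move (placeholder for its real path)
        t = ACTIONS[rng.randrange(4)]
        target = clamp((target[0] + t[0], target[1] + t[1]))
        reward = 10.0 if agent == target else -1.0
        s2 = state_index(agent, target)
        q[s, a] += alpha * (reward + gamma * q[s2].max() - q[s, a])
```

Because the state is the relative offset, "predicting" the target reduces to learning, from experience, which action works best for each offset under the target's movement pattern.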



Sabiya Hussain on 29 Aug 2022
Hello there, I need help regarding a Q-learning program which is an example of a Markov decision process for a recycling robot.
