The task is to develop a robot soccer strategy algorithm based on reinforcement learning. The strategy autonomously instructs a team of robots playing a soccer game. The individual members do not have any personal intelligence; the global strategy assigns their roles and actions. These actions may lead the team to victory, depending on how successful the strategy is.
The input is the position and orientation of the own and opponent players, and the position and velocity of the ball. The output is the movement of every own player for every timestamp. There are two types of possible movements: turning around the player's middle axis with a given angular velocity in a given direction, or moving forward or backward with a given velocity. This output consists of quite low-level signals; the tactical level is responsible for transforming the strategy's output (such as kicking the ball to position [x, y] with velocity v, or dribbling the ball towards [x, y]) into that low-level output. This tactical level has been partially implemented in previous semesters' project laboratory; the task is to complete it with the other high-level functionality that the strategy needs and to integrate them together.
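The interface described above can be sketched as follows. This is only an illustration in Python (the actual framework is Matlab), and all type and function names here are hypothetical; `face_then_approach` stands in for one possible tactical primitive that converts a target position into the two allowed low-level movements (turn in place, then drive straight).

```python
# Illustrative sketch; all names are hypothetical, not the project's actual API.
from dataclasses import dataclass
from typing import List
import math

@dataclass
class PlayerState:
    x: float          # position
    y: float
    heading: float    # orientation [rad]

@dataclass
class BallState:
    x: float
    y: float
    vx: float         # velocity components
    vy: float

@dataclass
class GameState:
    """Strategy input: own and opponent players, plus the ball."""
    own: List[PlayerState]
    opponent: List[PlayerState]
    ball: BallState

@dataclass
class Command:
    """Low-level output per player: either turn in place or drive straight."""
    angular_velocity: float  # [rad/s]; nonzero only when turning
    velocity: float          # [m/s], + forward / - backward; nonzero only when driving

def face_then_approach(player: PlayerState, tx: float, ty: float,
                       w_max: float = 2.0, v_max: float = 1.0,
                       angle_tol: float = 0.1) -> Command:
    """Hypothetical tactical primitive: turn towards (tx, ty), then drive forward."""
    desired = math.atan2(ty - player.y, tx - player.x)
    # Heading error wrapped to [-pi, pi]:
    err = math.atan2(math.sin(desired - player.heading),
                     math.cos(desired - player.heading))
    if abs(err) > angle_tol:
        return Command(angular_velocity=math.copysign(w_max, err), velocity=0.0)
    return Command(angular_velocity=0.0, velocity=v_max)
```

A tactical layer like this would be called once per timestamp for every own player, so the strategy itself only has to decide high-level targets such as "kick the ball to [x, y]".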
This work is part of a bigger project, with more people involved, to create the physical realisation of robot soccer for the department. The strategy is currently developed in a Matlab test framework, in simulation.
When the strategy is done, it needs to be compared in a contest with other people's strategies to determine its success rate, possible defects, limitations, and potential directions for further development. A short enquiry into the possibility of adapting the strategy to the physical realisation is also part of the task.