Control of an AR Drone 2.0 with Neural Network in ROS Environment

Supervisor:
Csorvási Gábor
Department of Automation and Applied Informatics

Quadcopters are among the most widely used miniature unmanned aerial vehicles, owing to their great maneuverability and simple construction.

The goal of this work is to implement position control for a quadcopter. The control is realised with a neural network trained using reinforcement learning. Reinforcement learning is a form of machine learning in which the machine learns to solve a problem through interaction with a real or simulated environment.
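As an illustration only (not the thesis code), the reinforcement-learning interaction loop with a Gym-style environment looks roughly like the sketch below; the environment name is a placeholder, and a trained neural-network policy would replace the random action.

# Minimal sketch of the RL interaction loop using the classic Gym API.
import gym

env = gym.make("CartPole-v1")           # placeholder environment, for illustration only
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # a trained policy would choose the action here
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()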

The learning was carried out in a simulation that I programmed based on the dynamic model of the quadcopter and tested in the ROS (Robot Operating System) environment. I used OpenAI Baselines' TRPO (Trust Region Policy Optimization) and PPO (Proximal Policy Optimization) algorithms.
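A rough sketch of how such a simulation could be exposed as a Gym-style environment for Baselines-style TRPO/PPO trainers is given below. The class name QuadEnv, the state layout, and the reward shaping are assumptions for illustration; the dynamic model itself is left as a placeholder rather than the thesis' actual implementation.

import numpy as np
import gym
from gym import spaces

class QuadEnv(gym.Env):
    """Hypothetical sketch of a quadcopter position-control environment."""

    def __init__(self, goal=np.zeros(3)):
        self.goal = goal
        # Observation: 3D position and velocity; action: four normalised commands.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.state = np.zeros(6, dtype=np.float32)

    def reset(self):
        # Random starting state, as described in the abstract.
        self.state = np.random.uniform(-1.0, 1.0, size=6).astype(np.float32)
        return self.state

    def step(self, action):
        self.state = self._dynamics(self.state, action)
        dist = np.linalg.norm(self.state[:3] - self.goal)
        reward = -dist                    # closer to the goal point is better
        done = dist < 0.05
        return self.state, reward, done, {}

    def _dynamics(self, state, action):
        # The thesis uses the AR Drone 2.0 dynamic model here; omitted in this sketch.
        return state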

The learned controller is a fusion of a PD controller and a neural network, and it is able to reach a set goal point from a random starting state.
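One possible form of such a fusion is sketched below, assuming the network output is simply added to the PD command; the exact fusion used in the thesis may differ, and all names here are hypothetical.

import numpy as np

def fused_control(pos, vel, goal, kp, kd, policy):
    # PD term on the position error, damped by velocity.
    error = goal - pos
    u_pd = kp * error - kd * vel
    # Learned correction from the neural-network policy.
    u_nn = policy(np.concatenate([error, vel]))
    return u_pd + u_nn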

The performance of the system was tested in simulation, where the neural-network-based controller supplemented with a PD controller reached the goal significantly faster than the PD controller alone.
