Camera-based autonomous control of drones

Supervisor:
Budai Ádám
Department of Automation and Applied Informatics

With the development and commercial utilization of Unmanned Aerial Vehicles – also known as drones – an exciting and challenging area of research has opened up for engineers. Several tasks that used to require human labour can now be carried out with these devices, such as detecting forest fires [1], responding to medical emergencies [2], or protecting wildlife [3], to name just a few. Their two most important traits are their ability to make exceptionally high-quality recordings and to reach otherwise inaccessible locations. However, they still need significant human assistance to record in high quality, especially when a fleet of several drones is involved (although remarkable breakthroughs have been made in this area, some of them by Hungarian researchers [4]).

In my paper, I seek to use algorithms based on reinforcement learning to substitute – or at least minimize – the human factor and eventually develop drones capable of autonomous control and task execution. To this end, I am designing a lightweight convolutional neural network that can navigate the device based solely on the images provided by the drone's camera. One of the most important aspects of my research is reducing the size of the network and minimizing the execution time (the period between receiving an image and executing the corresponding command), so that the technology can be used on a real device in a real environment. At the current stage of my research, the limits and learning speed of the network are being tested with well-known algorithms in a virtual environment I developed, while I also search for the optimal network structure.
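A lightweight policy network of the kind described above can be sketched as follows. This is only an illustrative example, not the thesis architecture: the PyTorch framework, the layer sizes, the 84×84 input resolution, and the three-action output are all my assumptions. Small strided convolutions followed by global pooling keep the parameter count – and therefore the execution time between image and command – low.

```python
import torch
import torch.nn as nn

class LightweightPolicyNet(nn.Module):
    """Hypothetical sketch: maps a downscaled camera frame to scores
    over a small set of discrete navigation steps."""

    def __init__(self, n_actions: int = 3):
        super().__init__()
        # Few channels and strided convolutions keep the model small and fast.
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling: output size is input-independent
        )
        # Illustrative action set: rotate left / hold / rotate right.
        self.head = nn.Linear(16, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = LightweightPolicyNet()
scores = model(torch.zeros(1, 3, 84, 84))  # one dummy 84x84 RGB frame
print(scores.shape)                        # torch.Size([1, 3])
```

The global average pooling layer also lets the same weights accept camera frames of different resolutions, which can be convenient when moving from the simulator to real hardware.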

During the learning period, the algorithm continuously receives images of a human shape, and its task is to take small steps – each a few degrees wide – in the virtual space to navigate itself into the desired position and then take photos from the expected angle, in our case from the front. Fifteen different human shapes are used during training.
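The task above can be illustrated with a toy environment. Everything here is an assumption for illustration – the class name, the 5-degree step size, the distance-based reward, and the greedy stand-in policy are mine, not the thesis implementation – but it shows the structure of the loop: rotate in small angular increments until the frontal (0-degree) viewpoint is reached.

```python
class OrbitEnv:
    """Toy sketch of the learning task: the agent orbits a subject in small
    angular steps and must reach the frontal (0 degree) viewpoint."""

    def __init__(self, start_deg: float, step_deg: float = 5.0):
        self.angle = start_deg % 360.0  # current viewing angle around the subject
        self.step = step_deg            # size of one rotation step, in degrees

    def _error(self) -> float:
        # Shortest angular distance to the frontal viewpoint.
        return min(self.angle, 360.0 - self.angle)

    def step_action(self, action: int):
        # action: 0 = rotate clockwise, 1 = rotate counter-clockwise
        delta = -self.step if action == 0 else self.step
        self.angle = (self.angle + delta) % 360.0
        err = self._error()
        reward = -err / 180.0   # reward grows as the agent nears the goal
        done = err < self.step  # close enough to photograph from the front
        return self.angle, reward, done

env = OrbitEnv(start_deg=90.0)
done, steps = False, 0
while not done:
    # Greedy stand-in for the learned policy: always rotate toward 0 degrees.
    action = 0 if env.angle < 180.0 else 1
    _, _, done = env.step_action(action)
    steps += 1
print(steps)  # 18 five-degree steps from 90 degrees to the front
```

In the actual research, the action would of course come from the network given the camera image rather than from this hand-written rule.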

In the first chapter of my paper, I introduce the most important concepts of reinforcement learning and the operation of convolutional neural networks. This is followed by a description of the known solutions used for navigating drones, including their advantages and disadvantages. Finally, I present my lightweight convolutional network design together with the learning algorithms, the findings of the research, and possibilities for further development.

[1] Wolfgang Krüll, Robert Tobera, Ingolf Willms, Helmut Essen, and Nora von Wahl. Early forest fire detection and verification using optical smoke, gas and microwave sensors. Procedia Engineering, 45:584–594, 2012.

[2] A. Claesson, D. Fredman, L. Svensson, M. Ringh, J. Hollenberg, P. Nordberg, M. Rosenqvist, T. Djarv, S. Österberg, J. Lennartsson, et al. Unmanned aerial vehicles (drones) in out-of-hospital cardiac arrest. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, 24:124, 2016.

[3] Astrid Gynnild. The robot eye witness: Extending visual journalism through drone surveillance. Digital Journalism, 2:334–343, 2014.

[4] Gábor Vásárhelyi, Csaba Virágh, Gergő Somorjai, Tamás Nepusz, Ágoston E. Eiben, and Tamás Vicsek. Optimized flocking of autonomous drones in confined environments. Science Robotics, 3(20), 2018.
