Nowadays, a global trend of automation is taking place in almost every area of our lives. Although many processes that once required human intervention have been successfully automated, in many fields the complete replacement of human intelligence and ingenuity remains an unsolved problem. The large number of complex subtasks involved calls for simplified system definitions and has led to the realization of special-purpose target systems.
Although the field of Computer Vision – a typical instance of this sort of development – possesses an extensive literature, the published algorithms provide solutions only for simplified problems such as object tracking, pattern matching, and object recognition. None of them can handle complex tasks on its own, especially when specific requirements arise from the nature of the application at hand. Hence, research engineers need to adapt and optimize these algorithms for each particular system.
The aim of this Thesis is to discuss a particular component of such a target system, developed at MTA SZTAKI as part of the UAV Project. The Sense and Avoid (SAA) system provides a solution for safe maneuvering during autonomous flight by avoiding other aerial objects. I was involved in the design and realization of a compact, high-performance vision system, which is a key part of the entire SAA system.
The first chapter presents the hardware-level programming of the MT9V034C/M CMOS cameras over the I2C bus, and the implementation of data acquisition both with the Mobisense Systems MBS270 embedded vision computer – used only for test purposes – and with the Expartan6t FPGA card itself, serving as the onboard image processing hardware.
In the second part of my study, the higher-level image processing algorithms I developed are discussed, including an adaptive thresholding method and an image segmentation algorithm based on optical flow. These components are responsible for detecting a potentially dangerous object, whether its image appears against the sky or against the ground in the picture.
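To illustrate the kind of detection step involved, the following is a minimal sketch of adaptive (local-mean) thresholding in plain Python. The window radius and offset are illustrative assumptions only, not the parameters or the exact method used in the actual SAA system.

```python
# Sketch of adaptive (local-mean) thresholding on a grayscale image
# stored as a list of lists. A pixel is marked as foreground (1) when
# it exceeds the mean of its local window by more than `offset`.
# `radius` and `offset` are illustrative assumptions.

def adaptive_threshold(img, radius=1, offset=0):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            # Accumulate the local window, clipped at the image borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            mean = total / count
            out[y][x] = 1 if img[y][x] > mean + offset else 0
    return out

# Tiny example frame: a bright patch on a darker background.
frame = [
    [10, 10, 10, 10],
    [10, 90, 90, 10],
    [10, 90, 90, 10],
    [10, 10, 10, 10],
]
mask = adaptive_threshold(frame, radius=1, offset=5)
```

Because the threshold is computed per pixel from its neighborhood rather than globally, such a scheme can separate a small object from a sky or ground background whose overall brightness varies across the frame.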
Finally, an image stabilization method is described, which results from the fusion of the visual sensors with the onboard Inertial Measurement Unit (IMU).