When a mobile robot has to navigate autonomously within an unknown environment without collisions, it is crucial that it possesses some kind of information about its surroundings: the position of, and the distance to, any obstacle within the whole environment the robot has to navigate in, or within a specific range of it. Suitable processing of a pair of images taken at the same time from different viewpoints can provide a stereoscopic depth field, which can be used to solve the robot navigation problem.
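The depth recovery mentioned above rests on stereo triangulation: for two parallel cameras, the distance to a point follows from the horizontal disparity between its matched pixels in the two images. A minimal sketch of this principle, with illustrative (not measured) focal length and baseline values:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point from its stereo disparity: Z = f * B / d.

    focal_px     -- camera focal length expressed in pixels
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal pixel offset between the matched image points
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px


# Hypothetical numbers for illustration only: a 700-pixel focal length
# and a 10 cm baseline place a point with 35 px disparity 2 m away.
print(depth_from_disparity(700.0, 0.10, 35.0))  # → 2.0
```

Note how depth resolution degrades with distance: the same one-pixel disparity change spans a much larger depth interval for far obstacles than for near ones, which is why the baseline between the two sensor boards matters for the usable range.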
The purpose of the thesis is to design and implement an instrument that is capable of controlling two camera sensors and transferring the simultaneously acquired images, taken from two different viewpoints, to a personal computer for further processing, visualization, and storage.
The architecture of the system has to support high flexibility and extensibility, together with mobility: it has to be equipped with high-speed wireless communication, battery operation, and a compact physical layout.
The work associated with the thesis comprises establishing the system plan, designing and implementing the hardware, and developing the software components for both the embedded and the desktop environments.
The hardware design consists of three major components: two sensor boards containing the camera sensors and the optics, and a main unit that holds an ATmega32 microcontroller, a Spartan-3 FPGA, and a Hi-Speed dual-channel USB interface chip. During the design process, selecting suitable components, designing the complex communication subsystem, and resolving signal integrity problems received the most attention. The development of the software components of the design is still in an initial phase.