# Dense stereo disparity computation with local and global methods

The geometrical reconstruction of a three-dimensional scene can be done using several sensing devices: acoustic sensors, laser sensors, or cameras. This document focuses on perception with cameras. The most critical step of the reconstruction process is matching between different views, i.e. finding the projection of a given spatial point in both images as accurately as possible. One major difficulty of this process is that the projections in the two images can be very different due to large differences in camera positions. The computational complexity of a pixel-by-pixel matching is huge, but it can be reduced by using several constraints.

The shift of the matched points in a unified reference frame is called disparity. Disparities encode depth for any given pixel in one of the images (henceforth called the reference image). If the camera system is in a so-called standard configuration, there are only horizontal shifts between the matched points; thus, the vertical coordinates of corresponding points must be the same. Consequently, the search for correspondences is simplified to a search along scanlines. Fortunately, views in general configuration can be transformed into views in a standard configuration by warping the two images with a suitable transformation. This reduces the complexity of the matching algorithm.
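To make the scanline search concrete, the following is a minimal sketch of local block matching on a rectified (standard-configuration) pair. It is written in Python with NumPy for brevity (the implementations described in this report are in MATLAB); the function name, window size, and SAD cost are illustrative choices, not the report's exact method:

```python
import numpy as np

def block_match_disparity(left, right, max_disp, half_win=2):
    """For each pixel of the left (reference) image, search horizontally
    along the same scanline in the right image and keep the shift with
    the smallest sum of absolute differences (SAD) over a square window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1]
            best_cost, best_d = np.inf, 0
            # Standard configuration: corresponding points share the same
            # vertical coordinate, so only horizontal shifts d are tried.
            for d in range(min(max_disp, x - half_win) + 1):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1]
                cost = np.abs(ref.astype(np.float64) - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the left image is the right image shifted horizontally by a constant amount, the returned map recovers that shift at well-textured interior pixels; the quality of real results depends heavily on the window size and the cost function, which is exactly the trade-off local methods face.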

During the semester, my primary objective was to implement algorithms that compute the disparity for any pixel given in the reference image. In this study, I summarize the main problems and methods of stereo computer vision, the constraints applicable to the search for correspondences, and different matching algorithms and their properties. Some of these algorithms were chosen for implementation and were coded in MATLAB. The implemented algorithms were tested and evaluated on typical image pairs used in the technical literature.

Wherever possible, I also illustrate the principles and algorithms of computer vision with my own results from earlier semesters, for example corner detection, edge detection, and several implemented optical flow methods.