Many applications require the construction of depth images, for example the navigation of mobile robots or the reconstruction of three-dimensional models. To achieve good results, the depth image must be as accurate as possible, and straight lines in the scene have to appear as straight lines in the reconstructed image.
The Microsoft Kinect, in addition to other sensors such as a microphone array and an accelerometer, includes a depth sensor and a colour camera, making it useful for the above applications. The Kinect uses structured light to create a depth image, which requires much cheaper equipment than the previously used Time-of-Flight cameras. Compared to stereo-view depth reconstruction systems, the Kinect provides more information per frame, since stereo systems cannot match every point between the two views. As a result, the sensor itself is very cheap compared to earlier devices, which makes it accessible for hobby projects and home use.
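Both structured-light sensors such as the Kinect and stereo camera pairs ultimately recover depth by triangulation: a point's depth is inversely proportional to its disparity between the two views (or between the projected pattern and its observed position). The following minimal sketch illustrates this relation; the baseline and focal length values are made-up examples for illustration, not actual Kinect parameters.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth (metres) of a point whose projection shifts by
    disparity_px pixels between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Illustrative values only: 7.5 cm baseline, 580 px focal length.
for d in (58, 29, 14.5):
    print(f"disparity {d:>5} px -> depth "
          f"{depth_from_disparity(0.075, 580, d):.2f} m")
```

Note how halving the disparity doubles the computed depth, which also explains why depth resolution degrades with distance.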
My thesis describes the depth-sensing method used by the Kinect and other sensors, and presents a WPF-based framework that can capture the colour and depth data measured by the Kinect and provides methods for calibrating these images. I also develop a method of depth calibration that can be used without any special tools.