As an approach to mapping and sensing static workspaces, this thesis develops a robust method capable of reconstructing the volumetric space in real time. The application can determine the position and orientation of any 3D object captured by the camera, making it applicable to a wide range of situations and tasks.
The method operates with a moving depth camera that continuously scans the static workspace and its environment throughout the process. The camera's path does not need to be predefined, so the camera can also be moved freehand, which makes the application easy to integrate into various systems. In addition to the low integration cost, the low price of the depth camera makes this one of the cheapest volumetric reconstruction methods available.
During reconstruction, all data gathered from the environment are stored in a single point cloud, which forms a highly detailed fused 3D model based on the depth value of each pixel from every image. The point cloud becomes more precise over time as overlapping depth images are fused.
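The per-pixel step described above (turning depth values into 3D points that can be fused into the cloud) can be sketched as a standard pinhole back-projection. This is a minimal illustration, not the thesis implementation; the intrinsics FX, FY, CX, CY and the 640x480 resolution are assumed example values, not taken from the actual camera.

```python
import numpy as np

# Assumed pinhole intrinsics for illustration only (focal lengths and
# principal point in pixels); a real camera supplies its own calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (metres) into an N x 3 point cloud
    in the camera coordinate frame, dropping invalid (zero) depths."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Tiny synthetic depth image: a single valid pixel 2 m from the camera.
depth_img = np.zeros((480, 640))
depth_img[239, 319] = 2.0
pts = depth_to_points(depth_img)
```

Fusing consecutive frames then amounts to transforming each frame's points by the tracked camera pose before appending them to (or averaging them into) the shared cloud.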
The camera pose and orientation tracking algorithm plays the key role in the reconstruction, alongside the filtering of the point clouds and depth images. To broaden its applicability, the method performs the volumetric modelling of the point cloud in real time and makes it possible to render the result. This enables easy integration into various systems, for example onto a robotic arm as a peripheral sensor. Since it can provide feedback to the robot's motion controller, it makes real-time path planning possible.
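The core of pose tracking between frames is estimating the rigid transform that aligns the points of one frame with the next. A common building block, typically run inside an ICP loop after correspondences have been found, is the least-squares Kabsch/SVD solution sketched below; this is a generic illustration under that assumption, not the specific tracking algorithm developed in the thesis.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    R @ src[i] + t ~= dst[i] (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Demo: recover a known pose from four corresponding points.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
```

Accumulating these per-frame transforms yields the camera trajectory, which is what allows each new depth image to be fused into the shared point cloud in the correct place.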
This paper introduces the design and implementation of the volumetric reconstruction method, emphasizing and explaining the decisions made throughout the development. Evaluating the current state of the project against my earlier expectations and schedule, I can state that I have completed my tasks: the volumetric reconstruction is ready and runs in real time on the available hardware. In the future, I will continue working on upgrading and optimizing the camera tracking algorithm, and will try using OpenGL instead of CUDA for GPU programming.