Three-dimensional visualization plays an increasingly important role in a wide range of medical applications. With the help of a 3D model, it is easier and faster to gain a deep understanding of an object than to mentally assemble a virtual model of it from 2D images.
However, mapping images captured by a real camera onto an existing 3D model as a texture, or creating a 3D model from pictures taken by the camera, are not easy tasks. Solving them requires both image processing and image synthesis methods.
This thesis presents the basics of 3D rendering and the concept of texturing, using the “Semmelweis Scanner” device as a platform. It describes an almost fully automated way to generate textures from images for an existing 3D model, and compares several of these techniques. It further explores solutions that create not only the textures for an existing 3D model but also the model itself. Finally, it describes the detection of both easy-to-detect and hard-to-detect signature points on a hand, which are necessary for the aforementioned 3D reconstruction.