Virtual reality (VR) is a fascinating new multimedia platform that has recently become widely accessible thanks to the rapid development of VR devices. VR is well suited to capturing memories and locations, because these devices transport the viewer inside the recorded world, providing a highly immersive experience. There is also growing demand from users to create their own content, whether with the ease of smartphone photography or with the high-quality results of interchangeable-lens cameras. The most widespread format for VR images is the full spherical panorama, which can be captured with dedicated VR cameras or stitched together from multiple images taken with ordinary cameras. Other methods can produce even more spectacular results, in which the user can not only look around but also move within limits; in other words, the user has six degrees of freedom (6DOF) inside the virtual world.
In my thesis I present the development of a software system for stitching multiple wide-angle images, captured from similar viewpoints, into a single high-resolution full spherical panorama. I cover the complete process from shooting to playback, using state-of-the-art algorithms that effectively eliminate possible errors of the capture step, such as camera movement or lens distortion, and synthesize seamless transitions between images. The automatic 3D scene reconstruction performed during stitching also produces a point cloud; I use this point cloud together with the completed panorama in a custom application, so that interactive playback provides a 6DOF experience for the viewer in a VR headset.
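The full spherical panoramas mentioned above are conventionally stored in the equirectangular projection, where pixel columns correspond to longitude and pixel rows to latitude. As an illustration only (the thesis text here does not specify its exact conventions), the following minimal sketch shows the pixel-to-ray mapping that such stitching and playback applications rely on; the function name and axis orientation are my own assumptions:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit viewing
    direction. Assumes u grows rightward (longitude, -pi..pi) and
    v grows downward (latitude, +pi/2..-pi/2); conventions vary."""
    lon = (u / width - 0.5) * 2.0 * math.pi   # horizontal angle
    lat = (0.5 - v / height) * math.pi        # vertical angle
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Under these assumptions, the center pixel of the panorama maps to the forward direction (0, 0, 1), and the top row maps to straight up; a VR player evaluates this mapping (typically on the GPU) for every screen pixel to render the view for the headset's current orientation.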