This thesis covers the development of control software running in the embedded environment of a glasses-free stereoscopic display system. Since the system performs image processing and stereo visualization, both resource-intensive tasks, in an embedded environment, the capabilities of the hardware must be exploited as fully as possible. I designed the software architecture and the processing method with these requirements in mind.
The task of the software is to generate stereo image pairs from a 3D model and to adjust the displayed perspective to the user's position. The image from a camera mounted on a helmet can be used for this purpose.
The main processing hardware is an ARM-based application board with a native camera interface and HDMI output. I also designed an FPGA-based display system for stereoscopic visualization, which splits the output image and projects it onto the canvas.
My measurements focused on the runtime performance of the application's modules and helped to improve it. At the end of the paper, possibilities for further development of the system are presented.