A modern user interface should not only enable users to communicate with the computer; it should also provide control in a simple and ergonomic way. Input devices are currently largely limited to the keyboard and mouse, and since these are not suitable for every task, various research groups have begun developing alternative user interfaces. The goal of this thesis is to produce such an interface: a gesture-recognition-based navigation system capable of replacing the navigation functions of the traditional mouse.
After studying the relevant literature, I prepared a detailed system plan and then carried out the implementation. During my work I used the Microsoft Kinect camera, which captures a depth map of the 3D scene in real time. The program I wrote uses this depth image together with morphological image-processing algorithms to track the user’s hand; the tracking covers both position detection and gesture recognition. The pre-defined gestures include the open palm, the clenched fist, and the “number two” sign (thumb and index finger extended), which trigger cursor movement and the left and right clicks, respectively. An extra feature was also implemented that lets the user control a presentation simply by sweeping a hand through the air to navigate between the slides.
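The abstract does not give implementation details, but the core pipeline it describes can be illustrated with a minimal sketch: threshold the depth map to segment the near hand region, clean the binary mask with a morphological opening (erosion followed by dilation), and track the hand position as the mask’s centroid. The depth range, kernel size, and function names below are illustrative assumptions, implemented in plain NumPy rather than the libraries the thesis may actually have used:

```python
import numpy as np

def erode(mask, k=3):
    # Binary erosion: a pixel stays set only if its whole k x k
    # neighbourhood is set (removes speckle noise).
    p = k // 2
    padded = np.pad(mask, p, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    # Binary dilation: a pixel is set if any neighbour is set
    # (restores the shape that erosion shrank).
    p = k // 2
    padded = np.pad(mask, p, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def centroid(mask):
    # Hand position estimate: mean coordinate of the segmented pixels.
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Simulated depth frame (millimetres): far background, a hand-sized
# blob close to the camera, and one speckle-noise pixel.
depth = np.full((20, 20), 4000, dtype=np.uint16)
depth[5:16, 5:16] = 800      # hand blob
depth[1, 1] = 900            # isolated noise pixel

# Segment everything between 500 mm and 1500 mm, then open the mask.
raw = (depth >= 500) & (depth <= 1500)
opened = dilate(erode(raw))
cy, cx = centroid(opened)    # tracked hand position → (10.0, 10.0)
```

Gesture classification (palm vs. fist vs. two fingers) would then operate on the cleaned mask, for instance via contour area or convexity analysis, before mapping each gesture to its mouse action.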
After the development, various tests were carried out, which demonstrated the efficiency, simplicity and speed of the device. In many cases it proved more comfortable and more impressive than the conventional mouse. The measurements also show which kinds of use yield sufficiently good accuracy and where difficulties can arise.