Although Microsoft’s Kinect technology has only recently spread to the PC platform, more and more developers are creating new, more interactive applications. The reason is that the Kinect platform and the accompanying Kinect SDK offer many features for building applications with interactive, gesture-based, or speech-based control. These features can make controlling an application much easier and, in most cases, much simpler. Some of the platform’s capabilities include gesture detection and recognition, posture recognition, and speech recognition. All of these become available with the Kinect device alone; no supplementary device (such as special goggles or remote controllers) is needed.
My goal is to create a framework that processes the data coming from the Kinect (depth values, skeleton and joint positions) and is capable of recognizing simple and complex gestures (linear movements, circular movements, and any combination of them), different postures, and a set of vocal commands.
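To illustrate the kind of processing such a framework performs, the following is a minimal sketch (in Python, for brevity; the actual framework targets the Kinect SDK) of detecting a linear hand movement from a buffer of tracked joint positions. The function name, thresholds, and heuristic are my own illustrative assumptions, not part of the Kinect SDK: a movement counts as a linear gesture when the hand travels far enough and the path stays nearly straight.

```python
import math

def detect_linear_swipe(positions, min_distance=0.3, straightness=0.8):
    """Detect a roughly straight hand movement (e.g. a swipe) from a
    sequence of (x, y, z) joint positions sampled from skeleton frames.
    Units are meters, matching Kinect skeleton-space coordinates.
    The thresholds here are illustrative placeholders."""
    if len(positions) < 2:
        return False
    # Net displacement from the first sample to the last.
    net = math.dist(positions[0], positions[-1])
    # Total path length along all consecutive samples.
    path = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    # A linear gesture: long enough, and mostly straight (net ≈ path).
    return net >= min_distance and path > 0 and net / path >= straightness
```

Circular gestures would fail this straightness test (their net displacement is small relative to the path traveled), so they require a separate detector, for example one that fits a circle to the sampled points.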
A demonstration application is also part of my work: a WPF-based game that showcases all of the features described above.