In this work, I designed and implemented a gesture- and speech-controlled onboard system for cars. The primary design goal was that the system be usable without the driver having to look away from the road for long. For speech recognition, I integrated an existing solution and wrote my own grammars for it. For gesture recognition, I used dedicated hardware, the Leap Motion Controller; the recognition relies both on the device's built-in gestures and on gesture recognition code I wrote myself. I also created feedback mechanisms that let users know where they are in the interface without having to look at it.
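The custom recognition code itself is not shown here, but a hand-written gesture detector of the kind mentioned above might be sketched as follows. This is only an illustrative assumption, not the actual implementation: it detects a horizontal swipe from a series of timestamped palm x-positions, and all names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float  # timestamp in seconds
    x: float  # palm x-position in millimetres

def detect_swipe(samples, min_distance=80.0, max_duration=0.5):
    """Return 'left', 'right', or None for a horizontal swipe.

    A swipe is reported when the palm travels at least `min_distance`
    millimetres along x within `max_duration` seconds. The thresholds
    here are placeholders; a real system would tune them against the
    kind of user tests described in this work.
    """
    for i, start in enumerate(samples):
        for end in samples[i + 1:]:
            if end.t - start.t > max_duration:
                break  # too slow to count as a swipe from this start
            dx = end.x - start.x
            if abs(dx) >= min_distance:
                return "right" if dx > 0 else "left"
    return None

# Example: the palm moves 95 mm to the right within 0.2 s.
track = [Sample(0.0, 0.0), Sample(0.1, 40.0), Sample(0.2, 95.0)]
print(detect_swipe(track))
```

In a real onboard system, such a detector would run on the hand-position frames streamed by the Leap Motion SDK rather than on a prepared list.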
I developed the application using an iterative model. During development, I performed user tests to identify which areas needed the most attention, analyzed the results, and continuously fed the lessons learned back into the development process. To support the testing, I also built a simulator that recreates an environment similar to real-world driving, so that testers could give more meaningful feedback on the necessary next steps.