The subject of this paper is the development of an eye-tracking method that uses computer vision algorithms to locate the center of the user's eye. The goal is to determine the user's point of focus on the computer screen from the information this method provides.
The detection process begins with face detection and proceeds by locating the eye regions. Within the eye regions, the pupil's position is computed from image gradients. Based on the pupil's position, the point of gaze can then be mapped into the screen's coordinate system through a calibration process.
The methods used in the application were written in C++ using the open-source OpenCV library. In addition to locating the eyes, it is possible to distinguish their horizontal and vertical position and direction; this makes it possible to detect which part of the computer screen the user is looking at.
The precision and functionality of the method were tested with different image resolutions and environments (light source, background). The paper also contains performance measurements (memory and CPU usage), as well as a short summary of the tools and possibilities of image-based human-machine communication.