Today's modern world has a growing demand for automated real-time surveillance and tracking systems. These solutions allow us to replace security personnel with specialized, embedded image processing devices. To reach this goal, we need evaluation and decision techniques that match or exceed human visual perception. The field of computer vision, with methods based on mathematical algorithms, helps acquire the relevant visual information from the real world through cameras.
A good example would be a system deployed in a garden that tracks stray animals and chases them off with a small water cannon. The system's goal is to acquire the spatial position of the animals with cameras in real time. This paper describes the implementation of a simple image-processing prototype module for such a tracking system. Using the OpenCV computer vision library, I present the theory of stereo vision, background subtraction, and blob detection, which together can filter and localize moving objects. I implement a prototype based on these methods and test it on an ODROID single-board computer with a stereo camera built from two PlayStation Eye cameras, chosen from several hardware alternatives. I then examine the computational load of the algorithms and the usability of the prototype on a slower embedded device compared to a PC.
Finally, I propose several improvements for future work, through which more complex systems could be built on the prototype.