The rapid evolution of smartphones has made them both widespread and powerful, and this trend shows no sign of slowing.
As a result, they are becoming increasingly capable of executing tasks that previously only desktop computers could perform.
One possible use is to employ smartphones as intelligent cameras, which is made feasible by their computing power, advanced camera optics, mature operating systems, and the image processing algorithms they can run.
By connecting many such devices over a network and operating them in a coordinated manner, we can build a complex system capable of a wide range of computer vision functions.
One such function is using these multiple, coordinated cameras to continuously track the 3D position of certain predefined objects in real time.
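To make the multi-camera idea concrete, one common way to recover a 3D position from two cameras is mid-point triangulation: each camera contributes a viewing ray toward the detected marker, and the object is placed at the midpoint of the shortest segment between the two rays. The sketch below is an illustrative example, not the thesis implementation; the camera centers and ray directions are assumed to come from calibration and 2D marker detections.

```python
# Illustrative sketch: mid-point triangulation of an object's 3D position
# from two cameras. Camera centers and viewing-ray directions are assumed
# known (from calibration and per-frame marker detection); the values in
# the demo below are made up for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment between the rays
    P1 = c1 + t*d1 and P2 = c2 + s*d2 (closest-points method)."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:  # rays are (nearly) parallel
        raise ValueError("rays do not determine a unique position")
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [ci + t * di for ci, di in zip(c1, d1)]
    p2 = [ci + s * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

if __name__ == "__main__":
    # Two cameras whose rays both point at an object located at (1, 1, 0):
    pos = triangulate_midpoint([0, 0, 0], [1, 1, 0], [2, 0, 0], [-1, 1, 0])
    print(pos)  # -> [1.0, 1.0, 0.0]
```

In a real deployment each phone would report its 2D detection to the server, which performs this triangulation; the midpoint form also tolerates the small ray misalignment caused by detection noise.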
My goal was to develop an Android application capable of tracking the position of a predefined marker using the OpenCV computer vision library and sending the tracked data to a server over the network.
This application can serve as the foundation of the aforementioned complex computer vision system.
My tasks included measuring the capabilities of this application on mobile devices, focusing mainly on the timing and positioning precision achievable when tracking objects.
I also investigated different position estimation methods and how much they can improve the application's performance by reducing the region of interest for image processing without compromising the robustness of the system.
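One simple position estimation strategy of this kind is a constant-velocity predictor: extrapolate the marker's next image position from its last two detections and search only a small window around the prediction, falling back to the full frame if the marker is lost. The following sketch is a hypothetical illustration of that idea; the function names and the window size are my own assumptions, not taken from the thesis.

```python
# Hypothetical sketch: shrink the detector's region of interest (ROI) by
# predicting where the marker will appear next. Window size and coordinates
# are illustrative only.

def predict_next(prev, curr):
    """Constant-velocity prediction of the next (x, y) marker position."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def roi_around(point, half, width, height):
    """Axis-aligned ROI (x0, y0, x1, y1) around `point`, clamped to the frame."""
    x, y = point
    return (max(0, x - half), max(0, y - half),
            min(width, x + half), min(height, y + half))

if __name__ == "__main__":
    prev, curr = (100, 100), (110, 105)  # marker moved (+10, +5) per frame
    nxt = predict_next(prev, curr)       # predicted next position
    roi = roi_around(nxt, 40, 640, 480)  # search only this window next frame
    print(nxt, roi)  # -> (120, 110) (80, 70, 160, 150)
```

Restricting detection to such a window cuts per-frame processing roughly in proportion to the ROI's area, while the full-frame fallback preserves robustness when the prediction fails.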