In our rapidly developing world, the number of available smartphones grows every day. Abandoned devices still have a lot of unused resources, which can be utilized in many different ways. One possibility is creating a computer vision system in which the smartphones work as remote cameras and even as processing units.
My project was to create a system in which pre-calibrated cameras track table tennis balls, while previously uncalibrated cameras calibrate themselves based on the 3D coordinates of the balls and can thus join the system. First of all, each camera has to be able to detect the balls in its own image. Once this works well, the system has to match the detected balls across the different cameras, which can be difficult if the balls are all the same color. After this, the 3D coordinates can be calculated from the intersection of the rays starting from the focal points of the cameras. Thereafter, the system has to match the 3D-tracked balls with their projections on the uncalibrated camera's image. From this information, the camera can calibrate itself and calculate its position and orientation.

To measure the accuracy of the application, reference data is needed that can be compared with our output. We therefore set up a measurement in which two pre-calibrated cameras track the balls while a robot moves the uncalibrated camera along a specified route; this camera tries to calibrate itself using the information from the balls. The reference data can be obtained in different ways; we chose another calibration method to calculate it. In this method, the camera searches for a chessboard in its image and calibrates itself from the chessboard's known ideal shape and its projection.
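Because of detection noise, the rays from two cameras rarely intersect exactly. A common choice for the triangulation step described above (a sketch of one standard technique, not necessarily the exact method used in the project) is to take the midpoint of the shortest segment between the two rays; the length of that segment also serves as a score for matching same-colored balls across cameras, since rays belonging to the same ball should nearly intersect:

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Given two rays (origin o, direction d), return the midpoint of the
    shortest segment between them and the segment's length.

    The midpoint approximates the 3D ball position; the length is a
    matching score (small gap -> the two detections likely belong to
    the same ball). Raises for parallel rays (singular system)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Normal equations for minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    p1 = o1 + t1 * d1          # closest point on ray 1
    p2 = o2 + t2 * d2          # closest point on ray 2
    return (p1 + p2) / 2, np.linalg.norm(p1 - p2)
```

With more than two calibrated cameras, the same idea extends to a least-squares intersection of all matched rays, which further suppresses detection noise.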
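Once the 3D-tracked balls are matched with their projections, the uncalibrated camera's projection can be recovered from the 2D–3D correspondences. One textbook way to do this (an illustrative sketch under my own assumptions — the project may instead use, for example, OpenCV's `solvePnP`) is the Direct Linear Transform, which estimates the 3×4 projection matrix from six or more correspondences:

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate the 3x4 camera projection matrix P from >= 6
    3D-2D point correspondences via the Direct Linear Transform.

    Each correspondence contributes two linear equations in the 12
    entries of P; the solution is the null vector of the stacked
    system, found with an SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # Right singular vector of the smallest singular value minimizes |A p|
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return x[:2] / x[2]
```

The estimated matrix can then be decomposed (e.g. by an RQ decomposition of its left 3×3 block) into the intrinsic parameters and the camera's position and orientation, which is exactly the information the uncalibrated camera needs to join the system.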