Nowadays we, as a society, have grown so accustomed to modern technology that we can hardly take a step without our smartphones or other intelligent devices, let alone imagine a world in which they had never existed. We often forget what made us so willing to put more and more trust into the software that operates them, yet the answer is quite simple: this software has maintained an incredible degree of reliability over the years. How exactly is this possible? Through extensive software testing. With the help of testing, programmers can keep software working as intended and thus indirectly shape the users' relationship with the product. As technology advances and the demand for software reliability increases, the demand for software testing quality grows as well.
This finally brings us to code coverage, the subject of this paper. Code coverage is a software testing metric that quantifies the extent of unit testing. Coverage is usually measured automatically by specialized tools, and therein lies a problem: the coverage metrics, which in theory should define precisely how certain code structures are measured, are not yet standardized, so each tool is free to interpret them however it sees fit. This behaviour creates differences between measurements. The goal of this paper is to uncover as many of these differences between tools, and their possible causes, as humanly possible, and thereby reduce the uncertainty involved in measuring software testing quality.
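To illustrate how different interpretations of a coverage metric can yield different numbers for the very same test, consider the following minimal sketch. It instruments a trivial function by hand (the counters and percentages are illustrative, not the output of any real coverage tool) and computes a statement-style and a branch-style figure for one test case:

```python
# A hand-instrumented sketch (not a real coverage tool): the same
# single test case yields different numbers depending on whether we
# count executed statements or taken branch outcomes.

def classify(n, hits):
    hits.add("if_stmt")              # the `if` statement itself
    if n >= 0:
        hits.add("then_stmt")        # statement in the true branch
        result = "non-negative"
    else:
        hits.add("else_stmt")        # statement in the false branch
        result = "negative"
    hits.add("return_stmt")
    return result

hits = set()
classify(5, hits)  # one test case, exercising only the true branch

# Statement view: 3 of the 4 tracked statements were executed.
statements = {"if_stmt", "then_stmt", "else_stmt", "return_stmt"}
stmt_cov = len(hits & statements) / len(statements)

# Branch view: only 1 of the 2 possible branch outcomes was taken.
branches = {"then_stmt", "else_stmt"}
branch_cov = len(hits & branches) / len(branches)

print(f"statement coverage: {stmt_cov:.0%}")   # 75%
print(f"branch coverage:    {branch_cov:.0%}")  # 50%
```

A tool that reports "coverage" without specifying which of these views it implements (or how it handles edge cases such as partially executed lines) can therefore legitimately disagree with another tool on the same code and the same tests.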
The first part of the paper therefore familiarizes the reader with code coverage, its metrics, the process of its measurement, and related concepts, while the second part constructs a method for uncovering the aforementioned differences and the reasons behind them. Ideally, by the end of this paper the reader will have a picture of the three most popular tools currently used to measure coverage, the differences between their measurements, and the common causes of those differences.