The quality and reliability of today's increasingly complex software receive more and more attention during its production. To beat our competitors, we need to invest at least as much effort into comprehensive testing as into our software's precise design and implementation. During performance testing, our system is put under the amount of load we expect it to endure; we then analyze our measurements to see which components can be improved to reach optimal results.
Throughout software development, as the software's functionality is extended, some parts can show lower performance, which we might only notice when our customers are dissatisfied or our measurements indicate that our product no longer satisfies its specification.
The correction of a software flaw is the more expensive the later we identify it, so it is advisable to run tests frequently. If we do so, we only have to search for the cause of an error among small, recent changes, which leads to faster solutions. The technique of continuous integration supports building and testing our software frequently and automatically, which contributes greatly to making good quality software.
In my thesis, I present the basics of performance testing and continuous integration, and then show the design and implementation of the system I built to address the previously mentioned issues by delivering a solution that fits into continuous integration. My system is capable of recognizing and visually presenting regressions in performance test results. I also implemented a Java framework that can execute time performance tests and provides a good starting point for further development.
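To illustrate the kind of measurement such a framework performs, the following is a minimal sketch of a time performance test with regression detection. The names (`PerfSketch`, `medianNanos`, `isRegression`) and the warmup/tolerance parameters are hypothetical illustrations for this summary, not the thesis framework's actual API.

```java
import java.util.Arrays;

/** Minimal sketch of a time performance test runner (illustrative only). */
public class PerfSketch {

    /** Runs the task a number of warmup iterations (to let the JIT stabilize),
     *  then measures it repeatedly and returns the median duration in nanoseconds. */
    static long medianNanos(Runnable task, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) task.run();
        long[] samples = new long[measured];
        for (int i = 0; i < measured; i++) {
            long start = System.nanoTime();
            task.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[measured / 2];
    }

    /** Flags a regression when the current median exceeds the stored baseline
     *  by more than the given tolerance (e.g. 0.2 means 20% slower). */
    static boolean isRegression(long baselineNanos, long currentNanos, double tolerance) {
        return currentNanos > baselineNanos * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        long median = medianNanos(() -> {
            // Workload under test: sorting a small reverse-ordered array.
            int[] data = new int[1000];
            for (int i = 0; i < data.length; i++) data[i] = data.length - i;
            Arrays.sort(data);
        }, 100, 51);
        System.out.println("median ns: " + median);
        // Comparing a run against itself with 20% tolerance is never a regression.
        System.out.println("regression: " + isRegression(median, median, 0.2));
    }
}
```

In a continuous integration setup, the baseline median would be persisted from a previous build and compared against each new build's measurement, so a regression can be reported on the very commit that introduced it.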