Software testing is one of the most common ways of verifying software. Thorough testing is a resource-demanding activity; therefore, automating its phases receives high priority in both academia and industry. Such automation may mean the automated execution of test cases (which is already widespread), or it may extend to the generation of test cases or test inputs.
Several techniques can select test inputs based on the source code of the application under test; tools implementing them are called code-based test input generators. In recent years several (mainly prototype) tools have been built on these techniques, and several attempts have already been made to put them into industrial practice. Experience shows that the available tools vary considerably in capabilities and readiness.
The further spread of test input generator tools requires assessing and evaluating their capabilities. One possible method is to create a code base containing commonly used language constructs; with the help of such a code base, the tools can be systematically investigated and their capabilities compared.
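To illustrate, a code base of this kind might contain small snippets like the following hypothetical Java method (the class and method names are illustrative, not taken from the thesis). Covering both branches requires a generator to discover a specific "magic" input, which distinguishes, for example, symbolic-execution-based tools from purely random input generators.

```java
public class BranchExample {
    // Full branch coverage requires finding x == 42;
    // a constraint-solving generator derives it from the
    // condition, while random testing rarely hits it.
    public static String classify(int x) {
        if (x * 2 == 84) {
            return "magic";
        }
        return "ordinary";
    }

    public static void main(String[] args) {
        System.out.println(classify(42)); // magic
        System.out.println(classify(7));  // ordinary
    }
}
```

A tool's result on such a snippet (e.g., whether it produces an input reaching the "magic" branch) gives one data point for comparing generator capabilities across language constructs.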
This thesis presents a framework that supports the creation of such code bases, performs test generation using five test input generator tools, and carries out automated evaluation of the results. The research results achieved using the framework are also discussed.