One of the major challenges of mass education in computer science is the evaluation of the large number of assignments: assessing code quality, checking whether the code follows the specification, and thoroughly examining code semantics. These evaluations require heavy manual work, which can be significantly reduced with computer support. The time saved can then be spent on educational activities where human presence matters far more. Students also become more motivated by receiving instant feedback and by the possibility of fixing their work before the deadline, for example to improve code quality and robustness.
There is a wide range of assessment tools available for programming projects, but for various reasons they remain unknown or their licence does not allow modifications; thus, in many cases, they are not used and a custom solution is implemented instead. Therefore, I have designed, implemented, and introduced a framework, built with modularity and extensibility in mind, which can solve most challenges of automatic task evaluation and can interact with a Learning Management System (LMS).
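To illustrate what such a modular design can look like, the following is a minimal sketch of an evaluator composed of pluggable steps. All names (`EvaluationStep`, `CompilesCheck`, `Evaluator`) are illustrative assumptions, not the framework's actual API; the point is only that independent checks can be added without changing the evaluation core.

```java
import java.util.ArrayList;
import java.util.List;

// One pluggable evaluation step (e.g. a formal test or a preprocessor check).
interface EvaluationStep {
    // Returns feedback messages; an empty list means the step passed.
    List<String> evaluate(String submission);
}

// A trivial example step: rejects empty submissions.
class CompilesCheck implements EvaluationStep {
    public List<String> evaluate(String submission) {
        List<String> issues = new ArrayList<>();
        if (submission.trim().isEmpty()) {
            issues.add("Submission is empty.");
        }
        return issues;
    }
}

// The core evaluator only composes steps; new checks plug in without
// modifying it.
class Evaluator {
    private final List<EvaluationStep> steps = new ArrayList<>();

    Evaluator add(EvaluationStep step) {
        steps.add(step);
        return this;
    }

    // Runs every step and collects all feedback for the student.
    List<String> run(String submission) {
        List<String> feedback = new ArrayList<>();
        for (EvaluationStep step : steps) {
            feedback.addAll(step.evaluate(submission));
        }
        return feedback;
    }
}
```

A new technology (SQL, XML, and so on) would then be supported by adding further `EvaluationStep` implementations rather than altering the evaluator itself.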
The motivation for my project, and its first application, was the evaluation of around 4000 submissions from more than 1000 students, solved using various technologies (e.g. SQL, Java, SOA, and XML), during two semesters of the Software Laboratory 5 (Databases Laboratory) course. Besides automatic evaluation, the system also provides a way to report detailed results.
The students' programming assignments were evaluated by instructors who used the automatic assessment system and then assigned the final grade manually. The difference between human and automated evaluation turned out to be zero or one grade in most cases.
Besides creating this framework, I have also supported the system's users by writing tutorials and by helping them design and develop preprocessors, formal tests, and so on. The system is still actively used (as of the first semester of 2017/2018) in their day-to-day evaluation tasks.