To evaluate complex queries on ever-growing amounts of data, Database Management Systems must provide scalable storage and efficient query evaluation. These scalability challenges led to the introduction of various NoSQL and NewSQL databases, which concentrate on real-time query evaluation using different data storage structures and indexing techniques.
Similar scalability problems appear in model-driven engineering due to the increasing size of models and the growing complexity of model transformations and validations.
The Train Benchmark project provides a framework to assess and compare the performance of various Database Management Systems and model-based implementations. The framework concentrates on graph-based models and is suited to assessing scalability by measuring query evaluation times during model transformations and validations. In the Train Benchmark, a validation is defined by a query, formulated in each tool’s query language, which seeks the elements in the model that violate well-formedness constraints.
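To illustrate the idea, a well-formedness validation in a relational tool can be sketched as a query returning the violating elements. The following minimal Python/SQLite sketch uses a hypothetical single-table schema (the table, columns, and data are illustrative assumptions, not taken from the benchmark); the constraint checked is that every segment must have a positive length:

```python
import sqlite3

# Hypothetical miniature model: a single table of track segments.
# (Schema and data are illustrative, not from the Train Benchmark.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Segment (id INTEGER PRIMARY KEY, length INTEGER)")
conn.executemany(
    "INSERT INTO Segment VALUES (?, ?)",
    [(1, 120), (2, -5), (3, 0), (4, 80)],
)

# A validation query returns the elements that VIOLATE the constraint
# "every segment has a positive length".
violations = conn.execute(
    "SELECT id FROM Segment WHERE length <= 0 ORDER BY id"
).fetchall()
print([row[0] for row in violations])  # → [2, 3]
```

In a benchmark setting, the evaluation time of such a query, measured on increasingly large models and after repeated transformations, is what characterises a tool’s scalability.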
The main goal of my thesis work is to extend the framework with an additional validation and to implement the benchmark in a new tool, MemSQL, a representative of the NewSQL movement. I also investigate the scalability of the new implementations by analysing the measurement results.
Finally, to support the analysis of the measurement results, I propose an interactive user interface for reporting.