Benchmarking Semantic Databases for Model Validation

Supervisor: Szárnyas Gábor
Department of Measurement and Information Systems

In Model-Driven Engineering (MDE), different aspects of a system are described as models, from which the components of the system can be automatically generated (e.g. source code, test cases). To ensure their correctness, these models have to be validated: to prevent mistakes made in the design phase from escalating, the models are checked against well-formedness constraints.
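As a minimal sketch of such a check, consider the hypothetical constraint below: every track segment must have a positive length. The Segment class, its attributes, and the constraint itself are assumptions made for illustration; they are not taken from the thesis.

    import java.util.ArrayList;
    import java.util.List;

    // A hypothetical model element: a track segment with a length attribute.
    class Segment {
        final String id;
        final int length;

        Segment(String id, int length) {
            this.id = id;
            this.length = length;
        }
    }

    class WellFormednessChecker {
        // Returns the identifiers of all segments that violate the
        // constraint "a segment's length must be positive".
        static List<String> findViolations(List<Segment> segments) {
            List<String> violations = new ArrayList<>();
            for (Segment segment : segments) {
                if (segment.length <= 0) {
                    violations.add(segment.id);
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            List<Segment> model = List.of(
                new Segment("s1", 120),
                new Segment("s2", 0)); // violates the constraint
            System.out.println(findViolations(model)); // prints [s2]
        }
    }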

Large models contain many elements: engineering models often have more than ten million elements. Validating these models may take a long time, especially for complex validation queries. Because of this, it is common practice to use incremental query evaluation, which provides faster query processing in exchange for higher memory consumption.
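The following sketch illustrates this trade-off, reusing the hypothetical Segment element from the previous example. Instead of re-scanning the whole model after each change, the checker keeps the current violation set in memory and updates it element by element. This is only an illustrative simplification of incremental evaluation, not the algorithm of any particular tool.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class IncrementalChecker {
        // The violation set is kept in memory between checks: this is
        // the extra memory that buys faster re-validation.
        private final Set<String> violations = new HashSet<>();

        // Full evaluation, performed once after the model is loaded.
        void initialize(List<Segment> segments) {
            for (Segment segment : segments) {
                update(segment);
            }
        }

        // Called whenever a single segment is created or modified;
        // only the changed element is re-checked, not the whole model.
        void update(Segment segment) {
            if (segment.length <= 0) {
                violations.add(segment.id);
            } else {
                violations.remove(segment.id);
            }
        }

        Set<String> currentViolations() {
            return violations;
        }
    }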

The main goal of the Train Benchmark framework (developed at the Department of Measurement and Information Systems) is to make database management systems based on different data models comparable to each other in a meaningful way. Among NoSQL databases, tools using the semantic data model have become popular again, and measuring their performance can be an important guideline for choosing the appropriate one.

In my thesis, I extended the Train Benchmark framework with implementations for new tools. These tools support the most commonly used framework of the Semantic Web, the Resource Description Framework (RDF), and its query language, SPARQL.
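To give an idea of how such a validation query looks in SPARQL, the sketch below loads an RDF model with Apache Jena and evaluates the positive-length constraint from the first example. The file name and the RDF vocabulary (prefix and property names) are assumptions made for this example; they are not taken from the benchmark.

    import org.apache.jena.query.Query;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QueryFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class SparqlValidation {
        public static void main(String[] args) {
            // Load an RDF model from a Turtle file; the file name is a placeholder.
            Model model = ModelFactory.createDefaultModel();
            model.read("railway-model.ttl");

            // The positive-length constraint expressed as a SPARQL query.
            // The prefix and property names are assumed for illustration.
            String queryString =
                "PREFIX ex: <http://example.org/railway#> "
                + "SELECT ?segment "
                + "WHERE { "
                + "  ?segment ex:length ?length . "
                + "  FILTER (?length <= 0) "
                + "}";

            Query query = QueryFactory.create(queryString);
            try (QueryExecution qexec = QueryExecutionFactory.create(query, model)) {
                ResultSet results = qexec.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.next();
                    System.out.println("Violation: " + row.getResource("segment"));
                }
            }
        }
    }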
