Natural language processing (NLP) is a branch of machine learning that provides tools for analyzing raw text automatically.
In our modern world, human-machine communication has become a widely required task. To meet its requirements, it is essential to analyze
the meaning of texts.
The main task of semantic parsing is to automatically build semantic representations from the input, so that we can model the meaning
of raw text. If we model meaning as directed graphs of concepts, and we can build these graphs from syntax trees that represent the structure of
sentences, then we can define the whole process as one complex graph transformation.
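To illustrate the idea, the following sketch models meaning as a directed graph of concepts and expands a sentence graph with lexical knowledge. The relation labels and the toy lexicon are purely illustrative, not 4lang's actual inventory or API:

```python
# A minimal sketch: meaning as a set of (source, relation, target)
# edges between concepts. Relation labels here are illustrative only.

def expand(edges, lexicon):
    """Augment a sentence graph with each concept's lexical edges,
    mimicking an inference-based graph transformation."""
    result = set(edges)
    for src, _, tgt in edges:
        for node in (src, tgt):
            for rel, other in lexicon.get(node, []):
                result.add((node, rel, other))
    return result

# Sentence graph for "dogs bark": "dog" is the agent of "bark".
sentence = {("bark", "agent", "dog")}
# Toy lexicon: what each concept's own definition graph contributes.
lexicon = {"dog": [("is_a", "animal")], "bark": [("is_a", "sound")]}

expanded = expand(sentence, lexicon)
print(sorted(expanded))
# → [('bark', 'agent', 'dog'), ('bark', 'is_a', 'sound'), ('dog', 'is_a', 'animal')]
```

This mirrors, at a toy scale, the augmentation step described below: the sentence graph absorbs each word's definition graph.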
The performance of such analyzers cannot be measured directly, only through concrete tasks such as machine comprehension,
natural language inference, or knowledge base population. Lexical inference has a very strong connection to deep semantic parsing,
where we augment the graph representing the meaning of a concept or a sentence with each word's graph that models its meaning.
The main focus of this thesis is to provide strong baselines for the tasks mentioned above, and to enhance existing systems
with the help of the semantic parser 4lang (Recski et al. 2016). We believe
the significance of these experiments lies in their
demonstration that inference-based graph transformations are a powerful method for a wide range of semantic parsing related tasks.
The thesis includes improvements to the 4lang system with further inference-based rules and various metrics related to semantic graphs. Furthermore, it demonstrates
the wrapping of the 4lang software in RESTful Web API microservices for easier usage.
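As a sketch of this microservice idea, the following standard-library example exposes a parser behind a single HTTP endpoint. The `parse_to_graph` function is a stand-in for the actual 4lang call, and the endpoint path and JSON schema are assumptions, not the real service's interface:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_to_graph(text):
    # Placeholder: a real service would invoke the 4lang parser here.
    return {"nodes": text.split(), "edges": []}

class ParseHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and return the parsed graph as JSON.
        length = int(self.headers.get("Content-Length", 0))
        text = json.loads(self.rfile.read(length))["text"]
        body = json.dumps(parse_to_graph(text)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Start the service on a free local port and issue one test request.
server = HTTPServer(("127.0.0.1", 0), ParseHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/parse",
    data=json.dumps({"text": "dogs bark"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    graph = json.loads(resp.read())
print(graph)
server.shutdown()
```

Wrapping the parser this way lets any client obtain semantic graphs over HTTP without installing the parser's dependencies locally.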
The most significant result of this thesis is achieved on the machine comprehension task, where we integrated our method into a state-of-the-art system (Wang et al. 2018). Our preliminary results suggest a 0.5 percentage point improvement.