During the operation and testing of a complex system, a large volume of logs is generated, and somewhere within it lies the cause of an error. It is often tedious and cumbersome for a developer to find this cause. In my thesis I describe a method that builds a statistical language model on the logs of passing tests, in the hope that the cause of an error can be spotted this way. In my case the logs are generated while observing a telecommunication system with a network traffic monitoring tool. The intercepted traffic is treated as a conversation consisting of sentences and words: one protocol message is a word, and the traffic of one test is a sentence. I use these sentences to build the language model. In statistical model building it is always important to focus on the evaluation process, so I introduce several algorithms with which I attempted the evaluation. The best-performing evaluation algorithm is integrated into a program whose main task is to find anomalies among the word probabilities. These anomalies are likely symptoms of an error, and highlighting them can help accelerate the debugging process. The complete software is able to detect 80% of the failures.
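The idea of learning word probabilities from passing tests and flagging low-probability words can be sketched as follows. This is a minimal illustration only: the abstract does not specify the model, so a bigram model, a fixed probability threshold, and the protocol message names used below are all assumptions.

```python
from collections import defaultdict

# Hypothetical sketch: treat each test's protocol messages as a "sentence"
# of "words" and learn bigram transition probabilities from passing tests.
def train_bigram_model(sentences):
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        # "<s>" marks the start of a sentence.
        for prev, curr in zip(["<s>"] + sentence, sentence):
            counts[prev][curr] += 1
    # Normalize counts into conditional probabilities P(curr | prev).
    model = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        model[prev] = {w: c / total for w, c in nexts.items()}
    return model

def flag_anomalies(model, sentence, threshold=0.05):
    """Return (position, word, probability) for words whose probability
    under the passing-test model falls below the threshold."""
    anomalies = []
    for i, (prev, curr) in enumerate(zip(["<s>"] + sentence, sentence)):
        p = model.get(prev, {}).get(curr, 0.0)  # unseen transitions get 0
        if p < threshold:
            anomalies.append((i, curr, p))
    return anomalies

# Usage with made-up protocol message names:
passing = [
    ["CONNECT", "AUTH", "DATA", "ACK", "CLOSE"],
    ["CONNECT", "AUTH", "DATA", "ACK", "CLOSE"],
    ["CONNECT", "AUTH", "DATA", "DATA", "ACK", "CLOSE"],
]
model = train_bigram_model(passing)
failing = ["CONNECT", "AUTH", "RESET", "CLOSE"]
print(flag_anomalies(model, failing))  # flags "RESET" and what follows it
```

The flagged positions play the role of the highlighted symptoms mentioned above: they point the developer to the part of the conversation that deviates from the behavior observed in passing tests.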