Now that vast computing capacity is at hand and the achievements of artificial intelligence and machine learning are increasingly applied in practice, it is necessary to guarantee their reliability. One condition of reliability is the ability to explain the behaviour of the tool we are using, even in extreme cases. It is also important that these explanations be logical and understandable to human users.
In this thesis, first I am going to introduce the concept and the types of explanation, along with mathematical attempts to define it.
Because explanation generation, and the concept of the most probable explanation, are easier to illustrate on causal models, I am going to introduce one of their most popular forms, the Bayesian network.
I am going to address the problem of finding the most probable configuration on structures of varying complexity. These structures, in ascending order of complexity, are: Markov chains, polytree-based graphs, and general graph structures.
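For the simplest of these structures, the Markov chain, the most probable configuration can be found exactly by max-product dynamic programming (the Viterbi algorithm). The sketch below is my own illustration of that idea, not the thesis's implementation; the function name and the toy prior and transition matrix are assumptions made for the example.

```python
import numpy as np

def most_probable_configuration(prior, transitions):
    """Most probable state sequence of a chain X1 -> X2 -> ... -> Xn.

    prior: P(X1), shape (k,); transitions: list of P(X_{t+1} | X_t), each (k, k).
    Works in log-space so long chains do not underflow.
    """
    delta = np.log(prior)              # best log-probability of a path ending in each state
    backpointers = []
    for T in transitions:
        scores = delta[:, None] + np.log(T)     # scores[i, j]: best path ending with i -> j
        backpointers.append(scores.argmax(axis=0))
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]       # backtrack from the best final state
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    path.reverse()
    return path, float(np.exp(delta.max()))

# Toy two-state chain of length 3 (numbers chosen only for illustration)
prior = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3],
              [0.2, 0.8]])
path, prob = most_probable_configuration(prior, [T, T])
```

On polytrees the same max-product idea generalises via message passing, while general graphs require junction-tree or approximate methods, which is why the thesis treats the structures in this order.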
After describing the algorithm used for generating explanations, I am going to present some practical methods for constructing Bayesian networks, and describe the difficulties faced when extracting the data needed for structure and parameter learning.
As part of this thesis I created an application that is capable of learning Bayesian network structure and parameters from sample matrices, and of computing the most probable explanation in the constructed network.
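To give a flavour of the parameter-learning step, here is a minimal sketch, assuming the network structure is already fixed: the conditional probability table of one node is estimated from a sample matrix (rows are samples, columns are variables) by relative frequencies with Laplace smoothing. The function name and the toy data are hypothetical; the thesis application handles general structures.

```python
import numpy as np

def learn_cpt(samples, child, parent, k=2):
    """Estimate P(child | parent) from a sample matrix by smoothed counts."""
    counts = np.ones((k, k))                  # Laplace prior avoids zero probabilities
    for row in samples:
        counts[row[parent], row[child]] += 1  # tally each (parent, child) co-occurrence
    return counts / counts.sum(axis=1, keepdims=True)

# Columns: X0 (parent), X1 (child); four toy samples
samples = np.array([[0, 0], [0, 0], [0, 1], [1, 1]])
cpt = learn_cpt(samples, child=1, parent=0)   # cpt[p, c] = P(X1 = c | X0 = p)
```

Structure learning is the harder half of the problem: the space of directed acyclic graphs grows super-exponentially in the number of variables, which is one source of the data-extraction difficulties mentioned above.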