# Applying Incremental, Inductive Model Checking to Software

Software plays an ever-increasing role in our lives. Nowadays, software is used even in safety-critical fields, where a failure can have severe consequences: software faults can waste fortunes, crash airplanes, or shut down nuclear power plants. Ensuring the correctness of such critical software is therefore a crucial task.

Verification is the process of checking that software operates as expected. Formal methods are often used to verify critical systems; as opposed to traditional verification techniques (such as testing), they are able not only to show the presence of errors, but also to prove their absence. One of the most widely applied formal methods is model checking, which traverses the state space of the software to prove or disprove formal requirements of the system.

IC3 is a state-of-the-art model checking algorithm, originally developed for checking hardware models. Its distinctive property is that it proves the correctness of the system in incremental steps, by establishing several simple lemmas about the system. If the system is correct, the proven lemmas collectively form a proof of that fact: they specify a safe overapproximation of the reachable state space. If the system is not correct, the lemmas direct the search towards a counterexample, thereby making the traversal more efficient. The original algorithm was designed to check finite-state systems and is not effective for infinite-state ones. The algorithm is therefore still the subject of active research and has several extensions.
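The core invariant behind IC3's proof can be illustrated on a tiny explicit-state system: the algorithm maintains a chain of frames, where frame `F_i` overapproximates the states reachable in at most `i` steps, and a frame that already contains its own successors is an inductive invariant. The sketch below is only an illustration of this fixed-point idea, not IC3 itself (real IC3 builds its frames lazily, by learning lemmas from blocked counterexamples to induction rather than by computing exact forward images):

```python
# Illustration of IC3's frame invariant on an explicit-state system
# (not the IC3 algorithm itself, which learns lemmas incrementally).

def post(states, trans):
    """Image of a state set under the transition relation."""
    return {t for s in states for t in trans.get(s, ())}

def prove_safe(init, trans, bad):
    frame = set(init)                    # F0 = initial states
    while True:
        if frame & bad:                  # frame reaches a bad state
            return False
        nxt = frame | post(frame, trans) # F_{i+1} must contain post(F_i)
        if nxt == frame:                 # fixed point: inductive invariant,
            return True                  # so bad states are unreachable
        frame = nxt

# A counter that wraps from 2 back to 0; state 3 is bad and unreachable.
print(prove_safe(init={0}, trans={0: [1], 1: [2], 2: [0]}, bad={3}))  # True
```

Here `{0, 1, 2}` is the inductive invariant: it contains the initial state, is closed under the transition relation, and excludes the bad state.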

In my thesis I examine an extension to IC3 that uses (implicit) predicate abstraction to handle infinite state spaces efficiently. This method divides the state space into a finite number of partitions with respect to a set of predicates over the system variables. The partitions form a finite abstract state space, where each abstract state is defined by which of the predicates hold and which do not. During its run, the algorithm learns new predicates to refine its abstract view of the system, while keeping the abstraction as coarse as possible.
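The partitioning can be made concrete with a small sketch (the predicates here are illustrative, not taken from the thesis): a concrete state over integer variables is mapped to the tuple of truth values of a fixed predicate set, so infinitely many concrete states collapse into at most 2^n abstract states for n predicates:

```python
# Minimal sketch of predicate abstraction with two example predicates.
predicates = [
    lambda s: s["x"] > 0,        # p1: x > 0
    lambda s: s["x"] == s["y"],  # p2: x = y
]

def alpha(state):
    """Abstraction function: concrete state -> predicate valuation."""
    return tuple(p(state) for p in predicates)

print(alpha({"x": 5, "y": 5}))   # (True, True)
print(alpha({"x": 7, "y": 7}))   # same abstract state: (True, True)
print(alpha({"x": -1, "y": 3}))  # (False, False)
```

Refinement then corresponds to appending a new predicate to the set, which splits existing partitions without discarding what has already been learned.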

In order to check control flow automata, which describe programs, I present a transformation that creates, from a control flow automaton, a symbolic transition system that behaves equivalently to the automaton. The correctness of the automaton can then be checked by running the algorithm on this symbolic transition system.
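One common way to realize such a transformation, sketched below under assumed names, is to add a location variable to the state and turn each CFA edge (source, operation, target) into a transition that is enabled when the location equals the source, applies the operation, and moves the location to the target. Theta works symbolically with logical formulas; this explicit-state version only illustrates the encoding on a toy program:

```python
# Hypothetical sketch: encoding a CFA as a transition system by adding
# a location ("program counter") component to the state.
# CFA for:  x := 0; while x < 3: x := x + 1   -- locations l0, l1, l_end.
# Each edge's operation returns the successor variables, or None if its
# guard is not satisfied.
edges = [
    ("l0", lambda s: {**s, "x": 0}, "l1"),                                   # x := 0
    ("l1", lambda s: {**s, "x": s["x"] + 1} if s["x"] < 3 else None, "l1"),  # loop body
    ("l1", lambda s: s if s["x"] >= 3 else None, "l_end"),                   # loop exit
]

def step(state):
    """Successors of a (location, variables) state in the encoded system."""
    loc, vars_ = state
    succs = []
    for src, op, dst in edges:
        if src == loc:
            res = op(vars_)
            if res is not None:          # guard satisfied
                succs.append((dst, res))
    return succs

s = ("l0", {"x": 99})
while s[0] != "l_end":
    s = step(s)[0]   # this toy program is deterministic: one successor
print(s)             # ('l_end', {'x': 3})
```

In the symbolic setting, the same idea is expressed as one transition formula over the variables plus the location variable, so the algorithm never enumerates states explicitly.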

I implement the algorithm and the transformation as part of the open-source model checking framework Theta, which offers further ways of extending them. Finally, I evaluate the efficiency of the various configurations of the implemented algorithm.