One of the challenges of using machine learning in practical scenarios is interpretability. Modern complex models behave like black boxes: their inputs and outputs are interpretable, but the mapping between them is not. In other words, we know what prediction the model gave, but not why it decided that way. If a test dataset of sufficient diversity and size is not available, it is plausible that we deem faulty models acceptable.
Solving this problem is one of the main goals of explanation generation, which aims to determine which input attributes or features the model based its prediction on.
LIME (Local Interpretable Model-agnostic Explanations) is a versatile explanation-generation framework capable of explaining any machine learning model. This document reviews its basic algorithm and usage, and examines its performance on a neural network. The network is trained on a genetic database consisting of many attributes and an artificially generated label. The large number of attributes poses a challenge for the neural network, and showcases how LIME performs when dealing with extreme parameters.
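To make the core idea concrete, the following is a minimal sketch of a LIME-style local explanation written from scratch with numpy: perturb the instance, query the black-box model on the perturbations, weight each sample by its proximity to the instance, and fit a weighted linear surrogate whose slopes serve as feature attributions. The function name, the Gaussian perturbation scheme, and the kernel parameters are illustrative assumptions, not the library's actual API.

```python
import numpy as np

def explain_instance(f, x, n_samples=1000, sigma=0.5, kernel_width=0.75, seed=0):
    """Sketch of a LIME-style local explanation (not the lime library's API).

    f: black-box model mapping an (n, d) array to (n,) predictions.
    x: the (d,) instance to explain.
    Returns the slopes of a locally fitted weighted linear surrogate.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise (assumed sampling scheme).
    X = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed samples.
    y = f(X)
    # 3. Weight samples by proximity to x via an exponential kernel.
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate (sqrt-weight trick turns weighted
    #    least squares into ordinary least squares on rescaled rows).
    A = np.hstack([np.ones((n_samples, 1)), X])  # intercept column + features
    s = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * s, y * s.ravel(), rcond=None)
    return coef[1:]  # feature attributions (surrogate slopes)
```

On a model that is already linear, the surrogate recovers the global coefficients; on a nonlinear model, the slopes approximate the model's local behavior around `x`.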