Framework for the evaluation of deep learning results

Supervisor:
Dr. Hullám Gábor István
Department of Measurement and Information Systems

After running a deep learning algorithm, the next step is evaluation, which helps decide whether the trained model is robust, stable and accurate. Evaluation can take many forms; in this thesis it is carried out with repeated K-fold cross-validation. For learning to be efficient, the data set has to be transformed into a format that the evaluating framework can use easily: the comma-separated values (CSV) file is converted to a binary format and the missing values are filled in. I chose Python's serialization format, Pickle, as the binary format, and I filled in missing values with values drawn from the empirical distribution of the corresponding feature. The resulting framework calculates performance metrics from the model's results for each cross-validation step and enables their analysis and visualization. The main goal of the framework is to aid data analysts: data analysis involves monotonously repeated tasks, some of which this framework automates.
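The preprocessing step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the function name `csv_to_pickle` and the dictionary layout of the pickled data are assumptions, and each missing entry is imputed by drawing uniformly from the observed values of the same column, i.e. from the feature's empirical distribution.

```python
import csv
import pickle
import random

def csv_to_pickle(csv_path, pickle_path, missing_marker=""):
    """Convert a CSV data set to a Pickle file, imputing missing values.

    A missing entry is replaced by a value drawn uniformly at random
    from the observed (non-missing) values of the same column, i.e.
    a draw from that feature's empirical distribution.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = [row for row in reader]

    # Collect the observed values of each column (feature).
    observed = [[] for _ in header]
    for row in rows:
        for j, value in enumerate(row):
            if value != missing_marker:
                observed[j].append(value)

    # Impute: sample a replacement from the column's observed values.
    for row in rows:
        for j, value in enumerate(row):
            if value == missing_marker:
                row[j] = random.choice(observed[j])

    # Store in Python's binary serialization format.
    with open(pickle_path, "wb") as f:
        pickle.dump({"header": header, "rows": rows}, f)
```

Sampling from the empirical distribution, rather than substituting a constant such as the column mean, preserves the spread of each feature, at the cost of adding some noise to individual samples.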

The framework thus saves time, leaving more of it for complex tasks. It performs data transformation, evaluation and visualization of results. The visualization is shown in a browser, providing a cross-platform solution that allows the results to be viewed on a laptop as well as on a smartphone. The only prerequisite is a running Bokeh server; Bokeh is the Python library for interactive visualization used by the framework.
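The evaluation loop can be sketched in a few lines of plain Python. This is an illustrative sketch, not the framework's implementation: the names `repeated_kfold_scores` and `train_and_score` are assumptions, and the caller supplies the actual model training as a callable so the sketch stays model-agnostic.

```python
import random
import statistics

def repeated_kfold_scores(n_samples, train_and_score, k=5, repeats=3, seed=42):
    """Run repeated K-fold cross-validation, returning one score per fold.

    `train_and_score(train_idx, test_idx)` is a caller-supplied callable
    that fits a model on the training indices and returns a performance
    metric (e.g. accuracy) measured on the test indices.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    scores = []
    for _ in range(repeats):
        rng.shuffle(indices)                 # fresh random partition per repeat
        folds = [indices[i::k] for i in range(k)]
        for i in range(k):
            test_idx = folds[i]
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(train_and_score(train_idx, test_idx))
    return scores

def summarize(scores):
    """Aggregate per-fold scores, as a visualization layer might display them."""
    return {"mean": statistics.mean(scores), "stdev": statistics.stdev(scores)}
```

With `k=5` and `repeats=3` this yields 15 scores; the mean indicates accuracy while the standard deviation across folds indicates how stable the model is.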

The framework is currently designed for classification tasks on genetic data sets, but it can be modified to solve other problems and to calculate and visualize other metrics.
