Over the past ten years, machine learning has experienced unprecedented growth in both research funding and the number of people entering the field. Since the performance of machine learning algorithms correlates strongly with the volume and quality of the available training data, this ever-growing demand has led to a distinct family of approaches that try to reduce these data requirements. The ultimate goal of this study is to introduce the generative adversarial network (GAN) model and its many variations. The thesis showcases the GAN’s ability to synthesize novel, previously unseen data and even to enhance generated data so that it yields realistic training instances.
In the first chapter, I present a gentle introduction to the basic machine learning concepts necessary for understanding the rest of the study. The second chapter introduces the family of GANs and outlines the differences between their architectures and applications. Owing to the complex structure of GANs, these architectures can raise many problems in practice; later in the document, I address one of these problems and examine existing solutions. The second chapter closes with a discussion of malicious adversarial attacks on GANs and their types, as well as the different ways in which GANs can serve as regularization for classification models.
In the third and final chapter of my thesis, I conduct experiments on two major topics related to the training of GANs. The first is mode collapse, for which I compare the robustness of two GAN variants specifically designed to increase the stability of the model. The second is an examination of ways to prevent the successful application of the previously introduced adversarial attacks, measuring their effectiveness on both a well-known dataset and a self-made one.