Deep neural networks have become increasingly successful in natural language processing. One possible way of generating sequential text is with Generative Adversarial Networks (GANs). GANs have become one of the most successful architectures in deep learning, especially in machine vision. In natural language processing, however, GANs have played a much smaller role: because their training relies on continuous gradients, it is difficult for them to produce the discrete symbols of natural language. The simplest application of GANs is generation from noise, and GAN-based text generation has been attempted in only a few major languages.
My task was to study the GAN architecture, create trainable models capable of generating text, train them, and analyze the results.
While writing my thesis, I studied the Variational Autoencoder and GAN architectures and learned the basics of natural language processing. I created three neural networks capable of generating text, including a GAN, and after training them I analyzed and compared their outputs.