The paper presents some of the currently most successful generative deep learning models, with particular emphasis on the autoencoder-based VAE and $\beta$-VAE architectures. These can be used as generative models, but have also gained importance in representation learning. In our work, we define several metrics to quantify how well the representations learned by these models meet the informal requirements placed on such representations. For this, we use the synthetic dSprites data set.
The experiments are implemented in Python with Keras, built on the NumPy, scikit-learn, TensorFlow, and Matplotlib libraries, and the models are trained on high-performance graphics cards.
With this framework, we create a variety of VAE-based models and compare their representations against each other using our own metrics. Particular emphasis is placed on the effect of the $\beta$ hyperparameter of $\beta$-VAE networks on the learned representations, and on the trade-off between reconstruction quality and the structure of the latent space. To visualize the representations, we generate several figures for each model and model set.
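The $\beta$-VAE objective studied above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `beta_vae_loss` and the squared-error reconstruction term are assumptions (binary cross-entropy is also common for dSprites' binary images); only the closed-form Gaussian KL term and the role of $\beta$ follow the standard $\beta$-VAE formulation.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Sketch of the beta-VAE objective (illustrative, not the paper's code).

    x, x_recon : arrays of shape (batch, features)
    mu, log_var: parameters of the diagonal Gaussian q(z|x), shape (batch, latent_dim)
    beta       : weight on the KL term; beta = 1 recovers the standard VAE.
    """
    # Reconstruction term: per-sample sum of squared errors (an assumed choice).
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # Closed-form KL divergence between q(z|x) = N(mu, sigma^2) and the
    # standard normal prior N(0, I), summed over latent dimensions.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    # beta > 1 strengthens the pressure toward the prior, which is the
    # mechanism usually credited with encouraging disentangled representations.
    return np.mean(recon + beta * kl)
```

With a perfect reconstruction and `mu = 0`, `log_var = 0` (i.e. `q(z|x)` equal to the prior), both terms vanish and the loss is zero; increasing $\beta$ scales only the KL penalty, which is the trade-off between reconstruction and latent-space regularity examined in the paper.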