Image recognition is one of the most rapidly developing areas of machine learning. Ever-increasing computing capacity and improved data quality have made it possible to build neural networks whose performance rivals that of humans, and as a result, learning algorithms are being deployed in a growing number of IT systems. Despite their success, these algorithms have been shown to suffer from several vulnerabilities, and eliminating them remains a major challenge. The aim of my thesis is to introduce these attack surfaces and to demonstrate the effectiveness of white-box attacks through my own test cases. For the implementation, I use convolutional neural networks to recognize handwritten digits from the MNIST database. These networks are vulnerable to adversarial examples: inputs perturbed so that the model produces an erroneous output while the change remains imperceptible to a human observer. I also review the mathematical background of the algorithms that generate adversarial examples, and finally I evaluate and present their effectiveness in various test cases.
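To make the idea of a white-box adversarial perturbation concrete, the following is a minimal sketch of one standard method, the Fast Gradient Sign Method (FGSM), applied to a plain linear softmax classifier rather than the convolutional networks used in the thesis. The function name `fgsm_perturb` and the toy weights are illustrative assumptions, not the thesis implementation; the attack adds a small step of size `eps` in the direction of the sign of the loss gradient with respect to the input.

```python
import numpy as np

def fgsm_perturb(x, y, W, b, eps):
    """FGSM sketch on a linear softmax classifier (illustrative only).

    x : (d,) input vector        y : true class index
    W : (k, d) weight matrix     b : (k,) bias vector
    eps : perturbation budget (max change per input component)

    Returns x + eps * sign(grad_x of the cross-entropy loss),
    i.e. the input nudged in the direction that increases the loss most.
    """
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax probabilities
    # For cross-entropy with a linear model, the input gradient is
    # dL/dx = W^T (p - onehot(y)); no autodiff needed in this toy case.
    g = W.T @ (p - np.eye(len(b))[y])
    return x + eps * np.sign(g)

# Toy demonstration: a 2-class classifier that initially predicts class 0,
# but is flipped to class 1 by a small, bounded perturbation.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, y=0, W=W, b=b, eps=0.3)
print(np.argmax(W @ x + b), np.argmax(W @ x_adv + b))
```

For image inputs such as MNIST, the same one-step update is applied to the pixel vector, with `eps` kept small enough that the perturbed digit still looks unchanged to a human observer.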