Machine learning defense techniques against adversarial examples

Supervisor:
Lestyán Szilvia
Department of Networked Systems and Services

The vast amounts of data and computational power available today have made it possible for state-of-the-art machine learning models to reach human-level performance. For this reason, neural networks are deployed in systems where their decisions are considered more reliable than humans'. However, recent studies have pointed out that deep learning systems consistently misclassify adversarial examples, inputs formed by applying a small, carefully crafted perturbation to a naturally occurring image. These modifications are often imperceptible to humans and pose a serious threat to neural networks used for visual recognition. The primary objective of my thesis is to introduce efficient defense techniques that have the potential to prevent adversarial examples from fooling deep learning systems, and to demonstrate their capabilities. I am also going to review how neural networks are trained, and the flaws in their learning process that make the generation of adversarial inputs possible.
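
To make the idea of a "small, carefully crafted perturbation" concrete, the sketch below shows the fast gradient sign method (FGSM), one widely used way of generating adversarial examples. FGSM is not named in the abstract and is shown here only as an illustration; the model, input tensor, label, and epsilon budget are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=0.03):
        # Craft an adversarial example by nudging the input in the
        # direction that increases the classification loss (FGSM).
        # model, image, label and epsilon are illustrative placeholders.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        # Keep pixel values valid, assuming inputs are scaled to [0, 1].
        return perturbed.clamp(0.0, 1.0).detach()

A defense such as adversarial training would, for example, mix inputs produced this way into the training set so the network learns to classify them correctly; the abstract does not state which specific defenses the thesis evaluates.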
