Adversarial examples, theory and evidence in computer vision

Abstract

Adversarial examples are input samples modified with minimal perturbations that cause machine learning models to misclassify them. This thesis is structured as a survey: it first presents a broad history of computer vision and its relationship with neural networks, then discusses various adversarial attacks and defenses, and finally takes a detour into anomaly detection. The purpose of these sections is to give context to the analysis of theoretical frameworks of adversarial examples. The theoretical frameworks are analyzed, and the evidence for their claims is explored through other, more practical sources. These practical sources did not discuss the frameworks directly; rather, each had its own presentation, and the evidence in them touched on the frameworks' claims. We conclude with the analysis and categorization, and suggest directions for future research.
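As a minimal sketch of the idea of a "minimal perturbation" attack (not taken from the thesis itself), the fast gradient sign method perturbs an input by a small step epsilon in the direction of the sign of the loss gradient. The logistic-regression model, its weights, and the toy input below are all hypothetical values chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """Return an adversarial copy of x for a logistic-regression model."""
    p = sigmoid(w @ x + b)           # predicted probability of class 1
    grad_x = (p - y) * w             # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])       # toy model weights
b = 0.0
x = np.array([0.2, -0.1, 0.4])       # clean input, true label y = 1
x_adv = fgsm(x, 1.0, w, b, epsilon=0.3)

print(sigmoid(w @ x + b) > 0.5)      # clean input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: misclassified
```

Even on this toy model, a perturbation of at most 0.3 per coordinate flips the predicted class, which is the phenomenon the surveyed attacks exploit at scale on neural networks.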
