Perturbation Analysis of Learning Algorithms: Generation of Adversarial Examples from Classification to Regression

Authors

E. R. Balda, A. Behboodi, R. Mathar

Abstract

Despite the tremendous success of deep neural networks in various learning problems, it has been observed that adding intentionally designed adversarial perturbations to the inputs of these architectures leads to erroneous classifications made with high confidence. In this work, we show that adversarial examples can be generated using a generic approach that relies on the perturbation analysis of learning algorithms. Formulated as a convex program, the proposed approach retrieves many current adversarial attacks as special cases. We use it to propose novel attacks against learning algorithms for classification and regression tasks under various new constraints, with closed-form solutions in many instances. In particular, we derive new attacks against classification algorithms that are shown to be top-performing on various architectures. Although classification tasks have been the main focus of adversarial attacks, we use the proposed approach to generate adversarial perturbations for various regression tasks. Designed as single-pixel and single-subset attacks, these are applied to autoencoding, image colorization, and real-time object detection tasks, showing that adversarial perturbations can degrade the output of regression tasks just as severely.
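To make the idea concrete, the following minimal Python (PyTorch) sketch illustrates a first-order, linearization-based perturbation of the kind the abstract alludes to. It is an illustrative FGSM-style special case under an l-infinity budget, not the paper's convex-program formulation, and the names model, x, y, and eps are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def first_order_perturbation(model, x, y, eps):
        # Illustrative sketch only: a linearization-based (FGSM-style) attack,
        # not the convex-program attack proposed in the paper.
        # `model`, `x`, `y`, and `eps` are hypothetical placeholders.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)   # classification loss at the clean input
        loss.backward()                       # gradient = first-order sensitivity of the loss
        return eps * x.grad.sign().detach()   # maximizes the linearized loss under ||delta||_inf <= eps

    # Hypothetical usage: x_adv = x + first_order_perturbation(model, x, y, eps=8/255)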

BibTeX Reference Entry

@article{BaBeMa19,
	author  = {Emilio Rafael Balda and Arash Behboodi and Rudolf Mathar},
	title   = {Perturbation Analysis of Learning Algorithms: Generation of Adversarial Examples from Classification to Regression},
	journal = {{IEEE} Transactions on Signal Processing},
	volume  = {PP},
	doi     = {10.1109/TSP.2019.2943232},
	month   = sep,
	year    = {2019},
	hsb     = {RWTH-2019-11737},
}


This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.