On Generation of Adversarial Examples using Convex Programming
Authors
Abstract
Deep learning architectures have been observed to make erroneous decisions, with high confidence, on specially crafted adversarial instances. In this work, we show that perturbation analysis of these architectures provides a framework for generating adversarial instances by convex programming which, for classification tasks, recovers variants of existing non-adaptive adversarial methods. The proposed framework supports the design of adversarial noise under various desirable constraints and for different types of networks; it also explains several existing adversarial methods and can be used to derive new algorithms. Experiments show that the resulting algorithms achieve competitive fooling ratios when benchmarked against well-known adversarial methods.
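To illustrate the kind of convex program the abstract refers to (this is a generic first-order sketch, not the paper's exact formulation): linearizing a classifier around an input turns "find a small perturbation that flips the decision" into maximizing a linear function of the perturbation subject to a norm constraint. Under an ℓ∞ budget, that convex program has the closed-form solution δ = ε · sign(g), which is how FGSM-style attacks arise. The toy linear classifier and the helper name below are illustrative assumptions.

```python
import numpy as np

def linearized_adversarial_perturbation(W, x, true_class, eps):
    """Closed-form solution of the l_inf-constrained linearized attack:
    maximize g^T * delta subject to ||delta||_inf <= eps,
    where g is the gradient of the runner-up-minus-true score margin.
    (Illustrative sketch; the paper's framework is more general.)"""
    scores = W @ x
    rivals = [k for k in range(W.shape[0]) if k != true_class]
    runner_up = max(rivals, key=lambda k: scores[k])
    # For a linear model, the margin gradient w.r.t. x is exact:
    g = W[runner_up] - W[true_class]
    # Optimal perturbation under the l_inf budget:
    return eps * np.sign(g)

# Usage on a random toy classifier
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))   # 3 classes, 5 input features
x = rng.standard_normal(5)
c = int(np.argmax(W @ x))         # classifier's clean prediction
delta = linearized_adversarial_perturbation(W, x, c, eps=0.5)
```

For nonlinear networks the same program is solved on a first-order approximation, and richer constraints (other norms, structured noise) keep the problem convex, which is the flexibility the framework exploits.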
BibTeX Reference Entry
@inproceedings{BaBeMa18b,
  author    = {Emilio Rafael Balda and Arash Behboodi and Rudolf Mathar},
  title     = {On Generation of Adversarial Examples using Convex Programming},
  booktitle = {52nd Asilomar Conference on Signals, Systems, and Computers},
  pages     = {60--65},
  address   = {Pacific Grove, California, USA},
  month     = oct,
  year      = {2018},
  doi       = {10.1109/ACSSC.2018.8645290},
  hsb       = {RWTH-2018-231198},
}
Downloads
Download paper | Download BibTeX file
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.