Privacy Preserving Adversarial Perturbations for Neural Decoders

Machine Learning thesis project (cancelled)


Emilio Balda


Replacing conventional decoders with neural networks has become a popular research topic in recent years, due to the potentially lower complexity of neural networks compared with classical iterative decoders. Nevertheless, it has been shown that neural networks are particularly unstable when maliciously designed noise is added to their inputs. Such noise is known as adversarial noise or an adversarial perturbation. ...
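To illustrate the kind of instability described above, the following is a minimal sketch of a fast-gradient-sign-style adversarial perturbation. The linear classifier, its weights, and the input values are illustrative assumptions, not the neural decoder studied in this project; the point is only that a tiny, deliberately aligned input change can flip a model's decision.

```python
import numpy as np

# Hypothetical trained weights of a binary classifier (illustrative only,
# not the decoder from this thesis topic).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0, 1.0])
b = 0.1

x = np.array([0.5, -0.2, 0.3])   # clean input
y = 1.0                          # true label

# Gradient of the cross-entropy loss w.r.t. the input x:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# Fast-gradient-sign perturbation: a small step that maximally
# increases the loss under an L-infinity budget eps.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_clean = sigmoid(w @ x + b)   # ~0.88: correctly classified as class 1
p_adv = sigmoid(w @ x_adv + b) # ~0.27: the decision has flipped
```

Even though each coordinate of the input moves by at most eps, the perturbation is aligned with the loss gradient, so the classifier's output crosses the decision boundary.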

Research Area

Machine Learning for Communication Systems


Keywords

Adversarial examples, deep learning, privacy

