Development and Analysis of Measurement Matrices for Compressed Sensing Based on Denoising Autoencoders

Compressed Sensing, Deep Learning · Diploma thesis (some time before August 2017 - September 2017)

Supervisor

Arash Behboodi

Abstract

Keywords

Compressed sensing, denoising autoencoders, sensing matrix

Description

Compressed Sensing (CS) is concerned with the recovery of sparse signals from only a few linear measurements. It aims at finding necessary and sufficient conditions for signal recovery by low-complexity algorithms, and it also addresses the stability of the solution under measurement noise and its robustness to sparsity defects. One recurring method is the Basis Pursuit (BP) algorithm, an ℓ1-norm minimization problem. A well-known sufficient condition for recovery using BP, as well as other algorithms such as greedy methods and thresholding algorithms, is the restricted isometry property (RIP) of the measurement matrix. It has been shown that sub-Gaussian random matrices satisfy the RIP with overwhelming probability and can be used for recovery. However, it is not possible to verify in a computationally efficient way whether a given sensing matrix satisfies the RIP. Moreover, in many applications the signal belongs to a particular class for which the sensing matrix cannot be chosen randomly. It is therefore desirable to find deterministic sensing matrices and to propose other criteria for compressed sensing. Other properties, such as mutual coherence and spark, also characterize the capability of a sensing matrix for sparse recovery. In general, constructing a good deterministic sensing matrix is still an area of active research.

Goal

It is important to see whether we can design a good sensing matrix for given data. Since the sparse recovery problem can be viewed as an autoencoder structure whose first layer represents the sensing matrix, the idea is to use denoising autoencoders to tune the weights of the first layer, i.e., the sensing matrix, so as to achieve acceptable recovery. The first layer is therefore a purely linear transformation without non-linearities.
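One possible concrete instantiation of this idea is sketched below, purely for illustration (it is not the thesis's prescribed architecture): the encoder is a trainable linear map y = Ax with noisy measurements, the decoder is a linear back-projection followed by a single soft-thresholding step, and both are trained by plain gradient descent on synthetic sparse data. All dimensions, the noise level, and the one-step thresholding decoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 16, 3        # signal dimension, measurements, sparsity (toy sizes)
N = 512                    # number of synthetic training signals

# Synthetic k-sparse training data.
X = np.zeros((N, n))
for i in range(N):
    support = rng.choice(n, size=k, replace=False)
    X[i, support] = rng.standard_normal(k)

# First layer: the sensing matrix A (purely linear, no bias, no non-linearity).
A = rng.standard_normal((m, n)) / np.sqrt(m)
# Decoder: a linear back-projection B followed by one soft-thresholding step,
# a crude stand-in for a full sparse-recovery solver.
B = rng.standard_normal((m, n)) / np.sqrt(m)
lam, lr, sigma = 0.1, 0.5, 0.01

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the ell-1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

losses = []
for epoch in range(200):
    Y = X @ A.T + sigma * rng.standard_normal((N, m))  # noisy measurements y = Ax + e
    Z = Y @ B                                          # back-projection
    Xhat = soft(Z, lam)                                # decoder output
    R = Xhat - X
    losses.append(float(np.mean(R ** 2)))
    # Backprop: soft-thresholding passes the gradient only where |Z| > lam.
    G = (2.0 / X.size) * R * (np.abs(Z) > lam)
    grad_B = Y.T @ G          # dL/dB
    grad_A = B @ G.T @ X      # dL/dA, via dL/dY = G B^T and Y = X A^T
    A -= lr * grad_A
    B -= lr * grad_B
```

After training, A is the tuned sensing matrix; in this sketch the reconstruction error on the training data decreases, but recovery quality with a one-step decoder is of course far weaker than with an iterative solver.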
Concretely, the work involves finding a deep learning solver as the recovery algorithm (the last layer), analyzing theoretically the linear transformation of the hidden layer, back-propagating the error of the last layer, and finally evaluating its performance numerically.

Requirements

• Strong background in optimization
• Python programming
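As an example of the kind of numerical evaluation mentioned above, one cheap sanity check on any sensing matrix (learned or random) is its mutual coherence. A minimal sketch, where the matrix size and the Gaussian test matrix are assumed:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct, unit-normalized columns."""
    cols = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(cols.T @ cols)              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)               # ignore self-correlations
    return float(G.max())

rng = np.random.default_rng(0)
m, n = 16, 64
A = rng.standard_normal((m, n))            # stand-in for a learned sensing matrix

mu = mutual_coherence(A)
# Lower bound on the coherence of any m x n matrix (Welch bound):
welch = np.sqrt((n - m) / (m * (n - 1)))
# Classical coherence guarantee: BP recovers every k-sparse signal
# whenever k < (1 + 1/mu) / 2.
k_max = int((1 + 1.0 / mu) / 2)
```

A smaller coherence (closer to the Welch bound) permits a larger guaranteed sparsity level, so tracking mu during training is one simple, computable proxy for the quality of the learned matrix.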