L1-regularisation recovers sparse vectors in R^m from n<<m inaccurate measurements

You need to know: Euclidean space {\mathbb R}^m, norms ||x||_2 = \sqrt{\sum\limits_{i=1}^m x_i^2} and ||x||_1 = \sum\limits_{i=1}^m |x_i| in {\mathbb R}^m, the support of a vector x \in {\mathbb R}^m (a subset T\subset \{1,\dots,m\} such that x_i=0 for all i \not\in T), n \times m matrices, matrix multiplication, big O notation.

Background: For x_0 \in {\mathbb R}^m let y=Ax_0+e, where A is an n \times m matrix (n<m) and e\in {\mathbb R}^n. Let x^* be a solution to the problem \min\limits_{x \in {\mathbb R}^m} ||x||_1 subject to ||Ax-y||_2 \leq \epsilon for some \epsilon \geq 0 (this is called the l1-regularisation problem).
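For exact measurements (\epsilon=0) the problem becomes the linear program \min ||x||_1 subject to Ax=y, and its optimum is attained at a vertex of the feasible polytope, i.e. at a solution supported on at most n coordinates. The following minimal sketch illustrates this on a hypothetical toy example (a 2 \times 3 matrix chosen for illustration, not taken from the paper): it finds the l1 minimiser by enumerating all supports of size n=2 and solving the corresponding square systems.

```python
import itertools

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system [[a11,a12],[a21,a22]] x = (b1,b2) by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None  # singular: skip this support
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical measurement matrix with unit-norm columns (illustrative only).
A = [[1.0, 0.0, 0.6],
     [0.0, 1.0, 0.8]]
m = 3
# Exact measurements (epsilon = 0) of the sparse vector x0 = (0, 0, 1).
y = [0.6, 0.8]

# Enumerate all supports of size n = 2; the l1 minimiser is among these
# basic solutions because the optimum of the LP sits at a vertex.
best, best_l1 = None, float("inf")
for i, j in itertools.combinations(range(m), 2):
    sol = solve2(A[0][i], A[0][j], A[1][i], A[1][j], y[0], y[1])
    if sol is None:
        continue
    x = [0.0] * m
    x[i], x[j] = sol
    l1 = sum(abs(v) for v in x)
    if l1 < best_l1 - 1e-12:
        best, best_l1 = x, l1

print(best, best_l1)  # the minimiser recovers x0 = (0, 0, 1)
```

Here the support {1,2} gives the feasible point (0.6, 0.8, 0) with l1 norm 1.4, while any support containing the third column gives (0, 0, 1) with l1 norm 1, so l1 minimisation recovers the sparse x_0 exactly, as the Theorem predicts for \epsilon=0.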

For T\subset \{1,\dots,m\}, let A_T be the n \times |T| submatrix of A formed from the columns of A corresponding to the indices in T. For an integer S>0, let \delta_S be the smallest number such that the inequalities (1-\delta_S)||c||_2^2 \leq ||A_Tc||_2^2\leq (1+\delta_S)||c||_2^2 hold for all T with |T|\leq S and all c\in{\mathbb R}^{|T|}.
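To make the constants \delta_S concrete, here is a minimal sketch (for the same hypothetical 2 \times 3 matrix as above, not from the paper) computing \delta_1 and \delta_2 by brute force: \delta_1 is the largest deviation of a squared column norm from 1, and \delta_2 comes from the extreme eigenvalues of each 2 \times 2 Gram matrix A_T^T A_T, found from its trace and determinant.

```python
import itertools
import math

# Hypothetical matrix with unit-norm columns (illustrative only).
A = [[1.0, 0.0, 0.6],
     [0.0, 1.0, 0.8]]
m = 3

def col(j):
    return [A[0][j], A[1][j]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# delta_1: for |T| = 1, ||A_T c||^2 = ||column||^2 * c^2, so delta_1 is
# the largest deviation of a squared column norm from 1.
delta1 = max(abs(dot(col(j), col(j)) - 1.0) for j in range(m))

# delta_2: for each |T| = 2, the extreme values of ||A_T c||^2 / ||c||^2
# are the eigenvalues of the 2x2 Gram matrix A_T^T A_T.
delta2 = 0.0
for i, j in itertools.combinations(range(m), 2):
    g11, g22 = dot(col(i), col(i)), dot(col(j), col(j))
    g12 = dot(col(i), col(j))
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lmax, lmin = (tr + disc) / 2, (tr - disc) / 2
    delta2 = max(delta2, lmax - 1.0, 1.0 - lmin)

print(delta1, delta2)
```

For this matrix \delta_1 = 0 (all columns have unit norm) and \delta_2 = 0.8, driven by the pair of columns with inner product 0.8; well-conditioned random matrices with many rows would give much smaller constants.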

The Theorem: On 3rd March 2005, Emmanuel Candes, Justin Romberg, and Terence Tao submitted to the arXiv (and to Communications on Pure and Applied Mathematics) a paper in which they proved the following result. Let A be an n \times m matrix (n<m), and let S be such that \delta_{3S}+3\delta_{4S} < 2. Then for any x_0 \in {\mathbb R}^m supported on T_0 with |T_0|\leq S and any e\in {\mathbb R}^n with ||e||_2 \leq \epsilon, we have ||x^*-x_0||_2 \leq C_S \cdot \epsilon, where the constant C_S depends only on \delta_{4S}. For example, C_S \approx 8.82 for \delta_{4S}=\frac{1}{5} and C_S \approx 10.47 for \delta_{4S}=\frac{1}{4}.

Short context: Vectors x_0 \in {\mathbb R}^m with few non-zero entries are called sparse and may represent, for example, an image or a signal in various applications. The vector y=Ax_0+e is the result of n measurements (given by the rows of A) with error e. Given y, we then try to recover x_0 by solving the l1-regularisation problem. The Theorem states that its solution x^* is close to x_0. In particular, if the measurements are exact (\epsilon=0), then x^* = x_0. The authors show that the condition \delta_{3S}+3\delta_{4S} < 2 in the Theorem holds for almost all matrices A with unit-normed columns if n=O(S\log m); hence, with high probability, we can recover x_0 from just O(|T_0|\log m) random measurements. In an earlier work, Donoho proved the existence of a recovery procedure with similar properties, while the Theorem states that a concrete efficient algorithm (l1-regularisation) does the job, even for inaccurate measurements. The error bound C_S \cdot \epsilon in the Theorem is essentially optimal.

Links: A free arXiv version of the original paper is here; the journal version is here.

