You need to know: Euclidean space $\mathbb{R}^n$ with scalar product $\langle x,y\rangle=\sum_i x_i y_i$, norms $\|x\|_{\ell_1}=\sum_i |x_i|$, $\|x\|_{\ell_2}=(\sum_i x_i^2)^{1/2}$, and $\|x\|_{\ell_\infty}=\max_i |x_i|$ in $\mathbb{R}^n$, support of vector $x$ (subset $T\subset\{1,\dots,n\}$ such that $x_i=0$ for all $i\notin T$), matrix, matrix multiplication, notation $A^*$ for the transpose of matrix $A$, big O notation, linear programming, basic probability, mean, variance, independent identically distributed (i.i.d.) random variables, normal distribution $N(\mu,\sigma^2)$ with mean $\mu$ and variance $\sigma^2$.
Background: A vector is called S-sparse if it is supported on $T$ with $|T|\le S$. For $x\in\mathbb{R}^p$ let $y=Ax+z$, where $A$ is an $n\times p$ matrix ($n<p$) and $z\in\mathbb{R}^n$ is a random error (noise) whose components $z_i$ are i.i.d. and follow $N(0,\sigma^2)$. The Dantzig selector $\hat{x}$ is a solution to the problem $\min_{\tilde{x}\in\mathbb{R}^p}\|\tilde{x}\|_{\ell_1}$ subject to $\|A^*(y-A\tilde{x})\|_{\ell_\infty}\le\lambda_p\sigma$, where $\lambda_p>0$ is a constant.
For $T\subset\{1,\dots,p\}$, let $A_T$ be the $n\times|T|$ submatrix of $A$ formed from the columns of $A$ corresponding to the indices in $T$. For $S\ge 1$, let $\delta_S$ be the smallest number such that the inequalities $(1-\delta_S)\|c\|_{\ell_2}^2\le\|A_Tc\|_{\ell_2}^2\le(1+\delta_S)\|c\|_{\ell_2}^2$ hold for all $T$ with $|T|\le S$ and all $c\in\mathbb{R}^{|T|}$. Also, for $S,S'\ge 1$ such that $S+S'\le p$, let $\theta_{S,S'}$ be the smallest number such that the inequality $|\langle A_Tc,A_{T'}c'\rangle|\le\theta_{S,S'}\|c\|_{\ell_2}\|c'\|_{\ell_2}$ holds for all disjoint sets $T,T'\subset\{1,\dots,p\}$ with $|T|\le S$ and $|T'|\le S'$, and all $c\in\mathbb{R}^{|T|}$, $c'\in\mathbb{R}^{|T'|}$.
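For small matrices these constants can be computed directly from the definitions by enumerating supports. The brute-force sketch below (my own illustration; it is exponential in $S$, so only feasible for tiny $p$) makes the definitions concrete:

```python
import numpy as np
from itertools import combinations

def delta_S(A, S):
    """Restricted isometry constant: smallest delta such that
    (1-delta)||c||^2 <= ||A_T c||^2 <= (1+delta)||c||^2 for all |T| <= S."""
    p = A.shape[1]
    worst = 0.0
    # Supports of size exactly S dominate smaller ones: dropping columns
    # can only shrink the singular-value range of A_T.
    for T in combinations(range(p), S):
        s = np.linalg.svd(A[:, T], compute_uv=False)
        worst = max(worst, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return worst

def theta_SS(A, S, Sp):
    """Restricted orthogonality constant: smallest theta such that
    |<A_T c, A_T' c'>| <= theta ||c|| ||c'|| over disjoint T, T'."""
    p = A.shape[1]
    worst = 0.0
    for T in combinations(range(p), S):
        rest = [j for j in range(p) if j not in T]
        for Tp in combinations(rest, Sp):
            # Spectral norm of the cross-Gram block gives the worst c, c'.
            worst = max(worst, np.linalg.norm(A[:, T].T @ A[:, Tp], 2))
    return worst
```

As a sanity check, a matrix with orthonormal columns has $\delta_S=0$ and $\theta_{S,S'}=0$, while duplicating a column forces $\delta_2\ge 1$.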
The Theorem: On 5th June 2005, Emmanuel Candes and Terence Tao submitted to arXiv a paper in which they proved the following result. Let $A$ be an $n\times p$ matrix ($n<p$), $S$ be such that $\delta_{2S}+\theta_{S,2S}<1$, $x\in\mathbb{R}^p$ be S-sparse, and $\hat{x}$ be the Dantzig selector with $\lambda_p:=\sqrt{2(1+t)\log p}$ for some $t\ge 0$. Then the inequality $\|\hat{x}-x\|_{\ell_2}^2\le C^2\cdot 2(1+t)\log p\cdot S\cdot\sigma^2$, where $C=4/(1-\delta_{2S}-\theta_{S,2S})$, holds with probability exceeding $1-\left(\sqrt{\pi\log p}\,p^{t}\right)^{-1}$.
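The probabilistic ingredient behind the theorem is elementary: when $A$ has unit-norm columns, each coordinate of $A^*z$ is $N(0,\sigma^2)$, and a union bound over the $p$ coordinates gives $P(\|A^*z\|_{\ell_\infty}>\sigma\sqrt{2\log p})\le(\sqrt{\pi\log p})^{-1}$, which is the $t=0$ case of the probability above. A Monte Carlo sanity check of this bound (my own illustration; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, trials = 64, 256, 1.0, 2000
A = rng.standard_normal((n, p))
A /= np.linalg.norm(A, axis=0)            # unit ell_2-norm columns
lam = np.sqrt(2 * np.log(p))              # lambda_p at t = 0

Z = rng.normal(0.0, sigma, size=(trials, n))       # i.i.d. noise draws
exceed = np.max(np.abs(Z @ A), axis=1) > lam * sigma
rate = exceed.mean()                       # empirical failure probability
bound = 1.0 / np.sqrt(np.pi * np.log(p))   # union-bound guarantee
print(f"empirical {rate:.3f} <= bound {bound:.3f}")
```

The empirical exceedance frequency should fall below the union-bound value (roughly 0.24 for $p=256$), consistent with the constraint in the Dantzig selector being satisfied by the true $x$ with the stated probability.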
Short context: In an earlier paper, Candes, Romberg, and Tao developed an efficient procedure which uses the results of a low number of measurements ($n=O(S\log(p/S))$ random measurements suffice) with arbitrary (random or non-random) error $z$, $\|z\|_{\ell_2}\le\varepsilon$, to recover an S-sparse vector $x$ with error $\|\hat{x}-x\|_{\ell_2}\le C\varepsilon$. With arbitrary $z$, this result is optimal. The Theorem, however, states that with random measurement error $z$ (which is often the case in applications) we can recover $x$ with even higher accuracy, $\|\hat{x}-x\|_{\ell_2}=O(\sigma\sqrt{S\log p})$, with high probability using the Dantzig selector $\hat{x}$, which can be computed using linear programming. The error bound in the Theorem is optimal up to the constant and $\log p$ factors.
Links: The free arXiv version of the original paper is here; the journal version is here.