You need to know: Euclidean space $\mathbb{R}^m$, norms $\|x\|_1=\sum_{i=1}^m |x_i|$ and $\|x\|_2=\sqrt{\sum_{i=1}^m x_i^2}$ in $\mathbb{R}^m$, support of vector $x$ (subset $T \subseteq \{1,\dots,m\}$ such that $x_i = 0$ for all $i \notin T$), matrix, matrix multiplication, big O notation.
Background: For $x_0 \in \mathbb{R}^m$, let $y = Ax_0 + e$, where $A$ is an $n \times m$ matrix ($n < m$) and $e \in \mathbb{R}^n$. Let $x^*$ be a solution to the problem $\min_{x \in \mathbb{R}^m} \|x\|_1$ subject to $\|Ax - y\|_2 \le \epsilon$ for some $\epsilon \ge 0$ (this is called the $l_1$-regularisation problem).
For $T \subseteq \{1,\dots,m\}$, let $A_T$ be the $n \times |T|$ submatrix of $A$ formed from the columns of $A$ corresponding to the indices in $T$. For $S \le m$, let $\delta_S$ be the smallest number such that the inequalities $(1-\delta_S)\|c\|_2^2 \le \|A_T c\|_2^2 \le (1+\delta_S)\|c\|_2^2$ hold for all $T$ with $|T| \le S$ and all $c \in \mathbb{R}^{|T|}$.
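The constant $\delta_S$ can be computed directly for small matrices by checking every column subset $T$ with $|T| \le S$: the extreme values of $\|A_T c\|_2^2 / \|c\|_2^2$ are the extreme eigenvalues of $A_T^\top A_T$. A minimal brute-force sketch (the $3 \times 4$ matrix is an illustrative assumption, not from the paper; the search is exponential in $m$, so this is for tiny examples only):

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, S):
    """Brute-force delta_S: the smallest delta such that
    (1 - delta)||c||^2 <= ||A_T c||^2 <= (1 + delta)||c||^2
    for every column subset T with |T| <= S."""
    m = A.shape[1]
    delta = 0.0
    for size in range(1, S + 1):
        for T in itertools.combinations(range(m), size):
            cols = list(T)
            # Eigenvalues of A_T^T A_T give the extreme values of
            # ||A_T c||^2 / ||c||^2 over all nonzero c.
            eigs = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
            delta = max(delta, 1.0 - eigs.min(), eigs.max() - 1.0)
    return delta

# Illustrative example: a 3 x 4 matrix with unit-normed columns,
# so delta_1 = 0 and delta_2 measures how far column pairs are
# from being orthonormal.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
A /= np.linalg.norm(A, axis=0)
print(restricted_isometry_constant(A, 2))
```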
The Theorem: On 3rd March 2005, Emmanuel Candes, Justin Romberg, and Terence Tao submitted to arxiv and Communications on Pure and Applied Mathematics a paper in which they proved the following result. Let $A$ be an $n \times m$ matrix ($n < m$), and let $S$ be such that $\delta_{3S} + 3\delta_{4S} < 2$. Then for any $x_0$ supported on $T_0$ with $|T_0| \le S$ and any $e$ such that $\|e\|_2 \le \epsilon$, we have $\|x^* - x_0\|_2 \le C_S \cdot \epsilon$, where the constant $C_S$ depends only on $\delta_{4S}$. For example, $C_S \approx 8.82$ for $\delta_{4S} = 1/5$ and $C_S \approx 10.47$ for $\delta_{4S} = 1/4$.
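In the exact-measurement case ($\epsilon = 0$), the $l_1$-regularisation problem $\min \|x\|_1$ subject to $Ax = y$ is a linear program, so the recovery can be tried out directly. A sketch using `scipy.optimize.linprog` (the sizes $m$, $n$, $S$ and the Gaussian measurement matrix are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve min ||x||_1 subject to Ax = y (the epsilon = 0 case)
    as a linear program in variables (x, t):
    minimize sum(t) subject to -t <= x <= t and Ax = y."""
    n, m = A.shape
    c = np.concatenate([np.zeros(m), np.ones(m)])   # objective: sum of t
    # x - t <= 0 and -x - t <= 0 together encode |x_i| <= t_i.
    A_ub = np.block([[np.eye(m), -np.eye(m)],
                     [-np.eye(m), -np.eye(m)]])
    b_ub = np.zeros(2 * m)
    A_eq = np.hstack([A, np.zeros((n, m))])          # Ax = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * m + [(0, None)] * m)
    return res.x[:m]

rng = np.random.default_rng(1)
m, n, S = 60, 25, 3                         # n << m, sparsity S
A = rng.standard_normal((n, m)) / np.sqrt(n)        # random measurements
x0 = np.zeros(m)
x0[rng.choice(m, S, replace=False)] = rng.standard_normal(S)  # S-sparse
y = A @ x0                                  # exact measurements (e = 0)
x_star = l1_recover(A, y)
# Recovery error ||x* - x0||_2; typically near zero when n is large
# enough relative to the sparsity S, as the Theorem predicts.
print(np.linalg.norm(x_star - x0))
```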
Short context: Vectors with few non-zero entries are called sparse and may represent, for example, an image or a signal in various applications. Vector $y = Ax_0 + e$ is the result of $n$ measurements (given by the rows of $A$) with error $e$. Given $y$, we then try to recover $x_0$ by solving the $l_1$-regularisation problem. The Theorem states that its solution $x^*$ is close to $x_0$. In particular, if the measurements are exact ($\epsilon = 0$), then $x^* = x_0$. The authors show that the condition $\delta_{3S} + 3\delta_{4S} < 2$ in the Theorem holds for almost all matrices $A$ with unit-normed columns if $S \le C \cdot n/\log(m/n)$ for some constant $C$; hence, with high probability, we can recover $x_0$ after just $n = O(S \log(m/n))$ random measurements. In an earlier work, Donoho proved the existence of a recovery procedure with similar properties, while the Theorem states that a concrete efficient algorithm ($l_1$-regularisation) does the job, even for inaccurate measurements. The error $C_S \cdot \epsilon$ in the Theorem is essentially optimal.
Links: The free arxiv version of the original paper is here; the journal version is here.