You need to know: Euclidean space $\mathbb{R}^n$, orthonormal vectors in $\mathbb{R}^n$, matrix, transpose $A^T$ of matrix $A$, matrix multiplication, eigenvalue of a square matrix, rank of a matrix, basic probability theory, selecting uniformly at random.
Background: A singular value of a matrix $X$ is the square root of an eigenvalue of $X^T X$, and $\|X\|_*$ denotes the sum of all singular values of $X$. For positive integers $n_1$, $n_2$ and $r \le \min(n_1, n_2)$, let $u_1, \dots, u_r$ be a family of orthonormal vectors in $\mathbb{R}^{n_1}$ sampled uniformly at random among all such families. Let $v_1, \dots, v_r$ be a similar random family of orthonormal vectors in $\mathbb{R}^{n_2}$. Then $M = \sum_{k=1}^{r} \sigma_k u_k v_k^T$, where $\sigma_1, \dots, \sigma_r$ are some (non-random) positive numbers, is an $n_1 \times n_2$ matrix of rank $r$, which we call a random matrix from the random orthogonal model.
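A minimal NumPy sketch of sampling a matrix from the random orthogonal model may help; the dimensions and the values $\sigma_k$ below are illustrative choices, not from the paper.

```python
import numpy as np

def random_orthonormal(n, r, rng):
    # QR of a Gaussian matrix gives r orthonormal columns in R^n; fixing the
    # signs of R's diagonal makes them uniformly (Haar) distributed -- a
    # standard trick.
    q, rmat = np.linalg.qr(rng.standard_normal((n, r)))
    return q * np.sign(np.diag(rmat))

rng = np.random.default_rng(0)
n1, n2, r = 50, 40, 3
U = random_orthonormal(n1, r, rng)      # columns u_1, ..., u_r in R^{n1}
V = random_orthonormal(n2, r, rng)      # columns v_1, ..., v_r in R^{n2}
sigma = np.array([5.0, 2.0, 1.0])       # fixed (non-random) positive numbers
M = U @ np.diag(sigma) @ V.T            # M = sum_k sigma_k u_k v_k^T
```

By construction $M$ has rank $r$ and singular values exactly $\sigma_1, \dots, \sigma_r$.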
The Theorem: On 29th May 2008, Emmanuel Candès and Benjamin Recht submitted to arXiv a paper in which they proved the following result. Let $M$ be an $n_1 \times n_2$ matrix of rank $r$ sampled from the random orthogonal model, and put $n = \max(n_1, n_2)$. Suppose we observe $m$ entries of $M$ with locations $\Omega$ sampled uniformly at random. Then there are constants $C$ and $c$ such that if $m \ge C n^{5/4} r \log n$, the minimizer to the problem

minimize $\|X\|_*$ subject to $X_{ij} = M_{ij}$ for all $(i,j) \in \Omega$

is unique and equal to $M$ with probability at least $1 - c n^{-3}$.
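The objective $\|X\|_*$ in the problem above, known as the nuclear norm, can be computed directly from a singular value decomposition. A minimal sketch on an illustrative rank-2 matrix:

```python
import numpy as np

def nuclear_norm(X):
    # Sum of the singular values of X, i.e. the square roots of the
    # eigenvalues of X^T X.
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((6, 2)))
V, _ = np.linalg.qr(rng.standard_normal((5, 2)))
M = U @ np.diag([3.0, 1.5]) @ V.T
print(nuclear_norm(M))   # singular values are 3 and 1.5, so ~4.5
```

Minimizing this norm over the constraint set is a convex problem, which is why standard convex optimisation machinery applies.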
Short context: Recovery of missing entries of low-rank matrices is central in many applications. The Theorem states that when we have only a small portion of random entries of a random matrix of rank $r$, we can recover all the other entries with high probability, by solving a simple convex optimisation problem, which can be done fast using existing software.
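As an illustration of how such a program can be solved in practice, here is a minimal NumPy sketch of singular value thresholding, an iterative method for nuclear norm minimization due to Cai, Candès and Shen; the matrix size, sampling rate, and the parameters tau and delta are illustrative choices, not taken from the paper.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    # Singular value thresholding: alternate a soft-threshold of the singular
    # values with a gradient step that re-imposes the observed entries.
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # shrink singular values by tau
        X = (U * s) @ Vt
        Y += delta * mask * (M_obs - X)     # step on the observed entries only
    return X

rng = np.random.default_rng(2)
n, r = 30, 2
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
M = U @ np.diag([3.0, 2.0]) @ V.T           # low-rank matrix to recover
mask = rng.random((n, n)) < 0.7             # observe ~70% of the entries
X = svt_complete(mask * M, mask, tau=5 * n, delta=1.2 / mask.mean())
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

On well-conditioned instances like this one, the recovered $X$ typically agrees with $M$ to several digits.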
Links: A free arXiv version of the original paper is here, and the journal version is here.