The variance of the current across the characteristic in ASEP is of order t^(2/3)

You need to know: Set {\mathbb Z} of integers, probability, independent random variables, exponential distribution, expectation, variance. For x\in{\mathbb R}, denote by [x] the largest integer in [0, x] if x \geq 0, and the smallest integer in [x, 0] if x\leq 0.

Background: At time t=0, put, for each x\in {\mathbb Z}, a particle in site x, independently and with the same probability \sigma. Then each particle z, independently of the others, generates a random variable t_z from the exponential distribution with parameter 1, waits for time t_z, and then jumps to an adjacent site, right or left with probabilities p and q=1-p, respectively, provided that the target site is unoccupied (and otherwise stays). This action is then repeated, in parallel for all particles. The resulting process is called the asymmetric simple exclusion process (ASEP). For v\in{\mathbb R}, let J^{(v)}(t) = J^{(v)}_+(t) - J^{(v)}_-(t), where J^{(v)}_+(t) is the number of particles that began in (-\infty,0] at time 0 but lie in [[vt]+1,\infty) at time t, and J^{(v)}_-(t) is the number of particles that began in [1, \infty) at time 0 but lie in (-\infty,[vt]] at time t.
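For readers who like to experiment, the dynamics above can be simulated with a short Gillespie-style routine. The sketch below is ours, not the authors': the infinite lattice is truncated to a finite box with closed boundaries (a simplifying assumption that only approximates the bulk behaviour), but the exponential clocks and the exclusion rule are exactly as in the definition.

```python
import random

def simulate_asep(L=100, sigma=0.5, p=0.7, t_max=5.0, seed=0):
    """Gillespie-style sketch of ASEP, truncated to the finite box
    {-L, ..., L} with closed boundaries.  Each particle carries a rate-1
    exponential clock; when it rings, the particle attempts a jump to the
    right with probability p, to the left with probability 1 - p, and
    stays put if the target site is occupied (exclusion rule)."""
    rng = random.Random(seed)
    occ = [rng.random() < sigma for _ in range(2 * L + 1)]  # occ[i] = site i - L
    n0 = sum(occ)
    t = 0.0
    while sum(occ) > 0:
        # The minimum of n independent Exp(1) clocks is Exp(n), and the
        # ringing clock belongs to a uniformly chosen particle.
        t += rng.expovariate(sum(occ))
        if t > t_max:
            break
        i = rng.choice([k for k, o in enumerate(occ) if o])
        j = i + (1 if rng.random() < p else -1)
        if 0 <= j <= 2 * L and not occ[j]:
            occ[i], occ[j] = False, True
    return n0, occ
```

Since the boundaries are closed, the particle number is conserved, which gives a convenient sanity check on the simulation.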

The Theorem: On 15th August 2006, Márton Balázs and Timo Seppäläinen submitted to arxiv a paper in which they proved that, for v=(p-q)(1-2\sigma), there exist positive constants t_0 and C such that C^{-1}t^{2/3} \leq \text{Var}(J^{(v)}(t)) \leq Ct^{2/3} for all t\geq t_0.

Short context: ASEP is one of the simplest models of diffusion processes – see here for its 2-dimensional version. The random variable J^{(v)}(t) is the total net particle current seen by an observer moving at speed v during the time interval [0,t]. In 1994, Ferrari and Fontes proved that \lim\limits_{t \to \infty} \frac{\text{Var}(J^{(v)}(t))}{t} = \sigma(1-\sigma) |(p-q)(1-2\sigma)-v|, which implies that the variance grows linearly in time, except when v=(p-q)(1-2\sigma), which is called the characteristic speed. J^{(v)}(t) for this v is called the current across the characteristic, and The Theorem proves that its variance grows as t^{2/3}.
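The characteristic speed and the Ferrari–Fontes linear coefficient are simple arithmetic expressions; a couple of helper functions (the names are ours) make the statement concrete:

```python
def characteristic_speed(p, sigma):
    """The characteristic speed v = (p - q)(1 - 2*sigma), with q = 1 - p."""
    return (p - (1 - p)) * (1 - 2 * sigma)

def ff_variance_rate(p, sigma, v):
    """Ferrari-Fontes limit of Var(J^(v)(t)) / t as t -> infinity."""
    return sigma * (1 - sigma) * abs(characteristic_speed(p, sigma) - v)
```

At v equal to the characteristic speed the linear coefficient vanishes, consistent with the sublinear t^{2/3} growth given by the Theorem.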

Links: Free arxiv version of the original paper is here, journal version is here. See also Section 10.5 of this book for an accessible description of the Theorem.

Go to the list of all theorems

Symmetric diagonally dominant linear systems can be solved in near linear time

You need to know: Euclidean space {\mathbb R}^n, matrix, transpose A^T of matrix (or vector) A, symmetric matrix, matrix multiplication, Moore-Penrose inverse A^{\dagger} of matrix A, randomised algorithm, expected running time, big O notation.

Background: A matrix A with entries a_{ij} is called weakly diagonally dominant if a_{ii} \geq \sum\limits_{j\neq i}|a_{ij}| for all i. For an n \times n symmetric matrix A and x \in {\mathbb R}^n, the A-norm of x is ||x||_A = \sqrt{x^TAx}.
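Both definitions are straightforward to check numerically; the sketch below (function names are ours) uses numpy:

```python
import numpy as np

def is_weakly_diagonally_dominant(A):
    """Check a_ii >= sum_{j != i} |a_ij| for every row i."""
    A = np.asarray(A, dtype=float)
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= off))

def a_norm(A, x):
    """||x||_A = sqrt(x^T A x), real-valued when x^T A x >= 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(x @ np.asarray(A, dtype=float) @ x))
```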

The Theorem: On 24th July 2006, Daniel Spielman and Shang-Hua Teng submitted to arxiv a paper in which they presented a randomised algorithm that, on input a symmetric, weakly diagonally dominant n \times n matrix A with m nonzero entries, a vector b \in {\mathbb R}^n, and \epsilon>0, produces a vector \tilde{x} such that \|{\tilde{x}} - A^{\dagger} {b} \|_{A} \leq \epsilon \|A^{\dagger} {b}\|_{A} in expected time O(m \log^{c}n \log (1/\epsilon)) for some constant c.
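The following toy computation (our illustration of the error criterion via numpy's pseudoinverse, NOT the Spielman–Teng algorithm) shows the quantities involved on a 3×3 example:

```python
import numpy as np

# The graph Laplacian of a triangle is symmetric and weakly diagonally
# dominant but singular (constant vectors span its kernel), which is why
# the Moore-Penrose inverse appears in the statement.
A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
b = np.array([1., 0., -1.])            # b is orthogonal to the kernel of A
x_star = np.linalg.pinv(A) @ b         # the exact solution A^dagger b

def a_norm(v):
    return np.sqrt(v @ A @ v)

x_tilde = x_star + 1e-6 * np.array([1., -1., 0.])   # a perturbed candidate
rel_err = a_norm(x_tilde - x_star) / a_norm(x_star)  # quantity bounded by eps
```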

Short context: The task of solving symmetric diagonally dominant systems of linear equations arises as a subroutine in many applications, e.g. in various problems on graphs or when solving elliptic partial differential equations by numerical methods. The Theorem proves that such systems can be solved to any given accuracy in time linear in the input size, if we ignore the logarithmic factors. In later works, it has been used to obtain substantial asymptotic improvements in the running time for various important problems in algorithmic graph theory.

Links: Free arxiv version of the original paper is here, journal version is here.


Characterization of stability-preserving linear operators on polynomials

You need to know: Polynomials, degree of a polynomial, linear operator.

Background: Let {\cal P}_k, k=1,2, be the set of all polynomials in k variables with real coefficients. A polynomial P \in {\cal P}_1 of degree n is called stable (or hyperbolic) if it has n real roots (counting multiplicity). We say that stable polynomials P, Q \in {\cal P}_1 are interlacing if either \alpha_1 \leq \beta_1 \leq \alpha_2 \leq \beta_2 \leq \dots or \beta_1 \leq \alpha_1 \leq \beta_2 \leq \alpha_2 \leq \dots, where \alpha_1 \leq \alpha_2 \leq \dots \leq \alpha_n and \beta_1 \leq \beta_2 \leq \dots \leq \beta_m are roots of P(x) and Q(x), respectively.
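Both notions are easy to test numerically. In the sketch below (function names and the numerical tolerance are ours), a polynomial is given by its coefficient list, highest degree first:

```python
import numpy as np

def stable_roots(coeffs, tol=1e-8):
    """Return the sorted roots if the polynomial (coefficients listed from
    the highest degree down) is stable, i.e. all roots real; else None."""
    r = np.roots(coeffs)
    return np.sort(r.real) if np.all(np.abs(r.imag) < tol) else None

def interlacing(a, b):
    """Check the interlacing condition for sorted real root arrays whose
    lengths differ by at most one."""
    def alternates(u, v):
        seq = []                       # u[0], v[0], u[1], v[1], ...
        for i in range(max(len(u), len(v))):
            if i < len(u):
                seq.append(u[i])
            if i < len(v):
                seq.append(v[i])
        return all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))
    return alternates(a, b) or alternates(b, a)
```

For example, x^2 - 1 (roots -1, 1) and x (root 0) are stable and interlacing, while x^2 + 1 is not stable.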

A polynomial P \in {\cal P}_2 is stable if Q(t)=P(a+bt, c+dt)\in {\cal P}_1 is stable for any real a,b,c,d such that b>0 and d>0.  For a linear operator T:{\cal P}_1\to {\cal P}_1, let S_T:{\cal P}_2\to {\cal P}_2 be a linear operator such that S_T(x^ky^l) = T(x^k)y^l, for all k\geq 0 and l\geq 0. We say that linear operator T:{\cal P}_1\to {\cal P}_1 preserves stability if T(P) is a stable polynomial whenever P is stable.

The Theorem: On 18th July 2006, Julius Borcea and Petter Brändén submitted to arxiv a paper in which they proved that a linear operator T:{\cal P}_1\to {\cal P}_1 preserves stability if and only if either (a) T(x^k)=a_k P(x)+ b_k Q(x), where a_k,b_k, k=0,1,2,\dots are real numbers, and P(x) and Q(x) are some fixed (independent of k) stable interlacing polynomials; or (b) S_T[(x+y)^k]\in {\cal P}_2 is a stable polynomial for all k=0,1,2,\dots; or (c) S_T[(x-y)^k]\in {\cal P}_2 is a stable polynomial for all k=0,1,2,\dots.

Short context: Examples of linear operators T:{\cal P}_1\to {\cal P}_1 which preserve stability are differentiation and multiplication by a fixed stable polynomial. A fundamental long-standing open problem was to describe all such operators. In 1914, Pólya and Schur did this for operators such that T(x^k) = \lambda_k x^k, k\geq 0, where \{\lambda_k\}_{k=0}^\infty is a given sequence of numbers. The Theorem provides a characterization of all stability-preserving operators.
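As a quick numerical illustration of the differentiation example (the set-up is ours), one can check that the roots of P' are all real and interlace those of P, as guaranteed by Rolle's theorem:

```python
import numpy as np

# A stable quintic P with the real roots below; its derivative P' again
# has only real roots, and they interlace those of P.
roots = np.array([-3., -1., 0., 2., 4.])
P = np.poly(roots)                    # coefficients of P from its roots
dP = np.polyder(P)                    # coefficients of P'
droots_c = np.roots(dP)
max_imag = np.max(np.abs(droots_c.imag))
droots = np.sort(droots_c.real)       # one root of P' in each gap of P
```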

Links: Free arxiv version of the original paper is here, journal version is here. See also Section 9.8 of this book for an accessible description of the Theorem.


The diagonal Ramsey numbers r(k+1,k+1) do not exceed C_m 4^k/k^m

You need to know: Graph, complete graph, binomial coefficient {n\choose k}=\frac{n!}{k!(n-k)!}.

Background: Let K_n denote the complete graph with n vertices. The (two-colour) Ramsey number r(k, l) is the smallest natural number n such that, in any red and blue colouring of the edges of K_n, we are guaranteed to find either a red K_k or a blue K_l. The Ramsey numbers r(k, k) are called diagonal.
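For tiny cases the definition can be verified by brute force over all 2-colourings; the sketch below (function names are ours) confirms the classical fact r(3,3)=6:

```python
from itertools import combinations

def has_mono_Kk(n, colour, k):
    """colour maps each edge (i, j), i < j, of K_n to 0 or 1; report
    whether some k vertices span a monochromatic K_k."""
    return any(
        len({colour[e] for e in combinations(vs, 2)}) == 1
        for vs in combinations(range(n), k)
    )

def every_colouring_has_mono_Kk(n, k):
    """Brute force over all 2-colourings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_Kk(n, {e: (mask >> i) & 1 for i, e in enumerate(edges)}, k)
        for mask in range(2 ** len(edges))
    )
```

The search over K_6 examines all 2^{15} colourings, which is still instantaneous; this approach obviously cannot scale, which is why upper bounds like The Theorem are needed.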

The Theorem: On 11th July 2006, David Conlon submitted to the Annals of Mathematics a paper in which he proved the existence of a constant C>0 such that the inequality r(k+1,k+1) \leq k^{-C\frac{\log k}{\log \log k}}{2k\choose k} holds for all integers k \geq 2.

Short context: In 1929, Ramsey proved that for any positive integers k,l there exists n such that any red-blue edge colouring of K_n contains either a red K_k or a blue K_l. This implies that the numbers r(k, l) exist and are finite. In 1935, Erdős and Szekeres proved that r(k+1,l+1) \leq {k+l\choose k}. In particular, r(k+1,k+1) \leq {2k\choose k}. In 1988, Thomason improved this to r(k+1,k+1) \leq k^{-1/2+A/\sqrt{\log k}}{2k\choose k} for some constant A. The Theorem is a major improvement of Thomason's bound. In particular, it implies that for any m>0 there is a constant C_m>0 such that r(k+1,k+1) \leq C_m\frac{4^k}{k^m} for all k.

Links: Free arxiv version of the original paper is here, journal version is here. See also Section 9.11 of this book for an accessible description of the Theorem.


There exist infinitely many groups G with no product-free subset of size 2|G|^(8/9)

You need to know: Group, finite group, notation |S| for the number of elements in a finite set S.

Background: A subset S of a finite group G is called product-free if it does not contain three elements x, y and z with x y = z.
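The definition is easy to explore by brute force in very small groups. In the sketch below (function names are ours), a group is given by its element list and multiplication map:

```python
from itertools import combinations, product

def is_product_free(S, op):
    """No x, y in S (not necessarily distinct) may satisfy op(x, y) in S."""
    return all(op(x, y) not in S for x, y in product(S, repeat=2))

def largest_product_free(elems, op):
    """Brute-force search, feasible only for very small groups."""
    for size in range(len(elems), 0, -1):
        for S in combinations(elems, size):
            if is_product_free(set(S), op):
                return set(S)
    return set()
```

In the additive group {\mathbb Z}_8, for instance, the odd residues {1, 3, 5, 7} form a product-free subset of half the group, echoing the Erdős-style sum-free examples mentioned below.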

The Theorem: On 5th July 2006, Timothy Gowers submitted to Combinatorics, Probability and Computing a paper in which he proved (by providing explicit examples) the existence of infinitely many groups G which have no product-free subsets of cardinality greater than 2|G|^{8/9}.

Short context: A well-known theorem of Erdős states that for every n-element set X of integers there is a subset S \subset X of size at least n/3 that is sum-free, that is, S does not contain integers x, y and z such that x+y=z. In 1985, Babai and Sós asked for a generalisation: does every finite group G contain a product-free subset of size c|G| for some c>0? As a partial result, they managed to prove the existence of product-free subsets of size c|G|^{4/7}, which was improved to c|G|^{11/14} by Kedlaya in 1997. The Theorem, however, shows that the answer to the Babai and Sós question is no.

Links: Free arxiv version of the original paper is here, journal version is here.


The set of n=d exp(cd) i.i.d. Gaussian points in R^d is linearly separable

You need to know: Basic probability theory, random vector in {\mathbb R}^d, independent and identically distributed (i.i.d.) random vectors, Gaussian distribution in {\mathbb R}^d, covariance matrix, non-singular matrix, overwhelming probability, hyperplane in {\mathbb R}^d.

Background: Let X be a random point cloud consisting of n points x_i, i= 1,\dots,n, sampled independently and identically from a Gaussian distribution in {\mathbb R}^d with non-singular covariance matrix. The set X is called linearly separable (or 1-convex) if every point in it can be separated from the other points by a hyperplane.
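A quick numpy experiment (the parameters and the particular choice of hyperplane are ours, not part of the Donoho–Tanner argument) illustrates the phenomenon. For a point x_i of a standard Gaussian cloud, the hyperplane through x_i with normal vector x_i separates x_i from the rest provided \langle x_j, x_i\rangle < \langle x_i, x_i\rangle for every j \neq i; this is only a sufficient condition, but in high dimension it typically holds for all i simultaneously:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 1000
X = rng.standard_normal((n, d))       # n standard Gaussian points in R^d
G = X @ X.T                           # Gram matrix of all inner products
# For each i, compare <x_j, x_i> (off-diagonal) with <x_i, x_i> (diagonal).
all_separable = all(
    bool(np.all(np.delete(G[i], i) < G[i, i])) for i in range(n)
)
```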

The Theorem: On 5th July 2006, David Donoho and Jared Tanner submitted to the Journal of the AMS a paper in which they proved, among other results, that for any \epsilon>0, with overwhelming probability for large d<n, each subset of X of size at most \frac{d}{2e \log(n/d)}(1-\epsilon) can be separated from other points of X by a hyperplane.

Short context: If you select n random points on the plane, then, unless n is quite small, you will likely find a point which is in the convex hull of the other points, and therefore cannot be separated from them by a line. The Theorem states that (at least for the Gaussian distribution) the situation in high dimension is dramatically different: with very high probability, every group of \frac{d}{2e \log(n/d)}(1-\epsilon) points can be separated from the others by a hyperplane. In particular, the set of n points is linearly separable if \frac{d}{2e \log(n/d)}(1-\epsilon)>1, that is, if n < d \exp(cd) for some c>0. The Theorem has many applications, for example to efficient error correction and sparse signal recovery.

Links: Free arxiv version of the original paper is here, journal version is here.



Every cubic form over integers in 14 variables has a non-trivial zero

You need to know: Polynomial in n variables.

Background: A cubic form over integers in n variables is a polynomial of the form P(x_1, x_2, \dots, x_n)=\sum\limits_{i=1}^n \sum\limits_{j=1}^n \sum\limits_{k=1}^n c_{ijk}x_ix_jx_k, where c_{ijk} are integer coefficients.
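Evaluating a cubic form and searching a small box for a non-trivial zero takes only a few lines; the sketch below (function names and the sparse coefficient encoding are ours) illustrates the definition:

```python
from itertools import product

def cubic_form(c, x):
    """Evaluate P(x) = sum_{i,j,k} c[(i,j,k)] x_i x_j x_k; keys missing
    from the coefficient dictionary c are treated as zero."""
    n = len(x)
    return sum(c.get((i, j, k), 0) * x[i] * x[j] * x[k]
               for i in range(n) for j in range(n) for k in range(n))

def nontrivial_zero_in_box(c, n, box=3):
    """Search {-box, ..., box}^n for a nonzero integer point with P = 0."""
    for x in product(range(-box, box + 1), repeat=n):
        if any(x) and cubic_form(c, x) == 0:
            return x
    return None
```

For example, for P(x_1, x_2) = x_1^3 - x_2^3 the search immediately finds a non-trivial zero with x_1 = x_2. Of course, no finite search can prove a zero does not exist, which is what makes results like The Theorem non-trivial.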

The Theorem: On 8th June 2006, Roger Heath-Brown submitted to Inventiones mathematicae a paper in which he proved that, for every cubic form P(x_1, x_2, \dots, x_n) over integers in n\geq 14 variables, there exist integers x^*_1, x^*_2, \dots, x^*_n, not all zero, such that P(x^*_1, x^*_2, \dots, x^*_n)=0.

Short context: The classical 1884 theorem of Meyer states that any indefinite quadratic form (a quadratic form P(x_1, x_2, \dots, x_n)=\sum\limits_{i=1}^n \sum\limits_{j=1}^n c_{ij}x_ix_j is indefinite if it is negative for some values of the variables and positive for others) over integers in n\geq 5 variables has a non-trivial zero. All cubic forms are indefinite (since P(-x_1,\dots,-x_n)=-P(x_1,\dots,x_n)), and it is conjectured that they must have a non-trivial zero whenever n \geq 10. In 1963, Davenport proved this for n\geq 16. After more than 40 years with no further progress, the Theorem proves this for n\geq 14.

Links: The original paper is available here.


There are (1+o(1))CN^2/log^4 N, C=0.4764… 4-term progressions of primes at most N

You need to know: Prime numbers, arithmetic progression, logarithm, limit, infinite product.

Background: We denote by o(1) any function f(N) such that \lim\limits_{N\to\infty} f(N)=0.

The Theorem: On 4th June 2006, Ben Green and Terence Tao submitted to arxiv a paper in which they proved, among other results, that the number of 4-tuples of primes p_1<p_2<p_3<p_4 \leq N which lie in arithmetic progression is (1+o(1))C\frac{N^2}{\log^4 N}, with C=\frac{3}{4}\prod\limits_{p\geq 5}\left(1-\frac{3p-1}{(p-1)^3}\right) \approx 0.4764, where the product is over all primes p\geq 5.
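Both the count and the constant are easy to approximate numerically. The sketch below (function names and the truncation of the infinite product are ours) counts the 4-term progressions directly and evaluates a truncated product for C:

```python
def primes_upto(n):
    """Sieve of Eratosthenes: list of primes <= n plus a lookup table."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return [i for i, b in enumerate(is_p) if b], is_p

def count_4term_prime_aps(N):
    """Count 4-tuples of primes p1 < p2 < p3 < p4 <= N in progression."""
    primes, is_p = primes_upto(N)
    count = 0
    for a, p1 in enumerate(primes):
        for p2 in primes[a + 1:]:
            d = p2 - p1
            if p2 + 2 * d > N:
                break
            if is_p[p2 + d] and is_p[p2 + 2 * d]:
                count += 1
    return count

def green_tao_constant(P):
    """Truncation of C = (3/4) prod_{5 <= p <= P} (1 - (3p-1)/(p-1)^3)."""
    C = 0.75
    for p in primes_upto(P)[0]:
        if p >= 5:
            C *= 1 - (3 * p - 1) / (p - 1) ** 3
    return C
```

For example, up to N=30 the only such progressions are (5, 11, 17, 23) and (11, 17, 23, 29), and since the factors of the product behave like 1 - 3/p^2, a modest truncation already gives C to several digits.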

Short context: In a paper submitted in 2004, Green and Tao proved that, for any k, the set of primes contains infinitely many arithmetic progressions of length k. Moreover, they proved that there are at least (1+o(1))c_k\frac{N^2}{\log^k N} such progressions consisting of primes at most N, where c_k>0 are some small constants. The Theorem provides the exact asymptotic count of the 4-term progressions. Moreover, the authors formulated conjectures which imply the exact count of the k-term progressions for all k, and much more. They then proved these conjectures in later works.

Links: Free arxiv version of the original paper is here, journal version is here. See also Section 10.8 of this book for an accessible description of the Theorem.


There exist quadratic polynomials that have a Julia set of positive area

You need to know: Basic complex analysis, set {\mathbb C} of complex numbers, absolute value |z| of complex number z, boundary of a set, area (that is, Lebesgue measure) of a set A \subset {\mathbb C}, the notion of (Lebesgue) almost all.

Background: For a polynomial f:{\mathbb C} \to {\mathbb C}, let K_f \subset {\mathbb C} be the set of points z_0\in {\mathbb C} such that the sequence z_0, z_1=f(z_0), z_2=f(z_1), \dots, z_{n+1}=f(z_n), \dots is bounded, that is, |z_n|\leq B for all n, for some B \in {\mathbb R}. The boundary J_f of K_f is called the Julia set of f.
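Membership in K_f can be probed by the standard escape-time test. The sketch below (specialised, as our own simplifying choice, to the quadratic family f(z)=z^2+c) uses the fact that once |z_n| exceeds max(|c|, 2) the orbit must escape to infinity:

```python
def probably_in_K(c, z0, max_iter=1000):
    """Escape-time test for membership of z0 in K_f, where f(z) = z^2 + c.
    If |z_n| ever exceeds R = max(|c|, 2), the orbit provably tends to
    infinity, so z0 is not in K_f; otherwise membership is reported only
    'up to max_iter iterations' (the test can refute, never prove)."""
    R = max(abs(c), 2.0)
    z = complex(z0)
    for _ in range(max_iter):
        if abs(z) > R:
            return False
        z = z * z + c
    return True
```

For instance, for c = -1 the point 0 has the bounded period-2 orbit 0, -1, 0, -1, ..., so it lies in K_f.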

The Theorem: On 18th May 2006, Xavier Buff and Arnaud Cheritat submitted to arxiv a paper in which they proved the existence of quadratic polynomials that have a Julia set of positive area.

Short context: The Julia set is a fundamental concept in the theory of complex dynamics, because it consists of the values z_0 for which an arbitrarily small perturbation can cause significant changes in the sequence of iterated function values. The Julia set is known to have zero area for many families of quadratic polynomials, including a family which contains almost all such polynomials. The Theorem proves, however, the existence of exceptional quadratic polynomials whose Julia set has positive area.

Links: Free arxiv version of the original paper is here, journal version is here.


A version of the central limit theorem holds for convex sets

You need to know: Probability, random vector, uniform distribution, Euclidean space {\mathbb R}^n, inner product (x,y)=\sum\limits_{i=1}^n x_i y_i in {\mathbb R}^n, unit vector in {\mathbb R}^n, integration in {\mathbb R}^n, measurable sets, convex set in {\mathbb R}^n, compact set in {\mathbb R}^n, interior of a set, supremum.

Background: A convex body in {\mathbb R}^n is a compact, convex set K\subset{\mathbb R}^n with a non-empty interior.

The Theorem: On 29th April 2006, Boáz Klartag submitted to arxiv a paper in which he proved the existence of a sequence \epsilon_n converging to 0 for which the following holds: For any convex body K\subset{\mathbb R}^n, there exist a unit vector \theta in {\mathbb R}^n, t_0\in {\mathbb R}, and \sigma>0 such that \sup\limits_{A\subset {\mathbb R}}\left|\text{Prob}\{(X,\theta)\in A\}-\frac{1}{\sqrt{2\pi}\sigma}\int_A e^{-\frac{(t-t_0)^2}{2\sigma^2}}dt\right|\leq \epsilon_n, where X is a random vector uniformly distributed in K, and the supremum runs over all measurable sets A\subset {\mathbb R}.

Short context: Let X be a random vector distributed uniformly in a cube in {\mathbb R}^n centred at the origin. Then the components X_1, X_2, \dots, X_n of X are independent and identically distributed, and, by the classical central limit theorem, \frac{1}{\sqrt{n}}\sum\limits_{i=1}^n X_i is close to a normal distribution if n is large. Moreover, the same is true for (X,\theta), under some mild conditions on the unit vector \theta. The Theorem states that a similar result holds if X is uniformly distributed in any convex body in {\mathbb R}^n. This is surprising because in this case the components X_i may be far from independent.
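A small numpy experiment (the parameters are ours) illustrates the statement in the easiest case, the cube, where the classical central limit theorem already applies: the projection onto the diagonal direction is nearly Gaussian, as a check of the low moments confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 50_000
X = rng.uniform(-0.5, 0.5, size=(m, n))   # m uniform samples from the cube
theta = np.ones(n) / np.sqrt(n)           # the diagonal unit vector
proj = X @ theta                          # samples of (X, theta)
z = (proj - proj.mean()) / proj.std()     # standardised projection
kurtosis = np.mean(z ** 4)                # equals 3 for a standard Gaussian
```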

Links: Free arxiv version of the original paper is here, journal version is here.
