Every set A of natural numbers with positive density contains B+C for some infinite sets B and C

You need to know: Notation {\mathbb N} for the set of positive integers, finite and infinite sets, notation |S| for the number of elements of a finite set S, limit superior \limsup.

Background: Let A\subset {\mathbb N} be any set of natural numbers. We say that A has positive upper density if \limsup\limits_{n\to\infty}\frac{|A \cap \{1,2,\dots,n\}|}{n} > 0. The sum of sets B \subset {\mathbb N} and C \subset {\mathbb N} is B+C=\{b+c: b\in B, c\in C\}.
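
To make the definitions concrete, here is a small Python sketch (illustrative only, not from the paper) that estimates the density of a set up to a finite cutoff n and forms the sumset B+C of two finite truncations; the actual upper density is the \limsup of these ratios as n\to\infty.

```python
# Illustrative sketch: finite-cutoff density estimate and the sumset B + C.

def density_up_to(A, n):
    """|A ∩ {1,...,n}| / n for a set A of positive integers."""
    return sum(1 for a in A if a <= n) / n

def sumset(B, C):
    """B + C = {b + c : b in B, c in C}."""
    return {b + c for b in B for c in C}

# Example: A = even numbers has upper density 1/2.
A = set(range(2, 10_001, 2))
print(density_up_to(A, 10_000))        # 0.5

# A contains B + C for, e.g., B = C = even numbers (truncated here).
B = set(range(2, 101, 2))
C = set(range(2, 101, 2))
print(sumset(B, C) <= A)               # True for these truncations
```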

The Theorem: On 1st March 2018, Joel Moreira, Florian Richter, and Donald Robertson submitted to arxiv a paper in which they proved that any set A\subset {\mathbb N} of positive upper density contains B+C, where B and C are infinite subsets of {\mathbb N}.

Short context: The infinite version of Ramsey's famous 1929 theorem implies that if the positive integers are partitioned into finitely many sets, then at least one of these sets contains B+C for some infinite sets B and C. The Theorem proves a density version of this statement, which was known as the Erdős sumset conjecture.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

The volume exponent of the first Grigorchuk group is equal to 0.7674…

You need to know: Group, generating set of a group, finitely generated group, subgroup generated by a set of elements of a group.

Background: Let T be the set of all finite strings of 0s and 1s together with the empty string \emptyset. For x,y \in T, we write x<y if x is an initial segment of y. Let \text{Aut}(T) be the set of all length-preserving permutations \sigma: T \to T such that \sigma(x) < \sigma(y) whenever x<y. With the composition operation (\sigma_1 \cdot \sigma_2)(x)=\sigma_1(\sigma_2(x)), \text{Aut}(T) forms a group. Let a,b,c,d be specific elements of \text{Aut}(T) defined recursively by the following relations, which hold for all strings x: a(0)=1, a(1)=0, b(0)=c(0)=d(0)=0, b(1)=c(1)=d(1)=1, a(0x)=1x, a(1x)=0x, b(0x)=0a(x), b(1x)=1c(x), c(0x)=0a(x), c(1x)=1d(x), d(0x)=0x, d(1x)=1b(x). Let G be the subgroup of \text{Aut}(T) generated by a,b,c,d. For any generating set S of G, let v_{G,S}(n) be the number of elements of G that can be written as a product of at most n elements of S \cup S^{-1}.
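
The definitions above are easy to experiment with. The following Python sketch (illustrative only, not from the paper) implements the generators a,b,c,d acting on 0/1 strings and counts the ball sizes v_{G,S}(n) for small n by brute force, identifying two products of generators when they act identically on all strings of a fixed depth (an assumption: the depth is taken large enough for the word lengths considered).

```python
from itertools import product

# The generators a, b, c, d acting on finite 0/1 strings, and a brute-force
# count of v_{G,S}(n) for S = {a, b, c, d} (each generator is its own inverse).

def a(x):
    if not x: return x
    return ('1' if x[0] == '0' else '0') + x[1:]

def b(x):
    if not x: return x
    return '0' + a(x[1:]) if x[0] == '0' else '1' + c(x[1:])

def c(x):
    if not x: return x
    return '0' + a(x[1:]) if x[0] == '0' else '1' + d(x[1:])

def d(x):
    if not x: return x
    return '0' + x[1:] if x[0] == '0' else '1' + b(x[1:])

GENS = {'a': a, 'b': b, 'c': c, 'd': d}

def apply_word(word, x):
    """Apply a product of generators to x, rightmost factor acting first."""
    for g in reversed(word):
        x = GENS[g](x)
    return x

def ball_sizes(max_n, depth=8):
    """v_{G,S}(n) for n = 0..max_n. Two products of generators are identified
    when they act identically on all 0/1 strings of length `depth`
    (an assumption: `depth` is large enough for these short words)."""
    strings = [''.join(s) for s in product('01', repeat=depth)]
    seen = {tuple(strings)}              # the identity element
    frontier = [()]
    sizes = [1]
    for _ in range(max_n):
        new_frontier = []
        for word in frontier:
            for g in 'abcd':
                w = word + (g,)
                image = tuple(apply_word(w, s) for s in strings)
                if image not in seen:
                    seen.add(image)
                    new_frontier.append(w)
        frontier = new_frontier
        sizes.append(len(seen))
    return sizes

print(ball_sizes(5))   # [v(0), v(1), ..., v(5)]; v(1) = 5 since a, b, c, d are involutions
```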

The Theorem: On 25th February 2018, Anna Erschler and Tianyi Zheng submitted to arxiv a paper in which they proved that for any generating set S of G, we have \lim\limits_{n\to\infty}\frac{\log \log v_{G,S}(n)}{\log n}=\alpha_0, where \alpha_0=0.7674... is the constant defined by \alpha_0=\frac{\log 2}{\log \lambda_0}, where \lambda_0 is the positive solution to the equation x^3-x^2-2x-4=0.
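
The constant in the Theorem is easy to evaluate numerically; the following short sketch (illustrative only) finds \lambda_0 by bisection and prints \alpha_0.

```python
# Numerical check of the constant in the Theorem:
# lambda_0 is the positive root of x^3 - x^2 - 2x - 4 = 0, alpha_0 = log 2 / log lambda_0.
import math

def f(x):
    return x**3 - x**2 - 2*x - 4

lo, hi = 2.0, 3.0            # f(2) < 0 < f(3), so the root lies in between
for _ in range(100):         # bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

lam0 = (lo + hi) / 2
print(lam0)                          # ≈ 2.4675...
print(math.log(2) / math.log(lam0))  # ≈ 0.7674...
```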

Short context: The group G defined above is called the first Grigorchuk group. In 1984, Grigorchuk proved that \exp(cn^{1/2})\leq v_{G,S}(n) \leq \exp(Cn^\beta), where c,C are positive constants and \beta=\log_{32}31 < 1. This implies that v_{G,S}(n) grows faster than any polynomial but slower than any exponential function \exp(C'n). Such groups are called groups of intermediate growth, and whether they exist was a major open question asked by Milnor in 1968 and answered by Grigorchuk by constructing the group G. Grigorchuk, and later other authors, also constructed other examples of groups of intermediate growth (see here and here), but the group G is the simplest and most studied example. From Grigorchuk's bounds it is clear that the limit \lim\limits_{n\to\infty}\frac{\log \log v_{G,S}(n)}{\log n} (called the volume exponent of G), if it exists, lies between 1/2 and \log_{32}31. The Theorem proves that this limit exists and computes it exactly.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

The hot spots conjecture is true for all triangles in the plane

You need to know: Euclidean plane {\mathbb R}^2, smooth (that is, infinitely differentiable) function in 2 variables, first and second order partial derivatives, notation \Delta f = \frac{\partial^2 f}{\partial x_1^2}+\frac{\partial^2 f}{\partial x_2^2}, directional derivative \frac{\partial f}{\partial n} for a vector n, normal vector.

Background: Let \Omega \subset {\mathbb R}^2 be a domain in {\mathbb R}^2; in fact, we may assume that \Omega is a triangle. Let \partial\Omega denote the boundary of \Omega. The second Neumann eigenvalue \mu_2=\mu_2(\Omega) is the smallest positive real number such that there exists a smooth function u : \Omega \to {\mathbb R}, not identically zero, that satisfies the equation \Delta u =-\mu_2 \cdot u on \Omega and the boundary condition \left.\frac{\partial u}{\partial n}\right|_{\partial \Omega} \equiv 0 at the smooth points of \partial\Omega, where n denotes the outward pointing normal vector (see here for an equivalent definition of \mu_2). A function u that satisfies these conditions is called the second Neumann eigenfunction for \Omega.
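
For intuition, here is a minimal one-dimensional analogue in Python (illustrative only; the Theorem concerns triangles in the plane, not intervals): a finite-difference Neumann Laplacian on [0,1], whose second eigenfunction is approximately \cos(\pi x) and attains its extrema at the two boundary points.

```python
# 1D analogue of the Neumann eigenvalue problem on [0, 1] by finite differences.
import numpy as np

m = 200                      # number of grid points
h = 1.0 / (m - 1)
L = np.zeros((m, m))
for i in range(1, m - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = 1, -2, 1
# one-sided stencils encode the Neumann boundary condition u'(0) = u'(1) = 0
L[0, 0], L[0, 1] = -2, 2
L[-1, -1], L[-1, -2] = -2, 2
L /= h ** 2

vals, vecs = np.linalg.eig(L)
order = np.argsort(-vals.real)        # eigenvalues of L are approximately -mu_k
u2 = vecs[:, order[1]].real           # second Neumann eigenfunction, roughly cos(pi*x)
print(np.argmax(np.abs(u2)) in (0, m - 1))   # extremum at the boundary: True
```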

The Theorem: On 6th February 2018, Chris Judge and Sugata Mondal submitted to arxiv a paper in which they proved that, for any triangle \Omega, the second Neumann eigenfunction attains its extrema at the boundary of \Omega.

Short context: The conjecture that the second Neumann eigenfunction attains its extrema at the boundary of \Omega is known as the hot spots conjecture. In simple English, it states that if a flat piece of metal is given an initial heat distribution which then flows throughout the metal, then, after some time, the hottest point on the metal will lie on its boundary. The conjecture was proposed by Rauch in 1974. In 1999, Burdzy and Werner constructed a domain (with holes) for which the conjecture fails, but it is still believed to be true for all convex domains. For triangles, the conjecture was previously known to hold for obtuse triangles, right triangles, and acute triangles with at least one angle less than \pi/6. The Theorem confirms the conjecture for all triangles. See here for another partial result towards the conjecture.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

There exist lattices with exponentially large kissing numbers

You need to know: Set {\mathbb Z} of integers, Euclidean space {\mathbb R}^n, basis for {\mathbb R}^n, norm ||x||=\sqrt{\sum_{i=1}^n x_i^2} of x=(x_1, \dots, x_n) \in {\mathbb R}^n.

Background: A lattice L in {\mathbb R}^n is a set of the form L =\left\{\left.\sum\limits_{i=1}^n a_i v_i\,\right\vert\, a_i \in {\mathbb Z}\right\}, where v_1, \dots, v_n is a basis for {\mathbb R}^n. Let \lambda_1(L) be the length of the shortest non-zero vector in L. The kissing number \tau(L) of L is the number of vectors of length \lambda_1(L) in L. The lattice kissing number \tau_n^l in dimension n is the maximum value of \tau(L) over all lattices L in {\mathbb R}^n.
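
These definitions can be checked by brute force in very small dimensions. The Python sketch below (illustrative only; the Theorem is about high dimensions, where such enumeration is hopeless) enumerates lattice vectors with small integer coefficients and counts those of minimal length.

```python
# Brute-force kissing number of a lattice given by a basis (tiny dimensions only).
import itertools
import numpy as np

def kissing_number(basis, box=4):
    basis = np.array(basis, dtype=float)
    n = basis.shape[0]
    best, count = None, 0
    for coeffs in itertools.product(range(-box, box + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                                  # skip the zero vector
        length = np.linalg.norm(np.array(coeffs) @ basis)
        if best is None or length < best - 1e-9:
            best, count = length, 1                   # new shortest length lambda_1
        elif abs(length - best) < 1e-9:
            count += 1                                # another vector of length lambda_1
    return count

print(kissing_number([[1, 0], [0, 1]]))                 # Z^2: tau = 4
print(kissing_number([[1, 0], [0.5, np.sqrt(3) / 2]]))  # hexagonal lattice: tau = 6
```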

The Theorem: On 3rd February 2018, Serge Vlăduţ submitted to arxiv a paper in which he proved the existence of a constant c>0 such that \tau_n^l \geq e^{cn} for all n\geq 1.

Short context: The kissing number in {\mathbb R}^n is the highest number of equal nonoverlapping spheres in {\mathbb R}^n that can touch another sphere of the same size. Determining the kissing number in various dimensions is an active area of research, see here and here. The lattice kissing number corresponds to the case when spheres form a “regular pattern”. It was a long-standing open problem whether the lattice kissing number grows exponentially with dimension. The Theorem resolves this problem affirmatively.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

The Pólya conjecture for the Neumann eigenvalues holds for the second eigenvalue

You need to know: Euclidean space {\mathbb R}^n, open subsets of {\mathbb R}^n,  bounded subsets of {\mathbb R}^n, notation |\Omega| for the volume of \Omega \subset {\mathbb R}^n, differentiable function on {\mathbb R}^n, its gradient \nabla u(x)=\left(\frac{\partial u}{\partial x_1}(x), \dots, \frac{\partial u}{\partial x_n}(x) \right), integral over \Omega \subset {\mathbb R}^n, normed space, subspace, dimension of a subspace, compactly embedded space.

Background: Let n\geq 2, and let \Omega \subset {\mathbb R}^n be a bounded open set. Let L^2(\Omega) be the set of functions u:\Omega\to{\mathbb R} with norm ||u||_2:=\left(\int_\Omega |u(x)|^2 dx\right)^{1/2}<\infty. Let H^1(\Omega) be the set of differentiable functions u:\Omega\to{\mathbb R} with norm ||u||_H := \left(\int_\Omega\left(|\nabla u(x)|^2 + |u(x)|^2\right)dx\right)^{1/2}<\infty. We say that the domain \Omega is regular if H^1(\Omega) is compactly embedded in L^2(\Omega). For every integer k\geq 1, let {\cal S}_k be the family of all subspaces of dimension k in \{u\in H^1(\Omega) :\int_\Omega u(x) dx = 0\}, and let \mu_k(\Omega)=\min\limits_{S \in {\cal S}_k} \max\limits_{u\in S}\frac{\int_\Omega |\nabla u(x)|^2 dx}{\int_\Omega |u(x)|^2 dx}. If B \subset {\mathbb R}^n is a ball, the quantity \mu_2^*=2^{2/n}|B|^{2/n}\mu_1(B) is a constant that does not depend on B.
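
For n=2 the constant \mu_2^* can be evaluated numerically. The sketch below assumes the standard fact (not stated above, so treat it as an assumption of the sketch) that \mu_1 of the unit disc equals the square of the first positive zero of J_1', the derivative of the Bessel function J_1.

```python
# Numerical value of mu_2^* in dimension n = 2 (illustrative only).
import numpy as np
from scipy.special import jnp_zeros

n = 2
p11 = jnp_zeros(1, 1)[0]             # first positive zero of J_1', about 1.8412
mu1_ball = p11 ** 2                  # mu_1 of the unit disc (assumed standard fact)
vol_ball = np.pi                     # |B| for the unit disc
mu2_star = 2 ** (2 / n) * vol_ball ** (2 / n) * mu1_ball
print(mu2_star)                      # about 2 * pi * 3.39, i.e. roughly 21.3
```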

The Theorem: On 22nd January 2018, Dorin Bucur and Antoine Henrot submitted to Acta Mathematica a paper in which they proved that inequality |\Omega|^{2/n}\mu_2(\Omega) \leq \mu_2^* holds for every regular set \Omega \subset {\mathbb R}^n, with equality if \Omega is the union of two disjoint, equal balls.

Short context: The sequence \mu_k(\Omega) is known as “the spectrum of the Laplace operator with Neumann boundary conditions”, and is well-studied in mathematics, with applications in physics. In the 1950s, Szegő and Weinberger proved that |\Omega|^{2/n}\mu_1(\Omega) is maximised when \Omega is a ball in {\mathbb R}^n. The Theorem solves the maximization problem for |\Omega|^{2/n}\mu_k(\Omega) for k=2. As one application, it immediately implies the k=2 case of the important Pólya conjecture for the Neumann eigenvalues, which states that \mu_k(\Omega) \leq 4\pi^2\left(\frac{k}{w_n|\Omega|}\right)^{2/n}, where w_n is the volume of the unit ball in {\mathbb R}^n.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

The De Bruijn-Newman constant is non-negative

You need to know: Sum of infinite series, integration. Notations: {\mathbb R} is the set of real numbers, {\mathbb C} is the set of complex numbers.

Background: Let \Phi: {\mathbb R} \to {\mathbb R} be a function defined as a sum of infinite series: \Phi(u) = \sum\limits_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u})\exp(-\pi n^2 e^{4u}). For each t\in{\mathbb R}, let H_t: {\mathbb C} \to {\mathbb C} be a function given by H_t(z) = \int\limits_0^\infty e^{tu^2}\Phi(u)\cos(zu)du. Newman showed that there exists a finite constant \Lambda (known as the de Bruijn–Newman constant) such that the zeros of H_t are all real if and only if t\geq \Lambda.
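
The functions \Phi and H_t are straightforward to evaluate numerically, since the series for \Phi converges extremely fast. The following Python sketch (illustrative only, not from the paper) does this by truncating the series and the integral.

```python
# Numerical evaluation of Phi and H_t by truncation.
import numpy as np
from scipy.integrate import quad

def Phi(u, terms=20):
    n = np.arange(1, terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9*u)
                   - 3 * np.pi * n**2 * np.exp(5*u)) * np.exp(-np.pi * n**2 * np.exp(4*u)))

def H(t, z, cutoff=6.0):
    # Phi decays superexponentially, so truncating the integral at `cutoff` is safe
    val, _ = quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(z * u), 0, cutoff)
    return val

print(H(0.0, 0.0))     # H_0 is, up to an explicit rescaling, the Riemann xi function
print(H(0.0, 10.0))
```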

The Theorem: On 18th January 2018, Brad Rodgers and Terence Tao submitted to arxiv a paper in which they proved that \Lambda \geq 0.

Short context: The importance of the de Bruijn–Newman constant is that it is connected to the famous Riemann hypothesis, see here. Specifically, the Riemann hypothesis is equivalent to the inequality \Lambda \leq 0. In 1976, Newman conjectured the complementary bound \Lambda \geq 0, and noted that this conjecture says that if the Riemann hypothesis is true, it is only “barely so”. The Theorem proves Newman’s conjecture.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

Strassen’s direct sum conjecture is false: the rank of tensors is not additive with respect to the direct sum

You need to know: Only basic math for the Theorem. However, you need to know matrices, linear and bilinear systems of equations, and the concept of infinite field to fully understand the context section.

Background: Let I,J,K be finite sets. Let I \times J \times K be the set of triples (i,j,k) with i\in I, j\in J, and k\in K. A 3-dimensional tensor with real entries (or just “tensor” for short) is a function T: I \times J \times K \to {\mathbb R}. A tensor T is called decomposable if there exist real numbers a_i, \, i\in I, and b_j, \, j\in J, and c_k, \, k\in K, such that T(i,j,k)=a_ib_jc_k for all (i,j,k)\in I \times J \times K. The rank \text{rank}(T) of a tensor T is the smallest r for which T can be written as a sum of r decomposable tensors. The direct sum T=T_1 \oplus T_2 of tensors T_1: I_1 \times J_1 \times K_1 \to {\mathbb R} and T_2: I_2 \times J_2 \times K_2 \to {\mathbb R} (with disjoint indexing sets I_1, I_2, J_1, J_2, K_1, K_2) is the tensor T: I \times J \times K \to {\mathbb R}, where I=I_1 \cup I_2, J=J_1 \cup J_2, K=K_1 \cup K_2, such that (i) T(i,j,k) = T_1(i,j,k) if i\in I_1, j\in J_1, k\in K_1, (ii) T(i,j,k) = T_2(i,j,k) if i\in I_2, j\in J_2, k\in K_2, and (iii) T(i,j,k) = 0 for all other (i,j,k) \in I \times J \times K.
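
The definitions are easy to state in code. The following numpy sketch (illustrative only; the tensors here are not the counterexample from the paper) builds a decomposable tensor and the direct sum of two tensors.

```python
# Decomposable tensors and the block-diagonal direct sum (0-based numpy indices).
import numpy as np

def decomposable(a, b, c):
    """The decomposable tensor T(i,j,k) = a_i * b_j * c_k."""
    return np.einsum('i,j,k->ijk', a, b, c)

def direct_sum(T1, T2):
    """Direct sum of two 3-dimensional tensors over disjoint index sets."""
    I1, J1, K1 = T1.shape
    I2, J2, K2 = T2.shape
    T = np.zeros((I1 + I2, J1 + J2, K1 + K2))
    T[:I1, :J1, :K1] = T1
    T[I1:, J1:, K1:] = T2
    return T

T1 = decomposable(np.array([1., 2.]), np.array([1., 0.]), np.array([3., 1.]))
T2 = decomposable(np.array([1., 1.]), np.array([2., 1.]), np.array([0., 1.]))
T = direct_sum(T1, T2)
print(T.shape)        # (4, 4, 4); rank(T1) = rank(T2) = 1, and here rank(T) = 2
```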

The Theorem: On 22nd December 2017, Yaroslav Shitov submitted to arxiv a paper in which he proved the existence of 3-dimensional tensors T_1 and T_2 with real entries such that \text{rank}(T_1 \oplus T_2) < \text{rank}(T_1) + \text{rank}(T_2).

Short context: Tensors are natural generalizations of matrices and are central in many applications. In particular, 3-dimensional tensors serve as a natural representation of bilinear systems of equations, in the same way as matrices are used to represent linear systems. In 1973, Strassen conjectured that the complexity (that is, the number of multiplicative operations needed for a solution) of the union of two bilinear systems depending on different variables is equal to the sum of the complexities of the two systems. In the language of tensors, the conjecture states that \text{rank}(T_1 \oplus T_2) = \text{rank}(T_1) + \text{rank}(T_2) for any tensors T_1, T_2, and it became known as Strassen’s direct sum conjecture. The analogous statement is true for matrices, and the conjecture was widely believed to be true for tensors. The Theorem, however, provides a counterexample. In fact, Shitov also disproved the conjecture for tensors whose entries lie in an arbitrary infinite field.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

A typical 1-Lipschitz image of a purely n-unrectifiable subset of R^k has H^n measure 0

You need to know: Euclidean space {\mathbb R}^n, norm ||x||_n=\sqrt{\sum_{i=1}^nx_i^2} of x=(x_1,\dots,x_n)\in {\mathbb R}^n, countable set, n-dimensional Hausdorff measure H^n (see here), metric space, dense open set in a metric space.

Background: A function f:{\mathbb R}^n \to {\mathbb R}^m is called K-Lipschitz if ||f(x)-f(y)||_m \leq K||x-y||_n for all x,y\in {\mathbb R}^n. The image f({\mathbb R}^n)=\{f(x)\,|\,x\in {\mathbb R}^n\} of a K-Lipschitz function f:{\mathbb R}^n \to {\mathbb R}^m is called a Lipschitz image of {\mathbb R}^n. A subset S\subset {\mathbb R}^m is called n-rectifiable if it can be covered by a set of H^n measure zero and a countable number of Lipschitz images of {\mathbb R}^n. A set S is purely n-unrectifiable if all of its n-rectifiable subsets have H^n measure zero. Let \text{Lip}_1(n,m) be the set of all bounded 1-Lipschitz functions f:{\mathbb R}^n \to {\mathbb R}^m, equipped with the norm ||f||=\sup\limits_{x \in {\mathbb R}^n}||f(x)||_m. A subset {\cal F}\subseteq \text{Lip}_1(n,m) is called residual if it contains a countable intersection of dense open sets.
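
A classical example related to the Theorem (illustrative only, and not taken from the paper) is the four-corner Cantor set in {\mathbb R}^2, which is purely 1-unrectifiable; orthogonal projections onto lines are particular 1-Lipschitz maps, and the Python sketch below shows numerically that the projection of a finite approximation of this set is already tiny in the coordinate direction, while an exceptional direction (slope 1/2) yields a full interval.

```python
# Projections of the depth-k approximation of the four-corner Cantor set.
import numpy as np

def squares(depth):
    """Lower-left corners of the 4**depth squares (side 4**-depth) making up
    the depth-`depth` approximation of the four-corner Cantor set."""
    corners = [np.zeros(2)]
    for level in range(1, depth + 1):
        step = 3 * 4.0 ** (-level)
        corners = [c + step * np.array(d) for c in corners
                   for d in [(0, 0), (1, 0), (0, 1), (1, 1)]]
    return np.array(corners), 4.0 ** (-depth)

def projection_measure(corners, side, direction):
    """Length of the projection of the union of squares onto `direction`."""
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    centers = corners + side / 2
    proj = np.sort(centers @ d)
    half = side * (abs(d[0]) + abs(d[1])) / 2   # half-width of one projected square
    total, right = 0.0, -np.inf
    for x in proj:                              # sweep the union of intervals
        total += max(0.0, (x + half) - max(x - half, right))
        right = max(right, x + half)
    return total

corners, side = squares(6)
print(projection_measure(corners, side, (1, 0)))   # x-axis: (1/2)**6, tends to 0 with depth
print(projection_measure(corners, side, (2, 1)))   # slope-1/2 direction: about 1.34, an interval
```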

The Theorem: On 19th December 2017, David Bate submitted to arxiv a paper in which he proved that for any purely n-unrectifiable S\subset {\mathbb R}^k with H^n(S)<\infty, the set of all f\in\text{Lip}_1(k,m) with H^n(f(S))=0 is residual in \text{Lip}_1(k,m). Conversely, if S is not purely n-unrectifiable, then the set of f\in\text{Lip}_1(k,m) with H^n(f(S))>0 is residual.

Short context: Rectifiable and purely unrectifiable sets are a central object of study in geometric measure theory. The term “residual” is a formalisation of the intuitive concept of “almost all” Lipschitz functions. Hence, the Theorem states that for “almost every” function f\in\text{Lip}_1(k,m), we have H^n(f(S))=0 if and only if the set S is purely n-unrectifiable. This gives a complete and useful characterisation of such sets. In fact, Bate derived a similar characterisation of purely unrectifiable sets in general metric spaces.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

Systems of polynomial equations can be solved in expected quasilinear time

You need to know: Complex numbers, polynomials in a complex variable, notation {\mathbb C}^n for the set of vectors z=(z_1, \dots, z_n) with complex z_i, algorithm, probabilistic algorithm, average running time of an algorithm, big O notation, small o notation.

Background: Let n>0 be a positive integer. Let H be the space of all systems f = [f_1,\dots ,f_n]:{\mathbb C}^n \to {\mathbb C}^n of n polynomial equations in n unknowns. For f \in H, denote by V_{\mathbb C}(f) = \{x \in {\mathbb C}^n : f_i(x) = 0, \, 1 \leq i \leq n\} \subset {\mathbb C}^n the set of solutions of f. Also, denote by D the maximal degree of a polynomial in f, and by N the number of bits needed to write down the system f (that is, to write down all coefficients of all polynomials in f).

The Theorem: On 9th November 2017, Pierre Lairez submitted to arxiv and the Journal of the AMS a paper in which he described an explicit probabilistic algorithm that computes an approximate solution of any given f \in H with any given accuracy (with probability 1), and proved that the average number of arithmetic operations of this algorithm is O(n^6D^4N).
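
For a sense of what “computing an approximate solution” means, here is a minimal Newton-iteration sketch in Python for a toy 2-by-2 polynomial system. To be clear, this is not Lairez’s algorithm (which follows a randomly chosen homotopy path with carefully controlled steps); it only illustrates the kind of local refinement step used in numerical solvers for polynomial systems.

```python
# Newton's method for a small polynomial system (toy example only).
import numpy as np

def newton(f, jac, x, steps=20):
    """Repeatedly solve jac(x) * dx = -f(x) and update x."""
    for _ in range(steps):
        x = x - np.linalg.solve(jac(x), f(x))
    return x

# Example system: f_1 = z1^2 + z2^2 - 1, f_2 = z1*z2 - 1/4.
f = lambda z: np.array([z[0]**2 + z[1]**2 - 1, z[0]*z[1] - 0.25])
jac = lambda z: np.array([[2*z[0], 2*z[1]], [z[1], z[0]]])

z = newton(f, jac, np.array([1.0 + 0j, 0.3 + 0j]))
print(z, np.abs(f(z)).max())    # an approximate root; the residual is near machine precision
```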

Short context: Solving polynomial equations and systems of such equations is one of the fundamental problems in the whole of mathematics. In 1998, Smale presented a list of problems he considered the most important for the 21st century. Smale’s 17th problem asks for an algorithm that solves systems of polynomial equations in polynomial average running time. In 2009, Beltrán and Pardo presented a probabilistic algorithm that achieves this (see here for a deterministic version). The Theorem provides a much faster algorithm. In fact, N\geq 2^{\min(n,D)}, hence the running time O(n^6D^4N) in the Theorem is N^{1+o(1)} when \min(n,D) \to \infty.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems

If k large, p prime, gcd(n,d)=1, and n(n+d)…(n+(k-1)d)=y^p, then yd=0 or p<=exp(10^k)

You need to know: Prime numbers, greatest common divisor \text{gcd}(n,d) of positive integers n,d, perfect power.

Background: An arithmetic progression of length k is a sequence n, n+d, \dots, n+(k-1)d.

The Theorem: On 4th September 2017, Michael Bennett and Samir Siksek submitted to arxiv and the Annals of Mathematics a paper in which they proved the existence of a constant k_0 such that if k\geq k_0 is a positive integer, then any solution in integers to the equation n(n+d)(n+2d)\dots(n + (k-1)d) = y^p with \text{gcd}(n,d)=1 and prime exponent p satisfies either y=0 or d=0, or p\leq \exp(10^k).
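
The statement is easy to explore for tiny parameters. The brute-force Python sketch below (illustrative only, and of course not a proof of anything) searches small arithmetic progressions with \text{gcd}(n,d)=1 whose product of k terms is a perfect power; small-k examples such as 4\cdot 9=6^2 do turn up, which is why the conjecture and the Theorem require k to be large.

```python
# Brute-force search for perfect-power products of arithmetic progressions.
from math import gcd

def is_perfect_power(m):
    """True if m = y**p for some integers y and p >= 2."""
    if m < 4:
        return False
    p = 2
    while (1 << p) <= m:                 # 2**p <= m bounds the possible exponent
        y = round(m ** (1.0 / p))
        if any((y + delta) ** p == m for delta in (-1, 0, 1)):
            return True
        p += 1
    return False

hits = []
for d in range(1, 6):
    for n in range(1, 200):
        if gcd(n, d) != 1:
            continue
        prod = 1
        for k in range(1, 6):
            prod *= n + (k - 1) * d
            if k >= 2 and is_perfect_power(prod):
                hits.append((n, d, k, prod))
print(hits)   # includes small-k examples like (4, 5, 2, 36), since 4 * 9 = 6**2
```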

Short context: In 1975, Erdős and Selfridge, answering a question posed by Liouville in 1857, proved that the product of two or more consecutive nonzero integers can never be a perfect power. Erdős further conjectured that the same is true for the products of consecutive terms in an arithmetic progression n, n+d, \dots, n+(k-1)d, as soon as k is sufficiently large and \text{gcd}(n,d)=1. The Theorem, together with Faltings’ theorem, implies that, for every fixed k\geq k_0, there are at most finitely many arithmetic progressions of length k that violate Erdős’ prediction.

Links: Free arxiv version of the original paper is here, journal version is here.

Go to the list of all theorems