Approximation ratio 3/2 for the metric TSP is not optimal

You need to know: Algorithms and their running time, polynomial-time algorithm, probabilistic algorithm, basic graph theory, complete graph, Hamiltonian cycle.

Background: The metric travelling salesman problem (TSP) asks to find a Hamiltonian cycle (also called a TSP tour) of minimum weight in a complete graph whose non-negative edge weights satisfy the triangle inequality.
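To make the objects concrete, here is a minimal brute-force sketch in Python (an illustration of the problem statement only, using an arbitrary small distance matrix; it is in no way related to the algorithm from the paper): it enumerates all tours of a 4-vertex complete graph and returns one of minimum weight.

```python
# Brute-force search for a minimum-weight TSP tour on a small complete graph.
# This only illustrates the problem statement; it runs in O(n!) time and is
# unrelated to the polynomial-time approximation algorithms discussed here.
from itertools import permutations

# Symmetric distance matrix of a 4-vertex complete graph; these particular
# numbers are an arbitrary example satisfying the triangle inequality.
w = [
    [0, 2, 3, 4],
    [2, 0, 4, 5],
    [3, 4, 0, 3],
    [4, 5, 3, 0],
]
n = len(w)

def tour_weight(tour):
    """Total weight of the Hamiltonian cycle visiting `tour` and returning to the start."""
    return sum(w[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Fix vertex 0 as the starting point to avoid counting rotations of the same cycle.
best = min(((0,) + p for p in permutations(range(1, n))), key=tour_weight)
print(best, tour_weight(best))
```

Brute force takes roughly n! steps, which is exactly why polynomial-time approximation algorithms such as Christofides-Serdyukov are interesting.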

The Theorem: On 2nd July 2020, Anna Karlin, Nathan Klein and Shayan Oveis Gharan submitted to arxiv a paper in which they proved that there exists a constant \epsilon > 0 and a randomized algorithm that outputs a TSP tour with expected weight at most \frac{3}{2}-\epsilon times the weight of the optimal tour. In fact, one can take \epsilon=10^{-36}.

Short context: The TSP is one of the most-studied problems in the theory of algorithms. In the 1970s, Christofides and Serdyukov independently developed a polynomial-time algorithm which finds a Hamiltonian cycle of weight at most \frac{3}{2} times the optimal one. Because the Christofides-Serdyukov algorithm remained unimproved for over 40 years, some researchers started to conjecture that the approximation ratio \frac{3}{2} is optimal for polynomial-time algorithms. The Theorem disproves this conjecture.

Links: The free version of the original paper is available here.


MIP*=RE

You need to know: Computer program and number of operations it takes to run, quantum computer, entangled qubits, probability.

Background: Let \Sigma=\{0,1\}, {\Sigma}^n be the set of words \sigma_1\dots\sigma_n with each \sigma_i being 0 or 1, and let \Sigma^*=\bigcup\limits_{n=0}^\infty \Sigma^n. Let |x| denote the length of any word x\in \Sigma^*. A language is any set of words L\subset\Sigma^*. A language L is called recursively enumerable if there is a computer program such that its output is a full list x_1, x_2, \dots of the words in L (in any order). Note that if L is infinite, such a program will run forever. The set of all recursively enumerable languages is denoted RE.
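As a toy illustration of the definition (my own example, not taken from the paper): the language of all binary words containing at least one 1 is recursively enumerable, because the following sketch prints exactly the words of this language, one per line, and runs forever.

```python
# Enumerator for a simple recursively enumerable (in fact decidable) language:
# all binary words containing at least one symbol 1. The program runs forever,
# listing every such word exactly once, in order of increasing length.
from itertools import count, product

for n in count(1):                      # word length 1, 2, 3, ...
    for word in product("01", repeat=n):
        w = "".join(word)
        if "1" in w:                    # membership test for this toy language
            print(w)
```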

Imagine two quantum computers, which we call provers, that are all-powerful, that is, can do ANY (even infinite!) number of operations in, say, one second. The provers can share any number of entangled qubits, but any other communication between them is prohibited. Also, we have a standard (not quantum and not all-powerful) computer, called the verifier, which can do everything a regular computer can do but, in addition, can ask the provers any questions. Let MIP* be the set of all languages L for which there exist a polynomial P and a program on such a computer which, for any input x\in \Sigma^*, and with probability greater than (say) \frac{3}{4}: (i) performs at most P(|x|) operations, then stops and outputs Yes or No, (ii) outputs Yes if x\in L and the provers give correct answers, and (iii) outputs No if x\not\in L, even if the provers are trying to cheat.

The Theorem: On 29th September 2020, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen submitted to arxiv a paper in which they proved that MIP*=RE.

Short context: A language L is called decidable if there is a program which, for any input x\in \Sigma^*, runs for a finite time and then correctly outputs whether x\in L or not. The set RE contains languages which are not decidable. The Theorem, however, states that any language L in RE, even an undecidable one, belongs to MIP*. This means that, by interacting with two all-powerful quantum provers which share entangled qubits, we can check whether x\in L after performing at most P(|x|) operations. The Theorem has important consequences in pure mathematics. In particular, it implies that Connes' embedding conjecture, a central conjecture in the theory of von Neumann algebras, is false.

Links: The free version of the original paper is available here.

A comment: In fact, the first proof of the Theorem appeared on 13th January 2020, but it used a published result whose proof turned out to be incorrect.


The Sensitivity Conjecture is true

You need to know: For the Theorem: Graph, induced subgraph, degree of a vertex of a graph, the notation \Delta(H) for the maximum degree of a graph H. In addition, you may want to go to the original paper (or search online) and learn what the Sensitivity Conjecture is, to fully appreciate the context.

Background: The n-dimensional hypercube graph is the graph Q^n, whose vertex set consists of vectors (x_1, x_2, \dots, x_n) such that each x_i is either 0 or 1, and two vectors are adjacent if they differ in exactly one coordinate.

The Theorem: On 1st July 2019, Hao Huang submitted to arxiv a paper in which he proved that for every integer n\geq 1, and every (2^{n-1}+1)-vertex induced subgraph H of Q^n, one has \Delta(H) \geq \sqrt{n}.

Short context: The inequality \Delta(H) \geq \sqrt{n} in the Theorem is best possible, in the sense that it is tight when n is a perfect square. The Theorem confirms a conjecture made by Gotsman and Linial in 1992; they proved that their conjecture, if true, would imply the celebrated Sensitivity Conjecture, one of the famous open problems in theoretical computer science. Hence the Theorem also confirms the Sensitivity Conjecture.
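For very small n the statement can be checked by exhaustive search. A minimal sketch follows (it examines every induced subgraph on 2^{n-1}+1 vertices, so it is only feasible up to about n=4, and it is of course not Huang's proof):

```python
# Brute-force check of Huang's bound for small n: every induced subgraph of Q^n
# on 2^(n-1)+1 vertices has maximum degree at least sqrt(n). Vertices of Q^n are
# the integers 0..2^n-1, adjacent iff their binary representations differ in one bit.
from itertools import combinations
from math import sqrt

def adjacent(u, v):
    return bin(u ^ v).count("1") == 1   # differ in exactly one coordinate

for n in range(1, 5):
    vertices = range(2 ** n)
    k = 2 ** (n - 1) + 1
    # Smallest possible Delta(H) over all induced subgraphs H on k vertices.
    min_max_deg = min(
        max(sum(adjacent(u, v) for v in S) for u in S)
        for S in combinations(vertices, k)
    )
    assert min_max_deg >= sqrt(n)
    print(n, min_max_deg)
```

For n=4 the reported minimum of \Delta(H) is exactly 2=\sqrt{4}, matching the tightness remark above.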

Links: Free arxiv version of the original paper is here, journal version is here.


n-digit integers can be multiplied in O(n log n) operations

You need to know: Integer multiplication, algorithm, running time of an algorithm, logarithm, big O notation.

Background (needed only for the context section): The iterated logarithm \log^* is the unique function which has properties (i) \log^*n =0 if n\leq 1 and (ii) \log^*n = 1+\log^*(\log n) if n>1.
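A direct transcription of this definition (assuming the logarithm is taken to base 2, which only changes \log^* by an additive constant):

```python
# Iterated logarithm, transcribing properties (i) and (ii) directly.
# The base of the logarithm is assumed to be 2 here.
from math import log2

def log_star(n):
    if n <= 1:
        return 0                       # property (i)
    return 1 + log_star(log2(n))       # property (ii)

print(log_star(2))           # 1
print(log_star(16))          # 3: 16 -> 4 -> 2 -> 1
print(log_star(2 ** 65536))  # 5: the function grows extremely slowly
```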

The Theorem: On 18th March 2019, David Harvey and Joris van der Hoeven posted online a paper in which they presented a new algorithm for multiplying integers, which, according to their analysis, has running time O(n\log n) for n-digit integers.

Short context: Integer multiplication is one of the basic operations in mathematics. The usual school method requires O(n^2) operations for n-digit integers. In 1971, Schönhage and Strassen presented a much faster O(n \log n \log\log n) algorithm, and conjectured that an O(n \log n) algorithm should exist and that this is optimal. In 2009, Fürer presented a faster algorithm with running time n\log n\, 2^{O(\log^*(n))} for n-digit integers, see here. The Theorem improves the running time to O(n \log n), which is conjectured to be optimal.
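For contrast with these fast algorithms, here is a sketch of the O(n^2) school method on digit lists (an illustration of the baseline only; it has nothing to do with the Harvey-van der Hoeven algorithm):

```python
# Schoolbook multiplication of two numbers given as lists of base-10 digits,
# least significant digit first. Uses Theta(n^2) digit operations for n-digit
# inputs, which is the baseline that the algorithms discussed above improve upon.
def schoolbook_multiply(a, b):
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    return result

# 12 * 34 = 408; digits are listed least significant first.
print(schoolbook_multiply([2, 1], [4, 3]))   # [8, 0, 4, 0]
```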

Links: The preprint is available here.


There exists an oracle A relative to which BQP is not contained in PH

You need to know: Notation for "for all" (\forall) and "there exists" (\exists), computer program and the number of operations it takes to run, quantum computer.

Background: Let \Sigma=\{0,1\}, {\Sigma}^n be the set of words \sigma_1\dots\sigma_n with each \sigma_i being 0 or 1, and let \Sigma^*=\bigcup\limits_{n=0}^\infty \Sigma^n. Let |x| denote the length of any word x\in \Sigma^*. Let A:\Sigma^*\to \Sigma be an arbitrary function which we call an oracle. Imagine a computer which can do everything a regular computer can do, but, in addition, can instantly compute A(x) for any x\in \Sigma^*. Let P_k^A be the set of functions F(x_1, \dots, x_k) in k variables x_i\in \Sigma^*, for which there is a polynomial P and a program on such a computer which computes F in at most P\left(\sum\limits_{i=1}^k|x_i|\right) operations. Let \text{PH}^A be the set of functions f:\Sigma^*\to \Sigma for which there exist an integer k\geq 0, a polynomial P, and a function F\in P_{2k+1}^A such that f(x)=1 if and only if \forall y_1 \in \Sigma^{P(|x|)}\,\exists z_1 \in \Sigma^{P(|x|)} \dots \forall y_k \in \Sigma^{P(|x|)}\,\exists z_k \in \Sigma^{P(|x|)} such that F(y_1,z_1,\dots,y_k,z_k,x)=1. Let \text{BQP}^A be the set of all functions g:\Sigma^*\to \Sigma for which there is a polynomial P and a program on a quantum computer (again with the extra ability to instantly compute A) which computes g(x) in at most P(|x|) operations.
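A toy illustration of the oracle model (my own example, unrelated to the paper): the oracle is just a black-box function A from words to \{0,1\}, and a program "with oracle access" may call it, each call counting as one operation.

```python
# Toy illustration of computing relative to an oracle A: Sigma* is modelled as
# Python strings over "01", and the oracle is an arbitrary black-box function
# from words to {0, 1} whose evaluation counts as a single operation.
def example_oracle(x):
    # An arbitrary choice of oracle for the demonstration.
    return 1 if x.count("1") % 2 == 0 else 0

def f(x, A):
    # A program "with oracle access": it decides whether A accepts x or the
    # reverse of x, using |x| + O(1) ordinary operations plus two oracle calls,
    # so it runs in polynomial time relative to A (it lies in P^A).
    return 1 if A(x) == 1 or A(x[::-1]) == 1 else 0

print(f("0110", example_oracle))  # 1
print(f("0111", example_oracle))  # 0
```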

The Theorem: On 31st May 2018, Ran Raz and Avishay Tal submitted to the Electronic Colloquium on Computational Complexity a paper in which they proved the existence of oracle A:\Sigma^*\to \Sigma such that \text{BQP}^A \not\subseteq \text{PH}^A.

Short context: It is widely believed that quantum computers, when/if they are built, will be able to efficiently solve a larger class of problems than regular computers can. More formally, it is believed that \text{BQP} \not\subseteq \text{P} and, moreover, \text{BQP} \not\subseteq \text{PH}, where the definitions of P, PH, and BQP are as above but without the oracle A. However, proving such separation results is well out of reach of current techniques. Such problems often become easier relative to an oracle: for example, it is not known whether \text{P} \neq \text{PH}, but it is known that \text{P}^A \neq \text{PH}^A for some oracle A. In 1993, Bernstein and Vazirani conjectured (at least verbally) the existence of an oracle A such that \text{BQP}^A \not\subseteq \text{PH}^A. The Theorem confirms this conjecture.

Links: The free version of the original paper is available here.


The 2-to-2-Games Conjecture with imperfect completeness is true

You need to know: Class P, class NP, NP-hard problems, the notation {\mathbb Z}_m for the set \{0,1,2,\dots,m-1\} with addition and multiplication defined modulo m.

Background: For fixed parameters 0\leq c \leq s \leq 1 and a fixed integer m, consider the following problem. The input is a system of k linear equations in n variables x_1, x_2, \dots, x_n over {\mathbb Z}_m, each equation having the form a \cdot x_i + b \cdot x_j = d with a,b,d \in {\mathbb Z}_m, such that either (i) there is an assignment satisfying at least s k of the equations, or (ii) every assignment satisfies fewer than c k of the equations. The problem \text{UG}(c,s,m) is to decide which of the cases (i) or (ii) holds.
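A small sketch of the instance format (the instance below is an arbitrary toy example, not related to the hardness proof): each equation a\cdot x_i + b\cdot x_j = d over {\mathbb Z}_m is stored as a tuple, and we count how many equations a given assignment satisfies.

```python
# Counting how many two-variable linear equations over Z_m an assignment satisfies.
# Each equation a*x_i + b*x_j = d (mod m) is stored as a tuple (a, i, b, j, d).
def satisfied(equations, assignment, m):
    return sum(
        (a * assignment[i] + b * assignment[j]) % m == d % m
        for (a, i, b, j, d) in equations
    )

m = 5
# An arbitrary toy instance with 3 variables x_0, x_1, x_2 over Z_5.
equations = [
    (1, 0, 4, 1, 0),   # x_0 + 4*x_1 = 0, i.e. x_0 = x_1 (mod 5)
    (1, 1, 4, 2, 2),   # x_1 - x_2 = 2 (mod 5)
    (2, 0, 1, 2, 3),   # 2*x_0 + x_2 = 3 (mod 5)
]
print(satisfied(equations, [4, 4, 2], m), "of", len(equations))   # 2 of 3
```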

The Theorem: On 10th January 2018, Subhash Khot, Dor Minzer, and Shmuel Safra submitted to the Electronic Colloquium on Computational Complexity a paper in which they proved that for every 0< c \leq s < 1/2 there exists m such that \text{UG}(c,s,m) is NP-hard.

Short context: The statement of the Theorem was known as “the 2-to-2-Games Conjecture with imperfect completeness”. It is a partial result towards proving the famous Unique Games Conjecture (UGC), which predicts that the same statement remains correct for every 0< c \leq s < 1. The UGC is one of the fundamental conjectures in the theory of algorithms, and it implies, in particular, various hardness of approximation results. For example, the important minimum vertex cover problem was known to be NP-hard to approximate within any factor better than 10\sqrt{5}-21=1.3606..., see here. The Theorem implies the same with the better constant \sqrt{2}=1.4142.... The full UGC would imply that the same is true with constant 2, which would be optimal.

Links: The free version of the original paper is available here.


Systems of polynomial equations can be solved in expected quasilinear time

You need to know: Complex numbers, polynomials in a complex variable, the notation {\mathbb C}^n for the set of vectors z=(z_1, \dots, z_n) with complex z_i, algorithm, probabilistic algorithm, average running time of an algorithm, big O notation, small o notation.

Background: Let n be a positive integer. Let H be the space of all systems f = [f_1,\dots ,f_n]:{\mathbb C}^n \to {\mathbb C}^n of n polynomial equations in n unknowns. For f \in H, denote by V_{\mathbb C}(f) = \{x \in {\mathbb C}^n : f_i(x) = 0, \, 1 \leq i \leq n\} \subset {\mathbb C}^n the set of solutions of f. Also, denote by D the maximal degree of a polynomial in f, and by N the number of bits needed to write down the system f (that is, to write down all coefficients of all polynomials in f).

The Theorem: On 9th November 2017, Pierre Lairez submitted to arxiv and the Journal of the AMS a paper in which he described an explicit probabilistic algorithm that computes an approximate solution of any given f \in H with any given accuracy (with probability 1), and proved that the average number of arithmetic operations of this algorithm is O(n^6D^4N).
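To illustrate what computing an approximate solution means (this is plain Newton iteration on an arbitrary toy 2\times 2 system, not the algorithm of the paper, which is far more involved):

```python
# Newton iteration refining an approximate solution of a small polynomial system
# f(z1, z2) = (z1^2 + z2^2 - 1, z1 - z2) = (0, 0). This only illustrates the
# notion of an approximate solution in C^2.
import numpy as np

def f(z):
    z1, z2 = z
    return np.array([z1**2 + z2**2 - 1, z1 - z2])

def jacobian(z):
    z1, z2 = z
    return np.array([[2 * z1, 2 * z2],
                     [1.0, -1.0]])

z = np.array([1.0 + 0.1j, 0.5 + 0.0j])   # rough initial guess in C^2
for _ in range(20):
    z = z - np.linalg.solve(jacobian(z), f(z))

print(z)              # close to (1/sqrt(2), 1/sqrt(2))
print(np.abs(f(z)))   # residual near zero
```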

Short context: Solving polynomial equations and systems of such equations is one of the fundamental problems in all of mathematics. In 1998, Smale presented a list of problems he considered the most important ones for the 21st century. Smale's 17th problem asks for an algorithm which could solve systems of polynomial equations with polynomial average running time. In 2009, Beltrán and Pardo presented a probabilistic algorithm that achieves this (see here for a deterministic version). The Theorem provides a much faster algorithm. In fact, N\geq 2^{\min(n,D)}, hence the running time O(n^6D^4N) in the Theorem is N^{1+o(1)} when \min(n,D) \to \infty.

Links: Free arxiv version of the original paper is here, journal version is here.


The Dichotomy conjecture is true: every CSP is either in P or is NP-complete

You need to know: Algorithm, running time of an algorithm, class P, class NP, NP-complete problems.

Background: Let \Sigma be a finite set. Let {\cal X}=\{x_1, \dots, x_n\} be a set of n variables taking values in \Sigma. An assignment a=(a_1,\dots,a_n) \in \Sigma^n means that we assign to each variable x_i the value a_i. A relation R is a subset of \Sigma^k, where k is a positive integer. A constraint C (corresponding to R) is a k-tuple (x_{i_1}, \dots, x_{i_k}) of variables from {\cal X}. We say that an assignment a\in \Sigma^n satisfies the constraint C if (a_{i_1}, \dots, a_{i_k})\in R. Let \Gamma be a finite set of relations. The constraint satisfaction problem \text{CSP}(\Gamma) is: given \Sigma, {\cal X}, a finite set R_1, \dots, R_m of relations such that each R_i \in \Gamma, and a finite set C_1, C_2, \dots, C_t of constraints (each constraint C_j corresponding to some relation R_i), is there an assignment a\in \Sigma^n that satisfies all the constraints?
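A brute-force sketch of the \text{CSP}(\Gamma) format (it tries all |\Sigma|^n assignments, so it only illustrates the problem statement, not the dichotomy): here \Gamma consists of the single binary "not equal" relation over a 3-element \Sigma, so the instance below is 3-colouring of a 4-cycle.

```python
# Brute-force solver illustrating the CSP(Gamma) format: sigma is the domain,
# each constraint is a pair (relation, scope) where scope is a tuple of variable
# indices and relation is a set of allowed value tuples. Runs in time |sigma|^n.
from itertools import product

def solve_csp(sigma, n, constraints):
    for a in product(sigma, repeat=n):          # try every assignment
        if all(tuple(a[i] for i in scope) in rel for rel, scope in constraints):
            return a
    return None

# Gamma = { "not equal" } over sigma = {0, 1, 2}; the instance is 3-colouring
# of a 4-cycle on the variables x_0, x_1, x_2, x_3.
sigma = (0, 1, 2)
neq = {(u, v) for u in sigma for v in sigma if u != v}
constraints = [(neq, (0, 1)), (neq, (1, 2)), (neq, (2, 3)), (neq, (3, 0))]
print(solve_csp(sigma, 4, constraints))   # (0, 1, 0, 1)
```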

The Theorem: On 8th March 2017, Andrei Bulatov submitted to arxiv a paper in which he proved that, for every fixed \Gamma, the problem \text{CSP}(\Gamma) is either solvable in polynomial time or is NP-complete.

Short context: In 1975, Ladner proved that, if P\neq NP, then there are problems in NP which are neither in P nor NP-complete. However, the problems constructed in the proof of this theorem are rather artificial. In practice, most “natural” problems are either in P or NP-complete. Formalising this observation, Feder and Vardi noticed in 1993 that a large variety of “natural” problems are special cases of \text{CSP}(\Gamma), and conjectured that in fact every \text{CSP}(\Gamma) is either in P or NP-complete. This conjecture became known as the Dichotomy conjecture. The Theorem confirms this conjecture. In a well-defined sense, the class of problems \text{CSP}(\Gamma) is the largest subclass of NP in which such a dichotomy holds.

Links: Free arxiv version of the original paper is here.


The boolean Pythagorean Triples problem has a positive answer

You need to know: Notation {\mathbb N} for the set of positive integers.

Background: A Pythagorean triple is a set of three positive integers a, b, and c, such that a^2+b^2=c^2.

The Theorem: On 3rd May 2016, Marijn Heule, Oliver Kullmann, and Victor Marek submitted to arxiv a paper in which they proved the following result. The set \{1,\dots,7824\} can be partitioned into two parts, such that no part contains a Pythagorean triple, while this is impossible for \{1,\dots,7825\}.
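Checking that a given partition of \{1,\dots,N\} avoids monochromatic Pythagorean triples is easy; the hard part, handled by the SAT-based proof, is showing that none of the 2^{7825} partitions of \{1,\dots,7825\} does. A minimal checker sketch (the colouring used at the end is an arbitrary example):

```python
# Check whether a 2-colouring of {1, ..., N} contains a monochromatic
# Pythagorean triple. Verifying one colouring is easy; the theorem requires
# ruling out every colouring of {1, ..., 7825} as triple-free.
from math import isqrt

def monochromatic_triple(colour, N):
    """colour maps each integer in 1..N to 0 or 1."""
    for a in range(1, N + 1):
        for b in range(a, N + 1):
            c = isqrt(a * a + b * b)
            if c <= N and c * c == a * a + b * b:
                if colour(a) == colour(b) == colour(c):
                    return (a, b, c)
    return None

# A toy colouring of {1, ..., 100}: colour each integer by its parity.
print(monochromatic_triple(lambda x: x % 2, 100))   # (6, 8, 10)
```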

Short context: The boolean Pythagorean Triples problem has been a longstanding open problem in Ramsey Theory. It asks whether, in every partition of the set {\mathbb N} of positive integers into two parts, one can always find a Pythagorean triple in one of the parts. There was progress on variants of the problem (see here and here), but the original problem remained open. The Theorem provides a positive answer. In fact, it guarantees that one can find a Pythagorean triple in every partition of the set of the first 7825 positive integers, and that 7825 is the smallest integer with this property. The computer-assisted proof is almost 200 terabytes in size.

Links: Free arxiv version of the original paper is here, journal version is here.


There exists a discrete bounded envy-free cake cutting protocol for any number of agents

You need to know: Algorithm.

Background: Let us call the interval [0,1] a cake, and any finite union of subintervals of [0,1] a piece of cake. Let {\cal X} be the set of all pieces of cake. Let N=\{1,\dots,n\} be the set of agents, each agent i having its own valuation function V_i:{\cal X}\to[0,\infty) such that (i) V_i(X\cup X')=V_i(X)+V_i(X') for all disjoint X, X' \in {\cal X}, and (ii) for every X\in {\cal X} and 0\leq \lambda\leq 1, there exists X'\subset X in {\cal X} with V_i(X') = \lambda V_i(X). A cake allocation is a partition [0,1]=\bigcup\limits_{i=1}^n X_i into disjoint pieces X_i\in{\cal X}. We say that agent i is envious if V_i(X_i)<V_i(X_j) for some j. A discrete protocol is an algorithm which returns an allocation after a sequence of queries, each of which either (1) for given x \in [0,1] and r \geq 0, asks agent i to return a point y\in[0,1] such that V_i([x,y])=r (CUT query), or (2) for given x,y\in[0,1], asks agent i to return V_i([x,y]) (EVALUATE query). A protocol is called envy-free if no agent who follows the protocol truthfully ends up envious. A protocol is bounded if the number of queries required to return a solution is bounded by a constant C=C(n) depending only on n but not on the valuations V_i.
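For intuition about the query model, here is the classical two-agent cut-and-choose protocol phrased with CUT and EVALUATE queries (an illustration only, with arbitrary example valuations; this is not the bounded protocol of the paper, which handles arbitrary n):

```python
# The classical two-agent cut-and-choose protocol phrased with CUT and EVALUATE
# queries. Agent 0 cuts the cake into two pieces it values equally; agent 1 takes
# the piece it prefers. The two valuation functions below are an arbitrary example:
# agent 0 values the cake uniformly, agent 1 puts all of its value on the left half.
def evaluate(i, x, y):
    """EVALUATE query: agent i's value of the interval [x, y] (x <= y)."""
    if i == 0:
        return y - x                                 # uniform valuation
    return max(0.0, min(y, 0.5) - min(x, 0.5)) * 2   # all value on [0, 0.5]

def cut(i, x, r):
    """CUT query: leftmost y in [x, 1] with V_i([x, y]) = r, found by bisection
    (assumes 0 <= r <= V_i([x, 1]))."""
    lo, hi = x, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if evaluate(i, x, mid) >= r else (mid, hi)
    return hi

# Agent 0 cuts at its half-value point; agent 1 picks its preferred piece first.
m = cut(0, 0.0, evaluate(0, 0.0, 1.0) / 2)
pieces = [(0.0, m), (m, 1.0)]
first, second = sorted(pieces, key=lambda p: evaluate(1, *p), reverse=True)
print("agent 1 takes", first, "agent 0 keeps", second)   # neither agent is envious
```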

The Theorem: On 13th April 2016, Haris Aziz and Simon Mackenzie submitted to arxiv a paper in which they proved the existence of a discrete bounded envy-free cake cutting protocol for any number of agents.

Short context: Cake cutting is a metaphor for the allocation of a heterogeneous divisible good among multiple agents with possibly different preferences over different parts of the cake. The existence of a discrete and bounded envy-free cake cutting protocol was the central open problem in this area. Such a protocol is easy to construct for n\leq 3. In 2015, Aziz and Mackenzie solved the problem for n=4. The Theorem solves it for all n.

Links: Free arxiv version of the original paper is here, journal version is here.
