Larger Corner-Free Sets from Better NOF Exactly-$N$ Protocols

- Nati Linial, School of Computer Science and Engineering, The Hebrew University of Jerusalem
- Adi Shraibman, Department of Computer Science, The Academic College of Tel-Aviv-Yaffo
*Discrete Analysis*, October 2021. https://doi.org/10.19086/da.28933.

### Editorial introduction

Larger corner-free sets from better NOF exactly-$N$ protocols, Discrete Analysis 2021:19, 9 pp.

If $G$ is an Abelian group, then a *corner* in $G^{2}$ is a subset of the form $\{(x,y),(x+d,y),(x,y+d)\}$ with $d\ne 0$. The *corners theorem* of Ajtai and Szemerédi states that for every $\delta >0$ there exists $N$ such that every subset $A\subset [N]^{2}$ of density at least $\delta$ contains a corner. This result occupies a central place in additive combinatorics, because it belongs to a hierarchy of density statements (or statements that can be reformulated as density statements) of increasing generality, about which our knowledge is still incomplete.

At one end of this hierarchy is Roth’s theorem, which states that for every $\delta >0$ there exists $N$ such that every subset $A\subset [N]$ of density at least $\delta$ contains an arithmetic progression of length 3. To deduce this from the corners theorem one defines a subset $B\subset [2N]^{2}$ to be $\{(x,y):x-y\in A\}$: a corner $\{(x,y),(x+d,y),(x,y+d)\}$ in $B$ then yields the progression $x-y-d,\ x-y,\ x-y+d$ in $A$.
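This reduction can be checked by brute force. The following Python sketch (the set $A$, the bound, and the search routine are illustrative choices of mine, not from the paper) builds $B$ from a set $A$ containing a 3-term progression and recovers that progression from a corner:

```python
# Corners-to-Roth reduction: B = {(x, y) : x - y in A}.
# A and N below are arbitrary illustrative choices.
N = 8
A = {3, 5, 7}  # contains the 3-term progression 3, 5, 7

B = {(x, y) for x in range(2 * N) for y in range(2 * N) if x - y in A}

def find_corner(B, bound):
    """Return some (x, y, d) with d != 0 such that the corner
    {(x, y), (x + d, y), (x, y + d)} lies inside B."""
    for (x, y) in B:
        for d in range(-bound, bound):
            if d != 0 and (x + d, y) in B and (x, y + d) in B:
                return x, y, d
    return None

x, y, d = find_corner(B, 2 * N)
# The corner in B collapses to a 3-term progression in A:
ap = (x - y - d, x - y, x - y + d)
assert all(t in A for t in ap) and d != 0
```

By construction of $B$, membership of the three corner points forces all three progression terms into $A$, so the final assertion holds for whichever corner the search finds first.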

At the other end of the hierarchy is the triangle removal lemma, which is equivalent to the statement that for every $\delta >0$ there exists $N$ such that if $H$ is a tripartite linear 3-uniform hypergraph with $N$ vertices in each vertex class and at least $\delta N^{2}$ edges, then it contains a subset of the form $\{(x',y,z),(x,y',z),(x,y,z')\}$, where $x\ne x'$, $y\ne y'$ and $z\ne z'$. (A 3-uniform hypergraph is called *linear* if no two edges share more than one vertex, which implies that it has at most $N^{2}$ edges.) To see that this implies the corners theorem, one can take a dense subset $A\subset [N]^{2}$ and define $H$ to be the set of all $(x,y,x+y)$ such that $(x,y)\in A$. Then $H$ is linear, and a triple $\{(x',y,z),(x,y',z),(x,y,z')\}$ in $H$ would come from a triple $\{(x',y),(x,y'),(x,y)\}$ in $A$ such that $x'+y=x+y'$, which is therefore a corner (with $d=x'-x=y'-y$).
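The collapse from a hypergraph triple back to a corner can also be checked mechanically. Here is a small Python sketch (with a hand-picked $A$, purely for illustration) that builds $H$ from a set $A$ containing a corner and recovers the corner:

```python
from itertools import permutations

# A contains the corner {(1, 1), (3, 1), (1, 3)} (d = 2);
# the point (2, 5) is noise.  These values are illustrative only.
A = {(1, 1), (3, 1), (1, 3), (2, 5)}
H = {(x, y, x + y) for (x, y) in A}   # linear: each (x, y) gives one edge

# Search for edges (x', y, z), (x, y', z), (x, y, z') in H.
for e1, e2, e3 in permutations(H, 3):
    xp, y, z = e1             # candidate (x', y, z)
    x, yp, z2 = e2            # candidate (x, y', z)
    x2, y2, zp = e3           # candidate (x, y, z')
    if (x2, y2, z2) == (x, y, z) and xp != x and yp != y and zp != z:
        d = xp - x            # equals yp - y, since x' + y = x + y'
        corner = {(x, y), (x + d, y), (x, y + d)}
        assert corner <= A and d != 0
        break
```

The third coordinates force $x'+y=x+y'$, which is exactly what makes the recovered triple a corner rather than three unrelated points.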

While these results are known qualitatively, for each of them there are huge gaps between the best known upper and lower bounds. For Roth’s theorem, the best known upper bound on how large $N$ must be is of the form $\exp(\delta^{-(1+c)})$ for a small positive constant $c$: this is a recent breakthrough of Thomas Bloom and Olof Sisask. For the corners theorem Ilya Shkredov has shown that there is an upper bound of $\exp\exp(\delta^{-C})$ for some constant $C$. And for the triangle removal lemma the bound is a tower of height proportional to $\log(\delta^{-1})$: this was shown by Jacob Fox.

In the opposite direction, the best known lower bound for Roth’s theorem, which comes from a 1946 construction of Behrend, is $\exp(C\log^{2}(\delta^{-1}))$. One might think that, because the corners theorem is more general and the triangle removal lemma more general still, there would be better constructions for these problems, but embarrassingly that is not the case: we do not know how to do much better than simply translating the Behrend construction into counterexamples for those two problems in the manner described above.

This paper makes a small dent in this general problem, by giving a construction for the corners theorem that improves on the constant $C$ above: if logarithms are taken to base 2, then previously it was $2\sqrt{2}$, but now it is brought down to $2\sqrt{\mathrm{log}e}$, which is an improvement from roughly $2.8$ to $2.4$.
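For concreteness, the two constants can be compared numerically (a trivial check; logarithms to base 2 as stated above):

```python
import math

old = 2 * math.sqrt(2)                  # previous constant, ~2.83
new = 2 * math.sqrt(math.log2(math.e))  # this paper's constant, ~2.40
assert new < old
```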

While this is only a small improvement, it is the first example of a lower bound construction that works for corners but not for Roth’s theorem. Perhaps more interestingly, the authors arrived at it from the theory of communication complexity, which is of central importance in theoretical computer science and itself a source of fascinating open problems.

Shortly after the preprint appeared, Ben Green found a purely combinatorial version of the argument that achieved a slightly better bound still. The construction can be thought of as Behrend’s construction with some extra efficiencies. However, it is quite possible that those efficiencies would have remained unnoticed for a long time without the communication complexity perspective that led to their discovery, and there are reasons to hope that communication complexity will lead to more insights in future.

Let us briefly see what communication complexity is and how it relates to the corners problem. In general, the idea behind communication complexity is that $n$ players each have some partial information about a bit string $x$ and they wish to compute some function $f$ of $x$ while exchanging as little information as possible. An example that is relevant here is where there are three positive integers $x,y,z$, each at most $N$, and $f(x,y,z)=1$ if $x+y+z=N$ and 0 otherwise. There are three players, and each player gets to see a different pair of the numbers $x,y$ and $z$. (This type of set-up is sometimes called the “number on the forehead model”: one can imagine each player having a number written on their forehead so that the other players can see it but they can’t.) A *protocol* can be defined more formally as follows. The first player writes down a bit, which is allowed to depend on the two numbers they can see. Then at each subsequent stage the player whose turn it is writes down a bit, which can depend on the numbers that player sees, together with the bits that have been written down. The game ends when all three players know whether $x+y+z=N$, and the object is to achieve this in as few rounds as possible.
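A concrete baseline for this exactly-$N$ problem is the trivial protocol, sketched below in Python (the function name and framing are mine, not the paper's): the player who sees $y$ and $z$ writes down the binary digits of $y+z$, after which any player who sees $x$ can announce the answer, for roughly $\log_2 N$ bits in total. The point of the results discussed here is that one can do much better than this.

```python
def trivial_protocol(x, y, z, N):
    """Number-on-the-forehead simulation: player 1 cannot see x but
    sees y and z; player 2 cannot see y but sees x and z.  Returns
    the answer to "x + y + z == N?" and the number of bits written."""
    board = format(y + z, "b")        # player 1 broadcasts y + z
    s = int(board, 2)                 # player 2 reads it off the board
    answer = (x + s == N)             # player 2 only needs x for this
    board += "1" if answer else "0"   # ...and announces the answer bit
    return answer, len(board)

# With N = 100: y + z = 70 takes 7 bits, plus 1 answer bit.
assert trivial_protocol(30, 30, 40, 100) == (True, 8)
assert trivial_protocol(31, 30, 40, 100)[0] is False
```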

A special case of a lemma of Chandra, Furst and Lipton (Lemma 3.1 of this paper) states that if $x'+y+z=x+y'+z=x+y+z'$, then any protocol that behaves identically for the triples $(x',y,z),(x,y',z)$ and $(x,y,z')$ also behaves in the same way for $(x,y,z)$. Indeed, if it is the turn of the player with $x$ on their forehead, they see $y$ and $z$ and a certain sequence of bits, but by induction they would see $y$ and $z$ and the same sequence of bits if they had the number $x'$ on their forehead. It follows that if a protocol is to distinguish between triples that add up to $N$ and triples that do not, it must not behave in the same way for three triples $(x',y,z),(x,y',z)$ and $(x,y,z')$, with $x'\ne x$, $y'\ne y$ and $z'\ne z$, that all add up to $N$: it would then give the same, and hence wrong, answer for $(x,y,z)$, whose sum is not $N$. Since the number of possible behaviours of a protocol is exponential in the number of rounds, the minimum length of a protocol that always works is at least logarithmic in the number of colours needed to colour $\{(x,y,z):x+y+z=N\}$ if no colour class is allowed to contain a subset of the form $\{(x',y,z),(x,y',z),(x,y,z')\}$. But as we have seen, such subsets are easily transformed into corners, so an upper bound on the maximum density of a corner-free set translates into a lower bound for the communication complexity of the sums-to-$N$ problem. The implication can also be shown to go the other way, so this link between communication complexity and additive combinatorics is a close one that is beginning to bear fruit in interesting ways.
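The induction behind this lemma can be checked empirically. The sketch below (my own toy encoding of a deterministic protocol as memoized random rules, not the paper's formalism) verifies that whenever a fixed protocol produces identical transcripts on $(x',y,z)$, $(x,y',z)$ and $(x,y,z')$, it is forced to produce the same transcript on $(x,y,z)$; the check is vacuous wherever the premise fails, which the code states honestly.

```python
import random

random.seed(0)
R = 6          # number of rounds; player i % 3 writes the bit in round i
rules = {}     # (round, view, transcript-so-far) -> bit

def bit(i, view, t):
    """An arbitrary but fixed deterministic rule for each player."""
    return rules.setdefault((i, view, t), random.randint(0, 1))

def transcript(x, y, z):
    views = [(y, z), (x, z), (x, y)]   # what each player can see
    t = ""
    for i in range(R):
        t += str(bit(i, views[i % 3], t))
    return t

# Wherever the three "star" triples get the same transcript, the
# centre (x, y, z) is forced to get it too.
for x in range(4):
    for y in range(4):
        for z in range(4):
            for d in (1, 2):
                ts = {transcript(x + d, y, z),
                      transcript(x, y + d, z),
                      transcript(x, y, z + d)}
                if len(ts) == 1:
                    assert transcript(x, y, z) == ts.pop()
```

The assertion can never fail, whatever the random rules: in each round the mover's view on $(x,y,z)$ coincides with their view on the one star triple where only their own forehead number was changed, so equal prefixes force equal bits, exactly the induction described above.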

The work of this paper is presented by one of the authors in the following talk.