Multivariate polynomial values in difference sets

- John R. Doyle, mathematics, Oklahoma State University
- Alex Rice, mathematics, Millsaps College (MS)

*Discrete Analysis*, September 2021. https://doi.org/10.19086/da.27995.

### Editorial introduction

Multivariate polynomial values in difference sets, Discrete Analysis 2021:11, 46 pp.

A well-known theorem of Furstenberg and Sárközy states that every dense set of integers must contain two elements that differ by a perfect square. Together with Szemerédi’s theorem, it has been the starting point for a great deal of further research. One reason for this is that both theorems can be proved in interestingly different ways. Szemerédi’s original proof of Szemerédi’s theorem was purely combinatorial, but soon after it was discovered, Furstenberg pioneered an approach via ergodic theory, which also yielded (more simply) the result about square differences, a result proved independently by Sárközy using methods from analytic number theory. These different approaches naturally led to research programmes with slightly different emphases: ergodic theorists pursued qualitative generalizations, a particular highlight of which is the Bergelson-Leibman theorem, a “polynomial Szemerédi theorem” that simultaneously generalizes the two theorems just mentioned, while additive combinatorialists and analytic number theorists pursued more quantitative aspects, aiming to understand how dense a subset of {1,2,…,n} has to be in order to guarantee that it contains a substructure of a given kind. A particularly interesting question is to find a quantitative version of the Bergelson-Leibman theorem in its full generality: this remains open, though impressive results have been obtained by Peluse, Prendiville and others on various classes of special cases.

This article is related to the quantitative question of how dense a set of integers must be in order to contain two elements with difference equal to a perfect square. The central open problem in this direction is annoyingly simple to state: does there exist a constant $\alpha<1$ such that a subset of {1,2,…,n} containing no two elements that differ by a perfect square must have cardinality at most $n^\alpha$? For a long time the best known upper bound, proved by Pintz, Steiger and Szemerédi in 1988, was of the form $n/(\log n)^{c\log\log\log\log n}$. Very recently this was improved by Bloom and Maynard to $n/(\log n)^{c\log\log\log n}$, but we are still a long way from a power-type upper bound.
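As a small computational illustration of the objects in question (not of the proof techniques), the following sketch greedily builds a subset of {1,…,n} with no two elements differing by a perfect square. The greedy set only gives a lower bound on the maximal size, and the choice n = 100 is arbitrary.

```python
from math import isqrt

def is_square(m: int) -> bool:
    # True iff m is a positive perfect square
    return m > 0 and isqrt(m) ** 2 == m

def greedy_square_difference_free(n: int) -> list[int]:
    """Greedily pick elements of {1,...,n} avoiding square differences."""
    chosen: list[int] = []
    for x in range(1, n + 1):
        if all(not is_square(x - y) for y in chosen):
            chosen.append(x)
    return chosen

s = greedy_square_difference_free(100)
print(len(s), s[:8])  # the set begins 1, 3, 6, 8, ...
```

Determining how large the *largest* such subset of {1,…,n} can be is exactly the open problem above; the greedy set is far from optimal.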

One reason for the difficulty is that basically the only known method for obtaining lower bounds of type $n^{1-o(1)}$ for this kind of problem is to use some variant of a construction of Behrend, which provides a lower bound for Szemerédi’s theorem of $n/\exp(c\sqrt{\log n})$. Since there are several problems for which the Behrend construction does not appear to give such bounds, and since the standard method for obtaining upper bounds, the so-called density increment method, appears to be limited to Behrend-style upper bounds at best, there are a number of questions, including this one, where closing the gap between a power-type lower bound and an $n^{1-o(1)}$-type upper bound requires a substantial new idea.
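For concreteness, here is a minimal sketch of Behrend's construction (the parameters m = 5, d = 3 are toy choices for illustration): digit vectors lying on a common sphere yield a progression-free set of integers, because carry-free addition would force three collinear lattice points on a sphere.

```python
from collections import defaultdict
from itertools import product

def behrend_set(m: int, d: int) -> list[int]:
    """Integers with base-(2m-1) digits in {0,...,m-1} lying on one sphere.

    Digits below m make addition carry-free, so an integer relation
    a + c = 2b forces the same relation on the digit vectors; three
    distinct points on a sphere are never collinear, so each shell
    ||v||^2 = r yields a set free of 3-term arithmetic progressions.
    We return the largest shell.
    """
    base = 2 * m - 1
    shells = defaultdict(list)
    for v in product(range(m), repeat=d):
        shells[sum(t * t for t in v)].append(v)
    best = max(shells.values(), key=len)
    return [sum(t * base**i for i, t in enumerate(v)) for v in best]

def has_3term_ap(s) -> bool:
    # Detects a nontrivial progression a, b, 2b - a within s
    ss = set(s)
    return any(a != b and 2 * b - a in ss for a in s for b in s)

example = behrend_set(5, 3)
print(len(example), max(example))
```

Optimizing m and d against n = (2m-1)^d is what yields the $n/\exp(c\sqrt{\log n})$ density in Behrend's theorem; the sketch above only exhibits the mechanism.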

Given our inability to determine the correct order of magnitude in the Furstenberg-Sárközy theorem, it is of interest to come up with variants of the problem where better upper bounds can be obtained. The second author of this article has a number of results in this direction. The upper bounds still use density increment arguments and therefore appear to be limited to bounds of Behrend-type quality. However, there is a substantial difference between the Behrend bound and bounds of the form $n/(\log n)^{\omega(n)}$ for a very slowly growing function $\omega$, and the upper bounds obtained for the variants are significantly better than the Bloom-Maynard bound for the Furstenberg-Sárközy theorem.
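To get a feel for that difference, one can compare logarithms of the two shapes of bound. In the sketch below, the constant c = 1 and the choice ω(n) = log log log n are purely illustrative stand-ins; the point is that the Behrend-type saving exp(c√log n) eventually dwarfs any fixed power of log n.

```python
import math

def log_behrend_type(log_n: float, c: float = 1.0) -> float:
    # log of n / exp(c * sqrt(log n))
    return log_n - c * math.sqrt(log_n)

def log_logpower_type(log_n: float) -> float:
    # log of n / (log n)^{log log log n}, a stand-in for the
    # Bloom-Maynard shape with c = 1
    return log_n - math.log(math.log(log_n)) * math.log(log_n)

for digits in (100, 1000, 10000):
    L = digits * math.log(10)  # log n for n = 10^digits
    print(digits, log_behrend_type(L), log_logpower_type(L))
```

Already for n around 10^100 the Behrend-type quantity is the smaller of the two, and the gap widens as n grows.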

Amongst these results, a particularly interesting one is that sets that lack differences of the form $x^{100}+y^{100}$ (to give a representative example) have a Behrend-style upper bound. What makes this interesting is that numbers of this form are much sparser than squares. This observation (and also the methods used in the proof) shows that the bottleneck in the Furstenberg-Sárközy theorem is not so much the sparsity of the set of differences to be avoided as the irregularity of how that set is distributed in residue classes. Roughly speaking, $x^{100}+y^{100}$ starts to “look random” mod $p$ more quickly than $x^2$ does.
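A toy computation already shows the contrast in how many residue classes the two kinds of expression reach. Here the small prime p = 101 and the exponent k = 4 (standing in for 100) are chosen only to keep the loops tiny; they are not parameters from the paper.

```python
p = 101  # a small prime, for illustration only
k = 4    # stand-in for the exponent 100

# Single squares x^2 hit only (p+1)/2 of the residues mod p...
square_residues = {(x * x) % p for x in range(p)}

# ...while the two-variable diagonal form x^k + y^k reaches many more
# classes, in line with the faster "randomization" described above.
form_residues = {(pow(x, k, p) + pow(y, k, p)) % p for x in range(p) for y in range(p)}

print(len(square_residues), len(form_residues))
```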

The article under review generalizes the methods used to prove this result, considerably expanding the class of polynomials to which it applies. For instance, it applies not just to diagonal forms such as $x^{100}+y^{100}$ but also to homogeneous polynomials with cross terms, such as $x^{100}+x^{50}y^{50}+y^{100}$. But that is just the start, and the authors have results for non-homogeneous polynomials (satisfying suitable conditions) as well. The generalization is achieved with the help of Deligne’s estimates, which are used to quantify the rate of equidistribution of such polynomials mod $p$. Given the limits of present technology, the generality of the results in the paper is very satisfying, and the use of arithmetic geometry brings a new tool to the area that may well have further applications.
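The Deligne estimates invoked here are multivariable descendants of Weil's classical one-variable bound $|\sum_{x \bmod p} e^{2\pi i f(x)/p}| \le (\deg f - 1)\sqrt{p}$ for a prime $p$ not dividing $\deg f$. As a numerical illustration (the prime p = 103 and the cubic f are arbitrary choices, not taken from the paper), one can check the square-root cancellation directly:

```python
import cmath
import math

def exp_sum(p: int, f) -> complex:
    # S = sum over x mod p of e^{2*pi*i*f(x)/p}
    return sum(cmath.exp(2j * math.pi * (f(x) % p) / p) for x in range(p))

p = 103  # a prime with p ≡ 1 (mod 3), so x^3 is not a bijection mod p
S = exp_sum(p, lambda x: x ** 3)

# Weil's bound: |S| <= (deg - 1) * sqrt(p), far below the trivial bound p
print(abs(S), 2 * math.sqrt(p), p)
```

It is exactly this kind of square-root cancellation, in several variables via Deligne, that quantifies how quickly the polynomial values equidistribute mod $p$.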