Hypergraph removal lemmas via robust sharp threshold theorems
- Noam Lifshitz, Mathematics, Bar Ilan University
Editorial introduction
Hypergraph removal lemmas via robust sharp threshold theorems, Discrete Analysis 2020:10, 46 pp.
A central result in additive and extremal combinatorics is the triangle removal lemma, which roughly speaking states that a graph with few triangles can be approximated by a graph with no triangles. This result implies, amongst other things, Roth’s theorem on arithmetic progressions, and it has been the starting point for many generalizations. One of these, the hypergraph removal lemma, states that for a fixed k and a fixed k-uniform hypergraph H, any k-uniform hypergraph that contains few copies of H can be approximated by a hypergraph that contains no copies of H. This result (in the case that H is a simplex) implies Szemerédi’s theorem.
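To be slightly more precise, the standard formulation of the triangle removal lemma (the usual statement, recalled here for orientation rather than quoted from the paper) is the following: for every ε > 0 there exists δ > 0 such that for every graph G on n vertices,

$$\#\{\text{triangles in } G\} \le \delta n^3 \;\Longrightarrow\; G \text{ can be made triangle-free by deleting at most } \varepsilon n^2 \text{ edges.}$$

Thus “few triangles” means o(n³) of them, and “approximated” means up to o(n²) edge deletions.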
The proof of the hypergraph removal lemma uses a hypergraph generalization of Szemerédi’s regularity lemma, and a consequence of this is that it gives bounds that are extremely weak, which means that the lemma is useful only when k is fixed and the size of the hypergraph tends to infinity. This interesting paper concerns hypergraph removal lemmas where k can tend to infinity with n – indeed, it can even be linear in n. It would not be realistic to expect to prove removal lemmas for all such hypergraphs, and indeed quite strong conditions are imposed: the number of edges must be bounded, as must the maximum size of the intersection of any two edges. But these conditions still allow for a wide class of hypergraphs that occur naturally. For example, if we take H to be the k-uniform hypergraph that consists of two disjoint edges, then a k-uniform hypergraph that does not contain H is precisely the same as an intersecting family of k-sets, so one consequence of the removal lemma proved in this paper is that an almost intersecting family of k-sets can be approximated by an intersecting family. This particular case was already known: it is a theorem of Friedgut and Regev (published in this journal).
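As a hedged illustration of what such a statement looks like in the two-disjoint-edges case (suppressing the precise hypotheses on the range of k and the dependence of δ on ε, for which the paper itself should be consulted): for every ε > 0 there exists δ > 0 such that if a family $\mathcal{F} \subset \binom{[n]}{k}$ satisfies

$$\#\{(A,B) \in \mathcal{F} \times \mathcal{F} : A \cap B = \emptyset\} \le \delta \binom{n}{k}\binom{n-k}{k},$$

then $\mathcal{F}$ can be made intersecting by deleting at most $\varepsilon \binom{n}{k}$ of its sets. Here $\binom{n}{k}\binom{n-k}{k}$ is the natural normalization, since it counts all ordered pairs of disjoint k-sets.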
If the hypergraph regularity lemma is too weak to prove results for large k, then what can be used instead? The answer is in the title of the paper. Early on in the development of the theory of random graphs, it was noticed that many graph properties, such as being connected, “appear suddenly” in the sense that if the edge probability is just below a certain threshold, then with high probability the graph does not have the property, and if it is just above, then with high probability the graph does have it. Later it was recognised that this was a very general phenomenon. If we regard a graph property as a collection of subsets of the complete graph (namely, the graphs that have the property), then for many properties such as connectedness this collection is monotone, in the sense that it cannot be destroyed by adding edges. It also has many symmetries, since properties such as connectedness are preserved by isomorphisms of graphs.
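The classical example here is connectedness of the Erdős–Rényi random graph G(n,p). By a theorem of Erdős and Rényi (a standard fact, recalled here for orientation rather than taken from the paper), if

$$p = \frac{\log n + c}{n}, \qquad \text{then} \qquad \Pr[G(n,p) \text{ is connected}] \to e^{-e^{-c}} \quad \text{as } n \to \infty,$$

so the probability of connectedness moves from near 0 to near 1 as p crosses a window of width O(1/n) around (log n)/n, a window that is tiny compared with the threshold value itself; this is what “sharp” means.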
In general, if A ⊂ {0,1}^n, we say that A exhibits a sharp threshold if the probability that a random point x of {0,1}^n belongs to A, when the coordinates of x are chosen independently to equal 1 with probability p, jumps suddenly from near 0 to near 1 as p increases. There are many results concerning such thresholds, including a result of Friedgut and Kalai that states that A exhibits a sharp threshold if it is monotone and symmetric (where “symmetric” in this instance means invariant under a transitive group of permutations of the ground set {1,2,…,n}).
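In the notation standard in the sharp-threshold literature (not quoted from the paper), write $\mu_p(A)$ for the p-biased measure of A, so that

$$\mu_p(A) = \Pr[x \in A], \qquad \text{where each coordinate } x_i = 1 \text{ independently with probability } p.$$

If A is monotone, then $\mu_p(A)$ is increasing in p, so for each ε one can define a critical probability $p_\varepsilon$ by $\mu_{p_\varepsilon}(A) = \varepsilon$. The Friedgut–Kalai theorem then says that if A is also invariant under a transitive group of permutations of the coordinates, then

$$p_{1-\varepsilon} - p_{\varepsilon} \le \frac{C \log(1/\varepsilon)}{\log n}$$

for an absolute constant C, so the window in which $\mu_p(A)$ climbs from ε to 1 − ε has width O(1/log n).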
This paper proves a similar theorem under weakened conditions – hence the word “robust” in the title – and uses it to derive the removal lemma for hypergraphs of large uniformity. The proof of this robust version uses a substantially different argument from that of Friedgut and Kalai.