Blow-up lemmas for sparse graphs
- Peter Allen, Department of Mathematics, London School of Economics and Political Science
- Julia Böttcher, Department of Mathematics, London School of Economics and Political Science
- Hiep Hàn, Departamento de Matemática y Ciencia de la Computación, Universidad de Santiago de Chile
- Yoshiharu Kohayakawa, Departamento de Ciência da Computação, Universidade de São Paulo
- Yury Person, Fakultät Mathematik und Naturwissenschaften, Technische Universität Ilmenau
Editorial introduction
Blow-up lemmas for sparse graphs, Discrete Analysis 2025:8, 141 pp.
Szemerédi’s regularity lemma is, roughly speaking, the statement that for every graph G and every ϵ>0, we can partition the vertex set of G into sets V1,…,Vk of almost equal size, with k depending on ϵ only, in such a way that for all but at most ϵk² of the pairs (i,j) the bipartite graph spanned by Vi and Vj enjoys a quasirandomness property known as ϵ-regularity. This property states that all the bipartite graphs G(Ai,Aj) induced by subsets Ai of Vi and Aj of Vj that are not too small have roughly the same density (which is therefore close to the density dij of the bipartite graph G(Vi,Vj) induced by Vi and Vj).
We can then form a weighted “block graph” with vertex set {1,2,…,k}, where the edge joining i to j has weight equal to dij. This allows us to obtain approximations for the number of copies of some fixed small subgraph H in G. For example, if H is a triangle, we can choose vertices x, y and z at random and condition on which sets Vi they lie in. If x∈Vi, y∈Vj and z∈Vh, then the probability that xy is an edge is dij; the probability that xz is an edge given that xy is an edge is approximately dih (because one can check that almost all vertices in Vi have roughly the same number of neighbours in Vj, so knowing that xy is an edge gives almost no information about x); and the probability that yz is an edge given that xy and xz are edges is roughly djh, at least assuming that dij and dih are not too small. So the number of triples (x,y,z)∈Vi×Vj×Vh that form a triangle is roughly dijdihdjh|Vi||Vj||Vh|. All of this assumed that the three pairs (Vi,Vj), (Vi,Vh) and (Vj,Vh) are ϵ-regular, which is true for almost all triples, so summing up, we obtain a good approximation for the number of triangles.
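The counting heuristic above can be summarised in a single approximate formula (a sketch of the standard triangle-counting lemma in the notation of this introduction; the error terms are not made precise here):

```latex
\#\{\text{triangles in } G\} \;\approx\; \sum_{\{i,j,h\}} d_{ij}\, d_{ih}\, d_{jh}\, |V_i|\,|V_j|\,|V_h|,
```

where the sum runs over unordered triples of distinct indices all three of whose pairs are ϵ-regular.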
Often in graph theory, however, one wishes to prove results about embedding large graphs. The argument sketched above is sufficiently flexible to suggest that such embeddings ought to be possible under suitable conditions. For example, suppose for simplicity that G is a quasirandom bipartite graph of density 1/2 with vertex sets V1 and V2 of equal size. We could ask whether G must contain a Hamilton cycle. If we wanted to prove that it does, we could start by choosing a vertex x0, then proceed in a random greedy way, picking a random neighbour x1, then a random neighbour x2 of x1 not equal to x0, and so on. Suppose we have picked x0,x1,…,xm. If many vertices are still not yet chosen, and xm has many neighbours among those vertices (which will typically be the case, by the regularity property), then there will be many possibilities for xm+1; let us say in this case that xm is good. Most of those possibilities will themselves be good, so the procedure can continue until only a few vertices remain to be chosen. Thus, we can at least embed most of a Hamilton cycle.
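The random greedy strategy can be illustrated in a few lines of code. This is a toy sketch, not the paper's algorithm: it runs on an ordinary dense random graph G(n, 1/2) standing in for a regular pair, and all names (`greedy_path`, `adj`, `tries`) are mine. It builds a long path rather than a cycle, which is exactly the "most of a Hamilton cycle" stage of the argument.

```python
import random

def greedy_path(adj, n, tries=100):
    """Random greedy strategy: repeatedly step to a random
    unused neighbour of the current endpoint; keep the best
    path found over several independent attempts."""
    best = []
    for _ in range(tries):
        start = random.randrange(n)
        path, used = [start], {start}
        while True:
            # unused neighbours of the current endpoint
            choices = [v for v in adj[path[-1]] if v not in used]
            if not choices:
                break  # the endpoint is no longer "good": stuck
            nxt = random.choice(choices)
            path.append(nxt)
            used.add(nxt)
        if len(path) > len(best):
            best = path
    return best

# A dense random graph G(n, 1/2) stands in for a quasirandom pair.
random.seed(0)
n = 200
adj = [set() for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.5:
            adj[u].add(v)
            adj[v].add(u)

path = greedy_path(adj, n)
print(len(path))  # typically most, but usually not all, of the n vertices
```

In practice the procedure almost always covers the vast majority of the vertices before getting stuck, matching the informal analysis above; finishing the job is exactly what the buffer-vertex machinery of the blow-up lemma is for.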
It is clear that we cannot always embed a Hamilton cycle in its entirety, since a quasirandom graph can have isolated vertices. So one has to strengthen the condition on the graph to ensure that the minimum degree is large. A regular pair with this minimum-degree condition is called super-regular. More precisely, a pair (V,W) is (ϵ,δ)-super-regular if for any two subsets A⊂V and B⊂W with |A|≥δ|V| and |B|≥δ|W| there are at least ϵ|A||B| edges from A to B, and also every vertex in V has at least δ|W| neighbours in W and vice versa. Note that the first condition is weaker than regularity, since it asks only for a lower bound on the densities.
The blow-up lemma states the following. Suppose we have a block graph with sufficiently large blocks and we replace all the (ϵ,δ)-super-regular pairs by complete bipartite graphs with the same vertex sets. Then any bounded-degree graph that can be embedded into the resulting larger graph can be embedded into the original one. It has been used in conjunction with Szemerédi’s regularity lemma to prove a large number of embedding results.
The argument sketched above for Hamilton cycles showed only that most of the Hamilton cycle could be embedded. To get the rest, one starts by setting aside a set of “buffer vertices” that are used at the end of the proof to modify the attempted embedding once it has got stuck.
This paper concerns analogues of the blow-up lemma for sparse graphs, which places it in a long tradition of “sparse random versions” of results initially proved for dense graphs (or, more precisely, formulated in a way that makes them non-trivial only for dense graphs). For example, Mantel’s theorem states that a graph with n vertices and more than ⌊n/2⌋⌈n/2⌉ edges must contain a triangle. A slightly more approximate version of this statement is that a subgraph of the complete graph of density greater than 1/2 must contain a triangle. The sparse random version of Mantel’s theorem replaces the complete graph by the random graph G(n,p) for any p greater by a constant factor than a trivial bound, which turns out to be cn^{−1/2}. (Below that bound the number of triangles is typically smaller than, say, half the number of edges, so one can delete an edge from every triangle while keeping a positive proportion of the edges, and the conclusion clearly fails.)
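The order of this threshold can be read off from a standard first-moment computation (a routine calculation, not taken from the paper):

```latex
\mathbb{E}\big[\#\{\text{triangles in } G(n,p)\}\big] = \binom{n}{3}p^{3} \approx \tfrac{1}{6}n^{3}p^{3},
\qquad
\mathbb{E}\big[\#\{\text{edges}\}\big] = \binom{n}{2}p \approx \tfrac{1}{2}n^{2}p,
```

so the expected number of triangles drops below half the expected number of edges once p falls below a constant multiple of n^{−1/2}, and deleting one edge from each triangle then destroys all triangles at a negligible cost.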
There are already sparse random versions of Szemerédi’s regularity lemma, so it is very natural to ask for a sparse random version of the blow-up lemma. In fact this paper proves three such results. In particular, it proves a blow-up lemma for graphs of maximum degree Δ in subgraphs of G(n,p), provided that p ≥ C(log n/n)^{1/Δ}. Note that n^{−1/Δ} is a natural barrier because below that, the expected number of common neighbours of a given set of Δ vertices drops below 1, which is a serious problem for any greedy procedure for choosing the vertices of the embedded subgraph. However, for many graphs that observation does not translate into a proof of optimality (up to the log factor): the paper contains a discussion of the degree to which the results are optimal.
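The barrier comes from the same kind of expectation calculation as for Mantel’s theorem (again a standard computation, not from the paper): in G(n,p) a fixed set of Δ vertices satisfies

```latex
\mathbb{E}\big[\#\{\text{common neighbours}\}\big] = (n-\Delta)\,p^{\Delta} \approx n\,p^{\Delta},
```

which is below 1 as soon as p is below n^{−1/Δ}; a greedy embedding that must place a vertex adjacent to Δ already-embedded vertices then typically has no candidate available.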
Sparse random versions of combinatorial results are typically substantially harder to prove than their dense counterparts. The proof in this paper is no exception: like the dense blow-up lemma, it proceeds algorithmically, but the algorithm is much more complicated, with several subprocedures that rely on ideas from earlier papers. However, the reward for this effort is a fundamental result with a diverse set of applications, some of them given in the paper and others proved subsequently.