Quantitative invertibility of random matrices: a combinatorial perspective
Vishesh Jain, Statistics, Stanford University
Editorial introduction
Quantitative invertibility of random matrices: a combinatorial perspective, Discrete Analysis 2021:10, 38 pp.
This paper concerns the general and much studied question of how likely a large random real or complex square matrix is to be invertible, and, if it is invertible, how far it is from being singular. To interpret the latter question, note that an $n\times n$ matrix $A$ is singular if and only if there is a vector $x$, belonging to $\mathbb{R}^n$ or $\mathbb{C}^n$ as appropriate, such that $\|x\|_2=1$ and $\|Ax\|_2=0$. Thus, a natural quantitative measure of the invertibility of a matrix is the smallest $\epsilon$ such that there exists $x$ with $\|x\|_2=1$ and $\|Ax\|_2=\epsilon$. This is equal to the smallest singular value of $A$, denoted $s_n(A)$, and it is also the reciprocal of the operator norm of $A^{-1}$.
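As a concrete illustration (a minimal NumPy sketch written for this introduction, not taken from the paper), one can check numerically that the smallest singular value of a random sign matrix coincides with the reciprocal of the operator norm of its inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.choice([-1.0, 1.0], size=(n, n))  # a random sign matrix

# Smallest singular value: the minimum of ||Ax||_2 over unit vectors x.
s_min = np.linalg.svd(A, compute_uv=False)[-1]

# For invertible A this equals 1 / ||A^{-1}||_op (ord=2 is the operator norm).
recip = 1.0 / np.linalg.norm(np.linalg.inv(A), ord=2)

print(s_min, recip)  # the two values agree up to floating-point error
```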
The most famous question in this general area is a problem of Komlós that is deceptively simple to state: what is the probability that an $n\times n$ matrix with independent random entries in $\{-1,1\}$ is singular? It is widely believed that the main reason such a matrix might fail to be invertible is that it has two rows or two columns that are either equal or that add up to zero. The probability of this is approximately $2\binom{n}{2}2^{-(n-1)}$, which is approximately $2n^2 2^{-n}$. Even obtaining an $o(1)$ bound was non-trivial, and was first achieved by Komlós. There followed a long series of improvements: the first exponentially small bound was obtained by Kahn, Komlós and Szemerédi, much improved exponents were obtained by Tao and Vu, and by Bourgain, Vu and Wood, and in a recent breakthrough of Tikhomirov a bound of $(1/2+o(1))^n$ was obtained, so that we now know the correct exponent.
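For small $n$ one can compare the heuristic $2n^2 2^{-n}$ with a Monte Carlo estimate. The sketch below (Python, written for this introduction rather than taken from any of the papers mentioned) does this; since the heuristic is asymptotic, the agreement at such small sizes is rough at best:

```python
import numpy as np

rng = np.random.default_rng(1)

def singular_fraction(n, trials=50_000):
    """Monte Carlo estimate of the probability that a random n x n
    sign matrix is singular."""
    A = rng.choice([-1.0, 1.0], size=(trials, n, n))
    # The entries are integers, so each determinant is an integer;
    # rounding the floating-point determinants is exact at these sizes.
    dets = np.rint(np.linalg.det(A))
    return float(np.mean(dets == 0))

for n in [10, 12]:
    print(n, singular_fraction(n), 2 * n**2 * 2.0 ** (-n))
```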
The papers of Tao and Vu introduced into the area an important set of ideas known as inverse Littlewood-Offord theory. The Littlewood-Offord problem asks the following: given a set of vectors $v_1,\dots,v_n$, how large can the probability be that a random sum $\sum_i \epsilon_i v_i$ is equal to some particular vector $v$, where $\epsilon_1,\dots,\epsilon_n$ are chosen independently at random from $\{-1,1\}$? More generally, one can ask similar questions with the $\epsilon_i$ being other real or complex random variables that satisfy suitable conditions. One can also ask for the probability that the sum belongs to some small ball rather than equalling a particular vector – for instance, this is clearly the right question when the $\epsilon_i$ are continuous random variables.
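The extreme case is instructive: if all the $v_i$ are equal, then the sum hits its most popular value with probability $\binom{n}{n/2}2^{-n}$, which is of order $n^{-1/2}$, whereas more spread-out choices of the $v_i$ give much smaller probabilities. The following exact enumeration (a small Python sketch, not from the paper) illustrates the contrast:

```python
from itertools import product
from collections import Counter

def max_atom_probability(v):
    """Exact value of max_w P(sum_i eps_i * v_i = w), where the eps_i
    are independent uniform signs, computed by full enumeration."""
    counts = Counter(sum(e * x for e, x in zip(eps, v))
                     for eps in product([-1, 1], repeat=len(v)))
    return max(counts.values()) / 2 ** len(v)

n = 16
print(max_atom_probability([1] * n))                # equal steps: C(16,8)/2^16, about 0.196
print(max_atom_probability(list(range(1, n + 1))))  # distinct steps: much smaller
```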
An inverse Littlewood-Offord problem seeks to understand the structure of the set $\{v_1,\dots,v_n\}$ when the above probability is large. When proving that a random matrix is unlikely to be singular, one often ends up in a situation where one would like to apply a union bound, but a naive union bound fails because of a few bad cases. These cases turn out to correspond to sets of vectors where the Littlewood-Offord probability is large. However, if one can show that such sets have strong structural properties, then one can also show that they are rare, and in that way split up the union bound into a small “structured part” and a much larger “typical part”, where in each part the naive union bound works – in one case because one is adding up only a very small number of largish probabilities, and in the other case because one is adding up a large number of very small probabilities.
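Schematically (in notation invented for this introduction rather than taken from the paper), after discretizing the sphere one bounds
$$\mathbb{P}(A \text{ is singular}) \;\le\; \sum_{x \text{ structured}} \mathbb{P}(Ax=0) \;+\; \sum_{x \text{ typical}} \mathbb{P}(Ax=0),$$
where the first sum has very few terms, each of which may be as large as Littlewood-Offord considerations allow, while the second has a huge number of terms, each of them very small.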
It has subsequently been understood, starting with work of Rudelson and Vershynin, that for many purposes the inverse Littlewood-Offord theory is in a certain sense “too strong” a tool to use. In principle, to obtain an upper bound on the number of bad cases, one does not need to describe the structure of those cases, which is hard to do, especially if one wants tight bounds – one just needs to show that they are rare. In a paper of Ferber, Jain, Luh and Samotij it was shown that it is possible to bypass the use of inverse Littlewood-Offord theory by using clever counting arguments instead, and thereby to obtain better bounds for certain problems.
This paper continues this line of thought, looking at the random matrix model $M_n = M + N_n$, where $M$ is a fixed $n\times n$ complex matrix and $N_n$ is a random matrix with independent entries of mean $0$ and variance $1$. This model arises naturally in computer science, where $M$ can be thought of as the matrix that an algorithm would like to work with, and $N_n$ as a small amount of random noise that is added to it in practice.
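As a small numerical illustration of why the noise helps (a NumPy sketch; the rank-one choice of $M$ and the Gaussian noise are assumptions made for the example, not taken from the paper), adding $N_n$ to a singular $M$ typically pushes the smallest singular value well away from zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# A deliberately singular deterministic matrix: rank one, so s_n(M) = 0.
M = np.outer(np.ones(n), np.ones(n))

# Noise with independent entries of mean 0 and variance 1 (Gaussian here).
N = rng.standard_normal((n, n))

for A, label in [(M, "M alone"), (M + N, "M + N_n")]:
    s_min = np.linalg.svd(A, compute_uv=False)[-1]
    print(f"{label}: smallest singular value = {s_min:.3e}")
# M alone gives (numerically) zero; M + N_n is typically of order n^(-1/2).
```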
It was previously known, using inverse Littlewood-Offord theory and assuming a power-type upper bound on $\|M\|$, that for every positive constant $A$ there exists a positive constant $B$ such that the probability that $s_n(M_n)\le n^{-B}$ is at most $n^{-A}$. The main result of this paper extends the range of applicability of this result so that $\|M\|$ is allowed to be larger, and the bounds $n^{-A}$ and $n^{-B}$ are replaced by bounds of the form $\exp(-n^c)$. Such results with exponential bounds were already known in the case $M=0$, using more geometric techniques originally due to Rudelson and Vershynin and further developed by them and other authors. However, for matrices with non-centred entries the results are new.