Random cones in high dimensions I: Donoho-Tanner and Cover-Efron cones

- Thomas Godland, Institut für Mathematische Stochastik, Westfälische Wilhelms-Universität Münster
- Zakhar Kabluchko, Institut für Mathematische Stochastik, Westfälische Wilhelms-Universität Münster
- Christoph Thäle, Stochastik, Ruhr University Bochum

### Editorial introduction

Random cones in high dimensions I: Donoho-Tanner and Cover-Efron cones, Discrete Analysis 2020:5, 44 pp.

As its title makes clear, this paper is about random high-dimensional cones. A *cone* in $\mathbb{R}^d$ is a subset that is closed under addition and under multiplication by non-negative scalars. Defining a random cone is less straightforward, since there is no single way of defining a probability measure on the set of all cones that stands out as being the most natural.

Such a situation can be regarded as an opportunity: one can try to find definitions that lead to interesting questions. One obvious randomized method of obtaining a cone in $\mathbb{R}^d$ is to pick $n$ vectors $v_1,\dots,v_n$ independently at random from some distribution on $\mathbb{R}^d$ and to take the cone that they generate. And an obvious distribution to take is the standard Gaussian distribution.

There is also the question of which $n$ to take. If $n$ is smaller than $d$, then the cone will not be of full dimension, and if $n$ is significantly larger than $d$, then with high probability it will be the whole of $\mathbb{R}^d$.

More precisely, a theorem of Schläfli from the 19th century gives that $n$ hyperplanes through the origin, but otherwise in general position, divide $\mathbb{R}^d$ into $C(n,d)=2\sum_{i=0}^{d-1}\binom{n-1}{i}$ regions. If the normal vectors are $v_1,\dots,v_n$, then each region is of the form $\{x:\langle x,\epsilon_i v_i\rangle>0\ \forall i\}$ for some choice of signs $\epsilon_1,\dots,\epsilon_n$. It follows that there are precisely $C(n,d)$ choices of signs $\epsilon_1,\dots,\epsilon_n$ for which there exists $x$ such that $\langle x,\epsilon_i v_i\rangle>0$ for every $i$.

It follows that if $v_1,\dots,v_n$ are any $n$ vectors in general position and $\epsilon_1,\dots,\epsilon_n$ are random signs, then the probability that there is a hyperplane such that all the vectors $\epsilon_i v_i$ lie on one side of it is $C(n,d)/2^n$. And from that it follows that if $v_1,\dots,v_n$ are chosen independently at random from a distribution that is centrally symmetric and absolutely continuous with respect to Lebesgue measure, then the probability that they lie on one side of a hyperplane, and therefore generate a non-trivial cone, is also $C(n,d)/2^n$.
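The sign-counting consequence of Schläfli's theorem can be checked directly in small dimensions. The sketch below (a hypothetical illustration, not taken from the paper) uses the planar fact that vectors lie in a common open halfplane through the origin precisely when the largest circular gap between their angles exceeds $\pi$; it enumerates all $2^n$ sign patterns for $n=5$ random unit vectors in $\mathbb{R}^2$ and compares the count with $C(5,2)=10$.

```python
import math
import random

def C(n, d):
    # Schlaefli's count: n generic hyperplanes through the origin cut R^d
    # into 2 * sum_{i=0}^{d-1} binom(n-1, i) regions.
    return 2 * sum(math.comb(n - 1, i) for i in range(d))

def in_open_halfplane(vectors):
    # Planar (d = 2) test: the vectors all lie in some open halfplane through
    # the origin iff the largest circular gap between their angles exceeds pi.
    angles = sorted(math.atan2(y, x) for x, y in vectors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    return max(gaps) > math.pi

random.seed(1)
n, d = 5, 2
vs = [(math.cos(t), math.sin(t))
      for t in (random.uniform(0, 2 * math.pi) for _ in range(n))]

# Count the sign patterns (eps_1, ..., eps_n) for which the vectors
# eps_i * v_i fit into an open halfplane; the theorem predicts exactly C(n, d).
count = sum(
    in_open_halfplane([v if (mask >> i) & 1 else (-v[0], -v[1])
                       for i, v in enumerate(vs)])
    for mask in range(2 ** n)
)
print(count, C(n, d))  # equal for vectors in general position
```

Because the count depends only on the vectors being in general position, the same answer appears for any choice of random seed (almost surely).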

This implies that there is an important change when $n$ passes $2d$. If $d=\delta n$ for some $\delta<1/2$, then $C(n,d)/2^n$ is exponentially small, so the cone generated by $v_1,\dots,v_n$ is all of $\mathbb{R}^d$ with extremely high probability, whereas if $\delta>1/2$ then $1-C(n,d)/2^n$ is exponentially small, so the cone is almost certainly proper.
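The two regimes are easy to see numerically. The following sketch evaluates the probability $C(n,d)/2^n$ that the cone is proper, for $n=1000$ and $d=\delta n$ with $\delta$ on either side of $1/2$:

```python
import math

def prob_proper(n, d):
    # C(n, d) / 2^n: the probability that n i.i.d. symmetric random vectors
    # in general position generate a proper cone in R^d.
    return 2 * sum(math.comb(n - 1, i) for i in range(d)) / 2 ** n

n = 1000
for delta in (0.4, 0.5, 0.6):
    d = int(delta * n)
    # exponentially small for delta < 1/2, exponentially close to 1
    # for delta > 1/2, and exactly 1/2 at delta = 1/2 (by symmetry)
    print(f"delta = {delta}: {prob_proper(n, d):.3g}")
```

The exact value $1/2$ at $\delta=1/2$ comes from the symmetry of the binomial coefficients: $2\sum_{i=0}^{n/2-1}\binom{n-1}{i}=2^{n-1}$.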

This threshold has been shown to be a phase transition for a number of important parameters associated with random cones. What is achieved in this paper is a set of much more precise results concerning how the changes take place over the “critical window” – that is, how the parameters depend on $c$ if $n=2d+c\sqrt{d}+o(\sqrt{d})$.

The Donoho-Tanner random cones in the title are the ones just described. The Cover-Efron cones are ones where one conditions on the cone not being all of $\mathbb{R}^d$. The authors prove results for both models. One of the parameters they investigate is the number of faces of dimension $k$ for small $k$. This was known to be around $\binom{n}{k}$ if $\delta>1/2$ and around $(2\delta)^k\binom{n}{k}$ if $\delta<1/2$, but the authors provide a formula for the dependence on $c$ in the critical window (which in the case of the Donoho-Tanner random cones turns out to be given by the cumulative distribution function of a normal distribution). They also look at *quermassintegrals*: the $k$th quermassintegral of a random cone is the probability that it intersects non-trivially with a random subspace of codimension $k$.

Previous work in this area has tended to rely on detailed estimates for sums of binomial coefficients (as the simple argument given earlier exemplifies). The main idea that enables the authors to go beyond this is to interpret the quantities they are looking at in terms of binomial random variables, which allows them to make use of several less elementary probabilistic tools.
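The binomial reinterpretation can be illustrated on the proper-cone probability itself: $C(n,d)/2^n=\mathbb{P}(\mathrm{Bin}(n-1,1/2)\le d-1)$, so in the window $n=2d+c\sqrt{d}$ a back-of-the-envelope central limit computation suggests the limit $\Phi(-c/\sqrt{2})$, where $\Phi$ is the standard normal distribution function (this constant is our own quick calculation for illustration, not a statement quoted from the paper). The sketch below compares the exact probability with this normal approximation for $d=400$, $c=1$:

```python
import math

def prob_proper(n, d):
    # C(n, d) / 2^n = P(Bin(n-1, 1/2) <= d-1)
    return 2 * sum(math.comb(n - 1, i) for i in range(d)) / 2 ** n

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d, c = 400, 1.0
n = round(2 * d + c * math.sqrt(d))  # inside the critical window
exact = prob_proper(n, d)
approx = Phi(-c / math.sqrt(2.0))
print(f"exact = {exact:.4f}, normal approximation = {approx:.4f}")
```

Already at $d=400$ the two values agree to about two decimal places, consistent with the normal-distribution shape of the critical window described above.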