Threshold for the expected measure of the convex hull of random points with independent coordinates

Let $\mu$ be an even Borel probability measure on ${\mathbb R}$. For every $N>n$ consider $N$ independent random vectors $\vec{X}_1,\ldots ,\vec{X}_N$ in ${\mathbb R}^n$, with independent coordinates having distribution $\mu $. We establish a sharp threshold for the expected $\mu_n$-measure of the random polytope $K_N:={\rm conv}\bigl\{\vec{X}_1,\ldots,\vec{X}_N\bigr\}$ in ${\mathbb R}^n$ under the assumption that the Legendre transform $\Lambda_{\mu}^{\ast}$ of the logarithmic moment generating function of $\mu$ satisfies the condition $$\lim\limits_{x\uparrow x^{\ast}}\dfrac{-\ln \mu ([x,\infty ))}{\Lambda_{\mu}^{\ast}(x)}=1,$$ where $x^{\ast}=\sup\{x\in\mathbb{R}\colon \mu([x,\infty))>0\}$. An application is a sharp threshold for the case of the product measure $\nu_p^n=\nu_p^{\otimes n}$, $p\geq 1$, with density $(2\gamma_p)^{-n}\exp(-\|x\|_p^p)$, where $\|\cdot\|_p$ is the $\ell_p^n$-norm and $\gamma_p=\Gamma(1+1/p)$.


Introduction
Let µ be an even Borel probability measure on the real line and let X_1, . . ., X_n be independent and identically distributed random variables, defined on some probability space (Ω, F, P), each with distribution µ, i.e., µ(B) := P(X_i ∈ B) for all 1 ≤ i ≤ n and all B in the Borel σ-algebra B(R) of R. Consider the random vector X = (X_1, . . ., X_n) and, for a fixed N satisfying N > n, consider N independent copies X_1, . . ., X_N of X. The distribution of X is µ_n := µ ⊗ · · · ⊗ µ (n times) and the distribution of (X_1, . . ., X_N) is µ_n^N := µ_n ⊗ · · · ⊗ µ_n (N times). Our aim is to obtain a sharp threshold for the expected µ_n-measure of the random polytope K_N := conv{X_1, . . ., X_N}.
In order to make the notion of a sharp threshold precise, for any n ≥ 1 and δ ∈ (0, 1/2) we define the upper threshold ̺_1(µ_n, δ) in (1.1) and the lower threshold ̺_2(µ_n, δ); a sharp threshold is established when the two quantities are asymptotically equivalent as n → ∞, for any fixed δ ∈ (0, 1/2). A threshold of this form was first established in the classical work of Dyer, Füredi and McDiarmid [10] for the case of the uniform measure µ on [−1, 1]. We apply the general approach that was proposed in [5] and obtain an affirmative answer for a general even probability measure µ on R that satisfies some additional assumptions, which we briefly explain (see Section 2 for more details). We assume that µ is non-degenerate, i.e. Var(X) > 0. Let x* = x*(µ) := sup{x ∈ R : µ([x, ∞)) > 0} be the right endpoint of the support of µ and set I_µ = (−x*, x*). Note that since µ is non-degenerate and even, we have that x* > 0. Let g(t) := E e^{tX} := ∫_R e^{tx} dµ(x), t ∈ R, denote the moment generating function of X, and let Λ_µ(t) := ln g(t) be its logarithmic moment generating function. By Hölder's inequality, Λ_µ is a convex function on R. Consider the Legendre transform Λ*_µ : I_µ → R of Λ_µ; this is the function Λ*_µ(x) := sup{tx − Λ_µ(t) : t ∈ R}. One can show (see Proposition 2.6) that Λ*_µ has finite moments of all orders.
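As a numerical illustration of these definitions (my own sketch, not part of the paper's argument), the following code computes Λ_µ and its Legendre transform for the uniform measure on [−1, 1], where Λ_µ(t) = ln(sinh t/t); the function names `log_mgf_uniform` and `legendre_uniform` are ad hoc.

```python
import numpy as np

def log_mgf_uniform(t):
    """Lambda_mu(t) = ln(sinh(t)/t) for mu uniform on [-1, 1]; Lambda(0) = 0 by continuity."""
    t = np.asarray(t, dtype=float)
    small = np.abs(t) < 1e-6
    safe = np.where(small, 1.0, t)
    # Near t = 0, use the Taylor expansion ln(sinh t / t) ~ t^2/6.
    return np.where(small, t**2 / 6.0, np.log(np.sinh(safe) / safe))

# Legendre transform Lambda*_mu(x) = sup_t { t*x - Lambda_mu(t) }, via a grid in t.
T = np.linspace(-40.0, 40.0, 80_001)
LT = log_mgf_uniform(T)

def legendre_uniform(x):
    return float(np.max(T * x - LT))
```

On I_µ = (−1, 1) this transform is even, vanishes at 0 and increases towards the endpoints, as used throughout the paper.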
We say that µ is admissible if it is non-degenerate, i.e. Var_µ(X) > 0, and satisfies the following conditions: (i) There exists r > 0 such that E e^{tX} < ∞ for all t ∈ (−r, r); in particular, X has finite moments of all orders.
In Section 4 we give an application of Theorem 1.1 to the case of the product measure ν_p^n := ν_p^{⊗n}.
For any p ≥ 1 we denote by ν_p the probability distribution on R with density (2γ_p)^{−1} exp(−|x|^p), where γ_p = Γ(1 + 1/p). We show that ν_p satisfies the Λ*-condition. Theorem 1.2 states that for any p ≥ 1 a sharp threshold holds for the product measure ν_p^n. Note that the measure ν_p is admissible for all 1 ≤ p < ∞; it satisfies condition (ii-3) if p = 1 and condition (ii-2) for all 1 < p < ∞. Therefore, Theorem 1.2 implies that if K_N is the convex hull of N > n independent random vectors X_1, . . ., X_N with distribution ν_p^n, then the expected measure E[ν_p^n(K_N)] exhibits a sharp threshold. We close this introductory section with a brief review of the history of the problem that we study and related results. A variant of the question, in which µ_n(K_N) is replaced by the volume of K_N, has been studied in the case where µ is compactly supported. In [14] the following threshold for the expected volume of K_N was established for a large class of compactly supported distributions µ: for every ε > 0, the expected volume is asymptotically negligible below, and asymptotically full above, an exponential number of points governed by a constant κ(µ). This result generalized the work of Dyer, Füredi and McDiarmid [10], who studied the following two cases: (i) if µ is the symmetric Bernoulli distribution on {−1, 1}, the result holds with κ = ln 2 − 1/2; this is the case of ±1 polytopes; (ii) if µ is the uniform distribution on [−1, 1], then Λ_µ(t) = ln(sinh t/t), and the result holds with the corresponding constant κ(µ). The generalization from [14] states that if µ is an even, compactly supported, Borel probability measure on the real line and 0 < κ(µ) < ∞, then (1.3) holds for every ε > 0, and (1.4) holds for every ε > 0 provided that the distribution µ satisfies the Λ*-condition.
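As an illustration of the measures ν_p (again my own sketch, with the ad hoc name `sample_nu_p`), one can sample from ν_p by noting that if G ~ Gamma(1/p, 1), then G^{1/p} has the law of |X| under ν_p, with an independent fair sign. For p = 2 the density is exp(−x²)/√π, a centered Gaussian with variance 1/2, which gives a quick sanity check.

```python
import numpy as np

def sample_nu_p(p, size, rng):
    """Draw from the density (2*Gamma(1+1/p))^{-1} exp(-|x|^p) on the real line:
    |X|^p is Gamma(1/p, 1)-distributed and the sign of X is an independent fair sign."""
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * g ** (1.0 / p)

rng = np.random.default_rng(0)
x = sample_nu_p(2.0, 400_000, rng)
mean, var = float(x.mean()), float(x.var())   # should be close to 0 and 1/2 for p = 2
```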
Further sharp thresholds for the volume of various classes of random polytopes appear in [20] and [2], [3], where the same question is addressed for a number of cases in which the X_i have rotationally invariant densities. Upper and lower thresholds that are exponential in the dimension are obtained in [12] for the case where the X_i are uniformly distributed in a simplex. General upper and lower thresholds have been obtained by Chakraborti, Tkocz and Vritsiou in [7] for some general families of distributions; see also [4].

Background and auxiliary results
As stated in the introduction, we consider an even Borel probability measure µ on the real line and a random variable X, on some probability space (Ω, F, P), with distribution µ. In order to avoid trivialities we assume that Var_µ(X) > 0, and in particular that p_µ := max{P(X = x) : x ∈ R} < 1. Recall that µ is even if µ(−B) = µ(B) for every Borel subset B of R.
For the proof of our main result we have to make a number of additional assumptions on µ. The first one is that there exists r > 0 such that (2.1) E e^{tX} := ∫_R e^{tx} dµ(x) < ∞ for all t ∈ (−r, r). This assumption ensures that X has finite moments of all orders.
For every t ∈ (−t*, t*) we define the probability measure P_t on (Ω, F) by P_t(A) := E[e^{tX − Λ_µ(t)} 1_A], A ∈ F.
Define also µ_t(B) := P_t(X ∈ B) for any Borel subset B of R. Since dP_t = e^{tX − Λ_µ(t)} dP and E_µ(X^k e^{tX}) < +∞ for all k ≥ 1 and t ∈ J_µ, we see that µ_t has finite moments of all orders. Also, differentiating Λ_µ twice and taking into account the definition of P_t, we check that Λ′_µ(t) = E_t(X) and Λ″_µ(t) = Var_t(X), where E_t and Var_t denote expectation and variance with respect to P_t. Notice that P_0 = P and µ_0 = µ. Since µ is non-degenerate we have that µ_t({c}) < 1 for all c ∈ R and t ∈ (−t*, t*), which implies that Λ″_µ(t) > 0 for all t ∈ (−t*, t*). It follows that Λ′_µ is strictly increasing, and since Λ′_µ(0) = 0 we conclude that Λ_µ is strictly increasing on [0, t*).
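The identities Λ′_µ(t) = E_t(X) and Λ″_µ(t) = Var_t(X) can be checked numerically. The sketch below (my own illustration for the uniform-on-[−1, 1] example, with ad hoc names) compares quadrature moments of the tilted density e^{tx} against the closed forms coth t − 1/t and 1/t² − 1/sinh² t.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 200_001)
dx = xs[1] - xs[0]

def trapezoid(y):
    """Composite trapezoid rule on the uniform grid xs."""
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dx)

def tilted_moments(t):
    """Mean and variance of mu_t, the exponential tilt of the uniform measure on [-1, 1]."""
    w = np.exp(t * xs)                     # unnormalized tilted density e^{t x}
    Z = trapezoid(w)
    m1 = trapezoid(xs * w) / Z
    m2 = trapezoid(xs**2 * w) / Z
    return m1, m2 - m1**2

t = 1.3
mean_t, var_t = tilted_moments(t)
d1 = 1.0 / np.tanh(t) - 1.0 / t            # Lambda'(t) for Lambda(t) = ln(sinh t / t)
d2 = 1.0 / t**2 - 1.0 / np.sinh(t)**2      # Lambda''(t)
```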
Let m : [0, x*) → [0, ∞) be defined by m(x) := −ln µ([x, ∞)). It is clear that m is non-decreasing. Observe that, from Markov's inequality, for any x ∈ (0, x*) and any t ≥ 0, we have E e^{tX} ≥ e^{tx} µ([x, ∞)), and hence, (2.4) Λ*_µ(x) ≤ m(x). Recall that a probability measure µ on R is called log-concave if it is absolutely continuous with a density f whose support is an interval in R and the restriction of ln f to it is concave. Any non-degenerate log-concave probability measure µ on R has a log-concave density f := f_µ. Since f has finite positive integral, one can check that there exist constants A, B > 0 such that f(x) ≤ A e^{−B|x|} for all x ∈ R (see [6, Lemma 2.2.1]). In particular, f has finite moments of all orders. We refer to [6] for more information on log-concave probability measures.
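For the uniform measure on [−1, 1] the tail is explicit, µ([x, ∞)) = (1 − x)/2 for x ∈ (0, 1), so the Chernoff-type inequality (2.4) can be verified directly; the sketch below (my own check, not from the paper) evaluates m(x) − Λ*_µ(x) on a grid.

```python
import numpy as np

# Chernoff bound (2.4): m(x) = -ln mu([x, inf)) >= Lambda*_mu(x), for mu uniform
# on [-1, 1], where Lambda_mu(t) = ln(sinh(t)/t).
T = np.linspace(1e-3, 60.0, 120_001)
LT = np.log(np.sinh(T) / T)                         # Lambda_mu(t) for t > 0

gaps = []
for x in np.linspace(0.05, 0.95, 19):
    lam_star = max(0.0, float(np.max(T * x - LT)))  # t >= 0 suffices for x > 0
    m = -np.log((1.0 - x) / 2.0)                    # -ln mu([x, inf))
    gaps.append(m - lam_star)
min_gap = min(gaps)                                 # should be nonnegative by (2.4)
```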
The next lemma describes the behavior of Λ_µ at the endpoints of J_µ for a log-concave probability measure with unbounded support on R. Lemma 2.1. Let µ be an even log-concave probability measure on R with x* = +∞ and J_µ ≠ R. Then J_µ = (−t*, t*) for some t* ∈ (0, ∞), and Λ_µ(t) → +∞ as t ↑ t*. Proof. Let f denote the density of µ. Since x* = +∞, we have that supp(µ) = R, and hence f can be written as f = e^{−q}, where q : R → R is an even convex function. By symmetry, it is enough to consider the convergence of Λ_µ(t) for t > 0.
Note that, since q is even and convex on R, we have lim_{x→+∞} q(x) = +∞ and the function u(x) = (q(x) − q(0))/x is increasing on (0, ∞). First we observe that we cannot have lim_{x→∞} u(x) = ∞. If this were the case then we would have lim_{x→∞} q(x)/x = ∞, and hence ∫_R e^{tx} e^{−q(x)} dx < ∞ for all t > 0, i.e. Λ_µ(t) < ∞ for all t > 0, which is not our case. Therefore, since u is increasing, there exists t* > 0 such that u(x) ↑ t* as x → ∞. Then, for every 0 < t < t* we may choose t′ ∈ (t, t*) with q(x) ≥ t′x + q(0) for all large x, which shows that t ∈ J_µ, and hence (−t*, t*) ⊆ J_µ.
On the other hand, if t = t* then, using the fact that u(x) ≤ t* for all x > 0, we get e^{t*x − q(x)} ≥ e^{−q(0)} for all x > 0, and hence E e^{t*X} = +∞. This shows that J_µ = (−t*, t*).
Finally, if we consider a strictly increasing sequence t_n → t*, then by the monotone convergence theorem we get Λ_µ(t_n) → +∞, which completes the proof. Definition 2.2. Let µ be an even probability measure on R. We will call µ admissible if it satisfies (2.1) and (2.2), as well as one of the following conditions: (i) µ is compactly supported, i.e. x* < +∞.
Taking also into account Lemma 2.1 we see that, in all the cases that we consider, J_µ is an open interval. The next lemma describes the behavior of Λ′_µ for an admissible measure µ. The first case was treated in [14].
Lemma 2.3. Let µ be an admissible even Borel probability measure on the real line. Then, Λ′_µ : J_µ → I_µ is strictly increasing and surjective. Proof. We have already explained that, since µ is non-degenerate, Λ′_µ is strictly increasing. Now, we consider the three cases of the lemma separately.
(i) From the inequality −x* e^{tX} ≤ X e^{tX} ≤ x* e^{tX}, which holds with probability 1 for each fixed t, and the formula Λ′_µ(t) = E(X e^{tX})/E(e^{tX}), we see that Λ′_µ(t) ∈ [−x*, x*] for all t. Let x ∈ (0, x*) and choose y ∈ (x, x*) with µ([y, ∞)) > 0. Since Λ_µ(t) ≥ ty − m(y) for all t ≥ 0, we have that Λ_µ(m(y)/(y − x)) ≥ x m(y)/(y − x). It follows that if we consider the function q_x(t) := tx − Λ_µ(t), then q_x(0) = 0 and q_x(m(y)/(y − x)) ≤ 0. Since q_x is concave and q′_x(0) = x > 0, this shows that q_x attains its maximum at some point in the open interval (0, m(y)/(y − x)), and hence Λ′_µ(t) = x for some t in this interval. The same argument applies for all x ∈ (−x*, 0). Finally, for x = 0 we have that Λ′_µ(0) = 0 = x.
(ii) We apply the same argument as in (i).
Let µ be an admissible even Borel probability measure on the real line. Lemma 2.3 allows us to define h : I_µ → J_µ as the inverse of Λ′_µ. Observe that h is a strictly increasing C^∞ function and (2.5) Λ*_µ(x) = x h(x) − Λ_µ(h(x)) for all x ∈ I_µ.
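Since Λ′_µ is strictly increasing, h = (Λ′_µ)^{−1} is computable by bisection. The sketch below (my own illustration for the uniform-on-[−1, 1] example, with ad hoc names) inverts Λ′_µ numerically and checks the standard identity Λ*_µ(x) = x h(x) − Λ_µ(h(x)) against a direct grid maximization.

```python
import numpy as np

def lam_prime(t):
    """Lambda'_mu(t) = coth(t) - 1/t for mu uniform on [-1, 1]."""
    return 1.0 / np.tanh(t) - 1.0 / t

def h(x, lo=1e-12, hi=60.0):
    """Inverse of Lambda'_mu on (0, 1), computed by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lam_prime(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.5
t = h(x)
lam_star_via_h = x * t - np.log(np.sinh(t) / t)      # x h(x) - Lambda(h(x))

# Direct computation of Lambda*(x) = sup_{t>0} { t x - Lambda(t) } on a grid.
T = np.linspace(1e-3, 60.0, 120_001)
lam_star_grid = float(np.max(T * x - np.log(np.sinh(T) / T)))
```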
Proof. If x* = +∞ then the convexity of Λ*_µ and the fact that (Λ*_µ)′(x) > 0 for all x > 0 (which is a consequence of Lemma 2.4 (iv) and of the fact that (Λ*_µ)″ = h′ > 0) imply that lim_{x↑x*} Λ*_µ(x) = +∞. Next, assume that x* < +∞. Since Λ′_µ(t) ≤ x* for all t, the function t ↦ tx* − Λ_µ(t) is non-decreasing. Therefore, the limit lim_{t→∞}(tx* − Λ_µ(t)) exists, and since tx* − Λ_µ(t) = −ln E e^{t(X−x*)}, the dominated convergence theorem shows that this limit equals −ln µ({x*}). It follows that Λ*_µ(x) tends to −ln µ({x*}) as x ↑ x*. The next result generalizes an observation from [5], which states that Λ*_µ has finite moments of all orders in the case where µ is absolutely continuous with respect to the Lebesgue measure. The more general statement of the next proposition can be found as an exercise in [9].
because β is strictly increasing and continuous on [0, z] and β(0) = 1. The result follows by symmetry.
We close this section by recalling the Λ * -condition that was already mentioned in the introduction.
Definition 2.7. Let µ be an admissible even Borel probability measure on the real line. Recall that Λ*_µ(x) ≤ m(x) for all x ∈ [0, x*). We shall say that µ satisfies the Λ*-condition if lim_{x↑x*} m(x)/Λ*_µ(x) = 1, i.e. if m(x) ∼ Λ*_µ(x) as x ↑ x*.

Proof of the main theorem
Let µ be an admissible even Borel probability measure on the real line. Recall that µ_n = µ ⊗ · · · ⊗ µ (n times), and hence the support of µ_n is I_{µ_n} = I_µ^n. The logarithmic Laplace transform of µ_n is defined by Λ_{µ_n}(t) := ln ∫_{R^n} e^{⟨t,x⟩} dµ_n(x), t ∈ R^n, and the Cramér transform of µ_n is the Legendre transform of Λ_{µ_n}, defined by Λ*_{µ_n}(x) := sup{⟨t, x⟩ − Λ_{µ_n}(t) : t ∈ R^n}. Since µ_n is a product measure, we can easily check that Λ*_{µ_n}(x) = Σ_{i=1}^n Λ*_µ(x_i) for all x = (x_1, . . ., x_n) ∈ I_{µ_n}, which implies that Λ*_{µ_n} has finite moments of all orders with respect to µ_n. In particular, for all p ≥ 1 we have that ∫_{I_{µ_n}} (Λ*_{µ_n}(x))^p dµ_n(x) < +∞. We also define the parameter β(µ_n) := Var_{µ_n}(Λ*_{µ_n})/(E_{µ_n} Λ*_{µ_n})^2; since µ_n is a product measure, β(µ_n) = β(µ)/n, where β(µ) is a finite positive constant which is independent of n. In particular, β(µ_n) → 0 as n → ∞. In order to estimate ̺_i(µ_n, δ), i = 1, 2, we shall follow the approach of [5]. For every r > 0 we define B_r(µ_n) := {x ∈ R^n : Λ*_{µ_n}(x) ≤ r}.
Note that, since Λ*_{µ_n}(x) = Σ_{i=1}^n Λ*_µ(x_i) for all x = (x_1, . . ., x_n) and Λ*_µ(y) increases to +∞ as y ↑ x*, for every r > 0 there exists 0 < x_r < x* such that B_r(µ_n) ⊆ [−x_r, x_r]^n. For any x ∈ R^n we denote by H(x) the set of all half-spaces H of R^n containing x. Then we define ϕ_{µ_n}(x) := inf{µ_n(H) : H ∈ H(x)}. The function ϕ_{µ_n} is called Tukey's half-space depth. We refer the reader to the survey article of Nagy, Schütt and Werner [18] for a comprehensive account and references. We start with the upper threshold. Note that the Λ*-condition is not required for this result. Theorem 3.1. Let µ be an even probability measure on R. Then, for any δ ∈ (0, 1/2) there exist c(µ, δ) > 0 and n_0(µ, δ) ∈ N such that the upper-threshold estimate holds for all n ≥ n_0(µ, δ).
Proof. The standard approach towards an upper threshold is based on the next fact, which holds true in general, for any Borel probability measure on R^n: for every r > 0 and every N > n we have the estimate (3.1). This estimate appeared originally in [10] and follows from the observation that (by the definition of ϕ_{µ_n}, Markov's inequality and the definition of Λ*_{µ_n}) for every x ∈ R^n we have ϕ_{µ_n}(x) ≤ exp(−Λ*_{µ_n}(x)).
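The Markov-inequality step can be probed with a Monte Carlo sketch (my own illustration, for the uniform-on-[−1, 1] product measure): at the diagonal point x = (a, . . ., a), the half-space {y : y_1 + · · · + y_n ≥ na} contains x, and by the Chernoff bound its µ_n-measure is at most exp(−nΛ*_µ(a)), so in particular ϕ_{µ_n}(x) ≤ exp(−Λ*_{µ_n}(x)) at this point.

```python
import numpy as np

n, a, trials = 20, 0.3, 200_000
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(trials, n))
# Empirical measure of the half-space {y : sum(y) >= n*a} through x = (a, ..., a).
p_hat = float(np.mean(X.sum(axis=1) >= n * a))

# Chernoff/Markov: P(X_1 + ... + X_n >= n*a) <= exp(-n * Lambda*_mu(a)),
# with Lambda_mu(t) = ln(sinh(t)/t) for the uniform measure on [-1, 1].
T = np.linspace(1e-3, 40.0, 80_001)
lam_star_a = float(np.max(T * a - np.log(np.sinh(T) / T)))
bound = float(np.exp(-n * lam_star_a))
```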
For the proof of the lower threshold we need a basic fact that plays a main role in the proofs of all the lower thresholds that have been obtained so far. For a proof see [14, Lemma 4.1].
where p_µ = max{P(X = x) : x ∈ R} < 1. Therefore, we are going to apply Lemma 3.2 with A = B_{(1+ε)T_n}(µ_n), using Chebyshev's inequality exactly as in the proof of Theorem 3.1. From (3.4) it is clear that we will also need a lower bound for inf{ϕ_{µ_n}(x) : x ∈ B_{(1+ε)T_n}(µ_n)}. The main technical step is to obtain the next inequality. Proof. Let x ∈ B_r(µ_n) and let H_1 be a closed half-space with x ∈ ∂H_1. There exists v ∈ R^n \ {0} such that H_1 = {y ∈ R^n : ⟨v, y − x⟩ ≥ 0}. Consider the function q : B_r(µ_n) → R, q(w) = ⟨v, w⟩. Since q is continuous and B_r(µ_n) is compact, q attains its maximum at some point z ∈ B_r(µ_n). Define H = {y ∈ R^n : ⟨v, y − z⟩ ≥ 0}. Then z ∈ ∂H, and for every w ∈ B_r(µ_n) we have ⟨v, w⟩ ≤ ⟨v, z⟩, which shows that ∂H supports B_r(µ_n) at z. Moreover, H ⊆ H_1 and hence P(X ∈ H) ≤ P(X ∈ H_1). This shows that inf{ϕ_{µ_n}(x) : x ∈ B_r(µ_n)} is attained for some closed half-space H whose bounding hyperplane supports B_r(µ_n). Therefore, for the proof of the theorem it suffices to show that, given ζ > 0, we may find n_0(µ, ζ) so that if n ≥ n_0(µ, ζ) then (3.5) holds for any closed half-space H whose bounding hyperplane supports B_r(µ_n).
Let H be such a half-space. Then, there exists x ∈ ∂(B_r(µ_n)) such that H = {y ∈ R^n : Σ_{i=1}^n t_i(y_i − x_i) ≥ 0}, where t_i = h(x_i), because the normal vector to H is ∇Λ*_{µ_n}(x) and (Λ*_µ)′ = h by Lemma 2.4 (iii). We fix this x for the rest of the proof. By symmetry and independence we may assume that x_i ≥ 0 for all 1 ≤ i ≤ n. Recall that Λ*_µ(0) = 0 and that µ satisfies the Λ*-condition: we have m(x) ∼ Λ*_µ(x) as x ↑ x*. Therefore, we can find M > τ > 0 with the following properties:

We consider the sets of indices A_1, A_2 and A_3, determined by whether the coordinate x_i is smaller than τ, lies in [τ, M], or exceeds M, and the corresponding probabilities P_1, P_2 and P_3. By independence, P(X ∈ H) is bounded from below by the product of lower bounds for these three events. We will give lower bounds for P_1, P_2 and P_3 separately.
Proof. We write (3.6) and use the following fact (see [14, Lemma 4.3]): for every τ ∈ (0, x*) there exists c(τ) > 0, depending only on τ and µ, such that for any k ∈ N and any v_1, . . ., v_k ∈ R with Σ_{i=1}^k v_i > 0 we have the lower bound of that lemma. Combining the above with (3.6) and using a simple elementary bound, we conclude the proof of the lemma.
Lemma 3.5. We have the corresponding lower bound for P_3. Proof. By independence, we can write P_3 as a product over i ∈ A_3. By the choice of M, each factor admits the required lower bound for all i ∈ A_3, and this immediately gives the lemma.

Lemma 3.6.
There exist c_3, c_4 > 0, depending only on ζ, M and µ, such that the stated lower bound for P_2 holds. The proof of this estimate requires some preparation. Without loss of generality, we may assume that A_2 = {1, . . ., k} for some k ≤ n. Recall that t_i = h(x_i) for each i, and that this is equivalent to having x_i = Λ′_µ(t_i) for each i (see Lemma 2.4 (ii)). Define the probability measure P_{x_1,...,x_k} on (Ω, F) by P_{x_1,...,x_k}(A) := E[exp(Σ_{i=1}^k (t_i X_i − Λ_µ(t_i))) 1_A], A ∈ F. A direct computation shows that, under P_{x_1,...,x_k}, the random variables t_1 X_1, . . ., t_k X_k are independent, with explicitly computable mean, variance and absolute central third moment. Lemma 3.7. The following identity holds: Proof. By definition of the measure P_{x_1,...,x_k}, we have that
It follows that the claimed identity holds, and the lemma now follows from Lemma 2.4 (ii).
We will also use the following consequence of the Berry-Esseen theorem (cf. [11], p. 544).
Proof of Lemma 3.6. Consider the random variables Y_i := t_i X_i − E_{x_1,...,x_k}(t_i X_i) for all 1 ≤ i ≤ k. Applying Lemma 3.8 we find θ > 0 and k_0 ∈ N such that (3.7) holds if k ≥ k_0. Now, we distinguish two cases. Case 1: Assume that k < k_0. Then, working as for A_3, we obtain the desired bound. Case 2: Assume that k ≥ k_0. From Lemma 3.7 and (3.7) we obtain the desired bound in this case as well. Combining the lower bounds for P_1, P_2 and P_3, we may write the required lower bound for P(X ∈ H), provided n ≥ n(µ, ζ) for an appropriate n(µ, ζ) ∈ N depending only on ζ and µ. This proves (3.5).
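To illustrate the Berry-Esseen step (an illustration of the classical theorem, not of Lemma 3.8 itself), the sketch below computes the exact Kolmogorov distance between S_k/√k, for S_k a sum of k independent Rademacher signs, and the standard Gaussian, and checks it against C·ρ/(σ³√k) with ρ = σ = 1 and the admissible constant C = 0.4748 due to Shevtsova.

```python
import math

k = 500
# Exact pmf of the number of +1's among k fair signs, via log-binomials.
log_pmf = [math.lgamma(k + 1) - math.lgamma(j + 1) - math.lgamma(k - j + 1)
           - k * math.log(2.0) for j in range(k + 1)]
pmf = [math.exp(v) for v in log_pmf]

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sup_dist, cdf = 0.0, 0.0
for j in range(k + 1):
    s = (2 * j - k) / math.sqrt(k)                 # value of S_k / sqrt(k) on the lattice
    sup_dist = max(sup_dist, abs(cdf - Phi(s)))    # approach from the left of the jump
    cdf += pmf[j]
    sup_dist = max(sup_dist, abs(cdf - Phi(s)))    # value at the jump
be_bound = 0.4748 / math.sqrt(k)
```

The computed distance is close to the bound, reflecting that symmetric Bernoulli sums are the near-extremal case for the Berry-Esseen constant.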
We are now able to provide an upper bound for ̺_2(µ_n, δ).
For the rest of this section we fix p > 1. Following [1], we say that a non-negative function f : R → R is regularly varying of index s ∈ R, and write f ∈ RV(s), if f(λx)/f(x) → λ^s as x → ∞ for every λ > 0. This proves the following. Lemma 4.2. For every p ≥ 1 we have that −ln(ν_p([x, ∞))) ∼ x^p as x → ∞. Lemma 4.2 shows that, in order to complete the proof of the theorem, we have to show that Λ*_{ν_p}(x) ∼ x^p as x → ∞. Let g_p(x) = x^2 for 0 ≤ x < 1 and g_p(x) = x^p for x ≥ 1. It is shown in [16] that for any p ≥ 1 and x ∈ R one has Λ*_{ν_p}(x/c) ≤ g_p(|x|) ≤ Λ*_{ν_p}(cx), where c > 1 is an absolute constant.
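The asymptotics Λ*_{ν_p}(x) ∼ x^p can be probed numerically. The sketch below (my own check, not from the paper) uses the exactly solvable case p = 2, where ν_2 has density e^{−x²}/√π, Λ_{ν_2}(t) = t²/4 and Λ*_{ν_2}(x) = x² exactly.

```python
import numpy as np

us = np.linspace(-20.0, 20.0, 200_001)
du = us[1] - us[0]

def log_mgf_nu2(t):
    """Lambda_{nu_2}(t) for the density exp(-x^2)/sqrt(pi); equals t^2/4 in closed form."""
    w = np.exp(t * us - us**2) / np.sqrt(np.pi)
    return float(np.log(np.sum(0.5 * (w[1:] + w[:-1])) * du))

# Legendre transform at x = 3: sup_t { t*x - Lambda(t) }, attained near t = 2x = 6.
ts = np.linspace(0.0, 10.0, 1001)
L = np.array([log_mgf_nu2(t) for t in ts])
x = 3.0
lam_star = float(np.max(ts * x - L))   # exact value is x^2 = 9
```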
For the proof of Λ * νp (x) ∼ x p as x → ∞ we shall apply the Laplace method; more precisely, we shall use the next version of Watson's lemma (see equation (2.34) in [17, Section 2.2]).
Proposition 4.3. Let S < a < T ≤ ∞ and g, h : [S, T] → R, where g is continuous with a Taylor series in a neighborhood of a, and h is twice continuously differentiable, has its maximum at a, and satisfies h′(a) = 0 and h″(a) < 0. Assume also that the integral ∫_S^T g(t) e^{x h(t)} dt converges absolutely for all large x. Then ∫_S^T g(t) e^{x h(t)} dt ∼ g(a) e^{x h(a)} √(2π/(x|h″(a)|)) as x → ∞. We apply Proposition 4.3 to get the next asymptotic estimate. Lemma 4.6. Let q ≥ 1, a > 0 and f : [a, ∞) → R be a continuously differentiable function such that f′ is increasing on [a, ∞) and f(t) ∼ t^q as t → +∞. Then f′(t) ∼ q t^{q−1} as t → +∞.
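The Laplace asymptotics of Proposition 4.3 can be sanity-checked numerically; the example below is my own, with an arbitrary choice g(t) = t², h(t) = −(t − 1)² on [0, 2], so that a = 1, h(a) = 0 and h″(a) = −2, and it compares the integral at a large x with g(a) e^{x h(a)} √(2π/(x|h″(a)|)).

```python
import numpy as np

ts = np.linspace(0.0, 2.0, 400_001)
dt = ts[1] - ts[0]
x = 200.0

integrand = ts**2 * np.exp(-x * (ts - 1.0)**2)          # g(t) e^{x h(t)}
integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dt)
laplace = 1.0 * np.sqrt(2.0 * np.pi / (x * 2.0))        # g(a) e^{x h(a)} sqrt(2 pi/(x |h''(a)|))
rel_err = abs(integral / laplace - 1.0)                 # should be of order 1/x
```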
and let F_k : R → R denote the cumulative distribution function of the random variable S_k/s_k under the probability law P_{x_1,...,x_k}: F_k(x) := P_{x_1,...,x_k}(S_k ≤ x s_k), x ∈ R. Write also ν_k for the probability measure on R defined by ν_k((−∞, x]) := F_k(x), x ∈ R. Notice that E_{x_1,...,x_k}(S_k/s_k) = 0 and Var_{x_1,...,x_k}(S_k/s_k) = 1.

Lemma 3 . 8 .
For any a, b > 0, there exist k_0 ∈ N and θ > 0 with the following property: if k ≥ k_0, and if Y_1, . . ., Y_k are independent random variables with