Stability of polydisc slicing

We prove a dimension‐free stability result for polydisc slicing due to Oleszkiewicz and Pełczyński. Intriguingly, compared to the real case, there is an additional asymptotic maximizer. In addition to Fourier‐analytic bounds, we crucially rely on a self‐improving feature of polydisc slicing, established via probabilistic arguments.


Introduction
The study of sections of convex bodies has a long and rich history. Many results about extremal sections and their stability are known (see the recent survey [40] and the references therein). An influential result of this type is Ball's cube slicing theorem from [4], which states that the hyperplane sections of the unit-volume cube $[-\frac12,\frac12]^n$ in $\mathbb{R}^n$ have volume bounded between $1$ and $\sqrt{2}$ (the lower bound had been known earlier and goes back to the independent works [26] of Hadwiger and [27] of Hensley). Ball's upper bound famously gave a simple counterexample to the Busemann–Petty problem in dimensions $n \geq 10$ (see [5,14,24,32]). For many other ensuing works, see for instance [2,6,8,10,29,30,31,37,38,41,43,44,49], as well as the comprehensive surveys [40,50]. Both bounds for cube slicing are sharp: the lower one is uniquely attained at hyperplanes orthogonal to the vectors $e_i$, $1 \leq i \leq n$, and the upper one is uniquely attained at hyperplanes orthogonal to the vectors $e_i \pm e_j$, $1 \leq i < j \leq n$, where $e_1, \dots, e_n$ are the standard basis vectors in $\mathbb{R}^n$. However, quantitative stability results have been developed only recently: for every hyperplane $a^\perp$ in $\mathbb{R}^n$ orthogonal to a unit vector $a$ with $a_1 \geq a_2 \geq \cdots \geq a_n \geq 0$, we have the two-sided refinement (1) of Ball's bounds with explicit deficit terms.

Research was supported in part by the NSF grant DMS-2246484.
Here and throughout this paper, $|\cdot|$ denotes the standard Euclidean norm on $\mathbb{R}^n$. A local version of the upper bound was established by Melbourne and Roberto in [36] (with applications in information theory), whilst the stated lower and upper bounds are from [16] (with the numerical value of the constant in the upper bound from [22], where it is instrumental in extending Ball's cube slicing to the $\ell_p$ balls for $p > 10^{15}$). Distributional stability of Ball's inequality has very recently been studied in [23].
The goal of this paper is to derive a complex analogue of (1). Across convex geometry, significant efforts have been made to extend many fundamental and classical results well known from real spaces to complex ones. For example, see [3,7,9,11,12,18,19,21,30,31,33,34,35] (sometimes the complex counterparts turn out to be "easier", e.g. [28,39,46], but for certain problems, on the contrary, satisfactory results have been elusive, e.g. [48]). A counterpart of Ball's cube slicing in $\mathbb{C}^n$ was discovered by Oleszkiewicz and Pełczyński in [42]. Let $D$ be the unit disc in the complex plane and let $D^n$ be the polydisc in $\mathbb{C}^n$, the complex analogue of the cube. For $z, w \in \mathbb{C}^n$, we let, as usual, $\langle z, w\rangle = \sum_{j=1}^n z_j \bar{w}_j$ be their standard inner product. Oleszkiewicz and Pełczyński proved that for every (complex) hyperplane $a^\perp = \{z \in \mathbb{C}^n,\ \langle z, a\rangle = 0\}$ orthogonal to the vector $a$ in $\mathbb{C}^n$, we have
$$1 \leq \frac{\mathrm{vol}_{2n-2}(D^n \cap a^\perp)}{\pi^{n-1}} \leq 2. \tag{2}$$
Interestingly, this is in fact formally a generalisation of Ball's result (see Szarek's argument in Remark 4.4 in [42]). The lower bound is attained uniquely at hyperplanes orthogonal to the standard basis vectors $e_j$, $1 \leq j \leq n$; the upper one is attained uniquely at hyperplanes orthogonal to the vectors $e_j + e^{it} e_k$, $1 \leq j < k \leq n$, $t \in \mathbb{R}$. In this setting, we identify $\mathbb{C}^n$ with $\mathbb{R}^{2n}$ via the standard embedding, and $\mathrm{vol}$ is always Lebesgue measure on the appropriate subspace, whose dimension is usually indicated in the subscript (for instance, here $a^\perp$ becomes a subspace of $\mathbb{R}^{2n}$ of real dimension $2n-2$). Note that, in particular, $\mathrm{vol}_{2n-2}(D^{n-1}) = \pi^{n-1}$ (obtained as the canonical section $D^n \cap (1, 0, \dots, 0)^\perp$), which is the normalising factor above.
Thanks to the symmetries of $D^n$ under permutations of the coordinates as well as complex rotations along the axes, $z \mapsto (e^{it_1}z_1, \dots, e^{it_n}z_n)$, it suffices to consider real nonnegative vectors with, say, nonincreasing components. The main result of this paper is the following dimension-free stability result which refines (2). It is natural to introduce the normalised section function
$$A_n(a) = \frac{\mathrm{vol}_{2n-2}(D^n \cap a^\perp)}{\pi^{n-1}}.$$

Theorem 1. For $n \geq 2$ and every unit vector $a$ in $\mathbb{R}^n$ with $a_1 \geq a_2 \geq \cdots \geq a_n \geq 0$, we have the lower bound (3), refining the left-hand side of (2) by a deficit term, as well as the upper bound
$$A_n(a) \leq 2 - c\,\min\Big\{\Big|a - \tfrac{e_1+e_2}{\sqrt 2}\Big|,\ \sum_{j=1}^n a_j^4\Big\}, \tag{4}$$
with an absolute constant $c > 0$.

We do not try to optimise the numerical values of the constants involved (for the sake of clarity). Before we move on to the proof, several remarks are in order.
Remark 1. In contrast to the real case, the deficit term in our upper bound (4) is more complicated and features the minimum of two quantities: the distance to the unique extremiser and the (fourth power of the) $\ell_4$ norm of $a$. The latter accounts for the fact that
$$\lim_{n \to \infty} A_n\Big(\tfrac{1}{\sqrt n}, \dots, \tfrac{1}{\sqrt n}\Big) = 2.$$
In other words, curiously, polydisc slicing admits an additional asymptotic (Gaussian) extremiser $(\frac{1}{\sqrt n}, \dots, \frac{1}{\sqrt n})^\perp$, $n \to \infty$. In the real case, no such asymptotic extremiser is present.

Remark 2. Up to the absolute constants, (4) is sharp, in that the asymptotic behaviour of the right-hand side as a function of the quantities involved, $|a - \frac{e_1+e_2}{\sqrt 2}|$ and $\sum_{j=1}^n a_j^4$, is best possible. Indeed, for the former quantity, consider the vectors $a = \big(\sqrt{\tfrac12+\epsilon}, \sqrt{\tfrac12-\epsilon}, 0, \dots, 0\big)$ and note that, by combining (5) and Lemma 2, we get $A_n(a) = \big(\tfrac12+\epsilon\big)^{-1} = 2 - 4\epsilon + O(\epsilon^2)$ as $\epsilon \to 0^+$, whilst the right-hand side of (4) is $2 - \Theta(\epsilon)$.
For the latter quantity, testing with $a = \big(\tfrac{1}{\sqrt n}, \dots, \tfrac{1}{\sqrt n}\big)$, for which $\sum_{j=1}^n a_j^4 = \tfrac{1}{n}$, shows that the dependence on $\sum_{j} a_j^4$ is also of the right order.

A sketch of our approach
The lower bound is established by quantifying a simple convexity argument leading to the main term (akin to the real case, as done in [16]).
For the upper bound, we principally follow the strategy developed in [16] (see also Section 5 in [22]). However, the presence of the asymptotic extremiser (see Remark 1) is a new obstacle. Consequently, there are several entirely different arguments, depending on the hyperplane $a^\perp$ (in what follows, we always assume, as in Theorem 1, that $a$ is a unit vector with nonnegative nonincreasing components). Here is a rough roadmap.
(a) When $a$ is close to the extremiser $\frac{e_1+e_2}{\sqrt 2}$, we reapply polydisc slicing in a lower dimension to a portion of $a$, which yields a self-improvement and gives a quantitative deficit (this is largely inspired by a similar phenomenon for Szarek's inequality from [47], discovered in [20]). This part crucially uses probabilistic insights put forward in [15,16,17] and perhaps constitutes the most subtle point of the whole analysis.

Ancillary results and tools
Since the proof considers several cases that require different approaches and tools, this section, which collects the auxiliary results, is split into several subsections.

3.1. The role of independence. Our approach relies, to a large extent, on the following probabilistic formula for the volume of sections of the polydisc, obtained in [12] by Fourier-analytic means (see also [17] for a direct derivation): for every $n \geq 1$ and every unit vector $a$ in $\mathbb{R}^n$, we have
$$A_n(a) = \mathbb{E}\Big|\sum_{j=1}^n a_j \xi_j\Big|^{-2}, \tag{5}$$
where $\xi_1, \xi_2, \dots$ are independent random vectors uniform on the unit sphere $S^3$ in $\mathbb{R}^4$.
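To illustrate (5) in combination with Lemma 2 below, one can estimate $A_2(0.8, 0.6)$ by simulation; the predicted value is $\max(0.8, 0.6)^{-2} = 1.5625$. A minimal Monte Carlo sketch (the sample size and seed are arbitrary choices, not from the paper):

```python
import numpy as np

def unit_sphere_S3(rng, size):
    # Uniform random points on the unit sphere S^3 in R^4: normalised Gaussians.
    g = rng.standard_normal((size, 4))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def section_mc(a, n_samples=400_000, seed=0):
    # Monte Carlo estimate of E |sum_j a_j xi_j|^{-2}, i.e. formula (5).
    rng = np.random.default_rng(seed)
    s = np.zeros((n_samples, 4))
    for a_j in a:
        s += a_j * unit_sphere_S3(rng, n_samples)
    return float(np.mean(np.sum(s * s, axis=1) ** -1.0))

est = section_mc([0.8, 0.6])
print(est)  # Lemma 2 predicts max(0.8, 0.6)^{-2} = 1.5625
```

Note that for $a_1 \ne a_2$ the integrand is bounded (here $|a_1\xi_1 + a_2\xi_2|^2 \geq (a_1-a_2)^2$), so the plain Monte Carlo average is well behaved.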
To leverage independence and rotational symmetry in (5), we note the following general observation.
Lemma 2. Let $d \geq 3$ and let $X$ and $Y$ be independent rotationally invariant random vectors in $\mathbb{R}^d$. Then
$$\mathbb{E}|X+Y|^{2-d} = \mathbb{E}\big[\max(|X|, |Y|)^{2-d}\big].$$
In particular, in $\mathbb{R}^4$,
$$\mathbb{E}|X+Y|^{-2} = \mathbb{E}\big[\max(|X|, |Y|)^{-2}\big] = \mathbb{E}\big[\min\{|X|^{-2}, |Y|^{-2}\}\big]. \tag{6}$$
The special case $d = 3$ appeared as Lemma 6.6 in [16], whereas for the general case we follow the argument from Remark 15 of [17] (see also Corollary 17 therein).
Proof of Lemma 2. Let $\xi_1, \xi_2$ be independent random vectors uniform on the unit sphere $S^{d-1}$ in $\mathbb{R}^d$. By rotational invariance, $X$ and $Y$ have the same distributions as $|X|\xi_1$ and $|Y|\xi_2$. Conditioning on the values of the magnitudes $|X|$ and $|Y|$, it thus suffices to show that for every $a_1, a_2 \geq 0$, we have
$$\mathbb{E}|a_1\xi_1 + a_2\xi_2|^{2-d} = \max(a_1, a_2)^{2-d}.$$
By homogeneity and symmetry, this will follow from the special case $a_1 = 1$, $a_2 = t \in (0, 1)$. By rotational invariance, we have (in the sense of the usual Lebesgue surface integral)
$$h(t) = \mathbb{E}|e_1 + t\xi_2|^{2-d} = \frac{1}{\sigma(S^{d-1})}\int_{S^{d-1}} |e_1 + tx|^{2-d}\,\mathrm{d}\sigma(x),$$
and our goal is to argue that this equals $1$ for all $0 < t < 1$. Let $F(x) = |x|^{2-d}$. On the sphere, for every $x \in S^{d-1}$, $x$ is the outer normal, hence the divergence theorem yields
$$h'(t) = \frac{1}{\sigma(S^{d-1})}\int_{S^{d-1}} \langle \nabla F(e_1 + tx), x\rangle\,\mathrm{d}\sigma(x) = \frac{t^{1-d}}{\sigma(S^{d-1})}\int_{tB_2^d} \Delta F(e_1 + y)\,\mathrm{d}y = 0,$$
since $\Delta F = 0$ ($e_1 + tx$ never vanishes for $x \in B_2^d$, $0 < t < 1$). Noting that clearly $h(0) = 1$, this finishes the proof.
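For $d = 4$, the conclusion $h(t) = 1$ can also be confirmed numerically: the inner product of a uniform point on $S^3$ with a fixed unit vector has density $\frac{2}{\pi}\sqrt{1-u^2}$ on $[-1, 1]$, so $h(t)$ reduces to a one-dimensional integral. A quadrature sketch (the grid size is an arbitrary choice):

```python
import numpy as np

def h(t, n_grid=200_001):
    # h(t) = E |e_1 + t*xi|^{2-d} with d = 4 and xi uniform on S^3:
    # |e_1 + t*xi|^2 = 1 + 2*t*u + t^2, where u = <xi, e_1> has
    # density (2/pi) * sqrt(1 - u^2) on [-1, 1].
    u = np.linspace(-1.0, 1.0, n_grid)
    integrand = (2.0 / np.pi) * np.sqrt(np.clip(1.0 - u * u, 0.0, None)) \
        / (1.0 + 2.0 * t * u + t * t)
    # Trapezoidal rule on the grid.
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(u)) / 2.0)

vals = [h(t) for t in (0.1, 0.5, 0.9)]
print(vals)  # each value should be close to 1
```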
Oleszkiewicz and Pełczyński's approach crucially relies on the fact that $\sup_{s \geq 2}\Psi(s) = 1$, and that the supremum is attained at $s = 2$ as well as when $s \to \infty$. Implicit in their proof of this subtle claim is the following quantitative version, crucial for us.
Lemma 3. For the special function $\Psi$ defined in (9), we have the following quantitative strengthening of (10).

Proof. When $2 \leq s \leq \frac{8}{3}$, we use the bound shown in [42] (proof of Proposition 1.1, Case (II), p. 290). It remains to apply an elementary estimate to $v = \frac{s}{2} - 1 \in [0, \frac{1}{3}]$. When $s \geq \frac{8}{3}$, it is shown in [42] (proof of Proposition 1.1, Case (I), p. 288) that the corresponding bound holds. It remains to note that the function in the bracket is increasing in $s$ on $[\frac{8}{3}, \infty)$, and thus it is at least its value at $s = \frac{8}{3}$, which is greater than $\frac{1}{151}$.
3.3. Lipschitz property of the section function and complex intersection bodies. In perfect analogy with the real case, there is a complex analogue of the classical Busemann theorem from [13], which says that $x \mapsto \frac{|x|}{\mathrm{vol}_{n-1}(K \cap x^\perp)}$ is a norm for every symmetric convex body $K$ in $\mathbb{R}^n$.

Theorem 4 (Koldobsky–Paouris–Zymonopoulou, [34]). Let $K$ be a complex symmetric convex body in $\mathbb{C}^n$, that is, $K$ is a convex body in $\mathbb{R}^{2n}$ with $e^{it}z \in K$ for every $z \in K$ and $t \in \mathbb{R}$. Then $x \mapsto |x|\,\mathrm{vol}_{2n-2}(K \cap x^\perp)^{-1/2}$ defines a norm on $\mathbb{C}^n$.

We use this result to establish a Lipschitz property of the section function $A_n$.

Lemma 5. For unit vectors $a, b$ in $\mathbb{R}^n$, we have $|A_n(a) - A_n(b)| \leq 8|a - b|$.

Proof. Let $K = (\pi^{-1/2} D)^n$ be the volume-one polydisc, so that $A_n(a) = \mathrm{vol}_{2n-2}(K \cap a^\perp)$. Then, by Theorem 4, $N(x) = |x| A_n(x/|x|)^{-1/2}$ is a norm, and thus for unit vectors $a$ and $b$ we have $|N(a) - N(b)| \leq N(a - b)$. By the definition of $N$, the right-hand side is at most $|a - b|$, and using the polydisc slicing inequalities, that is, $1 \leq A_n(x) \leq 2$ for every unit vector $x$, the result follows.
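The last step in the proof of Lemma 5 can be spelled out as follows (a routine computation; the constant $8$ is simply what this crude argument yields). Writing $u_a = A_n(a)^{-1/2}$ and $u_b = A_n(b)^{-1/2}$, the polydisc slicing inequalities give $u_a, u_b \in [\tfrac{1}{\sqrt 2}, 1]$, and since $N$ is a norm, $|u_a - u_b| = |N(a) - N(b)| \leq N(a-b) \leq |a - b|$. Hence

```latex
|A_n(a) - A_n(b)|
  = \left| u_a^{-2} - u_b^{-2} \right|
  = |u_a - u_b| \cdot \frac{u_a + u_b}{u_a^2\, u_b^2}
  \le |a - b| \cdot \frac{2}{1/4}
  = 8\,|a - b| .
```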
3.4. Berry–Esseen bound. Finally, we will employ a Berry–Esseen-type bound with an explicit constant for random vectors in $\mathbb{R}^4$. Recently, Raič obtained such a result in arbitrary dimension.
Theorem 6 (Raič, [45]). Let $X_1, \dots, X_n$ be independent mean-zero random vectors in $\mathbb{R}^d$ such that $\sum_{j=1}^n X_j$ has the identity covariance matrix. Let $G$ be a standard Gaussian random vector in $\mathbb{R}^d$. Then
$$\sup_{A}\Big|\mathbb{P}\Big(\sum_{j=1}^n X_j \in A\Big) - \mathbb{P}(G \in A)\Big| \leq \big(42\, d^{1/4} + 16\big)\sum_{j=1}^n \mathbb{E}|X_j|^3,$$
where the supremum is over all Borel convex sets $A$ in $\mathbb{R}^d$.
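It is instructive to see how the normalisation in Theorem 6 plays out for the sums appearing in (5). The following is a sketch under the choice $X_j = 2a_j\xi_j$ (this particular scaling is ours; the factor $2$ compensates for $\operatorname{Cov}(\xi_j) = \tfrac14 I$, which holds since $\mathbb{E}|\xi_j|^2 = 1$ and $\xi_j$ is isotropic):

```latex
\sum_{j=1}^n \operatorname{Cov}(2a_j\xi_j)
  = \sum_{j=1}^n 4a_j^2 \cdot \tfrac14 I
  = \Big(\sum_{j=1}^n a_j^2\Big) I = I,
\qquad
\sum_{j=1}^n \mathbb{E}\,|2a_j\xi_j|^3
  = 8\sum_{j=1}^n a_j^3
  \le 8\,a_1\sum_{j=1}^n a_j^2 = 8\,a_1 ,
```

so the error in the Gaussian approximation over convex sets is controlled, up to a dimensional constant, by the largest weight $a_1$; this is why the Gaussian heuristic is effective precisely when all weights are small.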

Proof of Theorem 1
In this section we present the proof of Theorem 1. We recall that $a$ is assumed to be a unit vector in $\mathbb{R}^n$ such that $a_1 \geq a_2 \geq \cdots \geq a_n \geq 0$.
We begin with a short proof of the lower bound (3). First note that, using (5) and the convexity of $t \mapsto t^{-1}$ (Jensen's inequality),
$$A_n(a) = \mathbb{E}\Big|\sum_{j=1}^n a_j\xi_j\Big|^{-2} \geq \Big(\mathbb{E}\Big|\sum_{j=1}^n a_j\xi_j\Big|^{2}\Big)^{-1} = \Big(\sum_{j=1}^n a_j^2\Big)^{-1} = 1,$$
which gives the sharp lower bound without the error term. This can of course easily be improved upon (the same idea is used in the proof of Theorem 6.1 in [16]).
We move on to the upper bound (4). Its proof requires considering multiple cases depending on the size of the two largest coordinates of the vector $a$.
For the convenience of the reader we include the following pictorial guide to the proof.
We consider six cases. The labels Lk correspond to the lemmas in which a given case is resolved. In Section 4.1 we explain the case where the two largest coordinates are near $\frac{1}{\sqrt 2}$, corresponding to L7 in the picture above. In Section 4.2 we explain the bound when all coordinates are below $\sqrt{3/8}$, i.e. we cover the region L8. In Section 4.3 we study the case where $a_1$ is below $\frac{1}{\sqrt 2}$, which we examine in two regimes depending on the value of $a_2$, corresponding to L9 and L10. We address the case when $a_1$ is only slightly above $\frac{1}{\sqrt 2}$, marked as L12, in Section 4.4. Finally, in Section 4.5 we complete the picture by settling the case when $a_1$ is large (L13). We put these bounds together, proving the theorem, in Section 4.6.

4.1. Two largest coordinates are close to $\frac{1}{\sqrt 2}$.
When $n = 2$, from Lemma 2 we have $A_2(a) = \max(a_1, a_2)^{-2} = a_1^{-2}$, and we check that this is at most $2 - \delta(a)$. One way to verify this is to write $a_1 = \cos(\frac{\pi}{4} - 2t)$ with $t \in [0, \frac{\pi}{8}]$ and reduce the desired inequality to an elementary trigonometric estimate. Note that on this interval,
$$\sin(4t) = 4\sin t\cos t\cos(2t) \geq 4\sin t\cos^2(2t) \geq 2\sin t,$$
using $\cos t \geq \cos(2t)$ and $\cos^2(2t) \geq \frac12$. Hence, Theorem 1 holds when $n = 2$. We can assume from now on that $n \geq 3$.

Our goal here is to establish Theorem 1 for vectors $a$ which are near the extremiser. This relies on a self-improving feature of the polydisc slicing result.

Proof. We let $X = a_1\xi_1 + a_2\xi_2$ and $Y = \sum_{j=3}^n a_j\xi_j$. Then, using (5), (6) and the concavity of $t \mapsto \min\{\alpha, t\}$, we obtain the corresponding upper bound on $A_n(a)$.
Using (6) again, we bound $\mathbb{E}|X|^{-2}$. It will be more convenient to work with the rotated variables $u_1 = \frac{a_1 + a_2}{\sqrt 2}$ and $u_2 = \frac{a_1 - a_2}{\sqrt 2}$, and to express the resulting estimate in terms of $u_1, u_2$.
Note also that $\langle \xi_1, \xi_2\rangle$ has the same distribution as $\theta$, a random variable with density $\frac{2}{\pi}(1 - x^2)^{1/2}$ on $[-1, 1]$ (the distribution of $\langle \xi_1, \xi_2\rangle$ is the same as that of $\langle \xi_1, e_1\rangle$). We will use this representation in what follows.
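As a consistency check on this density (a throwaway computation): it integrates to $1$, and its second moment is $\frac14$, matching $\mathbb{E}\langle \xi_1, e_1\rangle^2 = \frac14\,\mathbb{E}|\xi_1|^2 = \frac14$ for $\xi_1$ uniform on $S^3$:

```python
import numpy as np

# The density (2/pi) * sqrt(1 - x^2) on [-1, 1] of <xi_1, xi_2> should
# integrate to 1, and its second moment should equal 1/4, consistent
# with Cov(xi) = I/4 for xi uniform on S^3 in R^4.
x = np.linspace(-1.0, 1.0, 2_000_001)
f = (2.0 / np.pi) * np.sqrt(np.clip(1.0 - x * x, 0.0, None))

def trap(y):
    # Trapezoidal rule on the fixed grid x.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mass = trap(f)
second_moment = trap(x * x * f)
print(mass, second_moment)  # close to 1 and 0.25
```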
Therefore, using that $\theta_0 < 1$, we estimate the probability of the relevant event. Putting this together, we claim that the resulting right-hand side, as a function of $u_2$, is decreasing: one checks that its derivative is negative. Setting $u_2 = 0$ in (11) thus gives the desired bound.

We note for future reference that in the complementary case to the one considered in Lemma 7, $a_2$ is in particular bounded away from $\frac{1}{\sqrt 2}$.

4.2. All weights are small. When all weights are small and bounded away from $\frac{1}{\sqrt 2}$, we can rely on the Fourier-analytic bound (8), because Lemma 3 guarantees savings across all weights. This case results in the term $\|a\|_4^4$ in (4), which quantifies the distance to the asymptotic extremiser $a = (\frac{1}{\sqrt n}, \dots, \frac{1}{\sqrt n})$.

Proof. By the assumption, $a_k^{-2} \geq \frac{8}{3}$ for all $k$; thus the claim follows using (8) and (10). The numerical inequality $2e^{-x} \leq 2 - \frac{151}{76}x$ for $0 \leq x \leq \frac{1}{151}$ finishes the proof.

When $a_2$ is bounded away from $0$, this allows us to conclude that $A_n(a)$ is bounded away from $2$. Otherwise, we use the Gaussian approximation for $\sum_{k=2}^n a_k\xi_k$. A toy case illustrating why this works is the vector with all coordinates but the first equal, for large $n$. Then, if $G$ denotes a standard Gaussian random vector in $\mathbb{R}^4$ independent of the $\xi_j$, the central limit theorem suggests that $A_n(a)$ is well approximated by the corresponding Gaussian expectation (for a computation of this expectation, see (14) below). Of course, to make this heuristic quantitative, we shall use a Berry–Esseen-type bound, Raič's Theorem 6.
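The elementary numerical inequality $2e^{-x} \leq 2 - \frac{151}{76}x$ on $[0, \frac{1}{151}]$ invoked above is easy to confirm on a grid (a throwaway check; note that the two sides agree at $x = 0$):

```python
import numpy as np

# Verify 2*exp(-x) <= 2 - (151/76)*x on [0, 1/151], with equality at x = 0.
x = np.linspace(0.0, 1.0 / 151.0, 10_001)
lhs = 2.0 * np.exp(-x)
rhs = 2.0 - (151.0 / 76.0) * x
assert np.all(lhs <= rhs + 1e-12)
print(float(np.max(lhs - rhs)))  # maximum gap is 0, attained at x = 0
```

The inequality holds with room to spare: $2(1 - e^{-x}) \geq 2x - x^2 \geq \frac{151}{76}x$ as long as $x \leq 2 - \frac{151}{76} = \frac{1}{76}$, which covers the whole range $[0, \frac{1}{151}]$.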
Thus we break the analysis now into two further subcases.

Proof. We let $Y = \sum_{j=2}^n a_j\xi_j$ and observe that, by (5) and (6), $A_n(a)$ is bounded accordingly. Note that $Y$ has covariance matrix $\frac{1 - a_1^2}{4}\,I$. Let $G$ denote a standard Gaussian random vector in $\mathbb{R}^4$. Since $|G|^2$ has density $\frac{x}{4}e^{-x/2}$, $x > 0$ (the $\chi^2(4)$ distribution), the relevant Gaussian expectation can be computed explicitly. Putting these together yields the claimed estimate. It can be checked that the first term is a decreasing function of $a_1^2$; consequently, the bound follows using $\frac{3}{8} \leq a_1^2$.

Proof. Note that $\Psi(a_k^{-2}) \leq 1$ for each $k$, as guaranteed by (10), since $a_k^{-2} \geq 2$ for each $k$. Using this (for all $k$ except $k = 2$) in conjunction with (5) gives the first estimate. Furthermore, again by (10), we obtain an additional saving, and since the relevant quantity is at most $a_1^2 - \frac12$, we conclude.
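The $\chi^2(4)$ computation behind the Gaussian heuristic can be made explicit. Since $\operatorname{Cov}(\xi_j) = \tfrac14 I$, for $a = (\tfrac{1}{\sqrt n}, \dots, \tfrac{1}{\sqrt n})$ the sum $\sum_j a_j\xi_j$ is approximately distributed as $G/2$, and, using the density of $|G|^2$ above, $\mathbb{E}|G/2|^{-2} = 4\,\mathbb{E}|G|^{-2} = 4\int_0^\infty \frac{1}{x}\cdot\frac{x}{4}e^{-x/2}\,\mathrm{d}x = 2$, in line with the asymptotic extremiser attaining the upper bound $2$ in the limit. A quadrature sketch (the truncation at 60 is an arbitrary choice):

```python
import numpy as np

# E |G|^{-2} for a standard Gaussian G in R^4: |G|^2 ~ chi^2(4) with
# density (x/4) * exp(-x/2) on (0, inf), so
# E |G|^{-2} = int_0^inf (1/x) * (x/4) * exp(-x/2) dx = 1/2,
# hence E |G/2|^{-2} = 4 * E |G|^{-2} = 2.
x = np.linspace(1e-9, 60.0, 2_000_001)
integrand = ((x / 4.0) * np.exp(-x / 2.0)) / x  # = exp(-x/2) / 4
e_inv_sq = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2.0)
print(4.0 * e_inv_sq)  # close to 2
```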

(b) When $a$ has all coordinates well below $\frac{1}{\sqrt 2}$, we employ Fourier-analytic bounds and quantitative versions of the Oleszkiewicz–Pełczyński integral inequality for the Bessel function. This results in the $\ell_4$ norm quantifying the improvement near the asymptotic extremiser. (c) When $a$ has one coordinate around $\frac{1}{\sqrt 2}$ and the others small, $a$ is neither close to the extremiser $\frac{e_1+e_2}{\sqrt 2}$, nor are the Fourier-analytic bounds applicable. We rely on probabilistic insights again and use a Berry–Esseen-type bound. (d) When $a$ has a coordinate barely above $\frac{1}{\sqrt 2}$, we use a Lipschitz property of the normalised section function and reduce the analysis to the previous cases. (e) When $a$ has a coordinate well above $\frac{1}{\sqrt 2}$, we use a projection argument.
4.3. Second largest weight is bounded away from 0. The goal here is to treat the case when $a_2$ is not too small. Note that in the following lemma, instead of assuming that (13) holds, we assume slightly less, namely that $a_2 \leq \frac{1 - 10^{-5}}{\sqrt 2}$. We will use this in Section 4.4.

Lemma 10. We have $A_n(a) \leq 2 - 10^{-19}$, provided that