An optimal bound for the ratio between ordinary and uniform exponents of Diophantine approximation

We provide a lower bound for the ratio between the ordinary and uniform exponents of both simultaneous Diophantine approximation and Diophantine approximation by linear forms in any dimension. This lower bound was conjectured by Schmidt and Summerer and was already known in dimensions $2$ and $3$. The lower bound is reached at the regular systems arising in the context of parametric geometry of numbers, and is thus optimal.


Introduction
In the 50's, V. Jarník [8,9,10] considered exponents of Diophantine approximation, and in particular the ratio between the ordinary and the uniform exponent. An optimal lower bound, expressed as a function of the uniform exponent, was established for simultaneous approximation to two real numbers and for one linear form in two variables. The question was reconsidered recently by different authors [13,17,18,27,7,5]. The optimality of V. Jarník's inequalities for two numbers was shown by M. Laurent [13]. The inequality for simultaneous approximation to three real numbers was obtained by the second named author [17]. Introducing parametric geometry of numbers [27,26], W. M. Schmidt and L. Summerer recently developed a new method that yields the optimal lower bounds for approximation to three numbers (both for simultaneous approximation and for one linear form in three variables), and improves the general lower bound in any dimension. They conjectured in this context that the general lower bound in the problem of approximation to $n$ real numbers arises from so-called regular systems. The goal of the present paper is to prove this conjecture. To do this we use Schmidt's inequality on heights [24] applied to a well-chosen subsequence of best approximation vectors. Our main result is stated in Theorem 1 below. The optimality of our bound follows from a recent breakthrough paper by D. Roy [22].
Exponents of Diophantine approximation give more detailed information about approximation to $\theta$ in the case when $\theta$ admits approximations better than those provided by Dirichlet's Schubfachprinzip. The ordinary exponent deals with the question whether Dirichlet's Schubfachprinzip can be improved for approximation vectors of arbitrarily large size $t$, while the uniform exponent deals with the question whether it can be improved for every sufficiently large upper bound $t$ on the size of approximation vectors. The aim of this paper is to provide a lower bound for the ratios $\lambda(\theta)/\hat\lambda(\theta)$ and $\omega(\theta)/\hat\omega(\theta)$ as a function of $\hat\lambda(\theta)$ and $\hat\omega(\theta)$ respectively, in any dimension. In dimension $n = 1$, simultaneous approximation and approximation by one linear form coincide. Khintchine [12] observed that the uniform exponent of an irrational $\theta$ always takes the value $1$, and it follows from Dirichlet's Schubfachprinzip that the ordinary exponent satisfies $\omega(\theta) = \lambda(\theta) \ge 1 = \hat\omega(\theta) = \hat\lambda(\theta)$. In dimension $n = 2$, Jarník proved in [9,10] the inequalities
$$\frac{\lambda(\theta)}{\hat\lambda(\theta)} \;\ge\; \frac{\hat\lambda(\theta)}{1-\hat\lambda(\theta)}, \qquad \frac{\omega(\theta)}{\hat\omega(\theta)} \;\ge\; \hat\omega(\theta) - 1.$$
These inequalities are optimal by a result of M. Laurent [13]. In [17], Moshchevitin proved the optimal bound for simultaneous approximation in dimension $n = 3$. The proof is based on the consideration of a special pattern of best approximation vectors. This pattern was discovered in an earlier paper by D. Roy [23], where another problem was considered. We discuss this pattern in Section 3.1 when explaining our proof in low dimensions.
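For a single number, Dirichlet's Schubfachprinzip can be checked by brute force. The following sketch is an illustration only, not part of the paper; the choice $\theta = \sqrt{2}$ is arbitrary. For each bound $t$ it searches for a denominator $q \le t$ with $|q\theta - p| \le 1/t$, which the pigeonhole argument guarantees to exist.

```python
import math

def dirichlet_witness(theta, t):
    """Search 1 <= q <= t for |q*theta - p| <= 1/t (Dirichlet's Schubfachprinzip)."""
    for q in range(1, t + 1):
        err = abs(q * theta - round(q * theta))
        if err <= 1.0 / t:
            return q, err
    return None

theta = math.sqrt(2)
# Dirichlet guarantees a witness for every bound t
for t in range(1, 200):
    assert dirichlet_witness(theta, t) is not None
```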
Schmidt and Summerer provided an alternative proof using parametric geometry of numbers in [28], and found the corresponding bound for approximation by one linear form in $3$ variables. A simple proof of this bound was given in [18]. In [10], Jarník also provided a lower bound in arbitrary dimension $n \ge 2$.
Using their new tools of parametric geometry of numbers, Schmidt and Summerer [26] provided the first general improvement valid on the whole admissible interval of values of the uniform exponents $\hat\omega$ and $\hat\lambda$.
In [28] Schmidt and Summerer conjectured that, as in dimension $n = 3$, the general optimal lower bound is reached at regular systems. In this paper we show that this conjecture holds. Let us first introduce some notation.
For given $n \ge 1$ and parameters $\alpha^* \ge n$ and $1/n \le \alpha < 1$, we consider the polynomials $R_{n,\alpha}(x)$ and $R^*_{n,\alpha^*}(x)$. Denote by $G(n,\alpha)$ the unique positive real root of $R_{n,\alpha}(x)$ and by $G^*(n,\alpha^*)$ the unique positive root of $R^*_{n,\alpha^*}(x)$. Some further necessary properties of these polynomials are discussed in Subsection 2.4 below. Now we are able to formulate the main result of our paper.
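The displayed definitions of $R_{n,\alpha}$ and $R^*_{n,\alpha^*}$ are lost in this copy, so the following is only a generic sketch of how a unique positive root such as $G(n,\alpha)$ can be computed once the polynomial is known. The cubic used here is a hypothetical stand-in, not the actual $R_{n,\alpha}$; it has a single sign change in its coefficients, so by Descartes' rule it has exactly one positive root, which bisection locates.

```python
def positive_root(p, hi=1.0, tol=1e-12):
    """Bisection for the unique positive root of p, assuming p(0) < 0
    and a single sign change of p on the positive axis."""
    while p(hi) <= 0:       # grow the bracket until p changes sign
        hi *= 2
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical stand-in polynomial with one sign change, hence a unique positive root.
alpha = 0.5
p = lambda x: x**3 - alpha * (x**2 + x + 1)
g = positive_root(p)
assert abs(p(g)) < 1e-8
```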
The main part of Theorem 1 is the lower bound. The proof uses determinants of best approximation vectors, following the idea of [17]. It relies deeply on an inequality of Schmidt [24] applied inductively to a well-chosen subsequence of best approximation vectors. The second part of Theorem 1 is a consequence of the parametric geometry of numbers, and is proved independently in Section 6.
In the next section, we define the main tools needed for the proof: best approximation vectors and their properties. After examples of approximation to $3$ and $4$ numbers in Section 3, we provide a proof of Theorem 1 in the important case of simultaneous approximation (Section 4). In Section 5, we explain how a hyperbolic rotation reduces the case of approximation by one linear form to the case of simultaneous approximation.

Sequences of best approximations
We denote by $(z_l)_{l\in\mathbb{N}}$ the sequence of best approximations (or minimal points) to $\theta \in \mathbb{R}^n$. This notion was introduced by Voronoi [29] for minimal points in lattices; it was first defined in our context by Rogers [20]. It has been used implicitly or explicitly in many proofs concerning exponents of Diophantine approximation. Many important properties of best approximation vectors are discussed in a survey by Chevallier [1].
Let $k \ge 1$ be an integer. Let $L$ and $N$ be two maps from $\mathbb{Z}^k$ to $\mathbb{R}_+$, where $N$ represents the height of an approximation vector in $\mathbb{Z}^k$ and $L$ represents the approximation error. We call a sequence of best approximation vectors with respect to $L$ and $N$ any sequence $(z_l)_{l\ge 0} \in (\mathbb{Z}^k)^{\mathbb{N}}$ satisfying the defining minimality conditions. In general we do not have uniqueness of such a sequence, and existence follows if $L$ reaches a minimum on every set of the form $\{z \in \mathbb{Z}^k \setminus \{0\} : N(z) \le B\}$, where $B$ is any real bound.
In the context of best approximation vectors for simultaneous Diophantine approximation to $\mathbb{Q}$-independent numbers $1, \theta_1, \ldots, \theta_n$, we set $N(z) = |q|$ and $L(z) = \max_{1\le i\le n} |q\theta_i - a_i|$ for $z = (q, a_1, \ldots, a_n) \in \mathbb{Z}^{n+1}$, and define the unique sequence of best approximation vectors $(z_l)_{l\in\mathbb{N}}$, writing $q_l = N(z_l)$ and $\xi_l = L(z_l)$. We may also assume that for every $l$ large enough one has $\xi_l \le q_{l+1}^{-\alpha}$, where $\alpha < \hat\lambda(\theta)$.
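In dimension $n = 1$ this notion can be made concrete by brute force. The following sketch is an illustration, not part of the paper: it records each denominator $q$ at which $\|q\theta\|$ reaches a new minimum. For the golden ratio, the minimal points are exactly the Fibonacci denominators.

```python
import math

def best_approximations(theta, q_max):
    """Record-setting denominators q where ||q*theta|| hits a new minimum
    (minimal points for simultaneous approximation with n = 1)."""
    records, best = [], float("inf")
    for q in range(1, q_max + 1):
        err = abs(q * theta - round(q * theta))
        if err < best:
            best = err
            records.append(q)
    return records

phi = (1 + math.sqrt(5)) / 2
# for the golden ratio the minimal points are the Fibonacci numbers
assert best_approximations(phi, 100) == [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```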
In the context of best approximation vectors for approximation by one linear form, we can set $N(z) = \max_{1\le i\le n}|q_i|$ and $L(z) = |q_1\theta_1 + \cdots + q_n\theta_n + a|$ for $z = (q_1, \ldots, q_n, a)$. In this way we define the sequence $(z_l)_{l\in\mathbb{N}}$, writing $M_l = N(z_l)$ and $L_l = L(z_l)$. Here, due to the symmetry, we may assume that $L_l > 0$. In the $\mathbb{Q}$-independent case this defines the vectors $z_l$ uniquely. By definition of best approximations, the heights $M_l$ increase while the errors $L_l$ decrease. We may also assume that $M_1$ is large enough so that for every $l \ge 1$ one has $L_l \le M_{l+1}^{-\alpha^*}$, where $\alpha^* < \hat\omega(\theta)$.
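A similar brute-force sketch for one linear form in two variables (again an illustration, with an arbitrary choice of $\theta$) tracks the record minima of $\|q_1\theta_1 + q_2\theta_2\|$ as the height grows. By construction the heights increase while the errors decrease, and the pigeonhole argument guarantees that the final error for height bound $H$ is at most about $H^{-2}$.

```python
import math

def linear_form_records(thetas, H):
    """Minimal points for one linear form in two variables:
    record minima of ||q . theta|| as the height max|q_i| grows."""
    records, best = [], float("inf")
    for h in range(1, H + 1):
        # scan integer vectors q with max-norm exactly h
        for q1 in range(-h, h + 1):
            for q2 in range(-h, h + 1):
                if max(abs(q1), abs(q2)) != h:
                    continue
                s = q1 * thetas[0] + q2 * thetas[1]
                err = abs(s - round(s))
                if err < best:
                    best = err
                    records.append((h, err))
    return records

recs = linear_form_records((math.sqrt(2), math.sqrt(3)), 100)
heights = [h for h, _ in recs]
errors = [e for _, e in recs]
assert heights == sorted(heights)                    # heights increase
assert errors == sorted(errors, reverse=True)        # errors strictly decrease
assert errors[-1] < 1e-3   # pigeonhole gives roughly H^{-2} here
```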
In the context of simultaneous Diophantine approximation, provided that $1, \theta_1, \ldots, \theta_n$ are linearly independent over $\mathbb{Q}$, it is known that a sequence of best approximation vectors ultimately spans the whole space $\mathbb{R}^{n+1}$. However, in the context of approximation by one linear form the situation is different: it may happen that the best approximation vectors span a strictly lower-dimensional subspace of $\mathbb{R}^{n+1}$. See the surveys [15,16] by Moshchevitin and the paper [1] by Chevallier for more details. Fortunately, if the best approximation vectors do not span the whole space $\mathbb{R}^{n+1}$, we get a sharper result, since $G(n,\alpha)$ is a decreasing function of $n$ (see Proposition 2). Thus, we may assume without loss of generality that in both contexts the best approximation vectors ultimately span the whole space $\mathbb{R}^{n+1}$.
Using sequences of best approximation vectors, to prove that $\lambda(\theta)/\hat\lambda(\theta) \ge G$ it is enough to show that for any given $g < G$ there exist arbitrarily large indices $k$ with $q_{k+1} \gg q_k^{g}$. Similarly, to prove that $\omega(\theta)/\hat\omega(\theta) \ge G^*$ it is enough to show that for any given $g^* < G^*$ and $\alpha^* < \hat\omega(\theta)$ there exist arbitrarily large indices $k$ with $M_{k+1} \gg M_k^{g^*}$. Here and below, the Vinogradov symbols $\ll$, $\gg$ and $\asymp$ refer to constants depending on $\theta$ but not on the index $k$. This observation relies on the expression of the exponents of Diophantine approximation in terms of best approximation vectors.
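This reduction can be sketched with the standard expressions of the exponents along best approximations, recalled here as an assumption consistent with the surveys cited above:

```latex
% Standard expressions of the exponents along best approximations
% (recalled here; not displayed in this copy of the paper):
\lambda(\theta)=\limsup_{k\to\infty}\frac{-\log \xi_k}{\log q_k},
\qquad
\hat\lambda(\theta)=\liminf_{k\to\infty}\frac{-\log \xi_k}{\log q_{k+1}}.
% If q_{k+1} \ge q_k^{g} for arbitrarily large k, then for any eps > 0
% and such k,
-\log\xi_k \;\ge\; (\hat\lambda(\theta)-\varepsilon)\log q_{k+1}
          \;\ge\; g\,(\hat\lambda(\theta)-\varepsilon)\log q_k,
% so that lambda >= g*hat(lambda); letting g tend to G yields
\frac{\lambda(\theta)}{\hat\lambda(\theta)} \;\ge\; G.
```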
The proofs in the cases of simultaneous approximation and of approximation by one linear form rely on the same geometric analysis. The idea is to take $\alpha < \hat\lambda(\theta)$ or $\alpha^* < \hat\omega(\theta)$. For an arbitrarily large index $k$, we construct a pattern of best approximation vectors in which at least one pair of successive best approximation vectors satisfies
$$q_{k+1} \gg q_k^{g} \qquad (13)$$
where $g = G(n,\alpha) < G(n,\hat\lambda)$ in the case of simultaneous approximation, and the analogous estimate with $g^* = G^*(n,\alpha^*) < G^*(n,\hat\omega)$ in the case of approximation by one linear form.
Given a sublattice $\Lambda \subset \mathbb{Z}^{n+1}$, we denote by $\det(\Lambda)$ the fundamental volume of the lattice $\Lambda$ in the linear subspace $\Lambda_{\mathbb{R}}$. We recall well-known facts about best approximation vectors and the fundamental determinants of the related lattices.

Lemma 1. Two consecutive best approximation vectors $z_i$ and $z_{i+1}$ are $\mathbb{Q}$-linearly independent and form a basis of the integer points of the rational subspace they span.
See for example [4, Lemma 2].

Lemma 2. For any $l \ge 1$, consider the lattice $\Lambda_l$ with basis $z_l, z_{l+1}$ and the lattice $\Gamma_l$ with basis $z_{l-1}, z_l, z_{l+1}$. In the context of simultaneous approximation we have estimates for their fundamental volumes. In the context of approximation by one linear form we do not directly have such estimates; in Section 5 we explain how a hyperbolic rotation provides a helpful analogue.
The proof of Lemma 2 is well known, see for example [1] or [16]. For the sake of completeness, and because we want to adapt the proof to the case of approximation by one linear form, we provide a detailed proof. The upper bounds rely on the following lemma (see [25, Lemma 1]), while the lower bound comes from Minkowski's first convex body theorem.

Lemma 3. Assume $X_1, \ldots, X_m$ are linearly independent vectors of a Euclidean space $E^n$, with coordinates $X_t = (x_{t,1}, \ldots, x_{t,n})$ for $1 \le t \le m \le n$ in some Cartesian coordinate system of $E^n$. Then $\det^2(X_1, \ldots, X_m)$ is the sum (with $\binom{n}{m}$ summands) of the squares of the absolute values of the determinants of the $(m \times m)$-submatrices of the matrix $(x_{t,j})_{1\le t\le m,\,1\le j\le n}$.
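Lemma 3 is the classical identity expressing a squared fundamental volume as a sum of squared maximal minors (a Gram determinant / Cauchy-Binet computation). It can be verified exactly on small integer examples; the vectors below are arbitrary.

```python
from itertools import combinations

def gram_det2(X1, X2):
    """det^2(X1, X2) computed via the Gram matrix."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(X1, X1) * dot(X2, X2) - dot(X1, X2) ** 2

def minor_sum(X1, X2):
    """Sum of squared 2x2 minors over all column pairs (Lemma 3, m = 2)."""
    n = len(X1)
    total = 0
    for i, j in combinations(range(n), 2):
        total += (X1[i] * X2[j] - X1[j] * X2[i]) ** 2
    return total

X1, X2 = (1, 2, 3), (0, 1, 4)
assert gram_det2(X1, X2) == minor_sum(X1, X2)  # both equal 42 here
```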
Proof of Lemma 2. The proof relies on the geometric fact that the best approximation $z_l = (q_l, a_{1,l}, a_{2,l}, \ldots, a_{n,l}) \in \mathbb{Z}^{n+1}$ satisfies (10). We first prove the upper bounds.
Consider the 2-dimensional fundamental volume of the lattice spanned by $z_l, z_{l+1}$, whose coordinates form a $2 \times (n+1)$ matrix. However, it is not convenient to apply Lemma 3 to this matrix, so we make a special choice of Cartesian coordinates. We take the system of orthogonal unit vectors $(e_0, e_1, \ldots, e_n)$ in the following way: $e_0$ is parallel to $(1, \theta_1, \ldots, \theta_n)$ and $e_1, \ldots, e_n$ are arbitrary. Then, in the new coordinates, $z_l = Z_l e_0 + \sum_{i=1}^n \Xi_{i,l} e_i$, where $Z_l \asymp q_l$ and $|\Xi_{i,l}| \ll \xi_l$. Now we consider the resulting $2 \times (n+1)$ matrix.
If $M_{i,j}$ is the $(2\times 2)$ minor with columns $i, j$, Lemma 3 gives the upper bound for $\det(\Lambda_l)$. Consider now the 3-dimensional fundamental volume $\det(\Gamma_l)$ of the lattice spanned by $z_{l-1}, z_l, z_{l+1}$, and denote by $M_{i,j,k}$ the $3\times 3$ minors of the corresponding matrix; by Lemma 3 we obtain the analogous bound. We now prove the lower bound for $\det(\Lambda_l)$. Consider the symmetric convex body $\Pi$ and the intersection $P = \Pi \cap \langle z_l, z_{l+1}\rangle_{\mathbb{R}}$. The intersection $P \cap \langle z_l, z_{l+1}\rangle_{\mathbb{Z}}$ is reduced to zero by the definition of best approximations. Hence Minkowski's first convex body theorem, applied to the two-dimensional convex set $P$, ensures that $\mathrm{area}(P) \le 4\det(\Lambda_l)$.
The intersection of $P$ with the coordinate hyperplane $\{z_0 = 0\}$ is an interval with endpoints $A$ and $B$ of length $|AB| \ge 2\xi_l$. So $P$ contains a polygon $P' \subset P$ with vertices $A, B, -z_{l+1}, z_{l+1}$. It is clear that the Euclidean distance between the point $z_{l+1}$ and the line $AB$ is greater than $q_{l+1}$. We deduce the lower bound $\mathrm{area}(P') \ge 4 q_{l+1}\xi_l$ for the area of $P'$.
Notation. We denote by a calligraphic letter $\mathcal{S}$ a set of best approximation vectors $\{z_k, \ldots, z_m\}$.
Given such a set $\mathcal{S}$, we denote by the Greek letter $\Gamma = \langle z_k, \ldots, z_m\rangle_{\mathbb{Z}}$ the lattice spanned by its elements, and by the bold Roman letter $\mathbf{S} = \langle z_k, \ldots, z_m\rangle_{\mathbb{R}}$ the rational subspace spanned over $\mathbb{R}$.
Finally, we denote by the Gothic letter $\mathfrak{S}$ the underlying lattice of integer points $\mathfrak{S} = \mathbf{S} \cap \mathbb{Z}^{n+1}$. Note that $\Gamma \subset \mathfrak{S}$. Two- and three-dimensional objects play a special role in our proofs (see e.g. Lemma 2 and Lemma 4). Therefore, when our objects are 2-dimensional we rather use the letters $\mathcal{L}$, $\Lambda$, $\mathbf{L}$ and $\mathfrak{L}$, following the notation of previous papers [5,15,16,17,18] dealing with low-dimensional cases. For certain sets $\mathcal{S}$ of consecutive best approximation vectors we will use the word pattern. For example, three successive independent best approximation vectors $z_{l-1}, z_l, z_{l+1}$ form the simplest pattern. More complicated patterns may consist of combinations of triples of successive best approximation vectors connected by certain rules.
If a pattern $\mathcal{S}$ is the union of, say, four patterns $\mathcal{S}_1, \mathcal{S}_2, \mathcal{S}_3$ and $\mathcal{S}_4$, we record this in the notation. If moreover the two patterns $\mathcal{S}_2$ and $\mathcal{S}_3$ generate the same rational subspace, this is recorded as well. Finally, if the rational subspaces generated by $\mathcal{S}_1$ and $\mathcal{S}_2$ have intersection $\mathbf{Q}$ and $\mathfrak{Q} = \mathbf{Q} \cap \mathbb{Z}^{n+1}$ is its lattice of integer points, we denote this correspondingly.

Key lemma
The following lemma plays a key role in the proof of Theorem 1.

Lemma 4. In the context of simultaneous Diophantine approximation, consider the sequence $(z_l)_{l\in\mathbb{N}}$ of best approximations to the point $\theta \in \mathbb{R}^n$. Suppose that $k > \nu$ and that the triples $(z_{\nu-1}, z_\nu, z_{\nu+1})$ and $(z_{k-1}, z_k, z_{k+1})$ consist of linearly independent consecutive best approximation vectors. Consider the associated three-dimensional lattices, and suppose that for positive $s$ and $t$ the corresponding estimate holds. Suppose finally that the indices of our vectors are large enough so that the assumption above holds for $\alpha < \hat\lambda(\theta)$.
Here $g = g(s,t)$, where the second equality comes from the fact that $w(s,t) \in (0,1)$ is the root of the corresponding equation, under the positivity condition (21). When the parameters are $s = t = 1$, this lemma directly provides the result for approximation to $3$ numbers (proof from [17]; see Subsection 3.1 for details). The parameters $s$ and $t$ are needed in higher dimension. We exhibit a range of pairs of triples of consecutive best approximation vectors, denoted by an index, satisfying the conditions of Lemma 4. The parameters $s$ and $t$ appear with values depending on the dimension and on the geometry of best approximation vectors, which need to be optimized with respect to $g(s,t)$. To prove Theorem 1, we show inductively that the optimized parameter $g(s,t)$ is the root of the polynomial $R_{n,\alpha}$ defined by (7) for $\alpha < \hat\lambda$ arbitrarily close to $\hat\lambda$.
Proof of Lemma 4. From (20) it follows that $s > w(s,t)$ and hence $g > 0$. Now we use Lemma 2. By (15) together with (16), inequality (17) can be rewritten so that one of two cases, (a) or (b), holds. Now we take into account (18). In case (a), we use $s > w(s,t)$ to deduce that $q_{\nu+1} \ll q_\nu^{g(s,t)}$. In case (b) we use condition (21). Indeed, consider the function $U_{s,t}$, which is a polynomial of degree two in $w$. By (21), $w(s,t)$ is a root of the equation $U_{s,t}(w(s,t)) = 0$, so we get (23). Now by means of (23) we conclude that $q_{k+1} \ll q_k^{g(s,t)}$.

About the values of g(s, t)
This subsection is rather technical and deals with some properties of $g(s,t)$. First of all, note that the value of $g = g(s,t)$ defined in Lemma 4 satisfies relation (24); indeed, equation (24) follows immediately from (19) and (20).
Next, we point out that if either $s$ or $t$ equals $1$, we can use (24) to express the value of the other parameter in terms of the value $g = g(s,t)$ defined in (19). Namely, we have equalities which are equivalent to (19) in the special cases $s = 1$ or $t = 1$.

About the polynomials $R_{n,\alpha}(x)$ and $R^*_{n,\alpha^*}(x)$
To continue with our exposition we need to establish some further properties of the polynomials $R_{n,\alpha}(x)$ and $R^*_{n,\alpha^*}(x)$ defined in (7) and (8).

Proposition 1. The polynomials $R_{n,\alpha}(x)$ and $R^*_{n,\alpha^*}(x)$ can be defined inductively for all $n \ge 2$. The proposition follows from easy calculations.
Recall that by $G(n,\alpha)$ we have denoted the unique positive real root of $R_{n,\alpha}(x)$, and by $G^*(n,\alpha^*)$ the unique positive root of $R^*_{n,\alpha^*}(x)$.

Proof. Inequalities (30) follow from the definitions, and this is equivalent to (31); to see this, one should take into account the right inequality from (30).
The following proposition gives an analogue of inequality (30) in the dual case. Its proof is quite similar.

Schmidt's inequality on heights
The proof of Theorem 1 relies essentially on Lemma 4 as well as on Schmidt's inequality on heights (see [24]; in fact this inequality was already used in the last section of [17]). It provides the setting to apply Lemma 4 for different parameters $(s,t)$, to be determined later.
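The displayed form of Schmidt's inequality is lost in this copy; classically (cf. [24]) it relates the heights of two rational subspaces to those of their intersection and sum, in a form such as the following (recalled here, with the caveat that the normalization in the lost display may differ):

```latex
% Schmidt's inequality on heights, classical form for rational
% subspaces A, B of R^n (recalled; normalization may differ):
H(A \cap B)\, H(A + B) \;\ll\; H(A)\, H(B),
\qquad H(\{0\}) = H(\mathbb{R}^n) = 1 .
```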
Proposition 5 (Schmidt's inequality). Let $A, B$ be two rational subspaces in $\mathbb{R}^n$; then their heights satisfy the stated inequality, where the height $H(A)$ is the fundamental volume of the lattice of integer points of $A$.

Examples: simultaneous approximation to three and four numbers
In this section, we describe in detail the proofs in the cases of simultaneous approximation to three and four numbers. An example for approximation by one linear form will be presented in Section 5.3.2.

Simultaneous approximation to three numbers
Consider $\theta \in \mathbb{R}^3$ whose coordinates, together with $1$, are linearly independent over $\mathbb{Q}$. Consider a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximation vectors to $\theta$. Recall that, as we consider simultaneous approximation, the sequence $(z_l)_{l\in\mathbb{N}}$ spans the whole space $\mathbb{R}^4$.
Lemma 5. For arbitrarily large indices $k_0$, there exist indices $k > \nu > k_0$ such that the triples of consecutive best approximation vectors $(z_{\nu-1}, z_\nu, z_{\nu+1})$ and $(z_{k-1}, z_k, z_{k+1})$ consist of linearly independent vectors and satisfy the required relations. This was proved in [17].
Denote by $\mathcal{S}_4$ the pattern of best approximation vectors described in Lemma 5 (see Figure 1). Lemma 5 ensures that the pattern $\mathcal{S}_4$ satisfies the first conditions needed to apply Lemma 4 for arbitrarily large indices.
We now explain how to obtain the pattern of best approximation vectors of Lemma 5. It is the basic step for a more general construction in higher dimension.
Proof of Lemma 5. Figure 1 may be useful to understand the construction. Consider a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximation vectors to $\theta \in \mathbb{R}^3$, and an arbitrarily large index $k_0$. Since $(z_l)_{l\ge k_0}$ spans a $4$-dimensional subspace, we can define $k$ to be the smallest index such that $\dim\langle z_{k_0}, \ldots, z_k, z_{k+1}\rangle_{\mathbb{R}} = 4$. Note that by minimality, $z_{k+1}$ is not in the $3$-dimensional subspace spanned by $(z_l)_{k_0 \le l \le k}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{k-1}, z_k, z_{k+1}$ are linearly independent. Set $\nu > k_0$ to be the largest index such that $\dim\langle z_{\nu-1}, z_\nu, \ldots, z_{k+1}\rangle_{\mathbb{R}} = 4$. Note that by maximality, $z_{\nu-1}$ is not in the $3$-dimensional subspace spanned by $(z_l)_{\nu \le l \le k+1}$.
In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{\nu-1}, z_\nu, z_{\nu+1}$ are linearly independent. Moreover, combining both observations, we deduce that all the best approximation vectors with index between $\nu$ and $k$ lie in a $2$-dimensional lattice $\Lambda$.

Figure 1: All best approximation vectors with index between $\nu$ and $k$ lie in the $2$-dimensional lattice $\Lambda$. The four bold vectors are linearly independent and span the whole space.
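The index selection in the proof above (the smallest $k$ at which the span of $z_{k_0}, \ldots, z_{k+1}$ jumps to full dimension) can be sketched with exact rank computations. The data below is synthetic, not an actual best approximation sequence; only the selection mechanism is illustrated.

```python
from fractions import Fraction

def rank(vectors):
    """Exact rank of integer vectors via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def first_jump(seq, k0, d):
    """Smallest k with dim span(z_{k0}, ..., z_{k+1}) = d, as in the proof of Lemma 5."""
    for k in range(k0, len(seq) - 1):
        if rank(seq[k0:k + 2]) == d:
            return k
    return None

# synthetic data: the fourth dimension only appears with the fifth vector
seq = [(1, 0, 0, 0), (1, 1, 0, 0), (2, 1, 1, 0), (3, 2, 1, 0), (5, 3, 2, 1)]
assert first_jump(seq, 0, 4) == 3
```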

Simultaneous approximation to four numbers
In the case of simultaneous approximation to four numbers, we select a pattern $\mathcal{S}_5$ of best approximation vectors that combines two patterns $\mathcal{S}_4$ coming from Lemma 5. This is the first step of the induction for arbitrary dimension, where we combine two patterns of lower dimension; thus, it is an enlightening example. Note that in this simple case, a proper choice of parameters was made in [5, equalities after formula (13), in the case $i(\Theta) = 1$].
Consider $\theta \in \mathbb{R}^4$ whose coordinates, together with $1$, are linearly independent over $\mathbb{Q}$, and consider a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximation vectors to $\theta$.

Lemma 6. Let $k_0$ be an arbitrarily large index. There exist indices $k_0 < r_0 < r_1 \le r_2 < r_3$ such that the following holds.

1. The triples of consecutive best approximation vectors $\mathcal{S}_{r_i} = \{z_{r_i-1}, z_{r_i}, z_{r_i+1}\}$, $0 \le i \le 3$, consist of linearly independent vectors.
2. The two triples of consecutive best approximation vectors $\mathcal{S}_{r_1}$ and $\mathcal{S}_{r_2}$ generate the same rational subspace.
3. The pairs of consecutive best approximation vectors $z_{r_0}, z_{r_0+1}$ and $z_{r_1-1}, z_{r_1}$ span the same $2$-dimensional lattice.
4. The pairs of consecutive best approximation vectors $z_{r_2}, z_{r_2+1}$ and $z_{r_3-1}, z_{r_3}$ span the same $2$-dimensional lattice.
5. Both quadruples of best approximation vectors consist of linearly independent vectors.
6. The whole space $\mathbb{R}^5$ is spanned by the pattern.

We discuss the meaning of the lemma, and apply it to the proof of the main result for simultaneous approximation to four numbers. The proof is postponed to the end of the section.
The $5$-dimensional pattern described in Lemma 6 is denoted by $\mathcal{S}_5$ until the end of the section. Note that it consists of two $4$-dimensional patterns: one given by the indices $\nu = r_0$ and $k = r_1$ in Lemma 5, and one given by the indices $\nu = r_2$ and $k = r_3$ in Lemma 5. These two $4$-dimensional patterns $\mathcal{S}_{4,0}$ and $\mathcal{S}_{4,1}$ intersect in the $3$-dimensional subspace $\mathbf{Q}$. For the pattern $\mathcal{S}_5$, Schmidt's inequality (34) provides an estimate with arbitrary $x \in (0,1)$. This means that one of two alternative determinant inequalities holds.

Figure 2: Binary tree sketching the situation described in Lemma 6.
Here, there is one parameter $x$ to optimize. In higher dimension there are many more, and we need to compute the optimal values of these parameters inductively.
Proof of Lemma 6. Figure 3 may be useful to understand the construction. Consider a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximation vectors to $\theta \in \mathbb{R}^4$, and an arbitrarily large index $k_0$. Set $r_3$ to be the smallest index such that $\dim\langle z_{k_0}, \ldots, z_{r_3}, z_{r_3+1}\rangle_{\mathbb{R}} = 5$. Note that by minimality, $z_{r_3+1}$ is not in the $4$-dimensional subspace spanned by $(z_l)_{k_0\le l\le r_3}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{r_3-1}, z_{r_3}, z_{r_3+1}$ are linearly independent and span a $3$-dimensional lattice denoted by $\Gamma_3$. Set $r_0 > k_0$ to be the largest index such that $\dim\langle z_{r_0-1}, z_{r_0}, \ldots, z_{r_3}, z_{r_3+1}\rangle_{\mathbb{R}} = 5$.
Note that by maximality, $z_{r_0-1}$ is not in the $4$-dimensional subspace spanned by $(z_l)_{r_0\le l\le r_3+1}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{r_0-1}, z_{r_0}, z_{r_0+1}$ are linearly independent and span a $3$-dimensional lattice denoted by $\Gamma_0$. Moreover, combining both observations, we deduce that the intersection of the two $4$-dimensional subspaces below is a $3$-dimensional rational subspace. Here the induction step appears: we apply the same procedure in lower dimension to the two $4$-dimensional subspaces $\mathbf{S}_{4,0}$ and $\mathbf{S}_{4,1}$. Note that this gives a proof of Lemma 5.
Set $r_1$ to be the smallest index such that the corresponding span is $4$-dimensional. Note that by minimality, $z_{r_1+1}$ is not in the $3$-dimensional subspace $\mathbf{S}_{3,0}$ spanned by $(z_l)_{r_0-1\le l\le r_1}$.
In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{r_1-1}, z_{r_1}, z_{r_1+1}$ are linearly independent and span a $3$-dimensional lattice $\Gamma_1$ included in $\mathbf{Q} = \mathbf{S}_{3,1}$. By construction, $r_0$ is already the largest index such that $\langle z_{r_0-1}, z_{r_0}, \ldots, z_{r_1-1}, z_{r_1}\rangle_{\mathbb{R}} = \mathbf{S}_{4,0}$.
Set $r_2$ to be the largest index such that the corresponding span is $4$-dimensional. Note that $z_{r_2-1}$ is not in the $3$-dimensional subspace $\mathbf{S}_{3,3}$ spanned by $(z_l)_{r_2\le l\le r_3+1}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{r_2-1}, z_{r_2}, z_{r_2+1}$ are linearly independent and span a $3$-dimensional lattice $\Gamma_2$ included in $\mathbf{Q} = \mathbf{S}_{3,1}$. By construction, $r_3$ is already the smallest index with the corresponding property, and the relevant lattice is the intersection $\mathbf{Q} \cap \mathbf{S}_{3,3} \cap \mathbb{Z}^5$ (see Lemma 1).
Note that we may have $r_1 = r_2$. The lattices $\Gamma_1$ and $\Gamma_2$ may not coincide, but they are both sub-lattices of $\mathfrak{Q}$. In Figure 3, the dashed lines should be interpreted as follows. The best approximation vectors $(z_l)_{r_0\le l\le r_1}$ generate the $2$-dimensional lattice $\Lambda_0$. The best approximation vectors $(z_l)_{r_2\le l\le r_3}$ generate the $2$-dimensional lattice $\Lambda_1$. The best approximation vectors $(z_l)_{r_1-1\le l\le r_2+1}$ generate the $3$-dimensional rational subspace $\mathbf{Q} = \mathbf{S}_{3,1} = \mathbf{S}_{3,2}$. The five bold vectors span the whole space $\mathbb{R}^5$.

1. The triples of consecutive best approximation vectors form the patterns $\mathcal{S}_{k,l}$.
2. Each $\mathcal{S}_{k,l}$ spans the $k$-dimensional rational subspace $\mathbf{S}_{k,l}$.
3. The rational subspaces $\mathbf{S}_{k,l}$ satisfy the stated inclusion relations. In particular, $\mathbf{Q}_{2,l}$ is spanned by both $z_{r_{2l}}, z_{r_{2l}+1}$ and $z_{r_{2l+1}-1}, z_{r_{2l+1}}$.

Here, the first index always denotes the dimension of the considered object. For a given dimension $k$, there are $2^{n-k+1}$ subspaces $\mathbf{S}_{k,l}$ and $2^{n-k-1}$ subspaces $\mathbf{Q}_{k,l}$ of dimension $k$.
Another important pattern of best approximation vectors, potentially useful for the problem considered here, was already discovered in any dimension in 2013 by V. Nguyen [19, §2.3] while studying simultaneous approximation to the basis of an algebraic number field together with an extra real number.
Lemma 7 coincides with Lemma 5 for approximation to three numbers and with Lemma 6 for approximation to four numbers. In the latter case, we have $\Lambda_j \sim Q_{2,j}$ for $0 \le j \le 1$.
We can partially describe the situation with the binary tree from Figure 4, where each child is included in its parent. In particular, the parent of a given rational subspace $\mathbf{S}_{k,l}$ is $\mathbf{S}_{k+1,\sigma(l)}$, where $\sigma$ is the usual shift on the binary expansion.
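The bookkeeping of the tree can be sketched as follows. The concrete choice $\sigma(l) = l \mathbin{>\!>} 1$ (dropping the last binary digit) is an assumption consistent with the description above, and the node counts at each level match the formula $2^{n-k+1}$.

```python
def subspace_tree(n):
    """Levels of the binary tree of rational subspaces S_{k,l}:
    dimension k carries 2^(n-k+1) nodes, from dimension n+1 down to 3."""
    levels = {n + 1: [0]}
    for k in range(n, 2, -1):            # dimensions n down to 3
        levels[k] = [2 * l + b for l in levels[k + 1] for b in (0, 1)]
    return levels

def parent(l):
    """Index of the parent of S_{k,l}: drop the last binary digit (the shift sigma,
    an assumed concrete form of the map described in the text)."""
    return l >> 1

levels = subspace_tree(6)
assert all(len(levels[k]) == 2 ** (6 - k + 1) for k in levels)
assert parent(5) == 2   # S_{k,5} sits below S_{k+1,2}
```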
We may write the recursive step of the construction of patterns as follows, where $\mathbf{Q}_{n-1,0}$ is an $(n-1)$-dimensional subspace. For the rational subspaces $\mathbf{S}_{n,0}$, $\mathbf{S}_{n,1}$ and $\mathbf{Q}_{n-1,0}$ and their underlying lattices $\mathfrak{S}_{n,0}$, $\mathfrak{S}_{n,1}$ and $\mathfrak{Q}_{n-1,0}$, the corresponding relation holds. This relation enables us to shift the optimization equation to the next dimension, as in the next lemma.
Lemma 8. Let $n \ge 4$. Consider the pattern of best approximation vectors $\mathcal{S}_{n+1,0}$ and its sub-patterns given by Lemma 7. Here, as before, $\mathfrak{S}_{k,l}$ and $\mathfrak{Q}_{k,l}$ are the lattices of integer points of the rational subspaces $\mathbf{S}_{k,l}$ and $\mathbf{Q}_{k,l}$. Then (41) holds.

Figure 4: Binary tree sketching the situation described in Lemma 7.
Here the parameters $w_{k,l}$, $w'_{k,l}$, $y_k$ and $z_k$ are defined inductively as follows.
The parameters $y_0, z_0 \in (0,1)$ are arbitrary subject to the stated constraint, and then (45) holds. Furthermore, the parameters satisfy the relations (46) and (47). We prove Lemma 7 in Subsection 4.3 and then Lemma 8 in Subsection 4.4. We first finish the proof of Theorem 1 in the case of simultaneous approximation.

Proof of Theorem 1
Consider $\theta \in \mathbb{R}^n$ whose coordinates, together with $1$, are linearly independent over $\mathbb{Q}$, and take $\alpha < \hat\lambda(\theta)$. Denote by $g$ the unique positive root of $R_{n,\alpha}$ defined by (7).
Choose $y_0$ and $z_0$ as indicated. Using (29), one can check the required condition. By the induction formula (43), we deduce formula (49) for every $4 \le k \le n$. Indeed, formula (49) is satisfied for $n - k = 0$; supposing it valid for a certain value of $k$, the induction step follows. In particular, we consider $g(s,t)$ defined in (19) (Lemma 4) for the parameters (50) and (51). From (27) and (28), it follows that the corresponding values match. Recall that now $g = G(n,\alpha)$ is the root of the polynomial $R_{n,\alpha}(x)$. So the positivity condition (21) for the parameters (50) follows from (31), while for the parameters (51) the positivity condition (21) is clearly true.
According to Lemma 8, we have (41), and therefore there exists an index $0 \le l \le 2^{n-4} - 1$ such that one of the two corresponding alternatives holds. To summarize, all the conditions are met to apply Lemma 4 in either case. Hence, there exists $\nu$ with $q_{\nu+1} \gg q_\nu^{g}$, so (13) is met, proving Theorem 1.

Proof of Lemma 7

We argue by induction on the dimension. Consider a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximation vectors spanning an $(m+1)$-dimensional rational space $\mathbf{S}_{m+1}$. Set $r_{2^{m-2}-1}$ to be the smallest index such that the span $\langle z_{k_0}, \ldots, z_{r_{2^{m-2}-1}+1}\rangle_{\mathbb{R}}$ has dimension $m+1$. Note that $z_{r_{2^{m-2}-1}+1}$ is not in the $m$-dimensional subspace spanned by $(z_l)_{k_0 \le l \le r_{2^{m-2}-1}}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors with central index $r_{2^{m-2}-1}$ are linearly independent and span a $3$-dimensional subspace denoted by $\mathbf{S}_{3,2^{m-2}-1}$. Set $r_0 > k_0$ to be the largest index such that $\langle z_{r_0-1}, \ldots, z_{r_{2^{m-2}-1}+1}\rangle_{\mathbb{R}}$ has dimension $m+1$. Note that $z_{r_0-1}$ is not in the $m$-dimensional subspace spanned by $(z_l)_{r_0 \le l \le r_{2^{m-2}-1}+1}$. In particular, since two consecutive best approximation vectors are linearly independent, the three consecutive best approximation vectors $z_{r_0-1}, z_{r_0}, z_{r_0+1}$ are linearly independent and span a $3$-dimensional subspace denoted by $\mathbf{S}_{3,0}$. Moreover, combining both observations, we get that the intersection of the two $m$-dimensional subspaces is $(m-1)$-dimensional. We use the induction hypothesis for the two $m$-dimensional subspaces, with $k'_0 = r_0 - 1$ and $k''_0 = r_0$ respectively. This provides two patterns $\mathcal{S}'_m$ and $\mathcal{S}''_m$ of triples of best approximation vectors, whose sub-patterns $\mathcal{S}'_{m-1,1}$ and $\mathcal{S}''_{m-1,0}$ span the rational subspace $\mathbf{Q}_{m-1,0}$. Hence, the pattern $\mathcal{S}$ defined by the triples obtained by combining the two sub-patterns $\mathcal{S}'_m$ and $\mathcal{S}''_m$ satisfies the required properties at rank $m+1$.
Since a sequence $(z_l)_{l\in\mathbb{N}}$ of best simultaneous approximation vectors to $\theta \in \mathbb{R}^n$ spans the whole space $\mathbb{R}^{n+1}$, Lemma 7 follows.
Remark. Note that the proof provides an $m$-dimensional pattern for $\theta \in \mathbb{R}^n$, where $m$ is the dimension of the space spanned by its best approximation vectors. Furthermore, note that this construction holds both for simultaneous approximation and for approximation by one linear form.

Proof of Lemma 8
By induction on $k$ we prove a more general formula. If we write it in terms of $\mathfrak{Q}_{i,j} = \mathfrak{S}_{i,4j+1} = \mathfrak{S}_{i,4j+2}$, we obtain a second formula; Lemma 8 is the latter formula for $k = n - 3$.
We call factors of the first product factors of Type I, and factors of the second product factors of Type II. The base of the induction follows the steps of the case of approximation to four numbers. Namely, Schmidt's inequality (34) provides the first splitting. Since $\mathcal{S}_{n+1,0}$ spans the whole space $\mathbb{R}^{n+1}$, we have $\det \mathfrak{S}_{n+1,0} = 1$, and using the fact that $\det \mathfrak{Q}_{n-1,0} = \det \mathfrak{S}_{n-1,1} = \det \mathfrak{S}_{n-1,2}$ (by (39)), we get the formula. Setting $w_{0,0} = w'_{0,0} = 1$ and $y_0$ and $z_0$ such that $y_0 + z_0 - 1 = 0$, we can rewrite it, which establishes the expected formula for $k = 1$. In the inductive step, Schmidt's inequality (34) splits each term of the product into two terms involving rational subspaces of lower dimension, and shifts the values of the parameters $y_k$ and $z_k$.
Similarly, for factors of Type II, for any $v \in (0,1)$, using (56) with $i = n - k$ and $j = 4l+2$, $j = 4l+3$ respectively, we obtain the analogous splitting. Combining the splittings of the Type I factors (58) and the Type II factors (59) in the induction hypothesis (53), it appears that we should define the parameters $(y_{k+1}, z_{k+1})$ as the solutions of a system in the variables $(v, u)$. The last equality coincides with the definition of $F(y, z)$ in (43), and (59) gives the formulae (45) for $w_{k+1,l}$ and $w'_{k+1,l}$.
We now prove the relations (46) and (47) by descending induction, showing that (60) and (61) hold for any $4 \le k \le n$. First, note that $w_{0,0} + w'_{0,0} = 2$; hence we have the base of the induction at $k = n$. Assume that (60) and (61) hold for some $4 \le k \le n$. The two sums represent the degrees of the determinants that appear respectively in the numerator and in the denominator of (41).
The key is to observe the splitting in (57): the new sum in the denominator is the sum from the previous numerator, while in the numerator the previous denominator is doubled, but one denominator cancels. Namely, using the recurrence formulae (43) and (45) for the parameters, we get the claimed identities. Hence the result follows by descending induction.

Approximation by one linear form
In this section, we explain how the very same geometry of a sequence of best approximation vectors yields Theorem 1 for approximation by one linear form. We need to consider a hyperbolic rotation to get a suitable analogue of the estimates of Lemma 2. For this, we use Schmidt's inequalities on heights in a slightly larger context than that of rational subspaces.

About Schmidt's inequalities on heights
As stated in Proposition 5, Schmidt's inequality deals with the intersections of rational subspaces with the lattice $\mathbb{Z}^d$ of integer points. Here we need to deal with a more general situation. Let $\Lambda \subset \mathbb{R}^d$ be a complete lattice, which plays the role of the integer points.
The proof is the same as for rational subspaces, and uses the description of subspaces by their orthogonal vectors.
Definition. Given a fixed complete lattice $\Lambda$, we define the height $H_\Lambda$ of a $\Lambda$-rational subspace $M$ to be the fundamental volume of the $\Lambda$-points of $M$.

Proposition 6 (Schmidt's inequality). Let $\Lambda$ be a complete lattice and let $M_1, M_2$ be two $\Lambda$-rational subspaces in $\mathbb{R}^d$; then (62) holds.

Proof. Let $\mathcal{N}$ be a basis of $N$. For $j = 1, 2$, we take a collection of vectors $\mu_j \subset M_j$ in such a way that the collection $\mathcal{M}_j = \mathcal{N} \cup \mu_j$ forms a basis of $M_j$; that is, we complete $\mathcal{N}$ by $\mu_j$ to a basis of $M_j$. Let $\mu_j^*$ be a collection of independent vectors in $K \cap M_j$ obtained from $\mu_j$ by orthogonal projection onto $K$ parallel to $N$. Consider the parallelepiped $\Pi \subset M_1 + M_2$ generated by all the vectors of the collection $\mathcal{N} \cup \mu_1 \cup \mu_2$, and the parallelepiped $\Pi^* \subset M_1 + M_2$ generated by all the vectors of the collection $\mathcal{N} \cup \mu_1^* \cup \mu_2^*$. We also consider the parallelepipeds $\Pi_N$, $\Pi_1^*$, $\Pi_2^*$ generated by the corresponding collections of independent vectors, as well as the parallelepipeds associated with the remaining collections, where $\mathrm{vol}_k(\cdot)$ stands for the $k$-dimensional volume. To obtain (62) we apply the resulting inequality in the case when $\mathcal{N}$ is a basis of the lattice $\Lambda \cap (M_1 \cap M_2)$ and $\mu_1, \mu_2$ complete $\mathcal{N}$ to bases of the lattices $\Lambda \cap M_1$ and $\Lambda \cap M_2$ respectively.

Hyperbolic rotation
Given a sequence $(z_l)_{l\in\mathbb{N}} = {}^t(q_{1,l}, \dots, q_{n,l}, a_l)$ of best approximations to a point $\theta \in \mathbb{R}^n$ for approximation by one linear form, we can extract a subsequence satisfying Lemma 7. For approximation by one linear form, it may happen that the sequence of best approximation vectors spans a subspace of dimension $m < n + 1$ in $\mathbb{R}^{n+1}$ (see [1]). In this case, Proposition 2 ensures that Theorem 1 holds with the stronger lower bound $G^*(m, \omega(\theta))$ instead of $G^*(n, \omega(\theta))$; see the remark after the proof of Lemma 7. In the sequel, we suppose that the best approximation vectors span the full space. In particular the coordinates $\theta_1, \dots, \theta_n$ are linearly independent together with 1.
Consider the matrix $L$. We can consider the sequence of best approximations as points $L \cdot z_l$ of the lattice $\mathcal{L} = L \cdot \mathbb{Z}^{n+1}$.
Here, we simply replace the last coordinate $a_l$ by the error of approximation $L_l$.
Consider a large parameter $T$ and the hyperbolic rotation $G_T$.
The lattice $\mathcal{L}' = G_T \mathcal{L}$ is complete since the determinants of $L$ and $G_T$ are both 1.
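The two transformations can be illustrated numerically. The displayed matrices $L$ and $G_T$ are not reproduced in this excerpt, so the sketch below uses one natural determinant-1 choice consistent with the text: $L$ is the identity with last row $(\theta_1, \dots, \theta_n, 1)$, so that the last coordinate becomes the value of the linear form, and $G_T = \mathrm{diag}(T^{-1/n}, \dots, T^{-1/n}, T)$ shrinks the $q$-coordinates while stretching the error coordinate. Both the shape of $G_T$ and the numerical values of $\theta$ here are illustrative assumptions.

```python
import numpy as np

n = 4
# Illustrative irrational coordinates (not from the paper).
theta = np.array([2**0.5 - 1, 3**0.5 - 1, 5**0.5 - 2, 7**0.5 - 2])

# L replaces the last coordinate a by theta_1 q_1 + ... + theta_n q_n + a,
# i.e. by the (signed) error of approximation for the linear form.
L = np.eye(n + 1)
L[-1, :n] = theta           # last row: (theta_1, ..., theta_n, 1)

# One determinant-1 hyperbolic rotation: shrink the q-coordinates,
# stretch the error coordinate (the paper's exact G_T may differ).
T = 1000.0
G_T = np.diag([T ** (-1.0 / n)] * n + [T])

assert abs(np.linalg.det(L) - 1.0) < 1e-9
assert abs(np.linalg.det(G_T) - 1.0) < 1e-6

z = np.array([5.0, 3.0, 2.0, 1.0, -4.0])  # mock approximation vector
z_prime = G_T @ (L @ z)
print(z_prime[-1])  # T * (theta . q + a): the rescaled error
```

Since $\det L = \det G_T = 1$, the image lattice $\mathcal{L}' = G_T \mathcal{L}$ has covolume 1, which is what the text means by the lattice remaining complete.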

Consider the sequence $(z'_l)_{l\in\mathbb{N}} = G_T L \cdot (z_l)_{l\in\mathbb{N}}$. For best approximation by one linear form we defined $M_l = \max_{1\le i\le n} |z_{i,l}|$, and after the hyperbolic rotation we have the analogous maximum for the rotated vectors. Since we assume that the best approximation vectors $(z_l)_{l\in\mathbb{N}}$ span the full space $\mathbb{R}^{n+1}$, we can apply Lemma 7 to $(z_l)_{l\in\mathbb{N}}$ and obtain a set of indices $(r_k)_{0\le k\le 2^{n-2}-1}$; denote the images of the corresponding sets under $G_T L$ accordingly. Since $G_T$ and $L$ have determinant 1, these sets satisfy the properties of linear independence and inclusion listed in Lemma 7.
Further in the proof of Theorem 1, we need an estimate on the fundamental volumes of the lattices $\Lambda'_k$. For large $T$, we can follow a proof similar to that of Lemma 2.
Lemma 10. Fix an index $k$ and let $T$ be large enough so that (63) holds. Given two consecutive and linearly independent best approximation vectors $z_k, z_{k+1}$, the fundamental volume satisfies (64). Given three consecutive and linearly independent best approximation vectors, (65) holds.

Proof. For $T$ satisfying (63), we see that (66) holds. Consider the $2 \times (n+1)$ matrix (67) and the $3 \times (n+1)$ matrix (68). The rest of the proof is completely analogous to the proof of Lemma 2. To obtain the upper bounds in (64) and (65) we need upper bounds for the $2 \times 2$ minors of the matrix (67) and for the $3 \times 3$ minors of the matrix (68), taking into account the inequalities (66). These bounds, for the $2 \times 2$ minors of (67) and the $3 \times 3$ minors of (68), follow from (66); then an application of Lemma 3 gives the upper bounds in (64) and (65).
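The passage from minors to fundamental volumes rests on the Cauchy–Binet identity: for a $k \times (n+1)$ matrix $M$ whose rows span a $k$-dimensional lattice, $\det(MM^{\mathsf T})$ (the squared $k$-dimensional volume) equals the sum of the squares of all $k \times k$ minors of $M$, so bounding every minor bounds the volume. A quick numerical check of this identity, with an arbitrary $2 \times 4$ matrix rather than the matrix (67) itself:

```python
import numpy as np
from itertools import combinations

def gram_det(M):
    """det(M M^T): squared k-dimensional volume of the rows of M."""
    return float(np.linalg.det(M @ M.T))

def sum_sq_minors(M, k):
    """Sum of squares of all k x k minors of M (Cauchy-Binet)."""
    cols = range(M.shape[1])
    return sum(float(np.linalg.det(M[:, c])) ** 2
               for c in combinations(cols, k))

M = np.array([[1.0, 2.0, 0.5, 3.0],
              [0.0, 1.0, 4.0, 2.0]])   # arbitrary 2 x 4 example
print(gram_det(M), sum_sq_minors(M, 2))  # both equal 199.25
```

In particular, if every $k \times k$ minor is at most $c$ in absolute value, the fundamental volume is at most $\sqrt{\binom{n+1}{k}}\, c$, which is the kind of bound the proof extracts from (66).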
The lower bound for $\det \Lambda'_k$ in (64) follows from Minkowski's first convex body theorem as well, analogously to the argument in the final part of the proof of Lemma 2. One should consider a suitable symmetric convex body and its section; by means of Minkowski's theorem, we obtain the upper estimate for its area, which comes from (63).
Here, we need a large parameter $T$ to obtain a good upper bound for the minors; for $T = 1$, such upper bounds are false.

Proof of Theorem 1 for approximation by one linear form
The proof in the case of approximation by one linear form follows the same steps as in the case of simultaneous approximation. Hence, we give a sketch of the proof in general; to make the ideas of the proof clearer, in Section 5.3.2 we give a very detailed proof in the simplest case of approximation to 4 numbers. The idea of the argument comes from [18]. Note that by reversing time, we get two inequalities in terms of coefficients, and two in terms of linear forms.

Proof in any dimension
Consider $\theta \in \mathbb{R}^n$ with coordinates that are $\mathbb{Q}$-linearly independent together with 1, and take $\alpha^* < \omega(\theta)$. Let $g^* = G^*(n, \alpha^*)$ be the unique positive root of the polynomial $R^*_{n,\alpha^*}$ defined in (8); recall (33). For $4 \le k \le n$ we define parameters which satisfy the assumptions (42) and (43) of Lemma 8, because of the induction formula (29) and $R^*_{n,\alpha^*}(g^*) = 0$. Considering a sequence $(z_l)_{l\in\mathbb{N}}$ of best approximations to a point $\theta \in \mathbb{R}^n$, we obtain via Lemma 7 a set of indices satisfying good properties. Suppose that $k_0$ is large enough so that the required inequality holds for $\alpha^* < \omega(\theta)$.
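The constant $g^* = G^*(n, \alpha^*)$ is defined as the unique positive root of the polynomial $R^*_{n,\alpha^*}$ from (8), which is not reproduced in this excerpt. As a hedged illustration of how such a value can be computed, the sketch below extracts the unique positive root of a placeholder polynomial ($x^3 - x - 1$, whose positive root is the plastic number); substituting the coefficients of the actual $R^*_{n,\alpha^*}$ would yield $g^*$.

```python
import numpy as np

def unique_positive_root(coeffs):
    """Return the unique positive real root of a polynomial given by
    its coefficients (highest degree first), assuming it exists."""
    roots = np.roots(coeffs)
    pos = [r.real for r in roots
           if abs(r.imag) < 1e-9 and r.real > 0]
    assert len(pos) == 1, "polynomial must have exactly one positive root"
    return pos[0]

# Placeholder polynomial x^3 - x - 1 (NOT the paper's R*_{n, alpha*});
# it has a single positive root, the plastic number ≈ 1.3247.
g_star = unique_positive_root([1, 0, -1, -1])
print(g_star)
```

For the paper's polynomials, the existence and uniqueness of the positive root is part of their definition, so the same filtering step applies unchanged.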
For any fixed $T \gg 1$, the hyperbolic rotation $(z'_l)_{l\in\mathbb{N}} = G_T L \cdot (z_l)_{l\in\mathbb{N}}$ preserves the property of linear independence, and hence the structure of the pattern of best approximation vectors constructed in Lemma 7. We consider the rotated sets obtained from the sets $S_{k,l}$ and $Q_{k,l}$ defined in Lemma 7, and denote respectively by $S'_{k,l}$ and $Q'_{k,l}$ the lattices of their $G_T L$-points. Section 5.1 explains that we can modify the proof of Lemma 8 so that (69) holds. As for the proof of the analogue of Lemma 4, we express the denominators in two different ways, writing both $\det Q'_{2,2l}$ and $\det Q'_{2,2l+1}$ with their two expressions coming from (69). Given $s, t \in [0, 1]$, analogously to Lemma 4, we define $g^*(s,t)$, where the second equality comes from $w^*(s, t) \in [0, 1]$ being the root of the corresponding equation. We obtain the analogue of (24) for $g^*(s, t)$, from which we deduce the analogues of (27) and (28), namely (78) and (79). From (78) and (79), following a similar argument as in Section 4.2, we get the analogue of (52). Applying the estimates of Lemma 10, and weighting the two ways to write the denominators coming from (69) with parameters $w_1$ and $w_2$, we obtain the desired bound.

Example of approximation to 4 numbers
Consider a sequence of best approximation vectors to $\theta \in \mathbb{R}^4$ by one linear form. We may assume that it spans $\mathbb{R}^5$. Take $\alpha^* < \omega(\theta)$.
We consider the unique positive real number $g^*$ such that $R^*_{4,\alpha^*}(g^*) = 0$.
Set the parameters using (76). One can check that the required relations hold. As $0 < g^* = (\alpha^* - 1)x + w_1 - 1/x$, we deduce the needed inequality. Now we are able to start the proof. For an index $k_0 \gg 1$ we apply Lemma 6. It provides a pattern of best approximation vectors, made of linearly independent triples, satisfying the properties of Lemma 6. Consider $T$ such that $T > M_{r_3+1}$ and $T > L_{r_3-1}^{-1/n}$. For $j \ge r_0 - 1$, we apply the hyperbolic rotation to the integer vectors $z_j$ to get $z'_j = G_T L \cdot z_j$. For $0 \le i \le 3$ we consider the corresponding subspace and its lattice of $G_T L$-points. We recall that $S_{3,1} = S_{3,2} = Q$.
Consider the 2-dimensional lattices introduced above. We apply Schmidt's inequality (Proposition 6) with underlying lattice $G_T \mathcal{L}$ to obtain the analogue of (36) for $\det S'_{3,0}$. By Lemma 10, we get the corresponding estimates; here $T$ disappears, as it has power 6 in both the numerator and the denominator. We deduce that at least one of the four following inequalities holds.

1) From (89) and (86), as $L_{r_0} < M_{r_0+1}^{-\alpha^*}$, or equivalently $M_{r_0+1} < L_{r_0}^{-1/\alpha^*}$, we deduce the upper bound for the linear form.
3) From (91) and (86), using the analogous inequality, we deduce the upper bound for the linear form.
4) From (92) and (86), as $L_{r_3-1} M_{r_3} < M_{r_3}^{\alpha^* - 1}$, we deduce the lower bound for the coefficient. Hence, we have proved that one of the following four inequalities holds.
So we have checked (14) and the result follows.

Construction of points with given ratio
In this last section, we prove the second part of Theorem 1. To construct points with a given ratio, we place ourselves in the context of parametric geometry of numbers introduced by Schmidt and Summerer in [27,26]. For the convenience of the reader and the sake of self-containment, we briefly present parametric geometry of numbers in Section 6.1. An important theorem by Roy [22] makes it possible to construct points with computable exponents of Diophantine approximation out of Roy-systems, a combinatorial family of piecewise linear maps. For our purpose, we construct explicitly in Section 6.2 a family of Roy-systems with three parameters. The construction shows how the values $G(n, \alpha)$ and $G^*(n, \alpha^*)$ appear naturally in the context of parametric geometry of numbers, and why they are reached at regular systems.

Construction of a family of Roy-systems with three parameters
In this section, we construct explicitly a family of Roy-systems with three parameters. According to Proposition 7 and Theorem 2, these Roy-systems provide the existence of points with the requested exponents, proving the second part of Theorem 1.
Figure 6: Pattern of the combined graph of P on the fundamental interval $[1, c\rho]$.

The fact that all coordinates sum up to 1 for $q = 1$ follows from $\rho$ being a root of the polynomial $R^*_{n,\omega}$ defined in (8). On each interval between two consecutive division points, there is only one line segment with slope 1. On $[1, q_0]$, there is one line segment of slope 1. For $c = 1$, the parameters $q_0$ and $q_1$ coincide and we obtain a regular system.
For both simultaneous approximation and approximation by a linear form, the constructed three-parameter families of self-similar Roy-systems provide infinitely many distinct points $\theta \in \mathbb{R}^n$ via Roy's theorem, with coordinates that are $\mathbb{Q}$-linearly independent together with 1, as explained in [14, end of §3]. The $\mathbb{Q}$-linear independence comes from $P_1(q) \to \infty$ as $q \to \infty$. The construction of infinitely many points follows from a change of origin with the same pattern and self-similarity. The degenerate cases when some of the exponents are infinite are managed by (non self-similar) Roy-systems consisting of the patterns described by Figure 6 or 7, where the parameter $c$ and/or $\lambda$ or $\omega$ increases to infinity at each repetition. An explicit example of this trick can be found in [14, end of §3].

Figure 3: Selected sequence of best approximation vectors.

We have $z_{n-k}/y_{n-k} = g$, and the recursive formula (43) gives us $z_{n-k-1}/y_{n-k-1} = g$. This is enough to verify (43) with $k$ replaced by $k+1$, by means of the first group of equalities from (29).

Figure 4 may be useful to understand the construction. Let $k_0 \gg 1$. We prove the lemma by induction on the dimension $n$. Suppose that we can construct a pattern $S_{m,0}$ of $2^{m-3}$ triples of consecutive best approximation vectors given by indices $k_0 < r_0 < r_1 < \dots < r_{2^{m-3}-2} < r_{2^{m-3}-1}$ spanning an $m$-dimensional rational space. Such a construction holds for $m = 4, 5$ via Lemmas 5 and 6; this provides the base of the induction.
and is spanned by two consecutive best approximation vectors (see Lemma 1). Hence, the considered indices $\nu$ and $k$ provide 6 best approximation vectors satisfying Lemma 5.