Global Phase Portrait and Large Degree Asymptotics for the Kissing Polynomials

We study a family of monic polynomials which are orthogonal with respect to the varying, complex-valued weight function $\exp(nsz)$ over the interval $[-1,1]$, where $s\in\mathbb{C}$ is arbitrary. This family of polynomials originally appeared in the literature when the parameter was purely imaginary, that is, $s\in i\mathbb{R}$, due to its connection with complex Gaussian quadrature rules for highly oscillatory integrals. The asymptotics of these polynomials as $n\to\infty$ have recently been studied for $s\in i\mathbb{R}$, and our main goal is to extend these results to all $s$ in the complex plane. We first use the technique of continuation in parameter space, developed in the context of the theory of integrable systems, to extend previous results on the so-called modified external field from the imaginary axis to the complex plane minus a set of critical curves, called breaking curves. We then apply the powerful method of nonlinear steepest descent for oscillatory Riemann-Hilbert problems, developed by Deift and Zhou in the 1990s, to obtain asymptotics of the recurrence coefficients of these polynomials when the parameter $s$ is away from the breaking curves. Next, we analyze the recurrence coefficients as the parameter $s$ approaches a breaking curve, by considering double scaling limits as $s$ approaches such points. We shall see a qualitative difference in the behavior of the recurrence coefficients, depending on whether $s$ approaches the points $s=\pm 2$ or some other point on the breaking curve.


Introduction
The main goal of this paper is to determine the asymptotic behavior of the recurrence coefficients of polynomials satisfying the following non-Hermitian, degree dependent, orthogonality conditions:

$$\int_{-1}^{1} p_n(z)\, z^k\, e^{-n f(z;s)}\, dz = 0, \qquad k = 0, 1, \ldots, n-1, \tag{1.1}$$

where $p_n$ is a monic polynomial of degree $n$ in the variable $z$, $f(z;s) = sz$, and $s \in \mathbb{C}$ is arbitrary. Polynomial sequences satisfying non-Hermitian orthogonality conditions similar to (1.1) first appeared in the literature in the context of approximation theory (c.f. [5,6,36,47]). In the present day, complex orthogonal polynomials with respect to exponential weights have been studied in [15,16] (with quartic potential) and [18,19] (with cubic potential). They have found uses in various areas of mathematics, including random matrix theory and theoretical physics [2,3,4,12], rational solutions of Painlevé equations [9,13,14,16], and, of particular interest in the present work, numerical analysis [8,21,26]. Indeed, the motivation for the present work concerns the numerical treatment of highly oscillatory integrals of the form

$$I_\omega[f] = \int_{-1}^{1} f(x)\, e^{i\omega x}\, dx,$$

where, for the sake of exposition, we take $f$ to be an entire function. Historically, the numerical treatment of such integrals falls into two regimes, as explained in the monograph [26]. The first regime occurs when $\omega$ is relatively small and the weight function is not highly oscillatory. In this regime, traditional methods of numerical analysis based on Taylor's Theorem, such as Gaussian quadrature, are adequate and provide a suitable means of evaluating such integrals. However, methods such as Gaussian quadrature require exceedingly many quadrature points as the parameter $\omega$ grows large, and as such, the second regime concerns the treatment of $I_\omega[f]$ when the parameter $\omega$ is large. Here, numerical methods based on the asymptotic analysis of such integrals take over, and methods such as numerical steepest descent are preferred.
In order to address this apparent schism between the two regimes, the authors of [8] proposed a new quadrature rule based on monic polynomials which satisfy

$$\int_{-1}^{1} p_n^{\omega}(z)\, z^k\, e^{i\omega z}\, dz = 0, \qquad k = 0, 1, \ldots, n-1. \tag{1.2}$$

Note that in (1.2), the weight function no longer depends on the degree of the polynomial, $n$. Letting $\{z_j\}_{j=1}^{2n}$ be the $2n$ complex zeros of $p_{2n}^{\omega}$, the quadrature rule proposed in [8] approximates the integral via

$$I_\omega[f] \approx \sum_{j=1}^{2n} w_j f(z_j), \tag{1.3}$$

where the weights $w_j$ are the standard weights used for Gaussian quadrature. Note that as $\omega \to 0$, the rule (1.3) reduces elegantly to the classical method of Gauss-Legendre quadrature. Moreover, [8, Theorem 4.1] shows that the proposed quadrature method attains high asymptotic order as $\omega$ grows, especially when compared to other methods, such as Filon rules, used in the numerical treatment of highly oscillatory integrals. For more information on the numerical analysis of oscillatory integrals, the reader is referred to [26], and in particular to Chapter 6 for the relations to non-Hermitian orthogonality.
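The construction behind the rule (1.3) can be sketched numerically. The following is a minimal illustration of our own (not taken from [8]) for the two-point rule: the nodes are the zeros of the monic polynomial of degree 2 satisfying (1.2), and the weights are fixed by exactness on polynomials of degree at most 1; the moments are computed with ordinary Gauss-Legendre quadrature.

```python
import numpy as np

def moments(omega, kmax, quad_pts=80):
    # m_k = \int_{-1}^{1} z^k e^{i omega z} dz via standard Gauss-Legendre
    x, w = np.polynomial.legendre.leggauss(quad_pts)
    return np.array([np.sum(w * x**k * np.exp(1j * omega * x))
                     for k in range(kmax + 1)])

def complex_gauss_rule(omega):
    # Monic p_2^omega orthogonal to 1 and z with respect to e^{i omega z}:
    # p_2(z) = z^2 + c1*z + c0 with <p_2, z^j> = 0 for j = 0, 1
    m = moments(omega, 3)
    c0, c1 = np.linalg.solve(np.array([[m[0], m[1]], [m[1], m[2]]]),
                             -np.array([m[2], m[3]]))
    nodes = np.roots([1.0, c1, c0])                 # the 2 complex zeros
    # Weights from exactness on 1 and z (a 2x2 Vandermonde system)
    weights = np.linalg.solve(np.vander(nodes, 2, increasing=True).T, m[:2])
    return nodes, weights

omega = 1.0
nodes, weights = complex_gauss_rule(omega)
m = moments(omega, 3)
# By orthogonality, the 2-point rule integrates z^k e^{i omega z} for k <= 3
for k in range(4):
    assert abs(np.sum(weights * nodes**k) - m[k]) < 1e-10
```

As in the classical Gaussian theory, exactness on degree 3 follows by writing $z^2$ and $z^3$ as polynomial multiples of $p_2$ plus lower-degree terms and using (1.2).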
Despite the theoretical successes of numerical methods based on non-Hermitian orthogonal polynomials listed above, many questions about the polynomials themselves remain open. For instance, as the weight function in (1.2) is complex valued, questions such as the existence of the polynomials and the location of their zeros can no longer be taken for granted. However, provided the polynomials exist for the corresponding values of $n$ and $\omega$, all of the classical algebraic results on orthogonal polynomials continue to apply. This is due to the fact that the bilinear form (1.5) still satisfies the relation $\langle zf, g\rangle = \langle f, zg\rangle$. Indeed, there will still be a Gaussian quadrature rule, and the polynomials will still satisfy the famous three term recurrence relation

$$z p_n^{\omega}(z) = p_{n+1}^{\omega}(z) + \alpha_n(\omega)\, p_n^{\omega}(z) + \beta_n(\omega)\, p_{n-1}^{\omega}(z). \tag{1.6}$$

We restate that the weight function for the polynomials $p_n^{\omega}$ does not depend on $n$, which is why relations such as (1.6) continue to hold in the complex setting.
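Relations like (1.6) also give a practical way to generate these polynomials numerically. The following sketch (our own illustration, not from the cited works) runs the Stieltjes procedure for the non-Hermitian bilinear form, with no complex conjugation, and checks that at $\omega = 0$ it recovers the monic Legendre recurrence coefficients $\alpha_n = 0$, $\beta_n = n^2/(4n^2-1)$.

```python
import numpy as np

def recurrence_coefficients(omega, n_max, quad_pts=120):
    """Stieltjes procedure for the monic polynomials orthogonal with respect
    to the non-Hermitian bilinear form <f,g> = int_{-1}^1 f g e^{i omega z} dz."""
    x, w = np.polynomial.legendre.leggauss(quad_pts)
    wz = w * np.exp(1j * omega * x)          # quadrature weights times e^{i omega z}
    pair = lambda f, g: np.sum(wz * f * g)   # note: no complex conjugation
    p_prev = np.zeros_like(x, dtype=complex)
    p_curr = np.ones_like(x, dtype=complex)
    h_prev = 1.0
    alphas, betas = [], []
    for n in range(n_max):
        h_curr = pair(p_curr, p_curr)
        alpha = pair(x * p_curr, p_curr) / h_curr
        beta = h_curr / h_prev if n > 0 else 0.0
        alphas.append(alpha)
        betas.append(beta)
        # three term recurrence: p_{n+1} = (z - alpha_n) p_n - beta_n p_{n-1}
        p_prev, p_curr = p_curr, (x - alpha) * p_curr - beta * p_prev
        h_prev = h_curr
    return np.array(alphas), np.array(betas)

# Sanity check: omega = 0 recovers the monic Legendre coefficients
alphas, betas = recurrence_coefficients(0.0, 5)
assert np.allclose(alphas, 0.0, atol=1e-12)
assert np.allclose(betas[1:], [n**2 / (4*n**2 - 1) for n in range(1, 5)], atol=1e-12)
```

For $\omega \neq 0$ the same loop produces genuinely complex coefficients, valid whenever the relevant Hankel determinants are non-zero.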
The results of [8] kick-started the study of the polynomials in (1.2), and the authors of [21] dubbed such polynomials the Kissing Polynomials on account of the behavior of their zero trajectories in the complex plane. In particular, the work [21] establishes the existence of the even degree Kissing polynomials, along with the asymptotic behavior of the polynomials as $\omega \to \infty$ with $n$ fixed. On the other hand, the asymptotic analysis of the Kissing polynomials for fixed $\omega$ as $n \to \infty$ can be handled via the Riemann-Hilbert techniques discussed in [41] or in the appendix of [25], where it was shown that the zeros of the Kissing polynomials accumulate on the interval $[-1,1]$ as $n \to \infty$ with $\omega > 0$ fixed.
One can also let both $n$ and $\omega$ tend to infinity together, by letting $\omega$ depend on $n$. In order to obtain a nontrivial limit as the parameters tend to infinity, one sets $\omega = \omega(n) = tn$, where $t \in \mathbb{R}^+$. This leads to the varying-weight Kissing polynomials, which satisfy the following orthogonality conditions:

$$\int_{-1}^{1} p_n^{t}(z)\, z^k\, e^{i n t z}\, dz = 0, \qquad k = 0, 1, \ldots, n-1. \tag{1.7}$$

Thus, studying the behavior of the Kissing polynomials in (1.2) as both $n$ and $\omega$ go to infinity at the rate $t$ is equivalent to studying the behavior of the polynomials in (1.7) as $n \to \infty$. The varying-weight Kissing polynomials were first studied in [25], where it was shown that for $t < t_c$, the zeros of $p_n^t$ accumulate on a single analytic arc connecting $-1$ and $1$, which we denote here by $\gamma_m(t)$. Here, $t_c$ is the unique positive solution of the equation

$$2\log\left(\frac{2 + \sqrt{t^2+4}}{t}\right) - \sqrt{t^2+4} = 0, \tag{1.8}$$

numerically given by $t_c \approx 1.32549$. In [25], strong asymptotic formulas for $p_n^t$ in the complex plane and asymptotic formulas for the recurrence coefficients were given as $n \to \infty$ with $t < t_c$. Moreover, the curve $\gamma_m(t)$ can be defined as the trajectory of a quadratic differential which connects $-1$ and $1$, as shown in [25, Section 3.2]. These results all followed in a standard way from the nonlinear steepest descent analysis of the Riemann-Hilbert problem for these polynomials. The analysis of the varying-weight Kissing polynomials for $t > t_c$ was undertaken in [22]. Again using the Riemann-Hilbert approach for these polynomials, the authors were able to show that there exist analytic arcs $\gamma_{m,0}(t)$ and $\gamma_{m,1}(t)$ such that the zeros of the varying-weight Kissing polynomials accumulate on $\gamma_{m,0} \cup \gamma_{m,1}$ as $n \to \infty$. We restate one of the main results of [22] below.

Theorem 1.1 ([22]). Let $t > t_c$. There exist two analytic arcs, $\gamma_{m,0}(t)$ and $\gamma_{m,1}(t)$, that are trajectories of the quadratic differential (1.9).
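For reference, the critical value $t_c$ solving (1.8) can be computed in a few lines; the sketch below uses simple bisection (the bracket $[1,2]$ is our own choice, valid since the left-hand side of (1.8) changes sign there).

```python
import math

def critical_time(tol=1e-12):
    """Solve 2*log((2 + sqrt(t^2+4))/t) - sqrt(t^2+4) = 0, cf. (1.8), by bisection."""
    F = lambda t: 2.0 * math.log((2.0 + math.sqrt(t*t + 4.0)) / t) - math.sqrt(t*t + 4.0)
    lo, hi = 1.0, 2.0            # F(1) > 0 > F(2), so a root lies in (1, 2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) > 0:           # F is decreasing on this bracket
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_c = critical_time()
assert abs(t_c - 1.32549) < 1e-5   # matches the value quoted in the text
```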
Remark 1.2. The $h$ function described above is given by $h(z;s) = -2\phi(z) + i\kappa$ in the notation of [22], where $\kappa \in \mathbb{R}$ is a real constant of integration. Moreover, the quadratic differential listed above differs from that of [22] by a factor of 4; for more details, we refer the reader to Sections 4 and 5 of [22]. We also note that if we let $\lambda_0 = 2i/t$ in (1.14), we recover the quadratic differential displayed above. Finally, we point out that a further continuation of the work in [22,25] was carried out in [10], where varying-weight Kissing polynomials with a Jacobi type weight were considered.
Another natural generalization of the works [22,25] is to allow $t$ to take on complex values. That is, instead of considering the polynomials defined in (1.7) with $t \in \mathbb{R}^+ \setminus \{t_c\}$, we consider the monic polynomials defined in (1.1), where $f(z;s) = sz$ and $s \in \mathbb{C}$ is arbitrary. As stated at the beginning of this introduction, these polynomials are investigated throughout this work, and we particularly concern ourselves with the asymptotics of their recurrence coefficients as $n \to \infty$.

Statement of Main Results
In this section, we discuss the necessary background on non-Hermitian orthogonality and state our main findings.
We first note that as everything in the integrand of (1.1) is analytic, Cauchy's Theorem gives us complete freedom to choose a contour connecting $-1$ and $1$ to integrate over. However, in light of the asymptotic behavior of the zeros of the polynomials as $n \to \infty$, it is expected that there exists a "correct" contour over which to take the integration in (1.1). This contour should be the one on which the zeros of $p_n$ accumulate as $n \to \infty$. The study of this intuitive notion of the "correct" contour goes back to the works of Gonchar, Rakhmanov, and Stahl, and is commonly referred to as the GRS program. Rather than applying this theory to the analysis of non-Hermitian orthogonal polynomials via the GRS program, we adopt an alternate viewpoint based on deformation techniques born from advances in the theory of random matrices and integrable systems. We will make heavy use of the technique known as continuation in parameter space, first developed in the context of integrable systems (c.f. [40,56,57]), but which has only recently been applied in the field of orthogonal polynomials [12,15,16]. In contrast to the GRS program, where one constructs a so-called $g$-function as a solution to a certain variational problem, one now constructs a scalar function which solves a certain Riemann-Hilbert problem, which we call the $h$-function or modified external field.
We quickly note that as the weight function we consider, exp(−nf (z; s)), depends on the parameter s, the scalar Riemann-Hilbert problem also depends on the parameter s. Importantly, the number of arcs over which this Riemann-Hilbert problem is posed, or equivalently the genus of the underlying Riemann surface, is also to be determined. Indeed, we will see that h-functions corresponding to Riemann surfaces of different genus lead to asymptotic expansions which possess markedly different behavior as n → ∞. This difference is analogous to the difference in asymptotic behavior of the polynomials (and their recurrence coefficients) in the one cut and two cut cases, as described above for the GRS program. However, once one proves that for a specified genus and corresponding s ∈ C the scalar problem has a solution, one may continue with the process of steepest descent as will be outlined in Section 3 below. We will see that the h-functions constructed in (1.10) and (1.15) are the desired h-functions corresponding to genus 0 and 1 regimes, respectively, when s ∈ iR − . In order to establish the global phase portrait for all s ∈ C, we deform these solutions off of the imaginary axis using the technique of continuation in parameter space discussed above. During this deformation process, we will encounter curves in the parameter space which separate regions of different genera. These curves in parameter space are called breaking curves and we denote the set of breaking curves, along with their endpoints, as B. For our purposes, breaking curves can only originate and terminate at what are called critical breaking points, and we will see that the only critical breaking points we encounter in the present work are s = ±2. The description of the breaking curves in the parameter space forms our first main result.
Theorem 2.1. The set $B$ consists of the breaking curves $b_{-\infty}$, $b_-$, $b_+$, and $b_\infty$, together with their endpoints. Here, $b_{-\infty} = (-\infty, -2)$ and $b_\infty = (2, \infty)$. The breaking curve $b_+$ connects $-2$ and $2$ while remaining in the upper half plane, and the breaking curve $b_-$ is obtained by reflecting $b_+$ about the real axis.
As seen in Figure 3, the set $B$ divides the parameter space into three simply connected components: $G_0$ and $G_1^{\pm}$. We will see that the region $G_0$ corresponds to the genus 0 region, whereas the regions $G_1^{\pm}$ correspond to the genus 1 regions.

Figure 3. Definitions of the regions $G_0$ and $G_1^{\pm}$ in the $s$-plane. The set $B$ is drawn in bold. The regular breaking points $\pm it_c$ are indicated on the breaking curves $b_{\pm}$, where we recall that $t_c$ was defined in (1.8).
Having determined the description of the set $B$, we will be able to deduce asymptotic formulas for the recurrence coefficients of the orthogonal polynomials defined in (1.1) for all $s \in \mathbb{C} \setminus B$ via deformation techniques. We quickly digress to discuss notation before stating these results. We first introduce monic polynomials $p_n^N(z;s)$ which satisfy the following orthogonality conditions:

$$\int_{-1}^{1} p_n^N(z;s)\, z^k\, e^{-N f(z;s)}\, dz = 0, \qquad k = 0, 1, \ldots, n-1, \tag{2.1}$$

where $N$ is a fixed integer. Note that for each $N \in \mathbb{N}$, we have a family of polynomials $\{p_n^N(z;s)\}_{n=0}^{\infty}$. The polynomials that we consider in (1.1) are given by $p_n(z;s) = p_n^n(z;s)$; that is, we consider the polynomials along the diagonal where $N = n$. Now, provided the polynomials exist for the appropriate values of $n$, $N$, and $s$, they satisfy the following three term recurrence relation:

$$z\, p_n^N(z;s) = p_{n+1}^N(z;s) + \alpha_n^N(s)\, p_n^N(z;s) + \beta_n^N(s)\, p_{n-1}^N(z;s). \tag{2.2}$$

In the present work, we concern ourselves with the situation $N = n$, and for the sake of notation we set $\alpha_n := \alpha_n^n$ and $\beta_n := \beta_n^n$. It should be stressed that the diagonal polynomials $p_{n-1}$, $p_n$ and $p_{n+1}$ do not satisfy the recurrence relation (2.2), since the weight in (2.1) changes with $N$. We now state our second result, on the asymptotics of the recurrence coefficients in the region $G_0$.

Theorem 2.2. Let $s \in G_0$. Then the recurrence coefficients $\alpha_n$ and $\beta_n$ exist for large enough $n$, and they satisfy explicit asymptotic formulas as $n \to \infty$; the proof is given in Section 4.5.
As mentioned above, for $s \in G_1^{\pm}$, the underlying Riemann surface has genus 1. Indeed, the Riemann surface corresponds to the algebraic equation $\xi^2 = Q(z;s)$, where $Q$ is a rational function, and we take the branch cuts for the Riemann surface on two arcs: one connecting $1$ to $\lambda_0(s)$, labeled $\gamma_{m,0}$, and the other connecting $-1$ to $\lambda_1(s)$, labeled $\gamma_{m,1}$, where $\lambda_0$ and $\lambda_1$ are to be determined. Moreover, for $s \in G_1^{\pm}$, the asymptotics of the recurrence coefficients will depend on theta functions on our Riemann surface. These theta functions will be used to construct functions $M_1(z,k)$ and $M_2(z,k)$, along with a constant $d$, whose precise descriptions we provide in Section 3.5. These functions have jumps across $\gamma_{c,1}$, where $\gamma_{c,1}$ is a yet to be determined curve connecting $\lambda_0(s)$ to $\lambda_1(s)$, and have at most one simple zero there. Furthermore, for $N = n$ and given $\varepsilon > 0$, we define $N(s,\varepsilon)$ to be a sequence of indices $n$ such that $M_{1,n}(z,d)$, $M_{2,n}(z,d)$ are non-vanishing in $\{z : |z| > 1/\varepsilon\}$. To make use of these functions in our asymptotic analysis, we need to know that the cardinality of the set $N(s,\varepsilon)$ is infinite. This is asserted in the following result:

Lemma 2.3. For all $n \geq 1$ and $\varepsilon > 0$ small enough, if $n \notin N(s,\varepsilon)$, then $n+1 \in N(s,\varepsilon)$.
These functions $M_{1,n}(z,d)$ and $M_{2,n}(z,d)$ arise in the asymptotics of the recurrence coefficients for $s \in G_1^{\pm}$, which we state below.

Theorem 2.4. Let $s \in G_1^{\pm}$ and $n \in N(s,\varepsilon)$. Then the recurrence coefficients $\alpha_n$ and $\beta_n$ exist for large enough $n$, and they satisfy, as $n \to \infty$, asymptotic formulas whose leading terms involve the functions $M_{1,n}$ and $M_{2,n}$; in particular, the leading term of $\beta_n(s)$ is

$$\frac{(2 + \lambda_0(s) - \lambda_1(s))^2}{16}.$$
Above, the notation $f(n) = O_\varepsilon(1/n)$ indicates that there exists a constant depending only on $\varepsilon$, $M = M(\varepsilon)$, such that $|f(n)| \leq M/n$ for large enough $n$. We recall that the parameter $\varepsilon$ is used to define the set of valid indices, $N(s,\varepsilon)$, along which we take limits. Having determined the asymptotics of the recurrence coefficients of the polynomials in (1.1) when $s \in \mathbb{C} \setminus B$, our final two results recover these asymptotics when $s \in B$.
As seen in Theorem 2.1, the breaking curves $b_{-\infty}$ and $b_\infty$ are the intervals $(-\infty,-2)$ and $(2,\infty)$, respectively. The theory of orthogonal polynomials with respect to real weights, varying or otherwise, has been written about extensively in the literature, most notably from the viewpoint of potential theory. In particular, the results of Deift, Kriecherbauer, and McLaughlin in [28] can be applied in conjunction with the GRS program to show that the empirical zero counting measure of the polynomials in (1.1) converges to a continuous measure supported on the interval $[-1,1]$ as $n \to \infty$, when $s \in \mathbb{R}$ and $|s| < 2$. The results of [28] can also be used to show that the corresponding limiting measure is supported on $[-1, a)$ for some $a < 1$ when $s > 2$. Similarly, one also has that this measure is supported on $(b, 1]$ for some $b > -1$ when $s \in \mathbb{R}$ is such that $s < -2$. The difference in the support of the limiting measure when $|s| > 2$ and $|s| < 2$ is also of interest in random matrix theory, and occurs when the soft edge meets the hard edge (see the work of Claeys and Kuijlaars [24]). From the viewpoint of the present work, these transitions at $s = \pm 2$ can be seen to come from the fact that these are critical breaking points.
As the case where $s \in \mathbb{R} \cap B$ has been extensively studied, we next consider the asymptotic behavior of the recurrence coefficients as we approach a regular breaking point which is not on the real line. More precisely, we let $s_*$ be a regular breaking point in $B \setminus ((-\infty,-2] \cup [2,\infty))$, and we let $s$ approach $s_*$ as in (2.6), where $L_1 \in \mathbb{C}$ is such that $s = s(n) \in G_0$ for large enough $n$. The scaling limit (2.6) is referred to as the double scaling limit, as it describes the behavior of the polynomials as both $n \to \infty$ and $s \to s_*$. This formulation then leads to the following description of the recurrence coefficients in the aforementioned double scaling limit.
) and let s → s * as described in (2.6). Then the recurrence coefficients exist for large enough n, and they satisfy α n (s) = δ n s 2 + 2s 4 Note above that as κ ∈ R and that the recurrence coefficients decay at a rate of n 1/2 . In particular, the modulus of δ n does not depend on n. Now, we are just left with investigating the behavior of the recurrence coefficients for s near the critical breaking points s = ±2. For brevity, we focus just on the case s = 2, although we note that the case s = −2 can be handled similarly via reflection, as exp(−nf (z; −s)) = exp(−nf (−z; s)). To state our results, we consider the Painlevé II equation [49,Chapter 32]: (2.10) Next, let q = q(w) be the generalized Hastings-McLeod solution to Painlevé II with the parameter α = 1/2, which is characterized by the following asymptotic behavior (2.11) In order to study the asymptotics of the recurrence coefficients as s → 2, we take s in a double scaling regime near this critical point as where we impose that L 2 < 0. This leads us to our final main finding.
Theorem 2.6. Let $s \to 2$ as described in (2.12). Then the recurrence coefficients exist for large enough $n$, and they satisfy asymptotic formulas as $n \to \infty$, where $q$ is the generalized Hastings-McLeod solution to Painlevé II with parameter $\alpha = 1/2$. Furthermore, the function $U(w) = q^2(w) + q'(w)$ is free of poles for $w \in \mathbb{R}$.
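Computing the generalized Hastings-McLeod solution itself requires care, since it is singled out by the boundary behavior (2.11). However, the structure of Painlevé II with $\alpha = 1/2$ can be checked numerically: the Riccati equation $q' = q^2 + w/2$ reduces (2.10) with $\alpha = 1/2$ to a linear equation of Airy type. The sketch below is our own illustration; the particular Airy-type solution shown is not claimed to be the Hastings-McLeod solution.

```python
import numpy as np
from scipy.special import airy

def airy_riccati_solution(w):
    """q = -u'/u with u(w) = Ai(c*w), c = -2^{-1/3}, so that u'' = -(w/2) u.
    Then q' = q^2 + w/2, and differentiating once more gives
    q'' = 2q^3 + w q + 1/2, i.e. q solves (2.10) with alpha = 1/2."""
    c = -2.0 ** (-1.0 / 3.0)
    ai, aip, _, _ = airy(c * w)
    return -c * aip / ai          # u'(w) = c * Ai'(c*w)

# Verify the Painleve II equation by finite differences on a grid where Ai != 0
w = np.linspace(-3.0, 0.5, 2001)
h = w[1] - w[0]
q = airy_riccati_solution(w)
q_dd = (q[2:] - 2*q[1:-1] + q[:-2]) / h**2            # numerical q''
residual = q_dd - (2*q[1:-1]**3 + w[1:-1]*q[1:-1] + 0.5)
assert np.max(np.abs(residual)) < 1e-4
```

The same Riccati structure explains why $U(w) = q^2(w) + q'(w)$ is natural here: for any solution of the Riccati equation, $U(w) = 2q^2(w) + w/2$ is smooth wherever $q$ is.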
Plots of the recurrence coefficients are given in Figures 4 and 5, and should be compared with Theorems 2.2 and 2.6 (panel (a) of Figure 4 shows $\alpha_{2n}(i)$; cf. [58, §5.2]). In Figure 5, we have used the fact from [21] that $\beta_n(s) \in \mathbb{R}$ and $\alpha_n(s) \in i\mathbb{R}$ when $s \in i\mathbb{R}$. Moreover, it was also shown in [21] that for fixed $n$, $\beta_{2n+1}(it)$ will have poles (as a function of $t$) for $t \in \mathbb{R}$. As such, we have plotted $\beta_{2n+1}$ on a log scale in Figure 5d. Once the recurrence coefficients $\alpha_n(s)$ and $\beta_n(s)$ have been computed up to $N = n = 50$, we assemble the Jacobi matrix for the orthogonal polynomials and calculate its eigenvalues, which correspond to the zeros of $p_{50}(z;s)$, as explained for instance in [23,35]. Calculations have been done in Maple, using an extended precision of 100 digits.
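The Jacobi-matrix computation described above can be sketched as follows. For illustration (our own, in double precision rather than the 100-digit Maple arithmetic used in the paper) we take the classical case of the Legendre weight, where the recurrence coefficients are known in closed form and the zeros are the Gauss-Legendre nodes.

```python
import numpy as np

def zeros_from_recurrence(alphas, betas):
    """Zeros of the monic orthogonal polynomial p_n as eigenvalues of the
    (possibly complex, non-symmetric) tridiagonal Jacobi matrix built from
    the three term recurrence z p_k = p_{k+1} + alpha_k p_k + beta_k p_{k-1}."""
    n = len(alphas)
    J = np.diag(alphas).astype(complex)
    for k in range(1, n):
        J[k, k-1] = betas[k]      # subdiagonal: beta_1, ..., beta_{n-1}
        J[k-1, k] = 1.0           # superdiagonal: monic normalization
    return np.linalg.eigvals(J)

# Sanity check with the Legendre coefficients: alpha_k = 0,
# beta_k = k^2/(4k^2 - 1); the zeros of p_10 are the Gauss-Legendre nodes.
n = 10
alphas = np.zeros(n)
betas = np.array([0.0] + [k**2 / (4*k**2 - 1) for k in range(1, n)])
zeros = np.sort_complex(zeros_from_recurrence(alphas, betas))
nodes, _ = np.polynomial.legendre.leggauss(n)
assert np.allclose(zeros.real, np.sort(nodes), atol=1e-10)
assert np.allclose(zeros.imag, 0.0, atol=1e-10)
```

For complex $s$, the same matrix is non-Hermitian and its eigenvalues trace out the complex zero trajectories; high-precision arithmetic then becomes important, as the problem can be ill-conditioned.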
Overview of Paper. The outline of the present paper is as follows. In Section 3, we provide a review on the Riemann-Hilbert problem for the orthogonal polynomials and the method of nonlinear steepest descent pioneered by Deift and Zhou in the early 1990s. In particular, we show how the existence of a suitable h-function can be used to obtain strong asymptotics of the polynomials throughout the complex plane and asymptotics of the recurrence coefficients. Moreover, we also prove Lemma 2.3 when discussing solutions to the global parametrix in Section 3.5.2.
In Section 4, we implement the technique of continuation in parameter space to obtain the desired h-function for s ∈ C \ B. In this section, we prove Theorems 2.1, 2.2, and 2.4.
In Section 5, we consider the double scaling limit as $s$ approaches a regular breaking point and prove Theorem 2.5. Finally, in Section 6, we complete our analysis by investigating the double scaling limit as $s \to 2$ via (2.12), and we end the paper with a proof of Theorem 2.6.

Acknowledgements. The first author is a PhD student at the University of Cambridge, and he is thankful for his current support by the Cantab Capital Institute for the Mathematics of Information and the Cambridge Centre for Analysis. A. D. gratefully acknowledges financial support from EPSRC, grant EP/P026532/1, Painlevé equations: analytical properties and numerical computation.

The Riemann-Hilbert Problem and Overview of Steepest Descent
The formulation of the orthogonal polynomials as a solution to a Riemann-Hilbert problem was first given by Fokas, Its, and Kitaev in the early 1990s [34]. This formulation became even more powerful in the late 1990s due to the development of the nonlinear steepest descent method to obtain asymptotic solutions to Riemann-Hilbert problems, developed by Deift and Zhou [29,30,31]. In this section, we review the Riemann-Hilbert problem and nonlinear steepest descent as it relates to the polynomials defined in (1.1). We refer the reader to the works [17,27] for more details on these issues.
3.1. The Riemann-Hilbert Problem and The Modified External Field. Given a smooth curve $\Sigma$ connecting $-1$ to $1$ in $\mathbb{C}$, oriented from $-1$ to $1$, consider the following Riemann-Hilbert problem: find a $2 \times 2$ matrix-valued function $Y$, analytic in $\mathbb{C} \setminus \Sigma$, satisfying

$$Y_+(z) = Y_-(z) \begin{pmatrix} 1 & e^{-Nf(z;s)} \\ 0 & 1 \end{pmatrix}, \quad z \in \Sigma, \qquad Y(z) = \left(I + O(1/z)\right) z^{n\sigma_3}, \quad z \to \infty. \tag{3.1}$$

Above, $\sigma_3$ is the Pauli matrix given by $\sigma_3 = \mathrm{diag}(1,-1)$. For notational convenience, we drop the dependence of the above RHP and its solution on $s$ and $n$. We also define $\kappa_{n,N}$ as the normalizing constant for $p_n^N$, obtained via (3.2). The existence of $Y$ is equivalent to the existence of the monic orthogonal polynomial $p_n^N$ defined in (2.1), of degree exactly $n$, and, furthermore, if $\kappa_{n-1,N}$ is finite and non-zero, then $Y$ is explicitly given by

$$Y(z) = \begin{pmatrix} p_n^N(z) & C\big(p_n^N e^{-Nf}\big)(z) \\ -2\pi i\, \kappa_{n-1,N}^2\, p_{n-1}^N(z) & -2\pi i\, \kappa_{n-1,N}^2\, C\big(p_{n-1}^N e^{-Nf}\big)(z) \end{pmatrix}. \tag{3.3}$$
We recall that throughout the present analysis we take $N = n$, and we also drop the dependence of $Y$ on $N$ for notational convenience. In (3.3), $Cg$ denotes the Cauchy transform of the function $g$ along $\Sigma$, i.e.

$$Cg(z) = \frac{1}{2\pi i} \int_{\Sigma} \frac{g(\zeta)}{\zeta - z}\, d\zeta,$$
which is analytic in $\mathbb{C} \setminus \Sigma$. The Deift-Zhou method of nonlinear steepest descent is a powerful method of determining asymptotics of solutions to these types of Riemann-Hilbert problems, and as such, we can use it to determine asymptotics of the polynomials $p_n$ and related quantities. The first transformation requires the existence of a modified external field, or $h$-function, whose properties we describe below. First, we define $\gamma_{c,0} := (-\infty,-1]$ and set $\Omega = \Omega(s) = \gamma_{c,0} \cup \Sigma$. Next we partition $\Omega$ into two subsets as $\Omega = M \cup C$, where the arcs in $M$ are called main arcs and those in $C$ are called complementary arcs. Once this partitioning has been completed, we may define a hyperelliptic Riemann surface $R = R(s)$ whose branch cuts are precisely the main arcs in $M$ and whose branch points form the set $\Lambda = \Lambda(s)$. If the genus of $R$ is $L$, we may write $M = \bigcup_{j=0}^{L} \gamma_{m,j}$ and $C = \bigcup_{j=0}^{L} \gamma_{c,j}$. Moreover, when we refer to the genus of $h$, we are referring to the genus of the associated Riemann surface. Finally, we impose that all arcs in $\Omega$ are bounded, aside from the one unbounded complementary arc $\gamma_{c,0}$.
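As a quick numerical illustration of the Cauchy transform (our own sketch, taking $\Sigma = [-1,1]$ for simplicity), the transform of $g \equiv 1$ has the closed form $\frac{1}{2\pi i}\log\frac{z-1}{z+1}$ off the cut, which a quadrature discretization reproduces to machine precision for $z$ away from $[-1,1]$:

```python
import numpy as np

def cauchy_transform(g, z, quad_pts=200):
    """Cg(z) = (1/(2 pi i)) * int_{-1}^{1} g(t)/(t - z) dt, for z off [-1,1],
    discretized with Gauss-Legendre quadrature (valid since the integrand
    is smooth on the contour when z is away from it)."""
    x, w = np.polynomial.legendre.leggauss(quad_pts)
    return np.sum(w * g(x) / (x - z)) / (2j * np.pi)

z = 2.0 + 1.0j
exact = np.log((z - 1) / (z + 1)) / (2j * np.pi)
approx = cauchy_transform(lambda t: np.ones_like(t), z)
assert abs(approx - exact) < 1e-12
```

The geometric convergence of Gauss-Legendre for integrands analytic near the contour makes this a reliable check, though it degrades as $z$ approaches the cut.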
The question remains: how do we partition $\Sigma$ and choose the arcs in $M$ and $C$? Equivalently, we may ask: how do we choose the appropriate genus $L$? In order to make the appropriate first transformation to (3.1) and begin the process of steepest descent, we must first construct a function $h$ that satisfies both a scalar Riemann-Hilbert problem on $\Omega$ and certain inequalities, to be described below. Therefore, the arcs in $M$ and $C$, and also the genus $L$, are chosen so that we can construct a suitable $h$-function. The modified external field must satisfy the scalar Riemann-Hilbert problem (3.4). Here, we impose that $\omega_L = 0$ and $\eta_0 = 1$; the remaining real constants $\eta_i$ and $\omega_j$ (which depend only on $s$) can be chosen arbitrarily to satisfy (3.4). Furthermore, the constant $\ell = \ell(s)$ depends only on the parameter $s$.
Remark 3.1. Given any genus L and arbitrary constants , η j , ω j ∈ R, there is no guarantee that a solution to (3.4) even exists. However, if such a solution does exist, it will be unique.
Remark 3.2. Provided we are able to construct a solution to (3.4), we define the Riemann surface $R$ to be the two-sheeted genus $L$ Riemann surface associated to an algebraic equation of the form $\xi^2 = M^2(z) \prod_{\lambda \in \Lambda} (z - \lambda)$, where $M$ is a polynomial of degree $1 - L$, chosen so that $h$ possesses the correct asymptotics at infinity. The branch cuts of $R$ are taken along $\gamma_{m,j}$, $j = 0, \ldots, L$, and the top sheet is fixed by the behavior of $\xi$ as $z \to \infty$. In addition to solving the above scalar Riemann-Hilbert problem, $h$ must also satisfy the inequalities (3.8). If $s \in \mathbb{C}$ is such that we can construct a function $h(z;s)$ satisfying both (3.4) and (3.8), we call $s$ a regular point. Now, provided $s$ is a regular point, we may proceed with the process of nonlinear steepest descent as follows.

3.2. Overview of Deift-Zhou Nonlinear Steepest Descent. The first transformation of steepest descent aims to normalize the Riemann-Hilbert problem at infinity. To do so, we define the transformed matrix $T$ via (3.9), where we recall that $\ell \in \mathbb{R}$ is defined by (3.4d) and $f(z;s) = sz$. By making this transformation, we see that $T$ satisfies a new Riemann-Hilbert problem posed on $\Omega$. Note that this Riemann-Hilbert problem also depends on $s$, but we have again dropped this dependence for notational convenience. We also remark that (3.4c) and (3.8b) imply that $\operatorname{Re} h(z) = 0$ for $z \in M$. As $M$ is part of the zero level set of $\operatorname{Re} h$, the jump matrix for $T$ has highly oscillatory diagonal entries when $z \in M$. Furthermore, if $z \in C \setminus \gamma_{c,0}$, the diagonal entries of the jump matrix will be constant and purely imaginary. Moreover, the $(1,2)$-entry of the jump matrix will decay exponentially quickly to $0$ by (3.8a). The next transformation of the steepest descent process deforms the jump contours so that the highly oscillatory entries of the jump matrix decay exponentially quickly, and is referred to as the opening of lenses.

Figure 6. The contour $\widehat{\Sigma}$ after opening lenses in the case $L = 1$.
The opening of lenses relies on a factorization of the jump matrix across a main arc, given in (3.11). For each $j$, we define $\gamma_{m,j}^{+}$ ($\gamma_{m,j}^{-}$) to be an arc which starts and ends at the endpoints of $\gamma_{m,j}$ and remains entirely on the $+$($-$) side of $\gamma_{m,j}$. For now we do not impose any restrictions on the precise description of these arcs, but we enforce that they remain in the region where $\operatorname{Re} h > 0$, which is possible due to (3.8b). We define $L_j^{\pm}$ to be the regions bounded between the arcs $\gamma_{m,j}$ and $\gamma_{m,j}^{\pm}$, respectively, and set $\widehat{\Sigma} := \Sigma \cup \bigcup_{j=0}^{L} \big(\gamma_{m,j}^{+} \cup \gamma_{m,j}^{-}\big)$, as in Figure 6. We can now define the next transformation of the steepest descent process, $S$, which is obtained from $T$ by multiplying by the appropriate triangular factors from (3.11) in the lens regions $L_j^{\pm}$, and by setting $S(z) = T(z)$ otherwise (3.12). We then have that $S$ solves a Riemann-Hilbert problem on $\widehat{\Sigma}$. Across the lens boundaries $\gamma_{m,j}^{\pm}$, the jump matrix decays exponentially quickly to the identity as $n \to \infty$, due to (3.8b). As $S = T$ outside of the lenses, we see that there are no changes to the jump matrix across a complementary arc, which again tends exponentially quickly to a diagonal matrix as $n \to \infty$. Finally, over $\gamma_{m,j}$, the jump matrix takes the form resulting from the factorization (3.11). Now consider the model Riemann-Hilbert problem (3.17) for the global parametrix, $M$, which is obtained by neglecting those entries in the jump matrix which are exponentially close to the identity in the Riemann-Hilbert problem for $S$. Assuming we are able to solve the model Riemann-Hilbert problem, we would like to make the final transformation by setting $R = SM^{-1}$; however, this will turn out not to be valid near the endpoints. As such, we will need a more refined local analysis near these points. More precisely, we will solve the Riemann-Hilbert problem for $S$ exactly near these points, and impose further that it matches with the global parametrix as $n \to \infty$. Therefore, define $D_\lambda = D_\delta(\lambda)$ to be discs of fixed radius $\delta$ around each endpoint $\lambda \in \Lambda$.
For each $\lambda \in \Lambda$, we seek a local parametrix $P^{(\lambda)}$, dependent on $n$, which has the same jumps as $S$ within $D_\delta(\lambda)$ and matches the global parametrix on $\partial D_\delta(\lambda)$ as $n \to \infty$. We also require that $P^{(\lambda)}$ has a continuous extension to $D_\delta(\lambda) \setminus \widehat{\Sigma}$ and remains bounded as $z \to \lambda$. The construction of both the global and local parametrices is now standard, but is included below for completeness. Near the hard edges at $\pm 1$, the local parametrix can be constructed with the help of Bessel functions. Near the soft edges, if any, the local parametrices can be constructed using Airy functions.

Figure 7. The contour $\Sigma_R$ in the case $L = 1$. Note that we have chosen the contours $\partial D_\lambda$ to have clockwise orientation.

3.3. Small Norm Riemann-Hilbert Problems.
We may complete the process of nonlinear steepest descent by defining the final transformation $R(z) := S(z) \big(P^{(\lambda)}(z)\big)^{-1}$ for $z \in D_\delta(\lambda)$, $\lambda \in \Lambda$, and $R(z) := S(z) M(z)^{-1}$ otherwise. Provided we were able to appropriately construct both the local and global parametrices, the matrix $R$ will satisfy a "small norm" Riemann-Hilbert problem on a new contour, $\Sigma_R$, whose jumps decay to the identity in the appropriate sense. The contour $\Sigma_R$ will consist of the oriented arcs forming the boundaries $\partial D_\lambda$ about each $\lambda \in \Lambda$ and the portions of $\gamma_{m,L}^{\pm}$ which are not in the interior of the discs $D_\lambda$, as illustrated in Figure 7 for the genus $L = 1$ case. Moreover, the jump matrix $j_R(z)$ will satisfy $j_R(z) = I + O(1/n)$ uniformly on the circles $\partial D_\lambda$ and $j_R(z) = I + O(e^{-cn})$ on the remainder of $\Sigma_R$, for some $c > 0$, with uniform error terms. In particular, we may write the jump matrix as

$$j_R(z) = I + \frac{\Delta_1(z)}{n} + \frac{\Delta_2(z)}{n^2} + \cdots. \tag{3.21}$$

By [30, Theorem 7.10], this behavior then implies that $R$ has an asymptotic expansion of the form

$$R(z) = I + \frac{R_1(z)}{n} + \frac{R_2(z)}{n^2} + \cdots, \tag{3.22}$$

where the $\Delta_j$ are given by (3.21). Therefore, if we are able to determine the $\Delta_k$ in (3.21), we will be able to sequentially solve for the $R_k$ in the expansion for $R$ in (3.22) via the Riemann-Hilbert problem (3.23).

3.4. Unwinding the Transformations. The process of retracing the steps of Deift-Zhou steepest descent to obtain uniform asymptotics of the orthogonal polynomials in the plane is now standard.
Of particular interest to us is to obtain the asymptotics of the recurrence coefficients, but we briefly state how to recover asymptotics of the polynomials throughout the complex plane. First, we consider asymptotics of the polynomials for $z \in \mathbb{C} \setminus M$. Unwinding the transformations away from the lenses expresses $Y$ in terms of $R(z)$ and the appropriate global parametrix $M(z)$. By (3.3), we know the orthogonal polynomial is given by the $(1,1)$ entry of $Y$, which yields the representation (3.25). Similarly, we may retrace the steps of steepest descent in the upper (lower) lenses, but away from the endpoints in $\Lambda(s)$, to obtain (3.27), where we use the $+$($-$) sign in the upper (lower) lip of the lens. Note that in both (3.25) and (3.27), we are able to express the polynomial $p_n$ in terms of the global parametrix $M$ and the matrix $R(z)$. The knowledge that $R$ decays to the identity, along with our explicit solution of the global parametrix, is enough to determine the outer asymptotics of the polynomials as $n \to \infty$. A similar procedure can be used to recover the asymptotic expansion of the recurrence coefficients.
We recall that the three term recurrence relation is given by

$$z\, p_n^n(z) = p_{n+1}^n(z) + \alpha_n\, p_n^n(z) + \beta_n\, p_{n-1}^n(z).$$

To state the recurrence coefficients in terms of $Y$, we first note from (3.1) that we may write

$$Y(z)\, z^{-n\sigma_3} = I + \frac{Y_1}{z} + \frac{Y_2}{z^2} + O\!\left(\frac{1}{z^3}\right), \qquad z \to \infty.$$

Then, we may write the recurrence coefficients (c.f. [17]) in terms of the entries of the matrices $Y_1$ and $Y_2$, as in (3.29). As before, we unwind the transformations until we are able to express the recurrence coefficients in terms of the global parametrix and the matrix valued function $R(z)$: we first express the expansion of $Y$ in terms of $T$, then use (3.9) to compute the analogous expansion after the first transformation, from which it follows that (3.29) can be written in terms of the global parametrix and $R$. The resulting equation will be the starting point of our analysis in Sections 4.5 and 4.6, where we prove Theorems 2.2 and 2.4, respectively, providing the asymptotics of the recurrence coefficients.
3.5. The Global Parametrix. Below, we give a detailed description of how to solve the model problem (3.17) in the genus 0 and genus 1 cases, which are the only two regimes we encounter for the linear weight under consideration. The arguments below can easily be adapted to cases of higher genus corresponding to other weights, as in [16].
3.5.1. Genus 0. In the case that we are working in the genus 0 regime, Σ = γ_{m,0}(s), where γ_{m,0} is chosen so that we may construct a suitable h-function satisfying both (3.4) and (3.8). The model Riemann-Hilbert problem (3.17) in the genus 0 case takes the following form. We seek M : This can be solved explicitly [17,25] as where ϕ(z) = z + (z² − 1)^{1/2}, with the branch cut taken on γ_{m,0} so that ϕ(z) = 2z + O(1/z) as z → ∞.

3.5.2. Genus 1. In the genus 1 regime, we have that Σ = γ_{m,0}(s) ∪ γ_{c,1}(s) ∪ γ_{m,1}(s), and the set of branchpoints is given by Λ(s) = {−1, 1, λ_0(s), λ_1(s)}, where the arcs and endpoints are chosen so that we may construct a suitable h-function. Now, the model problem (3.17) takes the form We follow the approach of [16,30,57], and solve this problem in four steps. We recall from Remark 3.2 that R is the hyperelliptic Riemann surface associated with the algebraic equation whose branch cuts are taken along γ_{m,0} and γ_{m,1}, and whose top sheet is fixed so that as z → ∞ on the top sheet of R. We form a homology basis on R using the A and B cycles defined in Figure 8. We also recall that, as R is of genus 1, the vector space of holomorphic differentials on R has dimension 1 and is linearly generated by Ω_0.
We then define ω := b Ω_0, with b chosen to normalize ω so that its A-period equals 1; this is condition (3.41). Moreover, if we define τ as the B-period of ω, as in (3.42), it is well known that Im τ > 0, see [32, Chapter III.2].

3.5.3. Step One – Remove Jumps on Complementary Arcs. The first step aims to remove the jumps over the complementary arcs, and we will follow the procedure outlined in [57]. First, we introduce the function Ξ, with branch cuts taken on γ_{m,0} and γ_{m,1} and the branch chosen so that Ξ(z) → z² as z → ∞. Next, define g̃ The constant ∆_0 is chosen so that g̃ is analytic at infinity. More precisely, ∆_0 is defined so that Note that by (3.41) and the definition of ω, it follows that ∆_0 = η_1 τ. It follows that g̃ is bounded near each λ ∈ Λ and satisfies g̃ Then, M_0 solves the following Riemann-Hilbert problem Note that M_0 no longer has any jumps over the complementary arcs.

3.5.4. Step Two – Solve the n = 0 Case. In the case that n = 0, the model problem for M_0 takes the form The solution to (3.49) is well known (see for instance [17]), and is given by with branch cuts on γ_{m,0} and γ_{m,1} and the branch of the root chosen so that It is important to understand the location of the zeros of the entries of M_0^{(0)}(z), as they will play a role later in this construction. Note first that φ²(z) − 1 has a zero at ∞_1 and a simple finite zero; if we denote this finite zero by z_1, then ẑ_1, which denotes the projection of z_1 onto the opposite sheet of R, solves φ²(z) + 1 = 0, so that the pair φ²(z) ∓ 1 contributes one simple zero on each sheet of R.

3.5.5. Step Three – Match the Jumps on M. The next step in the solution is to match the jump conditions (3.48b) and (3.48c). We will do this by constructing two scalar functions, M_1(z, d) and M_2(z, d), satisfying (3.53), where d is a yet to be determined constant that will be chosen to cancel the simple poles of the entries of M_0. If we can construct such functions, then it is immediate from (3.49b) and (3.53) that the resulting matrix L, defined in (3.55), satisfies (3.48b) and (3.48c). We can construct M_1 and M_2 with the help of theta functions on R.
We define the Riemann theta function associated with τ in (3.42) in the standard way The following properties of the theta function follow immediately from (3.56): as Θ is not identically zero, it has a simple zero at ζ = 1/2 + τ/2 mod Λ_τ. Next, we define the Abel map as where we recall that ω was normalized to satisfy (3.41) and the path of integration is taken on the upper sheet of R. By (3.41), we have that u is well defined on C \ M modulo Z. From (3.41) and (3.42) it follows that where again, all the equalities above are taken modulo Z. Next we set where we recall that W = n(ω_0 + ∆_0) and d is yet to be determined. Then, both M_1 and M_2 are single valued on C \ M. Equations (3.57) and (3.59) immediately show that the functions M_1 and M_2 satisfy (3.53), as desired. In the remainder of this section, we will slightly abuse notation and think of the functions φ²(z) and M_{1,2}(z) as functions on R. The latter are multiplicatively multi-valued on R, but one may still consider the order of their zeros and poles in the usual fashion.

3.5.6. Step Four – Choose d and Normalize L. We have now constructed M_1 and M_2 so that L, defined in (3.55), satisfies (3.48b) and (3.48c). We must now choose d so that L is analytic in C \ M, and normalize L so that it tends to the identity as z → ∞. By standard theory [32], for arbitrary d ∈ C, the function Θ(u(z) − d) on R either vanishes identically or vanishes at a single point p_1, counted with multiplicity. Recall that we have defined z_1 to be the unique finite solution to φ(z)² − 1 = 0 and ẑ_1, its projection onto the opposite sheet of R, to be the unique finite solution to φ(z)² + 1 = 0 on R.
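The properties of Θ used here — 1-periodicity, quasi-periodicity in τ, evenness, and the simple zero at ζ = 1/2 + τ/2 — can be checked numerically from the defining series (3.56). The sketch below is purely illustrative: it truncates the series at |m| ≤ 30 and takes the sample modulus τ = i, which is our assumption, not a value arising from the problem:

```python
import numpy as np

def theta(zeta, tau, N=30):
    """Truncated Riemann theta series sum_{|m|<=N} exp(pi i m^2 tau + 2 pi i m zeta);
    converges rapidly whenever Im(tau) > 0."""
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * m**2 * tau + 2j * np.pi * m * zeta))

tau, z = 1j, 0.3 + 0.2j                          # sample modulus and test point
assert abs(theta(z + 1, tau) - theta(z, tau)) < 1e-12            # periodicity
quasi = np.exp(-1j * np.pi * tau - 2j * np.pi * z)
assert abs(theta(z + tau, tau) - quasi * theta(z, tau)) < 1e-10  # quasi-periodicity
assert abs(theta(-z, tau) - theta(z, tau)) < 1e-12               # evenness
assert abs(theta(0.5 + tau / 2, tau)) < 1e-10                    # zero at (1 + tau)/2
```

The quasi-periodicity factor is exactly what makes the ratios of theta functions in M_1 and M_2 single valued after the Abel map is applied.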
We now choose d so that the simple zeros of the denominators of L cancel the zeros of φ ± φ^{−1}. From the remarks immediately following (3.57), this is satisfied if we set d as in (3.61), since Θ(ζ) = 0 when ζ = 1/2 + τ/2 mod Λ_τ. As the theta function is even, we have that which verifies that each entry of L is analytic in C \ M. Now we must normalize L so that it decays to the identity as z → ∞. We first note that we have an alternative formula for d, To see this, we note that φ²(z) − 1 is meromorphic on R with a zero at ∞_1, a simple zero at z_1, and poles at λ_0 and 1. By Abel's Theorem [32, Theorem III.6.3], we have that Using (3.58), along with (3.41) and (3.42), we see that so that (3.63) follows by (3.61).
As L has the same jumps as M 0 in (3.48b) and (3.48c), we can conclude that det L is entire, and as L is bounded at infinity, we have that solves (3.48). The condition Θ(W ) = 0 can be rewritten as Moreover, the solution is given by where L is defined in (3.55).
3.6. The Local Parametrices. Recalling the discussion preceding (3.18), we will need a more detailed local analysis about the endpoints λ ∈ Λ. Although these constructions are now standard, we state them below for completeness. For details we refer the reader to [17,27,30,41].
3.6.1. Soft Edge. In light of (3.4), let λ ∈ Λ be such that h(z) = c(z − λ)^{3/2} + O((z − λ)^{5/2}) as z → λ for some c ≠ 0. We will also make use of the following function, where the path of integration emanates upwards in the complex plane from λ and does not cross Ω(s). Then, where c ≠ 0. There exist real constants K_±^λ such that, in light of (3.4), K_+^λ − K_−^λ = 4πi η_1. We assume λ = λ_0, so that the main arc γ_{m,0} lies to the left of λ and the complementary arc γ_{c,1} lies to the right of λ, where left and right are in reference to the orientation of Σ. The case where the complementary arc leads into λ and the main arc exits λ can be handled similarly, with minor alterations. We want to solve the following Riemann-Hilbert problem in a neighborhood D_{λ_0} of λ_0, where we recall (3.75). We also require that P^{(λ_0)} has a continuous extension to D_{λ_0} \ Σ and remains bounded as z → λ_0. We solve for P^{(λ_0)} by setting P^{(λ_0)}(z) = U^{(λ_0)}(z) e^{−(n/2) h(z) σ_3}, where U^{(λ_0)} satisfies a Riemann-Hilbert problem in D_{λ_0} with jump matrix given in (3.77). The Riemann-Hilbert problem for U^{(λ_0)} can be solved via Airy functions, as in [17,30]. We define the Airy model matrix, where Ai is the Airy function and ω := exp(2πi/3). We next define f_A, so that f_A(z) conformally maps a neighborhood of λ_0 to a neighborhood of 0. Recall that we still have the freedom to choose the precise description of γ_{m,0}^±, so we choose them in D_{λ_0} so that they are mapped to the rays {z : arg z = ±2π/3}, respectively, under the map f_A. Then, we have that the resulting matrix, where E_n^{(λ_0)} is any analytic function, satisfies the jumps given in (3.77). The analytic prefactor E_n^{(λ_0)} is chosen so that the matching condition (3.74c) is satisfied, and is given by where Sectors I, II, III, and IV are defined in Figure 9. In the formulas above, the branch cut for f_A^{1/4} is taken on γ_{m,0} and is the principal branch.
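Underlying the Airy model problem is the connection formula Ai(z) + ω Ai(ωz) + ω² Ai(ω²z) = 0, with ω = exp(2πi/3), which is what makes the cyclic jump conditions of the Airy-built matrix consistent around λ_0. A quick numerical check (our own sketch, not part of the construction in [17,30]):

```python
import numpy as np
from scipy.special import airy

omega = np.exp(2j * np.pi / 3)
Ai = lambda z: airy(z)[0]          # scipy's airy() returns (Ai, Ai', Bi, Bi')

# connection formula behind the consistency of the Airy parametrix's jumps
for z in (0.7, 1.2 - 0.5j, -0.3 + 0.8j):
    total = Ai(z) + omega * Ai(omega * z) + omega**2 * Ai(omega**2 * z)
    assert abs(total) < 1e-10
```

The three rotated solutions Ai(z), Ai(ωz), Ai(ω²z) are recessive in the three Stokes sectors, which is exactly why a different pair is used in each of the regions cut out by the rays arg z = ±2π/3.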
3.6.2. Hard Edge. Now we assume that we are looking at the analysis near z = 1, and we recall that h(z) = O((z − 1)^{1/2}) as z → 1. We will show in the construction of h in Section 4 that where P^{(1)} has a continuous extension to D_1 \ Σ and remains bounded as z → 1, and where the jump matrix j_S in D_1 is given by (3.86). Analogously to the analysis at the soft edge, we define P^{(1)}(z) = U^{(1)}(z) e^{−(n/2) h(z) σ_3}, so that U^{(1)} solves a new Riemann-Hilbert problem in D_1, with jump matrix given by (3.87). Now, U^{(1)} can be written explicitly in terms of Bessel functions, as in [41], and we state this construction below. First set where I_0 is the modified Bessel function of the first kind, K_0 is the modified Bessel function of the second kind, and H_0^{(1)} and H_0^{(2)} are Hankel functions of the first and second kind, respectively. With this in hand, we may define the Bessel parametrix as, for |arg ζ| < 2π/3, where E_n^{(1)} is an analytic prefactor chosen to ensure the matching condition (3.85c). Therefore, we have that where all branch cuts above are again taken to be principal branches. A similar analysis may be conducted around z = −1, and the solution to the local parametrix there is given by
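The unit-determinant property of the Bessel-type model matrix is tied to standard Wronskian relations among I_0, K_0, and the Hankel functions. The following sketch (ours, for illustration) checks the two identities involved, I_0(x)K_1(x) + I_1(x)K_0(x) = 1/x and H_0^{(1)}(x)H_1^{(2)}(x) − H_1^{(1)}(x)H_0^{(2)}(x) = 4i/(πx):

```python
import numpy as np
from scipy.special import i0, i1, k0, k1, hankel1, hankel2

for x in (0.5, 1.7, 3.0):
    # modified Bessel Wronskian (DLMF 10.28.2, using I0' = I1 and K0' = -K1)
    assert abs(i0(x) * k1(x) + i1(x) * k0(x) - 1 / x) < 1e-12
    # Hankel Wronskian (DLMF 10.5.5, using H0' = -H1 for both kinds)
    w = hankel1(0, x) * hankel2(1, x) - hankel1(1, x) * hankel2(0, x)
    assert abs(w - 4j / (np.pi * x)) < 1e-10
```

These relations hold for all x > 0, which is why the determinant of the model matrix is constant in ζ and can be normalized to one.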

The Global Phase Portrait -Continuation in Parameter Space
As seen above, one of the keys to implementing the Deift-Zhou method of nonlinear steepest descent is the existence of the h-function. Fortunately, genus 0 and genus 1 solutions for s ∈ iR have already been established in [25,22], so we can implement the continuation in parameter space technique developed in [16,56,57]. Following this procedure, we will show that, starting from a genus L h-function for s ∈ iR ∩ G_L, we may continue this genus L solution to all s ∈ G_L.
Below, we will first define breaking points and breaking curves. The set of breaking curves, along with their endpoints, will be denoted by B, and we will show that the inequalities (3.8) can only break down as we cross a breaking curve. Next, we provide the basic background on quadratic differentials needed for our analysis. Finally, we recap the previous work on orthogonal polynomials of the form (1.1) with s ∈ iR and show how we may deform these solutions to all s ∈ C \ B. Above, we also impose that the zero of h' is of order at least 1. We call a breaking point critical if either: (i) the saddle point in (4.1) coincides with a branchpoint in Λ(s), or (ii) the order of the zero at the saddle point is greater than one, or there are at least two saddle points of h on Ω counted with multiplicity. If a breaking point s is not a critical breaking point, it is a regular breaking point.
Remark 4.1. Note that h is analytic in C \ M(s). In the above definition of a breaking point, if z_0 ∈ M(s), we mean h'(z_0) = 0 in the following sense. Note that h'_+(z) and h'_-(z) have analytic extensions to a neighborhood of z_0 ∈ M(s). Moreover, in this neighborhood, the two extensions are related via h'_+(z) = −h'_-(z). Therefore, if z_0 is such that h'_+(z_0) = 0 (where we are referring to the extension, so this is well defined), then h'_-(z_0) = 0 as well, and we say h'(z_0) = 0.
We have the following lemma from [16, Lemma 4.3], and we include the proof for convenience. Proof. Writing z = u + iv and s = s_1 + is_2, we may consider (4.1) to be a system of 3 real equations in 4 real unknowns, in the form G(u, v, s_1, s_2) = 0. We may choose either j = 1 or j = 2 so that h_{s_j}(z_0; s_b) ≠ 0. Then, as h'(z_0; s_b) = 0, we may calculate the Jacobian as where we have used the Cauchy-Riemann equations for the second equality above. As h''(z_0; s_b) ≠ 0, since s_b is a regular breaking point, the Implicit Function Theorem completes the proof.
The curves in Lemma 4.1 are defined to be breaking curves. We will see that the breaking curves partition the parameter space so as to separate regions of different genus of the h-function, as they are precisely where the inequalities on h break down. Proof. To see this, first consider the case that Inequality (3.8b) breaks down in a vicinity of z_0, where z_0 is an interior point of a main arc. By definition, Re h(z) = 0 for all interior points z of a main arc, so clearly we must have that Re h(z_0; s_1) = 0. To show that s_1 is a breaking point, it remains to show that h'(z_0; s_1) = 0. To get a contradiction, assume that h'(z_0; s_1) ≠ 0. As h_+ is analytic at z_0 and its derivative does not vanish there, we may write h_+(z) = c + (z − z_0)a(z), where a is analytic and non-vanishing in a neighborhood of z_0 and c is a purely imaginary constant. Therefore, Re h_+(z) does not change sign in close proximity to z_0 on the +-side of the cut, and as h = h_+ there, the real part of h does not change sign on the +-side of the cut in close proximity to z_0. A similar argument applied to h_- shows that the real part of h does not change sign on the −-side of the cut in close proximity to z_0, either. As Re h(z; s(t)) > 0 for all z in close proximity of a main arc for t < 1, we have, by continuity in s and by the constant sign of Re h(z; s_1) in close proximity to z_0, that Re h(z; s_1) > 0 for all z in close proximity to z_0. This is precisely the inequality which we assumed to have broken down, so we have reached the desired contradiction. As such, h'(z_0; s_1) = 0, and s_1 is a breaking point. Going the other way, if h'_±(z_0) = 0, then the real part of h_+ must change sign above/below the cut, which clearly violates Inequality (3.8b).
Next, assume that Inequality (3.8a) breaks down at z_0, where z_0 is an interior point of a complementary arc, γ_c. Given that Re h(z; s(t)) < 0 for all interior points of a complementary arc, we have by continuity that if the inequality breaks down for s_1 at some point z_0, we must have that Re h(z_0; s_1) = 0. We are now left to show that h'(z_0) = 0. To get a contradiction, assume that h'(z_0) ≠ 0. Then there is a zero level curve of Re h(z) passing through z_0 which looks locally like an analytic arc (that is, with no intersections). Furthermore, the sign of Re h(z) is constant on either side of γ_c in close proximity to z_0. By continuity, we have that Re h(z; s_1) < 0 for all interior points z ∈ γ_c \ {z_0}. Therefore, we are able to deform the complementary arc back into the region where Re h(z) < 0 for all z ∈ γ_c, contradicting the assumption that the inequality was violated. Therefore, we must have that h'(z_0; s_1) = 0, and as such s_1 is a breaking point. On the other hand, assume that s_1 is a breaking point. Then, as Re h(z_0; s_1) = 0, we clearly have that the strict inequality (3.8a) is violated at z_0.
Moreover, the condition that h'(z_0) = 0 enforces that we cannot deform the complementary arc so as to fix the inequality.

4.2. Quadratic Differentials.
In this subsection, we review the basic theory of quadratic differentials needed for the subsequent analysis. The theory presented below follows [52,55], and we refer the reader to these works for complete details.
A meromorphic quadratic differential on a Riemann surface R is a second order form on the surface, given locally by the expression ϖ = −f(z) dz², where f is a meromorphic function of the local coordinate z. In particular, if z = z(ζ) is a conformal change of variables, then −f(z(ζ)) (dz/dζ)² dζ² represents ϖ in the local coordinate ζ. In the present context, we may always take the underlying Riemann surface to be the Riemann sphere. Of particular interest to us is the critical graph of a quadratic differential ϖ, which we explain below.
First, we define the critical points of ϖ = −f dz² to be the zeros and poles of −f. The order of a critical point p is the order of the zero or pole, and is denoted by η(p). Zeros and simple poles are called finite critical points; all other critical points are infinite. Any point which is not a critical point is a regular point.
In a neighborhood of any regular point p, the primitive Υ(z) = ∫^z (−f(t))^{1/2} dt is well defined by specifying the branch of the root at p and analytically continuing it along the path of integration. Then, we define an arc γ ⊂ R to be an arc of trajectory of ϖ if it is locally mapped by Υ to a vertical line. Equivalently, for any point p ∈ γ, there exists a neighborhood U where Υ is well defined and, moreover, Re Υ(z) is constant for z ∈ γ ∩ U. A maximal arc of trajectory is called a trajectory of ϖ. Moreover, any trajectory which extends to a finite critical point along one of its directions is called a critical trajectory of ϖ, and the set of critical trajectories of ϖ, along with their limit points, is defined to be the critical graph of ϖ.
To understand the topology of the critical graph of a quadratic differential ϖ, we must study both the local structure of trajectories near finite critical points and the global structure of the critical trajectories. Fortunately, the local behavior near a finite critical point is quite regular. Indeed, from a point p of order η(p) = m ≥ −1 there emanate m + 2 trajectories, at equal angles of 2π/(m + 2) at p. This also includes regular points, which implies that through any regular point passes exactly one trajectory, which is locally an analytic arc. In particular, trajectories may only intersect at critical points.
The global structure of trajectories is more involved, and requires more detailed analysis. In general, a trajectory γ is either (i) a closed curve containing no critical points, (ii) an arc connecting two critical points (which may coincide), or (iii) an arc that has no limit along at least one of its directions.
Trajectories satisfying (iii) are called recurrent trajectories, and their absence in the present work is assured by Jenkins' Three Poles Theorem [50,Theorem 8.5].
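The local structure just described is easy to experiment with numerically. As an illustration (our own, not from the references), take the model quadratic differential ϖ = −z dz², with a simple zero (m = 1) at the origin: m + 2 = 3 critical trajectories leave 0 at equal angles, namely along arg z ∈ {0, ±2π/3}, and along each of them the primitive Υ(z) = ∫ (−t)^{1/2} dt = −(2/3)(−z)^{3/2} has vanishing real part:

```python
import numpy as np

# Model quadratic differential varpi = -z dz^2: a simple zero (order m = 1)
# at the origin, so m + 2 = 3 critical trajectories emanate from 0.
# On a trajectory, Upsilon(z) = -(2/3)(-z)^(3/2) has constant real part,
# and on a critical trajectory that constant is Re Upsilon(0) = 0.
Upsilon = lambda z: -(2 / 3) * (-z + 0j) ** 1.5

r = 0.8
# the three critical trajectories leave z = 0 along arg z = 0, +-2*pi/3
for angle in (0.0, 2 * np.pi / 3, -2 * np.pi / 3):
    assert abs(Upsilon(r * np.exp(1j * angle)).real) < 1e-12
# a ray that is not a trajectory: Re(Upsilon) is bounded away from 0 on it
assert abs(Upsilon(r * np.exp(1j * np.pi / 2)).real) > 0.1
```

The same computation with a double zero (m = 2), as occurs at z = 2/s below, produces four directions at equal angles π/2.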
With the necessary background on quadratic differentials now complete, we will see how their trajectories play a crucial role in the construction of the h-function. The genus 0 regime was studied in [25] and the genus 1 regime was studied in [22]. In both cases, the authors established the structure of the critical graph of the appropriate quadratic differential related to the genus 0 (sub-critical) and genus 1 (super-critical) h-functions.

4.3.1. Genus 0. The case where s = −it and 0 < t < t_0 was studied in [25]. We recall that t_0 was defined as the unique positive solution to We want to show that we may extend the results of [25], using the technique of continuation in parameter space discussed above, to construct a genus 0 h-function which satisfies both (3.4) and (3.8). In order to state some of the results from [25], we first define With this lemma in hand, we take the branch cut of (4.5) on γ_{m,0}(s), with the branch chosen so that The critical graph of ϖ_s is depicted in Figure 11. We see that there are four trajectories emanating from the double zero at z = 2i/t = 2/s, two of which form a loop surrounding the endpoints −1 and 1. We may easily extend this critical graph from the subset of the imaginary axis to all s ∈ G_0. Lemma 4.4. For all s ∈ G_0, there exists a smooth curve γ_{m,0}(s) connecting −1 and 1 which is a trajectory of the quadratic differential ϖ_s.
Proof. Fix some s_0 = −it with 0 < t < t_0 and some s_1 ∈ G_0. The goal is to show that there exists a trajectory of ϖ_{s_1} which connects −1 to 1. As G_0 is simply connected, we may connect s_0 to s_1 with a curve ρ that lies completely within G_0. As we deform s along ρ towards s_1, we note that the topology of the critical graph of ϖ_s will only change if a trajectory emanating from 2/s ever meets γ_{m,0}(s). Assume, for the sake of contradiction, that there existed some s* ∈ ρ for which this occurred. We would then have Re h(z; s*) = 0 for z ∈ γ_{m,0}(s), as it is a trajectory of the quadratic differential ϖ_{s*}. Moreover, we would also have that h'(2/s*; s*) = 0, as 2/s* is a zero of h'(z; s*). In other words, s* would be a breaking point. However, this contradicts the fact that ρ lies completely within G_0, which by definition contains no breaking points in its interior. As such, the topology of the critical graph at s_1 is the same as it was at s_0, and we conclude that there exists a trajectory of ϖ_{s_1} connecting −1 and 1.
In light of the lemma above, we keep the notation γ_{m,0}(s) for the trajectory of ϖ_s which connects −1 and 1, and we define h(z; s) by the integral (4.7), where the path of integration is taken in C \ Ω(s). Proof. It is clear that h is analytic in C \ Ω(s). Next, note that h(z; s) → 0 as z → 1, and Re h(z; s) is constant along γ_{m,0}(s), as it is a trajectory of ϖ_s, so that Re h(z; s) = 0 for z ∈ γ_{m,0}(s). As h'_+ = −h'_- on γ_{m,0}, we have that h_+(z) + h_-(z) = 0 for z ∈ γ_{m,0}, so that h satisfies the appropriate jump over γ_{m,0}. Next, a residue calculation gives us that h_+(z) − h_-(z) = 4πi for z ∈ γ_{c,0}. We can integrate (4.7) directly to yield h(z; s) = 2 log(z + (z² − 1)^{1/2}) − s(z² − 1)^{1/2}, so that h satisfies (3.4d). Finally, it is clear from (4.5) that h(z) = O((z ∓ 1)^{1/2}) as z → ±1, so that the h constructed above satisfies all of the requirements of (3.4).
To see that h(z; s) satisfies (3.8), we note that the inequalities were proven directly in [25] for s = −it with 0 < t < t 0 . By using Lemma 4.2, we see that the inequalities will hold for all s ∈ G 0 , completing the proof.
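The explicit formula above is easy to sanity-check numerically. With the branch (z² − 1)^{1/2} = (z − 1)^{1/2}(z + 1)^{1/2}, cut on [−1, 1] and behaving like z at infinity, one can verify that h vanishes at z = 1, that h'(z) = (2 − sz)/(z² − 1)^{1/2}, and hence that the only saddle point is z = 2/s. A short sketch (ours; the sample value s = −0.5i is of the form s = −it with t = 0.5 assumed to satisfy 0 < t < t_0, so that we are in the genus 0 regime):

```python
import numpy as np

sq = lambda z: np.sqrt(z - 1) * np.sqrt(z + 1)   # (z^2-1)^(1/2), cut on [-1,1], ~ z at infinity
h = lambda z, s: 2 * np.log(z + sq(z)) - s * sq(z)   # genus-0 h-function (4.8)

s = -0.5j            # sample parameter on the imaginary axis (genus 0 regime)
z0 = 2 / s           # predicted saddle point, here z0 = 4i = 2i/t
eps = 1e-6
dh = (h(z0 + eps, s) - h(z0 - eps, s)) / (2 * eps)   # central difference
assert abs(dh) < 1e-6                                # h'(2/s) = 0: a saddle point
assert abs(h(1.0 + 0j, s)) < 1e-12                   # h vanishes at the hard edge z = 1
# the closed form h'(z) = (2 - s z)/(z^2 - 1)^(1/2) at a generic point
z = 2.0 + 1.5j
dh2 = (h(z + eps, s) - h(z - eps, s)) / (2 * eps)
assert abs(dh2 - (2 - s * z) / sq(z)) < 1e-6
```

Since h' vanishes only at z = 2/s, the saddle point moves holomorphically with s, which is the mechanism behind the continuation argument of Lemma 4.4.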
With the genus 0 h-function now constructed explicitly for all s ∈ G 0 , we now turn to the genus 1 case.

4.3.2. Genus 1.
The genus 1 case is slightly more involved, but as before, we will deform the existing solution on the imaginary axis to all other values of s. Therefore, we start by defining h'(z; s) in (4.10), and we now set ϖ_s := −h'(z; s)² dz², where h' is defined in (4.10). It was shown in [22] that for s = −it with t > t_0, there exist trajectories of the quadratic differential ϖ_s connecting −1 to λ_0 and λ_1 to 1. Here, λ_0 and λ_1 satisfy The second condition of (4.11) is known as the Boutroux condition, and its importance will become clear shortly. The critical graph of ϖ_s for s ∈ iR ∩ G_1^-, as proven in [22], is displayed in Figure 12. In this case, the critical graph is symmetric with respect to the imaginary axis, and there exists a trajectory connecting −1 to λ_0 and one connecting λ_1 = −λ̄_0 to 1.
We consider the case s ∈ G_1^-. As in the proof of Lemma 4.4, we note that for any s ∈ G_1^-, there will exist trajectories connecting −1 to λ_0 and λ_1 to 1, which we define to be γ_{m,0}(s) and γ_{m,1}(s). Further, we define γ_{c,1} to be the curve connecting λ_0 to λ_1 along which Re h(z) < 0, whose existence is guaranteed by the definition of G_1^-. Finally, assume for now that we may deform s within G_1^- so as to preserve the conditions (4.11). Then, for s ∈ G_1^- we have Ω(s) = γ_{c,0} ∪ γ_{m,0} ∪ γ_{c,1} ∪ γ_{m,1}, and we define h(z; s) = ∫_1^z h'(u; s) du, (4.13) where the path of integration is taken in C \ Ω(s) and h' is given in (4.10). We now have the following lemma, which shows that the h-function so constructed is the correct one needed for genus 1 asymptotics.
Proof. Again, it is immediate that h is analytic in C \ Ω(s) and has the appropriate endpoint behavior near all endpoints in Λ. Moreover, from the first condition of (4.11), we ensure that h has the correct asymptotics at infinity. The Boutroux condition ensures that we have a purely imaginary jump over γ_{c,1}, and the same residue calculation as in the genus 0 case yields that h_+(z) − h_-(z) = 4πi for z ∈ γ_{c,0}. Finally, as Re h(z) = 0 for z ∈ M, along with h'_+(z) + h'_-(z) = 0 for z ∈ M and the Boutroux condition, we have that h_+ + h_- is purely imaginary on the main arcs γ_{m,0} and γ_{m,1}.
As before, the inequalities (3.8) were established directly in [22] for s = −it with t > t_0, so we may again use Lemma 4.2 to show that the inequalities continue to hold for all s ∈ G_1^-. Therefore, it is left to show that we may deform s while preserving (4.11). Denoting λ_0 = u_0 + iv_0 and λ_1 = u_1 + iv_1, we may write the conditions (4.11) as F(s; u_0, v_0, u_1, v_1) = 0, where F = (f_1, f_2, f_3, f_4) and Note that f_3 and f_4 are equivalent to the Boutroux condition, as any loop on R may be written as a combination of the A and B cycles on R. Taking the Jacobian of the above conditions with respect to the endpoints yields As λ_0 ≠ λ_1, since we are at a regular point, note that is the unique (up to multiplicative constant) holomorphic differential on R. Subtracting the first and second columns from the third and fourth columns, we get that That is, A and B are the A- and B-periods of a holomorphic differential on R, and the determinant is given by det ∇F = Im(ĀB) > 0, (4.20) which follows from Riemann's bilinear inequality. As this determinant is non-zero, we can deform the endpoints continuously in s so as to preserve (4.11), verifying that for all s ∈ G_1^-, we may construct a genus 1 h-function.

4.4. Proof of Theorem 2.1. We recall that the aim of Theorem 2.1 is to verify that Figure 3 is the accurate picture of the set of breaking curves in the parameter space. We recall the statement of the theorem below. As the genus of R(s) is either 0 or 1, the genus must be 0 along a breaking curve. That is, Ω(s) = γ_{c,0} ∪ γ_{m,0}. We have seen in (4.8) that the regular genus 0 h-function is given by (4.21). Remark 4.2. Note that there is one other genus 0 h-function, which occurs when s ∈ R and |s| > 2.
Here, we have that with a cut taken on the real line connecting λ_1 and 1 or λ_2 and −1, depending on the situation. However, neither of these h-functions admits saddle points, so they do not need to be considered when looking for breaking points.
It is clear by looking at (4.21) that the only saddle point is at z_0 = 2/s. As this is a simple zero of h', we see that the only critical breaking points occur when the saddle point coincides with one of the branchpoints in Λ(s). That is, the only critical breaking points are s = ±2. We now have the following simple calculation. Note that this vanishes only for s = ±2, which are critical breaking points, so that the proposition above is true for all regular breaking points.
By Lemma 4.1, the above proposition immediately implies the following, just as in [16, Corollary 6.2].
Corollary 4.8. Breaking curves are smooth, simple curves consisting of regular breaking points (except possibly the endpoints). They do not intersect each other except perhaps at critical breaking points or at infinity. They can originate and end only at critical breaking points and at infinity.

Now, we can indeed verify that the global phase portrait depicted in Figure 3 is the correct picture, proving Theorem 2.1.
To find the breaking curves, we recall that the only saddle point occurs at z_0 = 2/s. Recall also that the only critical breaking points are s = ±2, at which the saddle point collides with the hard edge at ±1, respectively. As h(2/s; s) = O((s ∓ 2)^{3/2}) as s → ±2, we note that 3 breaking curves emanate from each of ±2. Now, if s ∈ R and |s| > 2, then −s(4/s² − 1)^{1/2} is purely imaginary. Furthermore, recall that the map z → z + (z² − 1)^{1/2} sends the interval (−1, 1) to the unit circle. As such, we also have that 2 log(2/s + (4/s² − 1)^{1/2}) is purely imaginary when s ∈ R and |s| > 2, so that Re h(2/s; s) = 0 there. Therefore, the rays (2, ∞) and (−∞, −2) are both breaking curves. Finally, note that the two rays emanating from ±2 towards infinity along the real axis are the only two portions of the breaking curve which intersect at infinity. According to Corollary 4.8, the remaining breaking curves either emanate from ±2 or form closed loops in the s-plane consisting of only regular breaking points. As h(2/s; s) has non-zero real part for s ∈ (−2, 2), we conclude that the remaining breaking curves do not intersect the real axis. Next, note that Re h(2/s; s) is harmonic for s off the real axis, so that off the real axis, there are no closed loops along which Re h(2/s; s) = 0. Therefore, the remaining breaking curves begin and end at ±2. Finally, as h(2/s̄; s̄) is the complex conjugate of h(2/s; s), we see that the breaking curves which connect −2 and 2 are symmetric about the real axis.
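These claims can be confirmed numerically from the explicit genus 0 h-function: for s ∈ R with |s| > 2 the saddle value h(2/s; s) is purely imaginary, it vanishes entirely at the critical points s = ±2, and for s ∈ (−2, 2) it has non-vanishing real part. A small check (ours), evaluating on the +-side of the cut with the branch (z² − 1)^{1/2} = (z − 1)^{1/2}(z + 1)^{1/2}:

```python
import numpy as np

sq = lambda z: np.sqrt(z - 1) * np.sqrt(z + 1)       # branch cut on [-1, 1]
h = lambda z, s: 2 * np.log(z + sq(z)) - s * sq(z)   # genus-0 h-function

# on the real rays |s| > 2, the saddle value h(2/s; s) is purely imaginary,
# so Re h(2/s; s) = 0 and these rays are breaking curves
for s in (2.5, 4.0, -3.0):
    assert abs(h(2 / s + 0j, s).real) < 1e-12
# at the critical breaking points s = +-2 the saddle value vanishes entirely
assert abs(h(1.0 + 0j, 2.0)) < 1e-12
# for s in (-2, 2) the saddle value has non-zero real part: no breaking point
assert abs(h(2.0 + 0j, 1.0).real) > 0.5
```

For |s| > 2 the saddle 2/s lies inside (−1, 1), where z + (z² − 1)^{1/2} has unit modulus, which is exactly why both terms of h(2/s; s) are purely imaginary there.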

4.5. Proof of Theorem 2.2.
Having verified that the global phase portrait is as depicted in Figure 3, with G_0 corresponding to the genus 0 region and G_1^± corresponding to the genus 1 regions, we may now use the techniques illustrated in Section 3.4 to obtain asymptotics of the recurrence coefficients for s ∈ C \ B.
For s ∈ G_0, we are in the genus 0 region, and as such we will use the global parametrix defined in Subsection 3.5.1. We recall that the global parametrix is given in (3.36) as where ϕ(z) = z + (z² − 1)^{1/2}, with the branch cut taken on γ_{m,0} so that ϕ(z) = 2z + O(1/z) and (z² − 1)^{1/4} = z^{1/2} + O(z^{−3/2}) as z → ∞. From this, we immediately see that where T_1 and T_2 are defined as the following terms of the asymptotic expansion of T at infinity, In Section 3.3 we stated that R has an asymptotic expansion of the form which is valid uniformly near infinity, and each R_k satisfies Recalling that T(z) = R(z)M(z) for large enough z, we may write and and as such we turn our attention to determining R_1 and R_2. We recall the discussion in Section 3.3, where we wrote j_R(z) = I + ∆(z), where ∆ admits an asymptotic expansion in inverse powers of n as As ∆(z) decays exponentially quickly for z ∈ Σ_R \ ⋃_{λ∈Λ} ∂D_λ, we have that The behavior of ∆_k(z) for z ∈ ∂D_λ can be determined in terms of the appropriate local parametrix used at the particular λ ∈ Λ. We give an explicit formula for ∆_k(z) for z ∈ ∂D_1, following [41, Section 8]. We compute that the Bessel parametrix defined in (3.89) satisfies uniformly as ζ → ∞, where the matrices B_k are defined as

As j_R(z) = P^{(1)}(z)M^{−1}(z) − I for z ∈ ∂D_1, we may use (3.85c)-(3.90) to see that so that we have, by direct inspection, for z ∈ ∂D_1. Defining h̃(z) = h(z) − 2πi, we are able to similarly compute that when z ∈ ∂D_{−1}. It was also shown in [41, Section 8] that we may write for some constant matrices A^{(1)} and B^{(1)}. Using the behavior of h, defined in (4.8), and of ϕ near ±1, we find that We recall from Section 3.3 that the ∆_k may be used to solve for the R_k via the following Riemann-Hilbert problem.
Having determined the ∆ k (z) for z ∈ ∂D ±1 , we may solve for the R k directly. By inspection, we see that solves the Riemann-Hilbert problem (4.45) for R 1 .
To determine R_2, we again follow [41], where it was shown that for some constant matrices A^{(2)} and B^{(2)}. As we now have explicit formulas for R_1, ∆_1, and ∆_2, we may use the properties of h and ϕ to determine that and Having determined A^{(2)} and B^{(2)}, we may again solve the Riemann-Hilbert problem for R_2 by inspection as (4.49) Now, we may expand the R_k at infinity to determine the appropriate terms in (4.35). As R_k(z) = A^{(k)}/(z − 1) + B^{(k)}/(z + 1) for k = 1, 2 and z ∈ C \ (D_1 ∪ D_{−1}), we have that Using the explicit formulas for the A^{(k)} and B^{(k)}, we determine that

4.6. Proof of Theorem 2.4. For s ∈ G_1^±, the h-function is of genus 1, and we must use the global parametrix constructed in Section 3.5.2. Throughout this proof, we recall that we are working under the assumption that n ∈ N(s, ε), so that the global parametrix exists by Lemma 3.1. Recall again (3.34), which states that where T_1 and T_2 are defined as the following terms of the asymptotic expansion of T at infinity, We see by (4.35) that T_k = M_k + O(1/n) as n → ∞ for k = 1, 2, so we have that as n → ∞.
Remark 4.3. In order to compute higher order terms in the expansion of the recurrence coefficients in the genus 1 regime, one would again need to write the jump matrix for R as a perturbation of the identity. This would involve writing the jump matrix on ∂D_λ in terms of the appropriate local parametrix used at λ. As the expansion of the Bessel parametrix was given in (4.38), we give the expansion of the Airy parametrix below. Using (3.78) and [1, Section 10.4], we have that uniformly as ζ → ∞. One could again carry out the process detailed in Section 4.5 to obtain higher order terms in the genus 1 regime, but we concern ourselves only with the leading term.

Double Scaling Limit near Regular Breaking Points
Having determined the behavior of the recurrence coefficients as n → ∞ with s ∈ G 0 ∪ G ± 1 , we turn our attention to the behavior of these coefficients for critical values s * ∈ B with s * ∉ R. Below, the double scaling limit describes the asymptotics of the recurrence coefficients as both n → ∞ and s → s * simultaneously, at an appropriate scaling rate.
5.1. Definition of the Double Scaling Limit. In the remainder of this section, we will assume that s approaches s * within the region G 0 . In particular, we fix s * ∈ B \ ((−∞, −2] ∪ [2, ∞)) and take where the constant L 1 is chosen so that s ∈ G 0 for all n large enough. Furthermore, we impose that ℑs * < 0, so that ℑ(2/s * ) > 0; this requirement is for ease of exposition, and the case where ℑs * > 0 can be handled similarly. As s → s * within G 0 , we have that Ω(s) = γ c,0 ∪ γ m,0 (s). Furthermore, there exists a genus 0 h-function which satisfies h is analytic in C \ Ω(s), where l ∈ R. As s * is a regular breaking point, we now have that ℜh(2/s * ; s * ) = 0, by definition, and a more detailed local analysis will be needed in the vicinity of this point. As the first transformation is the same as the first transformation in Section 3, we briefly restate it below. We recall that Y defined in (3.3) solves the Riemann-Hilbert problem (3.1). By setting we then have that T defined above solves the Riemann-Hilbert problem (3.10).

5.2.
Opening of the Lenses. In order to address some of the more technical issues which arise when opening the lenses, we turn again to the theory of quadratic differentials. Recall that γ m,0 (s) is defined to be the trajectory of the quadratic differential which connects −1 and 1, whose existence is assured by Lemma 4.4. Moreover, we also have that four trajectories emanate from z = 2/s at equal angles of π/2, as described in Section 4.2 above. Finally, an application of Teichmüller's Lemma (cf. [55, Theorem 14.1]) shows that the trajectories define two infinite sectors and one finite sector whose boundary is formed by a closed trajectory from z = 2/s which encircles both ±1. Moreover, at the critical value s * , two trajectories go to infinity from z = 2/s * , and the other two connect z = 2/s * with ±1. Another application of Teichmüller's Lemma shows that the two infinite trajectories tend to infinity in opposite directions. Depictions of these critical graphs are given in Figure 13; for more details on the precise structure of the critical graph we refer the reader to [25, Section 3.2]. Recall that the key to the opening of lenses is that the jump matrices decay exponentially quickly to the identity along the lips of the lens. In the sections above, this immediately followed from the inequality (3.8b), which stated that the real part of h is positive there. However, at the critical value s * , this will no longer be true above the critical point 2/s * , and a more detailed local analysis will be needed. We label the trajectories emanating from z = 2/s as γ i , i = 1, 2, 3, and the regions bounded by these trajectories as H j , j = 1, 2, 3, as in Figure 13.
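The local trajectory structure invoked here is the standard one for quadratic differentials (cf. [55]); we record it for convenience, with m denoting the order of the zero:

```latex
% At a zero z_0 of order m of Q(z)\,dz^2, exactly m + 2 trajectories
% emanate from z_0, at equally spaced angles:
\theta_{k+1} - \theta_k = \frac{2\pi}{m+2}, \qquad k = 0, 1, \dots, m+1.
% The double zero (m = 2) of the present quadratic differential at
% z = 2/s thus emits four trajectories at mutual angles \pi/2.
```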
To understand the sign of the real part of h, we consider the function Υ(z; s). We may now state the following lemma.
Lemma 5.1. Fix s ∈ G 0 so that ℑs < 0. Then, Proof. By the basic theory (cf. [45, Appendix B], [39, Chapter 3]), the domains H 1 and H 2 are half plane domains which are conformally mapped by Υ to either the left or right half plane. As ℑs < 0, there exists some t 0 > 0 so that z = −it ∈ H 2 for all t > t 0 . Recalling this, we may use that ℑs < 0 to conclude that ℜΥ(z; s) > 0 for z = −it, where t > t 0 . Therefore, we must have that Υ conformally maps H 2 to the right half plane, and as such Similarly, as Υ is analytic around z = 2/s and has a double zero at z = 2/s, we can conclude that ℜΥ(z; s) < 0 for z in H 1 ∪ H 3 in close proximity to z = 2/s. As H 1 is a half plane domain, we immediately have that ℜΥ(z; s) < 0, z ∈ H 1 . (5.8) Again following the theory laid out in [45, Appendix B], it follows that H 3 is a ring domain. Therefore, there exists some c > 0 so that the function z → exp (cΥ(z; s)) maps H 3 conformally to an annulus. In particular, we have that 0 > ℜΥ(z; s) > ℜΥ(1; s) for z ∈ H 3 . We now open lenses as depicted in Figure 14. Note that the upper lip of the lens, γ + m,0 , passes through z = 2/s, and both γ ± m,0 remain entirely within H 2 ∪ H 3 . As before, we define L ± j to be the region bounded between the arcs γ m,j and γ ± m,j , respectively, and set Σ̃ := Σ ∪ ⋃ L j=0 (γ + m,j ∪ γ − m,j ). We can now define the third transformation of the steepest descent process as in (5.11), which equals T (z) away from the lens regions. We then consider the model Riemann-Hilbert problem formed by disregarding the jumps on γ ± m,0 . In particular, we seek M such that M (z) is analytic for z ∈ C \ γ m,0 (s), (5.12a) The solution to this Riemann-Hilbert problem was provided in Section 3.5.1, and we recall that M is given by where ϕ(z) = z + (z 2 − 1) 1/2 , with the branch cut taken on γ m,0 so that ϕ(z) ∼ 2z as z → ∞. Note that the jump on γ + m,0 (s) is no longer exponentially decaying to the identity as s → s * in a neighborhood of z = 2/s. Moreover, the matrix M is not bounded near the endpoints z = ±1.
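Before turning to the local parametrices, a small numerical aside (ours, not from the paper): the branch choice for ϕ(z) = z + (z² − 1)^{1/2} can be checked in a few lines. Here we assume the cut is the straight segment [−1, 1] rather than the curved arc γ m,0 of the text, and the helper name is illustrative:

```python
import numpy as np

def phi(z):
    """phi(z) = z + (z^2 - 1)^{1/2} with branch cut on [-1, 1].

    Writing (z^2 - 1)^{1/2} = z * sqrt(1 - z^{-2}) with the principal
    square root selects the branch that behaves like z at infinity.
    """
    z = np.asarray(z, dtype=complex)
    return z + z * np.sqrt(1.0 - z**-2)

# phi maps C \ [-1, 1] onto the exterior of the unit circle, with
# phi(z) ~ 2z at infinity and boundary values satisfying
# phi_+(x) * phi_-(x) = 1 for x in (-1, 1):
x = 0.5
top, bottom = phi(x + 1e-9j), phi(x - 1e-9j)
print(abs(top * bottom - 1.0))    # ~ 0: boundary values are reciprocal
print(abs(phi(1e6) / 2e6 - 1.0))  # ~ 0: phi(z) ~ 2z at infinity
```

The same reciprocal-boundary-value identity is what makes the jump of the global parametrix M on the cut work out.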
Therefore, we define D c := D δ (2/s), D −1 := D δ (−1), and D 1 := D δ (1) to be discs of radius δ centered at z = 2/s, −1, and 1, respectively. We take δ small enough so that D c ∩ γ − m,0 = ∅. Note that for s near s * , the trajectory γ m,0 (s) is close to 2/s * , so that for n large enough we must have that D c ∩ γ m,0 (s) ≠ ∅. In each D k , k ∈ {c, −1, 1}, we seek a local parametrix P (k) such that P (k) (z) is analytic for z ∈ D k \ Σ̃, (5.14a) P (k) As shown in Section 3.6, P (1) and P (−1) are given by and We will now move on to the construction of the local parametrix P (c) within D c .

5.3.
Parametrix around the Critical Point. We consider a disc D c around z = 2/s of small radius δ. We partition D c into D + c and D − c as in Figure 15, so that D + c is the region within D c that lies to the left of γ m,0 and D − c is the region which lies to the right. We define the following function h̃ in D + c : In terms of the h-function, we may write We now have the following lemma, following the lines laid out in [16, Proposition 4.5].
Lemma 5.2. There exists a jointly analytic function ζ(z; s) which is univalent in a fixed neighborhood of z = 2/s * , with s in a neighborhood of s * , and an analytic function K(s) near s = s * so that where K(s) satisfies where k 1 = 2i Moreover, we can calculate that We immediately have that ζ satisfies (5.19), is a conformal map in a neighborhood of z = 2/s, and satisfies ζ(2/s * ; s) ≡ 0.
We now specify that the size of the disc D c is chosen to be small enough so that ζ(z; s) + K(s) is conformal for n large enough (or equivalently, when s is close to s * ), which is possible via the lemma above. Moreover, we also impose that the arc γ + m,0 is mapped to the real line via ζ(z; s) + K(s) within D c .
From the proof of Lemma 5.2, we see that Therefore, the double scaling limit (5.1) can be equivalently stated by taking n → ∞ and s → s * so that lim n→∞, s→s * nK(s) = 2iL 1 k 1 , where k 1 is given in (5.24). We may obtain the local parametrix about z = 2/s by solving the following Riemann-Hilbert problem:

(5.29c)
We recall that the jumps in (5.29b) are given by P (c) We solve for P (c) by first defining U (c) so that Then, U (c) is also analytic for z ∈ D c \ Σ̃ and satisfies the following jump conditions within D c : We may solve for U (c) using the error function parametrix presented in [20, Section 7.5]. We introduce Then, C(ζ) is analytic for ζ ∈ C \ R, satisfies and moreover possesses the following asymptotic expansion, uniform in the upper and lower half planes: Next, we define where ζ and K are as defined via Lemma 5.2. Using the proof of Lemma 5.2, we see that f C (z; s) conformally maps a neighborhood of z = 2/s to a neighborhood of z = 0. If we define we see that where E (c) n is any matrix which is analytic throughout D c , solves (5.29a) and (5.29b). We now choose E (c) n so that P (c) satisfies (5.29c). As n → ∞, for z ∈ D + c , we have Similarly, as n → ∞, for z ∈ D − c , we have Therefore, if we set we see that P n is analytic within D c , as both M and J have the same jumps over γ m,0 and are bounded within D c . Moreover, we see that We then have that where P 0 is given by As before, we have that ∆ k (z) = 0 for z ∈ Σ R \ (∂D −1 ∪ ∂D 1 ∪ ∂D c ), as the jump matrix decays exponentially quickly to the identity off of the boundaries of the discs D −1 , D 1 , and D c . From (4.41), (4.42), and (5.48), we have for k ∈ N that We first solve for R 1/2 (z). Using Lemma 5.3, we may write for some constant matrix C (1/2) . Using the explicit expression (5.54b) for ∆ 1/2 , we can calculate C (1/2) as We can then compute that solves the Riemann-Hilbert problem (5.56) with k = 2. As we now have explicit expressions for R 1/2 and R 1 , we may expand at infinity to get In the double scaling limit (6.1) below, the constant L 2 satisfies L 2 < 0; note that as L 2 < 0, we have that s ∈ G 0 for large enough n.
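Since the local parametrix above is built from the error function (cf. [20, Section 7.5]), we include a quick numerical sanity check, purely our own illustration, of the classical asymptotics erfc(x) ~ e^{−x²}/(x√π)(1 − 1/(2x²) + …) that drive expansions of this type:

```python
import math

def erfc_leading(x, terms=2):
    """Truncated asymptotic approximation of erfc(x) for large x > 0:
    erfc(x) ~ exp(-x^2) / (x*sqrt(pi)) * sum_k (-1)^k (2k-1)!! / (2x^2)^k.
    """
    prefactor = math.exp(-x * x) / (x * math.sqrt(math.pi))
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(2 * k + 1) / (2.0 * x * x)  # next term of the series
    return prefactor * total

x = 5.0
exact = math.erfc(x)
print(exact, erfc_leading(x))  # agree to roughly three significant digits
print(abs(erfc_leading(x) / exact - 1))
```

As with all asymptotic (divergent) series of this kind, adding a few terms improves the approximation, but the series must be truncated; the same caveat applies to the expansions of C(ζ) used above.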
Double Scaling Limit near the Critical Breaking Point s = 2
6.1. Outline of Steepest Descent. Although we are now considering the case where s depends on n via the double scaling limit (6.1), the first two transformations of steepest descent remain unchanged from the previous analysis, and as such, we summarize the steps briefly and refer the reader to Section 3 for full details. As s ∈ G 0 for n large enough, we have immediately that there is a genus 0 h-function and a contour γ m,0 (s) such that The main difference between the case of regular breaking points and the critical breaking point at s = 2 comes in the analysis about z = 1. Note that the map f n,B satisfies f n,B (z, 2) = O((z − 1) 3 ) as z → 1. Therefore, a different analysis will be needed in D 1 in the double scaling limit (6.1).
In order to calculate the entries of the matrix Ψ 1 (w) in (6.21c), which will be needed later to obtain the asymptotics of the recurrence coefficients, we use the fact that this Riemann-Hilbert problem originates from a folding procedure applied to the Flaschka-Newell Riemann-Hilbert problem for Painlevé II. Applying formulas (25) and (37) in [60], we have where Φ(λ, w) solves a Riemann-Hilbert problem corresponding to Painlevé II; see [60, Section 2] and also [33, Theorem 5.1 and (5.0.51)]. Here q = q(w) solves Painlevé II and D = D(w) is given by (6.20). Furthermore, we observe that the solution Ψ(ζ, w) that we study corresponds to the Stokes multipliers b 1 = 0 and b 2 = b 4 = 1, in the notation used in [38, §1.3], and therefore a 2 = 0 and a 1 = a 3 = −i in terms of the Stokes multipliers for Painlevé II; see [38, (A.10)]. This is in fact the generalized Hastings-McLeod solution to Painlevé II, with parameter α = 1/2, which is characterized by the following asymptotic behavior as x → +∞.
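For context, we recall (our addition, in a commonly used normalization that may differ from the display above) the Painlevé II equation with parameter α and the boundary behavior typically used to characterize the generalized Hastings-McLeod solution:

```latex
q''(x) = 2\,q(x)^3 + x\,q(x) - \alpha,
\qquad
q(x) \sim \frac{\alpha}{x} \quad (x \to +\infty),
\qquad
q(x) \sim \sqrt{-\tfrac{x}{2}} \quad (x \to -\infty).
```

For α = 0 the behavior at +∞ degenerates to q(x) ∼ Ai(x), recovering the classical Hastings-McLeod solution.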
As λ → ∞, we have the expansion where the entries of the matrices m 1 (w) and m 2 (w) are given explicitly in formula (21) in [60]; see also [33]. Combining (6.23), (6.25) and (6.26), we arrive at the following formulas for the entries of the matrix Ψ 1 (w) in (6.21c): Proof. As h has a critical point at z = 2