Transition probability estimates for subordinate random walks

Let S_n be a symmetric simple random walk on the integer lattice ℤ^d. For a Bernstein function ϕ we consider a random walk S^ϕ_n which is subordinated to S_n. Under a certain assumption on the behaviour of ϕ at zero we establish global estimates for the transition probabilities of the random walk S^ϕ_n. The main tools that we apply are a parabolic Harnack inequality and appropriate bounds for the transition kernel of the corresponding continuous time random walk.


INTRODUCTION
The main aim of this article is to obtain global estimates for the transition probabilities of a class of random walks on the integer lattice ℤ^d that are subordinated to the simple random walk. Random walks from this class are obtained via discrete subordination, which was defined in [8]. They have neither a finite second moment nor finite range, and thus studying their long time behaviour becomes very demanding. The procedure of discrete subordination can be regarded as a discrete counterpart of Bochner's subordination for semigroups of operators, which has been widely applied in probability theory in the study of continuous time Markov processes.
To be more precise, let P be the one-step transition operator of the simple (symmetric) random walk on the space ℤ^d, that is,

Pf(x) = (1∕2d) Σ_{|e|=1} f(x + e), x ∈ ℤ^d.

The operator I − ϕ(I − P) generates a random walk S^ϕ which is the subordinate random walk related to the function ϕ, see Section 2 for the probabilistic definition.
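To see how the operator I − ϕ(I − P) gives rise to a Markov transition operator, one can expand it with the help of the Lévy measure μ of ϕ (see the representation (2.1) in Section 2). The following display is a sketch of this computation, assuming that ϕ has no drift term (a standing assumption below):

```latex
\begin{align*}
I-\phi(I-P)
 &= I-\int_{(0,\infty)}\Bigl(I-e^{-t(I-P)}\Bigr)\,\mu(dt)\\
 &= I-\int_{(0,\infty)}\Bigl((1-e^{-t})\,I-e^{-t}\sum_{k\geq 1}\frac{t^k}{k!}\,P^k\Bigr)\mu(dt)\\
 &= \bigl(1-\phi(1)\bigr)\,I
    +\sum_{k\geq 1}\Bigl(\int_{(0,\infty)}\frac{t^k}{k!}\,e^{-t}\,\mu(dt)\Bigr)P^k
  \;=\; \sum_{k\geq 1}c(\phi,k)\,P^k ,
\end{align*}
```

where the normalisation ϕ(1) = 1 makes the coefficient of I vanish; the coefficients c(ϕ, k) are non-negative and sum to 1, so I − ϕ(I − P) is indeed the transition operator of a random walk.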
In this article we are concerned with the transition probabilities of the random walk S^ϕ which are defined as p^ϕ(n, x, y) = ℙ(x + S^ϕ_n = y). In the course of the study we assume that ϕ is a complete Bernstein function which satisfies the following scaling condition: there are constants C_*, C^* > 0 and 0 < γ_* ⩽ γ^* < 1 such that

C_* (R∕r)^{γ_*} ⩽ ϕ(R)∕ϕ(r) ⩽ C^* (R∕r)^{γ^*}, 0 < r ⩽ R ⩽ 1. (1.1)

Under these two assumptions we establish global estimates for the function p^ϕ(n, x, y), that is, we prove that for all x, y ∈ ℤ^d and n ∈ ℕ it holds that

p^ϕ(n, x, y) ≍ min { (ϕ^{−1}(1∕n))^{d∕2}, n |x − y|^{−d} ϕ(|x − y|^{−2}) }, (1.2)

see Theorem 3.1, Theorem 5.1 and Theorem 6.17. In the above relation, the symbol ≍ means that the ratio of the two expressions is bounded from below and from above by positive constants.

Similar questions have already been addressed in the literature. In [4], global estimates for the transition probabilities of random walks with unbounded range on ℤ^d were established under the assumption that the one-step transition probability from x to y is a stable-like function, i.e. it is comparable to the regularly varying function |x − y|^{−(d+α)} for α ∈ (0, 2). Let us compare this result to our estimates (1.2). In [17] it was proved that, under the same assumptions as in the present paper, the one-step transition probability of the subordinate random walk satisfies

p^ϕ(1, x, y) ≍ |x − y|^{−d} ϕ(|x − y|^{−2}), x ≠ y.

In particular, if γ_* = γ^* in (1.1) then we are within the scope of [4], but it may well happen that 0 < γ_* < γ^* < 1. Moreover, condition (1.1) means that ϕ is an O-regularly varying function at 0 with Matuszewska indices contained in (0, 1), see [9, Sec. 2]. Complete Bernstein functions with such behaviour at zero can be found in the closing table of [20] and include the functions ϕ(x) = x^α + x^β, α, β ∈ (0, 1); ϕ(x) = x^α (log(1 + x))^β, α ∈ (0, 1), β ∈ (0, 1 − α); ϕ(x) = (log(cosh(√x)))^α, α ∈ (0, 1), etc. It is also possible to construct examples of complete Bernstein functions which fulfil (1.1) and are not comparable to any regularly varying function, see e.g. [13]. This shows that our estimates apply to a class of random walks whose one-step transition probabilities need not be comparable to a regularly varying function, and thus go beyond the assumptions of [4].
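For a quick numerical illustration of (1.1) (not part of the original argument), the following Python sketch checks the two-sided scaling for the first example, ϕ(x) = x^α + x^β with 0 < α < β < 1, for which one can take γ_* = α, γ^* = β and C_* = C^* = 1 on (0, 1]:

```python
import numpy as np

# Illustrative check of the scaling condition (1.1) for phi(x) = x**a + x**b.
a, b = 0.3, 0.6                       # gamma_* = a, gamma^* = b (example values)
phi = lambda x: x**a + x**b

grid = np.logspace(-8, 0, 300)        # points in (0, 1]
r, R = np.meshgrid(grid, grid, indexing="ij")
keep = r <= R                         # pairs with 0 < r <= R <= 1
ratio = phi(R[keep]) / phi(r[keep])
q = R[keep] / r[keep]

print((ratio / q**a).min())           # >= 1: lower scaling holds with C_* = 1
print((ratio / q**b).max())           # <= 1: upper scaling holds with C^* = 1
```

Both printed values equal 1 up to rounding (attained at r = R), confirming the claimed indices for this example.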
In [18] global estimates for transition probabilities of a class of Markov chains on a uniformly discrete metric measure space were proved under the assumption that the one-step transition kernel is comparable to a regularly varying function times a homogeneous volume growth function. We mention here further related papers and monographs which focus on estimates of transition probabilities of random walks: [1-3, 12, 14, 15, 21, 22, 24].
A class of subordinate random walks was introduced in [8] in the context of random walks on groups. In [7] the authors established asymptotics of the transition probabilities of subordinate random walks on ℤ^d under the assumption that ϕ is regularly varying at zero. This result is valid in a specific region which depends on the time and space variables, and it yields the corresponding estimates only in that region, whereas (1.2) is true for all x, y ∈ ℤ^d and n ∈ ℕ. There are more papers where subordinate random walks were studied from a potential-theoretic point of view, see [5, 6, 11, 16, 17]. Note that discrete subordination allows us to efficiently construct examples of random walks with controlled tail behaviour. In particular, regular variation of the function ϕ at zero is a necessary and sufficient condition for the step distribution of S^ϕ to belong to the domain of attraction of a stable law, see [7, 16].
Let us comment on the structure and methods of the article. In Section 2 we give the precise definition of the subordinate random walk and prove some auxiliary results, which include an estimate of the time the random walk S^ϕ needs to leave a ball. Our proof is an application of the concentration inequality from [19]. Section 3 is devoted to the proof of the on-diagonal bound for the kernel p^ϕ(n, x, y). For this we use the Fourier analytic approach which was previously applied in [7] to find asymptotics of p^ϕ(n, x, y) under the assumption that ϕ is regularly varying at zero. In Section 4 we prove a parabolic Harnack inequality, which is a valuable contribution in itself and the main tool that we use to obtain off-diagonal bounds for p^ϕ(n, x, y). To show this inequality we follow the elegant approach of [4], which was also applied in [18]. In Section 5 we obtain the global lower bound by an application of the parabolic Harnack inequality combined with the on-diagonal estimate. Section 6 consists of two parts. In the first part we study the continuous time random walk which is constructed from S^ϕ with the aid of an independent Poisson process. For such a process we find the upper heat kernel estimate. To get this result we apply the marvellous approach of [10], where the authors study stability of heat kernel estimates for jump processes on metric measure spaces. In the second part we apply the estimates for the continuous time random walk to prove hitting time estimates and, finally, upper bounds for p^ϕ(n, x, y).

PRELIMINARIES
Let S_n = X_1 + ⋯ + X_n be the simple (symmetric) random walk in ℤ^d which starts from the origin. This means that (X_i)_{i⩾1} is a sequence of independent identically distributed random variables defined on a given probability space (Ω, ℱ, ℙ) such that

ℙ(X_1 = e_j) = ℙ(X_1 = −e_j) = 1∕(2d), j = 1, …, d.

Here e_j is the j-th unit vector in ℤ^d. Let ϕ be a Bernstein function such that ϕ(0) = 0 and ϕ(1) = 1. Such a function admits the integral representation

ϕ(λ) = bλ + ∫_{(0,∞)} (1 − e^{−λt}) μ(dt), λ ⩾ 0, (2.1)

for b ⩾ 0 and a measure μ on (0, ∞) satisfying ∫_{(0,∞)} (1 ∧ t) μ(dt) < ∞. We consider a sequence of positive numbers (c(ϕ, k))_{k⩾1} which is related to the function ϕ and is defined as

c(ϕ, k) = b δ_1(k) + ∫_{(0,∞)} (t^k∕k!) e^{−t} μ(dt), k ⩾ 1,

where δ_1 is the Dirac measure at 1. One easily verifies that Σ_{k⩾1} c(ϕ, k) = ϕ(1) = 1. Let R_n = Y_1 + ⋯ + Y_n be a random walk on ℤ_+ with increments that are independent of the random walk S_n and have the distribution given by ℙ(Y_1 = k) = c(ϕ, k), k ⩾ 1. A subordinate random walk is defined as S^ϕ_n := S_{R_n}, for all n ⩾ 0. Notice that the one-step transition probability p^ϕ(1, x, y) of the random walk S^ϕ is of the form

p^ϕ(1, x, y) = Σ_{k⩾1} c(ϕ, k) p(k, x, y),

where p(k, x, y) = ℙ(x + S_k = y) stands for the k-step transition probability of the simple random walk S. We use the notation p^ϕ(n, x, y) = p^ϕ(n, x − y, 0) and p^ϕ(1, x, y) = p^ϕ(x, y) = p^ϕ(x − y).
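The construction of S^ϕ is algorithmic, which makes it easy to simulate. As an illustration (a sketch, not part of the paper), consider ϕ(λ) = λ^α with α ∈ (0, 1); then c(ϕ, k) = αΓ(k − α)∕(Γ(1 − α)k!) is the Sibuya distribution, which can be sampled by sequential Bernoulli trials with success probability α∕j at stage j:

```python
import numpy as np

rng = np.random.default_rng(1)

def sibuya(alpha, rng):
    """Sample Y with P(Y = k) = alpha * Gamma(k - alpha) / (Gamma(1 - alpha) * k!)."""
    k = 1
    while rng.random() >= alpha / k:   # P(Y = k | Y >= k) = alpha / k
        k += 1
    return k

def subordinate_walk(n, d, alpha, rng):
    """Simulate S^phi_0, ..., S^phi_n for phi(lambda) = lambda**alpha."""
    pos = np.zeros(d, dtype=int)
    path = [pos.copy()]
    for _ in range(n):
        for _ in range(sibuya(alpha, rng)):   # advance the SRW by Y steps
            e = rng.integers(d)               # random coordinate direction
            pos[e] += rng.choice((-1, 1))     # a +/- unit step
        path.append(pos.copy())
    return np.array(path)

print(subordinate_walk(5, 2, 0.5, rng))
```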
In the course of the study we always assume that ϕ is a complete Bernstein function. Recall that this means that the measure μ from (2.1) has a completely monotone density with respect to the Lebesgue measure, see [20, Def. 6.1]. We additionally require that ϕ has no drift term, that is, b = 0 in (2.1). The next assumption on the function ϕ is that it satisfies the scaling condition (1.1). These standing assumptions are not explicitly repeated in the statements of the results.
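A standard concrete instance of all these assumptions (stated here for illustration, not quoted from the paper) is ϕ(λ) = λ^α with α ∈ (0, 1):

```latex
\[
  \lambda^{\alpha}
  = \frac{\alpha}{\Gamma(1-\alpha)}
    \int_{(0,\infty)} \bigl(1 - e^{-\lambda t}\bigr)\, t^{-1-\alpha}\, dt .
\]
```

Here the representing measure has the completely monotone density (α∕Γ(1 − α)) t^{−1−α}, so ϕ is a complete Bernstein function without drift, and (1.1) holds with γ_* = γ^* = α and C_* = C^* = 1.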

Auxiliary results
We repeatedly use the fact that

c′ ⩽ |x − y|^d p^ϕ(x, y)∕ϕ(|x − y|^{−2}) ⩽ c′′, x, y ∈ ℤ^d, x ≠ y, (2.4)

for constants c′, c′′ > 0 which depend only on the dimension d. We recall that for any Bernstein function ϕ it holds that ϕ(λt) ⩽ λϕ(t), for all λ ⩾ 1 and t > 0, which implies

ϕ(T)∕ϕ(t) ⩽ T∕t, 0 < t ⩽ T. (2.5)

We formulate bounds for the inverse function ϕ^{−1} which easily follow from (1.1) and take the form

(C^*)^{−1∕γ^*} (T∕t)^{1∕γ^*} ⩽ ϕ^{−1}(T)∕ϕ^{−1}(t) ⩽ (C_*)^{−1∕γ_*} (T∕t)^{1∕γ_*}, 0 < t ⩽ T ⩽ 1. (2.6)

Throughout the paper we use the following decreasing function

Φ(r) = r^{−d} ϕ(r^{−2}), r > 0. (2.7)

Notice that with this notation (1.2) reads p^ϕ(n, x, y) ≍ min {(ϕ^{−1}(1∕n))^{d∕2}, n Φ(|x − y|)}.
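For the reader's convenience, here is a sketch of how (2.6) follows from (1.1): substituting r = ϕ^{−1}(t) and R = ϕ^{−1}(T) for 0 < t ⩽ T ⩽ 1 into the upper bound in (1.1) gives

```latex
\[
\frac{T}{t}
 = \frac{\phi(\phi^{-1}(T))}{\phi(\phi^{-1}(t))}
 \leq C^{*}\Bigl(\frac{\phi^{-1}(T)}{\phi^{-1}(t)}\Bigr)^{\gamma^{*}}
 \quad\Longrightarrow\quad
 \frac{\phi^{-1}(T)}{\phi^{-1}(t)}
 \geq (C^{*})^{-1/\gamma^{*}}\Bigl(\frac{T}{t}\Bigr)^{1/\gamma^{*}},
\]
```

and the lower bound in (1.1) yields, in the same way, ϕ^{−1}(T)∕ϕ^{−1}(t) ⩽ (C_*)^{−1∕γ_*}(T∕t)^{1∕γ_*}.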

Lemma 2.2.
There exists a constant C₁ > 0 such that … This and the fact that ℙ …

ON-DIAGONAL BOUNDS
In this section we establish the on-diagonal bounds. We apply a Fourier analytic method which is adapted from [7].
To finish the proof we need to show that, for some c₁, c₂ > 0, … Notice that it suffices to prove (3.3) only for n large enough, as the integrand in (3.3) is strictly positive if … is small enough, and thus at the end of the proof we can change the constants appropriately to estimate the expression in (3.2) for all n. We observe that … Indeed, this follows easily from the fact that … For that we establish the following simple result. Proof of Claim 1. Scaling condition (1.1) implies that, for some c₇ > 0, … for … ∈ (0, 1).
Next, we notice that … Thus, by (3.5), for n large enough, …
Since both of the side integrals converge to positive constants as n goes to infinity, we conclude that (3.3) is valid for n large enough, and the proof is finished. □

Corollary 3.2.
There is a constant C > 0 such that

p^ϕ(n, x, y) ⩽ C (ϕ^{−1}(1∕n))^{d∕2}, n ∈ ℕ, x, y ∈ ℤ^d.

Proof. This follows by Theorem 3.1 combined with the Cauchy–Schwarz inequality. □
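The step behind Corollary 3.2 is the standard Chapman–Kolmogorov and Cauchy–Schwarz computation; for even times it reads (a sketch, using the symmetry of the kernel, with odd times handled by the usual monotonicity adjustment):

```latex
\begin{align*}
p^{\phi}(2n,x,y)
 &= \sum_{z\in\mathbb{Z}^d} p^{\phi}(n,x,z)\,p^{\phi}(n,z,y)\\
 &\leq \Bigl(\sum_{z} p^{\phi}(n,x,z)^2\Bigr)^{1/2}
       \Bigl(\sum_{z} p^{\phi}(n,z,y)^2\Bigr)^{1/2}
  = p^{\phi}(2n,x,x)^{1/2}\,p^{\phi}(2n,y,y)^{1/2},
\end{align*}
```

so the off-diagonal kernel is dominated by the on-diagonal bound of Theorem 3.1.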

PARABOLIC HARNACK INEQUALITY
In this section we prove the parabolic Harnack inequality, which is the main tool that we use to obtain off-diagonal bounds in Sections 5 and 6. We follow closely the elegant approach of [4], but we emphasize that the case undertaken in the present paper requires numerous adjustments and alterations.
Let 𝒟 = ℕ₀ × ℤ^d and consider the 𝒟-valued Markov chain … Proof. By the Markov property, … where the last equality follows by the semigroup relation. □ We introduce the notation … where C is the constant from Theorem 2.3. We fix the following two constants … The main result of this section is the following theorem. … for all x ∈ ℤ^d and for n large enough.
Proof. We have … If … ⩽ … then the right-hand side of the identity above is zero. If … > … then … There exists a constant c₁ ∈ (0, 1) such that … Indeed, we have …(0) = ∅ and … ≠ ∅, so it follows that …(k) ≠ ∅ for some k ⩾ 1. Thus …∕… ⩾ 1, which clearly yields the claim. We first assume that … Hence … = 0. This and the fact that …
Proof of Theorem 4.2. By multiplying the function q by a constant, we can assume that … Notice that if q(0, …) = 0 for some … then (4.2) is trivially satisfied, as the parabolicity of q implies that … Let θ be the constant defined at (4.1). By Lemma A.2 of the Appendix, there exists a constant n₀ ⩾ … such that … Let us fix n ⩾ n₀, (k, x) ∈ 𝒟 and a set A ⊆ …(k + 1, x, …) for which it holds … We claim that for such a set there is a constant c₁ ∈ (0, 1) such that … Indeed, by our choice A ⊆ …(k, x, …) and …(…) = ∅. Therefore, Proposition 4.4 and relation (2.5) yield … where we can achieve that c₁ < 1 by decreasing c′ in (2.4) if necessary. Let C₁, C₂ and C₃ be the constants from Proposition 4.4, Lemma 4.5 and Lemma 4.6, respectively. We set … where c₁ is the constant from relation (4.9) and γ_*, γ^* ∈ (0, 1) are the constants from the scaling condition (1.1).
Claim 2. There exists a constant C₂ > 0 such that for all …, …, … > 0 which satisfy … the following two inequalities hold: … We prove this claim at the end of the proof of the theorem, where the value of the constant C₂ is specified, see (4.25).
We construct a sequence of points … provided … is large enough, and under this condition the sequence … = q(…, …) is increasing and tends to infinity, cf. (4.18). This will eventually contradict the fact that q is bounded, whence the result will follow. Let us choose … Evidently it suffices to study the case 2^{−1∕(d+2)} … < 1∕…. Suppose that we have already defined the points … We describe how to obtain the next point. We first define … by (4.14). With our choice of constants, and using (4.8), one can easily verify that for … defined in (4.7) it holds … Since the function q is parabolic on …, … is a martingale. Thus (4.12) and Lemma 4.5 imply … and we mention that we could apply Lemma 4.5 because of (4.15). Thus we get a contradiction, so there must exist … and whence … ≠ …. This in turn implies … Suppose next that … By Lemma 4.6 we have … which again gives a contradiction. Therefore … We want to apply Proposition 4.4 for … and …; note that …(0) = ∅. Moreover, with the aid of (4.8), (4.14) and (1.1) one can verify that … Therefore … where we used (4.13) in the last line. We conclude that …
Claim 3. There is a constant C₃ > 0 such that the following two relations … hold for all n large enough. To finish the proof we are left to establish Claims 2 and 3.
Proof of Claim 2. We set … where C is the constant from Theorem 2.3, c′ is the constant from (2.4) and θ is defined in (4.1). We show that the claim is true with this constant. We start by showing (4.12). Combining (2.5) and (4.11) we get … Similarly, to prove (4.13) we apply (2.4) and (2.5) and obtain … Proof of Claim 3. Notice that (4.23) is equivalent to

Using (A.2) and (A.3) we get
Hence, it is enough to define C₃ so that … This can be achieved by setting …
Indeed, with such a choice, for n sufficiently large we apply the scaling condition and get … Clearly (4.26) follows. With such C₃ the validity of (4.24) is obvious. □

LOWER BOUND
The aim of this section is to prove the global lower estimate. We use a probabilistic method based on the parabolic Harnack inequality.
Proof. Let us set … Near-diagonal bound: We start by proving that there exists a constant c > 0 such that

p^ϕ(n, x, y) ⩾ c (ϕ^{−1}(1∕n))^{d∕2} (5.2)

for n ∈ ℕ and |x − y| ⩽ θ₁ …, where θ₁ > 0 is a constant to be specified. We take n ∈ ℕ and choose … to satisfy …; … is also parabolic on … For … we apply the above procedure together with (1.3), and this gives (5.2) for all … Estimate away from the diagonal: Let Φ be the function defined at (2.7). We now show that there is c > 0 such that

p^ϕ(n, x, y) ⩾ c n Φ(|x − y|) (5.3)

for all n ∈ ℕ and |x − y| ⩾ θ₂ …, where the constant θ₂ > 0 will be specified. We first claim that there is a constant C₃ > 0 such that for all x ∈ ℤ^d and for all n, r ∈ ℕ, ℙ(max …

By Lemma 2.4 and Lemma 2.5 we get
This is true for all constants C₃ > 0. We define the specific constant C₃ as … Since C₃ ⩾ 1 we can use the lower scaling to obtain (5.4). We now set θ₂ = 3C₃ and we notice that θ₁ < θ₂, as θ₁ = 1∕… ⩽ 1∕3. Let … and our task is to estimate the last sum from below. By time reversal of the random walk we get … For … and |x − y| ⩾ θ₂ … = 3C₃ …, we have … and whence, for |x − y| ⩾ θ₂ …, by using (1.3) and (5.4), (5.3) follows for all n ∈ ℕ and |x − y| ⩾ θ₂ ….
Intermediate estimate: We finally show that … for all n ∈ ℕ and for θ₁ … < |x − y| < θ₂ …. For any 1 ⩽ k ⩽ n we can write …

UPPER BOUND
In this section we aim to prove the global upper estimates for the transition probabilities of the random walk S^ϕ. Our strategy is to study the corresponding continuous time random walk, to estimate its transition kernel and the hitting time of a ball, and then to use these results to get similar estimates in discrete time. The main reason why we switch to the continuous time random walk is to prove Proposition 6.16, which is a key result in establishing the off-diagonal upper estimates that are our goal. Another possible approach would be to obtain the estimate for the hitting time of a ball from Proposition 6.16 directly in the discrete setting. This, however, seems to be a hard task and we do not address this problem in the present paper.

Estimates for the continuous time random walk
We study the continuous time version of the random walk S^ϕ which is constructed in the standard way, that is, we take (ξ_i)_{i∈ℕ} to be a sequence of independent, identically distributed exponential random variables with parameter 1 which are independent of S^ϕ. Let T₀ = 0 and T_n = Σ_{i=1}^{n} ξ_i. Then we define X_t = S^ϕ_n if T_n ⩽ t < T_{n+1}. Equivalently, we can take (N_t)_{t⩾0} to be a homogeneous Poisson process with intensity 1, independent of the random walk S^ϕ, and then X_t = S^ϕ_{N_t}.
The transition probability of the process X is denoted by q(t, x, y) = ℙ(x + X_t = y). We want to find an upper bound for q(t, x, y). Proposition 6.1. There is a constant C > 0 such that

q(t, x, y) ⩽ C min {(ϕ^{−1}(1∕t))^{d∕2}, t Φ(|x − y|)}

for all x, y ∈ ℤ^d and for all t ⩾ 1.
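The kernels of X and S^ϕ are linked by conditioning on the value of the Poisson process N_t; this standard Poissonisation identity underlies the computations below:

```latex
\[
q(t,x,y)
 = \sum_{n\geq 0}\mathbb{P}(N_t=n)\,p^{\phi}(n,x,y)
 = \sum_{n\geq 0} e^{-t}\,\frac{t^{n}}{n!}\,p^{\phi}(n,x,y),
 \qquad t>0,\ x,y\in\mathbb{Z}^d .
\]
```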
We first handle the on-diagonal part.

Lemma 6.2.
There exists a constant C₂ > 0 such that for t > 0 and x, y ∈ ℤ^d … Proof. By independence and Theorem 3.1 we get … By monotonicity, Σ₁ ⩽ …. We next find a bound for Σ₂ and after that we will show that … ⩽ C₄ (ϕ^{−1}(t^{−1}))^{d∕2} for all t > 0 and for some constant C₄ > 0. Observe that Σ₂ = 0 for t < 1. By (2.6) we get … where in the last inequality we applied [23, Cor. 3]. It suffices to show that … For t ⩾ 1 this follows easily from (2.6), whereas for t ∈ (0, 1) we observe that … < 1 and … > 1. Finally, by the Cauchy–Schwarz inequality we obtain … and the proof of (6.2) is finished. □ Before we prove the off-diagonal estimate in (6.1) we establish a series of auxiliary results. We follow here the elaborate approach of [10]. We use the following notation
Hence, for every n ∈ ℕ we have … Finally, by monotonicity of ϕ and by (2.5) we easily conclude the desired estimate. □

Lemma 6.4. There exist constants C₃, C₄ > 0 such that for all x ∈ ℤ^d and for all t, r > 0 … Proof. We first consider the case r ∈ (0, 1). Then X exits from the ball B(x, r) as soon as it jumps to some point other than x. Observe that … and this is precisely (6.3) with C′₃ and C′₄. Next, assume that r ⩾ 1. Since … for any … > 0, by the Markov property and Lemma 6.3 we get … Using again Lemma 6.3 we have … and whence … If we set C₃ = min{…} we obtain (6.3) and the proof is finished. □

We now study the truncated process which is built upon the process X. For any ρ > 0 we denote by X^{(ρ)} the process obtained from X by removing the jumps of size larger than ρ. More precisely, the process X^{(ρ)} is associated with the following Dirichlet form … which is defined for functions f, g from the domain of the Dirichlet form of the random walk X, cf. [2, Sec. 5]. We write q^{(ρ)}(t, x, y) for the transition probability of X^{(ρ)} and (Q^{(ρ)}_t) for its semigroup. We also work with killed processes. For any non-empty B ⊆ ℤ^d we denote by (Q^B_t)_{t⩾0} the semigroup of the process X killed upon exiting B. Similarly we write … for the semigroups of the killed truncated processes. Let …

Lemma 6.5. There exist constants C₅ ∈ (0, 1) and C₆ > 0 such that for any t, r, ρ > 0 … Proof. By Lemma 6.4 and (2.5) we get that for all x ∈ ℤ^d and t, r > 0 … Remark. In [10, Lemma 7.8] the authors impose a more restrictive assumption on the function ϕ than our condition (1.1); namely, they require global scaling. The key tool in the proof of (6.4) is, however, [10, Lemma 2.1], which in our case is covered by Lemma 2.1. □ We notice that … and whence … This and Lemma 6.4 imply … and the result follows if we choose C₅ = C₃∕4 < 1 and C₆ = C₄ + 1.
Proof. We first observe that for … ⩽ … relation (6.12) is trivially satisfied, as in this case …∕… ⩾ 1.
Remark. In our case the assumption of [10, Lemma 7.11] is valid only for r ⩾ 1. Since the lemma is proved by induction, we can repeat the argument and get the same result. □ Notice that … Using this and the fact that … > …, we obtain … We notice that … This follows easily from (1.1). Combining this with (6.25) we get … Hence, by (6.26), … Further, observe that … and, by the semigroup property, … Using (6.8) and (6.27) we obtain … Similarly, we show that

This yields
As in the proof of Lemma 6.10, we can replace 2t with t, and the proof is finished. We now finally prove the upper bound for the heat kernel of the process X.
Proof of Proposition 6.1. Our aim is to prove that

q(t, x, y) ⩽ C t Φ(|x − y|) for all t ⩾ 1, x ≠ y. (6.28)

We take arbitrary x₀, y₀ ∈ ℤ^d such that x₀ ≠ y₀ and we set r := |x₀ − y₀|∕2. Assume that … < …. We show that in this case the on-diagonal bound from Lemma 6.2 is smaller than the bound in (6.28), that is, … Indeed, since 1∕2 ⩽ … < …, we can use Lemma A.1 (with … = 4) to obtain … Combining (6.29) with Lemma 6.2 and using (2.5) we get …

Full upper estimate
In this paragraph we establish the upper bound for the transition probability of the random walk S^ϕ. We follow the approach of [4], cf. also [18], which is based on the application of hitting time estimates. We start with results for the process X and then we exploit them to obtain bounds for S^ϕ. Recall that … Proposition 6.14. There exists a constant C₁₄ > 0 such that … for all x ∈ ℤ^d, r > 0 and n ⩾ 1.
Proof. As before, (T_k)_{k∈ℕ₀} stands for the sequence of arrival times of the Poisson process (N_t)_{t⩾0} that was used to define the process X. More precisely, X_t = S^ϕ_k for all T_k ⩽ t < T_{k+1}. Using the Markov inequality, we easily get that ℙ(T_n ⩽ 2n) ⩾ 1∕2. By independence, Lemma 6.15 and (2.6), we obtain … as claimed. □ In the following theorem we finally prove the upper bound for the transition probability of the random walk S^ϕ. In the proof we again apply the parabolic Harnack inequality.
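The Markov inequality step used in the proof above can be written out explicitly: T_n is a sum of n independent exponential variables with parameter 1, so 𝔼[T_n] = n and

```latex
\[
\mathbb{P}(T_n > 2n) \leq \frac{\mathbb{E}[T_n]}{2n} = \frac12 ,
\qquad\text{hence}\qquad
\mathbb{P}(T_n \leq 2n) \geq \frac12 .
\]
```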