The range of once-reinforced random walk in one dimension

We study once-reinforced random walk (ORRW) on $\mathbb Z$. For this model, we derive limit results on all moments of its range using Tauberian theory.

(number of previous jumps along $b$)), which equals the hungry random walk in the limit $\beta \to 0$. Second, the case $\beta < 0$ is equivalent to the once-reinforced random walk, or ORRW: here, every edge has initial weight 1, and once the walker traverses an edge, its weight changes to $c > 1$. The walker then chooses its next step with probabilities proportional to the edge weights. As it turns out, this is the same as the hungry random walker for $\beta = -\log c < 0$ on $\mathbb Z$. Recent literature on the ORRW has focused on recurrence and transience on various graphs (see, e.g., [3,6,9,16]). Here, we instead stay on $\mathbb Z$ and aim at concrete formulas for the asymptotics of the range of the ORRW; see Theorem 1. Our analysis is based on a simple decomposition of the inverse of the range process, given in (3). Notably, we cannot compute moments of the ORRW itself, but we give some heuristics for its variance in Remark 2.5.
In studying the ORRW, we will not restrict ourselves to $c > 1$, but only require $c > 0$. A scaling limit of the ORRW in this case was studied in [2,5,14]. More precisely, it was shown (see Theorem 1.2 in [5]) that, for $1/2 < c < 3/2$, the sequence $X^n = (X_{nt}/\sqrt n)_{t \ge 0}$ has a limit $Y$ as $n \to \infty$ which solves
$$Y_t = B_t + (1-c)\Big(\max_{s \le t} Y_s + \min_{s \le t} Y_s\Big), \qquad t \ge 0, \qquad (1)$$
for a standard Brownian motion $B$ (in the notation of [5], choose $\alpha = -\beta = 1-c$). More connections of our results to this equation are discussed in Remark 2.4.
The paper is organized as follows: in the next section, we state our main result, Theorem 1, on the asymptotics of all moments of the range of the ORRW. Section 3.1 contains some preliminary steps for our proofs. The proof of Theorem 1 is given in Section 3.2.

2 RESULTS
Definition 2.1. Let $c > 0$ and let $X = (X_n)_{n=0,1,2,\dots}$ be the stochastic process with $X_0 = 0$ such that, for $n = 0, 1, 2, \dots$, given $X_0, \dots, X_n$ and setting $M_n := \max_{k \le n} X_k$ as well as $m_n := \min_{k \le n} X_k$,
$$\mathbb P\big(X_{n+1} = X_n \pm 1 \mid X_0, \dots, X_n\big) \propto \begin{cases} c, & \text{if } m_n \le X_n \pm 1 \le M_n, \\ 1, & \text{otherwise.} \end{cases}$$
In other words, $X_{n+1} = X_n \pm 1$ with probability proportional to $c$ or $1$, depending on whether or not $X$ has visited $X_n \pm 1$ before time $n$. We call $X = (X_n)_{n=0,1,2,\dots}$ the ORRW on $\mathbb Z$ with parameter $c$. Its range by time $n$ is given by
$$R_n := M_n - m_n. \qquad (2)$$
Note that only the case $c > 1$ gives a reinforced walk (in the sense that it is more likely to visit previously seen sites), while the walk has self-avoiding properties for $0 < c < 1$. For $c = 1$, it is just the symmetric Bernoulli walk. Since our proofs work in all cases, we do not distinguish between them in the sequel.
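A minimal simulation sketch of this definition (in Python; the function and variable names are ours, not from the paper): each of the two neighbours of $X_n$ gets weight $c$ if it has been visited before and weight $1$ otherwise, and the next site is chosen proportionally to these weights.

```python
import random

def orrw_path(n_steps, c, seed=None):
    """Simulate n_steps of the ORRW on Z with parameter c (Definition 2.1)."""
    rng = random.Random(seed)
    x = 0
    visited = {0}
    path = [0]
    for _ in range(n_steps):
        # weight c for a previously visited neighbour, 1 for an unvisited one
        w_right = c if x + 1 in visited else 1.0
        w_left = c if x - 1 in visited else 1.0
        x += 1 if rng.random() < w_right / (w_right + w_left) else -1
        visited.add(x)
        path.append(x)
    return path

path = orrw_path(10_000, c=2.0, seed=1)
print("range by time n:", max(path) - min(path))   # the range max - min of the path
```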
The range process $R = (R_n)_{n=0,1,\dots}$ is a nondecreasing process with jumps of size 1, and is our main object of study. The following ideas are essential to understanding our approach. The random time $S_k := \inf\{n : R_n = k\}$ is the first time the ORRW has range $k$ (so that $k \mapsto S_k$ is the generalized inverse of $n \mapsto R_n$), and $T_i := S_{i+1} - S_i$ is the time between $R_n = i$ for the first time and $R_n = i+1$. In order to study $T_i$ (for $i = 1, 2, \dots$), we note that $T_i = 1$ with probability $1/(1+c)$. Otherwise, the random walk moves within its range (which is $i$ at that time) until it first hits its maximum or minimum, which takes time $\tau_i$, the hitting time of $\{-1, i-1\}$ of a simple random walk starting in $0$. Again, the chance to increase the range is $1/(1+c)$, and so on. The number of unsuccessful attempts before the range increases is a geometrically distributed random variable $Y_i$ with parameter $1/(1+c)$. (Note that $Y_i = 0$ is possible, that is, we must use the shifted geometric distribution.) In total, this gives (3) (where we define the empty sum to be 0). Here, $\tau_i^k$, $k = 1, 2, \dots$, are independent and identically distributed as $\tau_i$ above, and also independent of $Y_i$, $i = 1, 2, \dots$. Using (3), we can compute the generating function of $S_k$ (see Lemma 3.2) and then use $\mathbb P(R_n \ge k) = \mathbb P(S_k \le n)$ in order to obtain results on $R_n$ (see Lemma 3.3 and Proposition 3.4).
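The decomposition can be illustrated by simulation. The following sketch (reusing orrw_path from the previous sketch; all names are ours) extracts the times $S_k$ from a simulated path and checks that the empirical frequency of $T_i = 1$ is close to $1/(1+c)$.

```python
def range_hitting_times(path):
    """S_k = first time n at which the range max - min of the path equals k."""
    mn = mx = path[0]
    S = {0: 0}
    for n, x in enumerate(path):
        mn, mx = min(mn, x), max(mx, x)
        S.setdefault(mx - mn, n)
    return S

c = 2.0
path = orrw_path(200_000, c=c, seed=2)   # orrw_path from the previous sketch
S = range_hitting_times(path)
K = max(S)
T = {i: S[i + 1] - S[i] for i in range(1, K)}
frac_one = sum(T[i] == 1 for i in range(1, K)) / (K - 1)
print(frac_one, 1 / (1 + c))   # empirical P(T_i = 1) vs. the predicted 1/(1+c)
```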
We are now ready to formulate our main result, which will be proved in Section 3.2. Throughout, we write $a_n \sim b_n$ if $a_n/b_n \to 1$ as $n \to \infty$.
Theorem 1 (Asymptotic moments of the range). Let $R_n$ be, as in (2), the range of the ORRW with parameter $c > 0$. Then the moments of $R_n/\sqrt n$ converge, with limits $J_\alpha(c)$ as given in (4); in particular, this yields the asymptotics of $E[R_n]$ and $E[R_n^2]$.

Remark 2.2 (The range for $c = 1$). The ORRW with $c = 1$ equals the symmetric Bernoulli walk. In this case, several results have been obtained for the range. An early example is [7], who shows that $E[R_n] \sim \sqrt{8n/\pi}$ and $E[R_n^2] \sim (4 \log 2)\, n$. In this case, we compute the explicit value (5), and Feller's result for the expectation follows from (4). Moreover, using integration by parts we obtain (6), which gives Feller's result for the second moment.
In addition to these limiting results, [18,19] have computed the generating function as well as the expectation and variance of $S_k$. These results can also be obtained as follows: modifying (3) for the case $c = 1$, we can write $S_k$ as a sum of independent hitting times $\tau_{i+2}$, where $\tau_{i+2}$ is the hitting time of $\{-1, i+1\}$ of a random walk starting in $0$. This holds since the range increases if and only if such a hitting time is observed. We note that $\tau_{i+2}$ is the duration of play of a symmetric Gambler's ruin starting with $1$ unit out of a total of $i+2$ units; its law is a classical result that was, for example, derived in [1]. Summing then gives Vallois' results.
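Both classical inputs of this remark can be checked quickly by simulation. The sketch below (Python; function names are ours) compares the empirical first and second moments of $R_n/\sqrt n$ for $c = 1$ with the classical Feller values $\sqrt{8/\pi}$ and $4\log 2$, and the empirical duration of a symmetric Gambler's ruin with capital 1 out of $i+2$ units with the classical value $i+1$.

```python
import math, random

def srw_range(n, rng):
    """Range M_n - m_n of a simple symmetric random walk after n steps."""
    x = mn = mx = 0
    for _ in range(n):
        x += 1 if rng.random() < 0.5 else -1
        mn, mx = min(mn, x), max(mx, x)
    return mx - mn

def ruin_duration(a, N, rng):
    """Steps until a symmetric Gambler's ruin with capital a out of N units is absorbed."""
    steps = 0
    while 0 < a < N:
        a += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps

rng = random.Random(3)

# Feller's asymptotics for the c = 1 walk (rough Monte Carlo check)
n, reps = 10_000, 1_000
samples = [srw_range(n, rng) / math.sqrt(n) for _ in range(reps)]
m1 = sum(samples) / reps
m2 = sum(r * r for r in samples) / reps
print(m1, math.sqrt(8 / math.pi))   # expectation, ~1.60
print(m2, 4 * math.log(2))          # second moment, ~2.77

# expected duration of the Gambler's ruin with 1 unit out of i + 2: classical value i + 1
i, reps = 8, 20_000
mean_dur = sum(ruin_duration(1, i + 2, rng) for _ in range(reps)) / reps
print(mean_dur, i + 1)
```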

Remark 2.3 ($J_\alpha(c)$ for integer-valued $c$).
If $c$ is an integer, that is, $c = 1, 2, \dots$, the calculations from (5) and (6) can be generalized and lead to specific expressions for $J_\alpha(c)$. We just give the necessary steps for $\alpha = 1, 2$, which can then be generalized to larger $\alpha$. First, note that a straightforward calculation gives the corresponding expression for $c = 1, 2, \dots$; the case $\alpha = 1$ follows. The case $\alpha = 2$ then follows using integration by parts. For higher $\alpha$, more steps using integration by parts are necessary.
Remark 2.4 (Scaling limit of ORRW). Theorem 1 can be understood as a statement about the scaling limit $Y$ given in (1); see [5] for the corresponding limit result. By this, we mean that the range of $Y$ satisfies the analogous moment formulas. While the convergence above was only shown for $1/2 < c < 3/2$ in [5], we briefly argue how this convergence comes about. Note that $Y$ solves (1) iff
$$\Big(Y_t - (1-c)\big(\max_{s \le t} Y_s + \min_{s \le t} Y_s\big)\Big)_{t \ge 0} \text{ is a standard Brownian motion.} \qquad (7)$$
For the ORRW, note that
$$N_n := X_n - \frac{1-c}{1+c}\Big(\sum_{k<n} 1_{\{X_k = M_k\}} - \sum_{k<n} 1_{\{X_k = m_k\}}\Big)$$
is a martingale, since (wp = with probability)
$$X_{n+1} - X_n = \begin{cases} +1 \text{ wp } \tfrac{1}{1+c}, \; -1 \text{ wp } \tfrac{c}{1+c}, & \text{if } X_n = M_n > m_n, \\ +1 \text{ wp } \tfrac{c}{1+c}, \; -1 \text{ wp } \tfrac{1}{1+c}, & \text{if } X_n = m_n < M_n, \\ \pm 1 \text{ wp } \tfrac12, & \text{otherwise.} \end{cases}$$
For large $n$, we have that $\frac{1}{1+c}\sum_{k<nt} 1_{\{X_k = M_k\}} \sim M_{nt}$ by the law of large numbers, since every time with $X_k = M_k$ there is an independent chance of $1/(1+c)$ of increasing $M$ (and similarly for $m$). Moreover, a straightforward calculation gives that the quadratic variation of $N$ grows linearly, so the limit of $(N_{nt}/\sqrt n)_{t \ge 0}$ as $n \to \infty$ must be a continuous martingale with quadratic variation $t$ by time $t$, that is, a Brownian motion. This is enough to conclude that scaling limits of $X$ satisfy (7).
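The law-of-large-numbers step can be checked numerically: the sketch below (reusing orrw_path from the first sketch; names ours) compares $\frac1{1+c}\,\#\{k < n : X_k = M_k\}$ with $M_n$ along a single long path.

```python
c = 0.8
path = orrw_path(200_000, c=c, seed=5)   # orrw_path from the first sketch
running_max, visits_at_max = path[0], 0
for x in path[:-1]:                      # sum over k < n
    running_max = max(running_max, x)
    if x == running_max:
        visits_at_max += 1
print(visits_at_max / (1 + c), max(path))   # the two numbers should be comparable for large n
```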
Remark 2.5 (Variance of the ORRW). Although we are able to asymptotically compute all moments of $R_n/\sqrt n$ as in Theorem 1, we are unable to compute asymptotics of (even) moments of $X_n/\sqrt n$. Still, we give some thoughts on, and bounds for, the asymptotics of $V[X_n/\sqrt n] = E[(X_n/\sqrt n)^2]$, where equality holds since $E[X_n] = 0$ by symmetry. We observe that
$$X_n^2 - n - \frac{2(1-c)}{1+c}\sum_{k<n}\Big(M_k 1_{\{X_k = M_k\}} + |m_k| 1_{\{X_k = m_k\}}\Big)$$
is a mean-zero martingale, since (note that $m_k \le 0 \le M_k$)
$$E\big[X_{k+1}^2 - X_k^2 \mid X_0, \dots, X_k\big] = 1 + \frac{2(1-c)}{1+c}\Big(M_k 1_{\{X_k = M_k\}} + |m_k| 1_{\{X_k = m_k\}}\Big),$$
which gives the intuitive result that $E[X_n^2] \le n$ for $c > 1$ and $E[X_n^2] \ge n$ for $c < 1$. Moreover, since $0 \le (M_n - |m_n|)^2$ implies $2 M_n |m_n| \le M_n^2 + |m_n|^2$, it also gives the bounds (∗). From Figure 1, we see that the left-hand side (LHS) performs better for $c > 1$, while the right-hand side (RHS) is better for $c < 1$. The reason is that for $c > 1$, the process is more likely to switch between its maximum and minimum, so that $M_n \approx |m_n| \approx R_n/2$, leading to $M_n^2 + |m_n|^2 \approx R_n^2/2$, while for $c < 1$ switching becomes less likely and we rather have $R_n \approx M_n$ or $R_n \approx |m_n|$, which gives $M_n^2 + |m_n|^2 \approx R_n^2$.
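The heuristic in the last sentences can be probed by simulation. The sketch below (reusing orrw_path; names ours, and the bounds (∗) themselves are not evaluated here) estimates $V[X_n/\sqrt n]$ together with the ratio $(M_n^2 + |m_n|^2)/R_n^2$, which should be near $1/2$ for $c > 1$ and near $1$ for $c < 1$.

```python
import math, random

def observed_moments(c, n, reps, seed=0):
    """Estimate V[X_n/sqrt(n)] and the ratio (M_n^2 + |m_n|^2) / R_n^2 over many ORRW paths."""
    rng = random.Random(seed)
    second_moment, ratio = 0.0, 0.0
    for _ in range(reps):
        path = orrw_path(n, c, seed=rng.randrange(10**9))
        M, m = max(path), min(path)
        R = M - m
        second_moment += (path[-1] / math.sqrt(n)) ** 2   # E[X_n] = 0 by symmetry
        ratio += (M * M + m * m) / (R * R)
    return second_moment / reps, ratio / reps

for c in (0.5, 1.0, 2.0):
    var, ratio = observed_moments(c, n=20_000, reps=200)
    print(c, round(var, 3), round(ratio, 3))
```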

FIGURE 1
Simulating the ORRW for varying $c$, we take $10^5$ independent draws of $X_n/\sqrt n$ for $n = 10^5$ and compute the observed variance. We compare this to the two bounds in (∗).

3 PROOF OF THEOREM 1

3.1 Some preliminaries
Before we come to the proof of Theorem 1, we need some general results. First, in Theorem 2, we recall a classical Tauberian result by Hardy and Littlewood, which will help us to interpret the generating function of $S_k$ from (3). Then, in Lemma 3.1, we recall the generating function of hitting times for a simple symmetric random walk.

Theorem 2 (Tauberian theorem of Hardy and Littlewood). Let $a_0, a_1, a_2, \dots \ge 0$ and suppose that, for some $\rho \ge 0$ and $C > 0$,
$$\sum_{n=0}^{\infty} a_n s^n \sim \frac{C}{(1-s)^{\rho}} \quad \text{as } s \uparrow 1. \qquad (8)$$
Then
$$\sum_{k=0}^{n} a_k \sim \frac{C}{\Gamma(\rho+1)}\, n^{\rho} \quad \text{as } n \to \infty.$$
Moreover, if $\rho > 1$ and $n \mapsto a_n$ is nondecreasing,
$$a_n \sim \frac{C}{\Gamma(\rho)}\, n^{\rho-1} \quad \text{as } n \to \infty.$$

Proof. The assertions are classical Tauberian results by Hardy and Littlewood; see, for example, Chapter I.7.4 of [10]. Another self-contained proof is given in Proposition 12.5.2 of [12]. ▪

The following lemma is rather standard (see, e.g., Chapter XIV.4 in [8]), but we provide a proof for completeness.
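As a small numerical illustration of Theorem 2 (a sketch; the sequence and the exponent $\rho = 3/2$ are our own choices, not from the paper): for the coefficients of $(1-s)^{-\rho}$, both conclusions can be checked directly.

```python
import math

rho, N = 1.5, 200_000
a, partial = 1.0, 0.0           # a_n = binom(n + rho - 1, n): coefficients of (1-s)^(-rho)
for n in range(N + 1):
    if n > 0:
        a *= (n - 1 + rho) / n  # recursion a_n = a_{n-1} * (n - 1 + rho) / n
    partial += a
print(partial, N**rho / math.gamma(rho + 1))   # partial sums vs. C n^rho / Gamma(rho + 1)
print(a, N**(rho - 1) / math.gamma(rho))       # terms vs. C n^(rho - 1) / Gamma(rho)
```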
Proof. Recall $\cosh(r) := \frac{e^r + e^{-r}}{2}$ and note that $\cosh(\lambda_s) = 1/s$. For any $r \in \mathbb R$, the stochastic process $(e^{r Z_n}/E[e^{r Z_n}])_{n=0,1,2,\dots}$ is a martingale. Therefore, using $r = \pm \lambda_s$, the process $M$ built from these two martingales is a martingale as well. We apply the optional sampling theorem to the bounded martingale $M_{T \wedge n}$ to obtain the stated identity. From this, we read off the result. ▪
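The optional-sampling computation can be sanity-checked by simulation. In the sketch below (names ours), the closed form is our own rendering of the classical Gambler's-ruin generating function for the hitting time of $\{-a, b\}$ from $0$, with $\cosh(\lambda_s) = 1/s$; it is what the above martingale argument yields, not a formula copied from Lemma 3.1.

```python
import math, random

def hitting_time(a, b, rng):
    """Steps until a simple symmetric random walk started at 0 hits -a or b."""
    x, t = 0, 0
    while -a < x < b:
        x += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

def gf_cosh_ratio(s, a, b):
    lam = math.acosh(1 / s)                                 # cosh(lambda_s) = 1/s
    return math.cosh(lam * (b - a) / 2) / math.cosh(lam * (a + b) / 2)

rng = random.Random(6)
s, a, b, reps = 0.9, 1, 7, 50_000
mc = sum(s ** hitting_time(a, b, rng) for _ in range(reps)) / reps
print(mc, gf_cosh_ratio(s, a, b))   # Monte Carlo estimate of E[s^T] vs. the cosh ratio
```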

3.2 Proof of Theorem 1
Our analysis of the moments of $R_n$ will be done via an analysis of the generating function of the random variable $S_k$, which we defined in (3). We start by computing this generating function. Lemma 3.2 (Generating functions of $\tau_i$, $T_i$, and $S_k$). Fix $s \in (0, 1)$ and recall $\lambda_s$ from (9). Then, for $\tau_i$ and $T_i$ as in (3), the generating functions $E[s^{\tau_i}]$ and $E[s^{T_i}]$ admit explicit expressions, and so does the generating function of $S_k$. Proof. The first claim follows directly from Lemma 3.1. For the generating function of $T_i$, note that the generating function of $Y_i \sim \mathrm{geo}(1/(1+c))$ is $E[s^{Y_i}] = 1/(1+c-cs)$. The form of the generating function of $S_k$ then follows from (3). ▪
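The generating function of the shifted geometric distribution used in this proof can be verified numerically (a minimal sketch; with $p = 1/(1+c)$ and $P(Y = k) = p(1-p)^k$ for $k = 0, 1, 2, \dots$, one has $E[s^Y] = p/(1-(1-p)s) = 1/(1+c-cs)$, which is the expression quoted above).

```python
import random

def shifted_geometric(p, rng):
    """Number of failures before the first success (the value 0 is possible)."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

c, s = 2.0, 0.7
p = 1 / (1 + c)
rng = random.Random(7)
reps = 200_000
mc = sum(s ** shifted_geometric(p, rng) for _ in range(reps)) / reps
print(mc, 1 / (1 + c - c * s))   # Monte Carlo estimate vs. p / (1 - (1 - p) s)
```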

ACKNOWLEDGMENT
We thank Tanja Schilling for introducing us to the model of the hungry random walk. Open access funding enabled and organized by ProjektDEAL.