Partial sums of random multiplicative functions and extreme values of a model for the Riemann zeta function

We consider partial sums of a weighted Steinhaus random multiplicative function and view this as a model for the Riemann zeta function. We give a description of the tails and high moments of this object. Using these we determine the likely maximum of $T \log T$ independently sampled copies of our sum and find that this is in agreement with a conjecture of Farmer--Gonek--Hughes on the maximum of the Riemann zeta function. We also consider the question of almost sure bounds. We determine upper bounds at the level of square-root cancellation, along with lower bounds suggesting a degree of cancellation much greater than this, which we speculate reflects the influence of the Euler product.


Introduction
In this paper we investigate a model for the Riemann zeta function provided by a sum of random multiplicative functions. To define these, let $(f(p))_p$ be a set of independent random variables uniformly distributed on the unit circle (Steinhaus variables), where $p$ runs over the set of primes, and let $f(n) = \prod_{p^{v_p} \| n} f(p)^{v_p}$. Alternatively, one can take $(f(p))_p$ to be independent random $\pm 1$'s with equal probability (Rademacher variables), and let $f(n)$ be the multiplicative extension of these to the squarefree integers.
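For readers who wish to experiment, a Steinhaus random multiplicative function is easy to sample; the following is a minimal sketch (function names are our own, purely illustrative):

```python
import cmath
import math
import random

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(n**0.5) + 1):
        if is_p[p]:
            for m in range(p * p, n + 1, p):
                is_p[m] = False
    return [p for p in range(2, n + 1) if is_p[p]]

def steinhaus_sample(N, rng):
    """One sample of f(1), ..., f(N) for a Steinhaus random multiplicative
    function: f(p) uniform on the unit circle, extended to all n by
    complete multiplicativity, f(n) = prod f(p)^{v_p(n)}."""
    f = [complex(1.0)] * (N + 1)
    for p in primes_up_to(N):
        fp = cmath.exp(2j * math.pi * rng.random())
        for n in range(p, N + 1, p):
            m = n
            while m % p == 0:   # multiply once per power of p dividing n
                f[n] *= fp
                m //= p
    return f

rng = random.Random(1)
f = steinhaus_sample(100, rng)
assert all(abs(abs(v) - 1) < 1e-9 for v in f[1:])  # |f(n)| = 1 for all n
assert abs(f[12] - f[2] * f[2] * f[3]) < 1e-9      # f(12) = f(2)^2 f(3)
```

The Rademacher case is obtained analogously by drawing $f(p) \in \{\pm 1\}$ and restricting to squarefree $n$.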
The study of random multiplicative functions as a model for the usual deterministic multiplicative functions was initiated by Wintner [33]. He considered the Rademacher case as a model for the Möbius function and proved that the partial sums satisfy
$$\sum_{n \le T} f(n) \ll T^{1/2+\epsilon} \qquad (1)$$
almost surely, allowing him to assert that "Riemann's hypothesis is almost always true". We shall focus instead on the case of Steinhaus random multiplicative functions. In light of their orthogonality relations $\mathbb{E}[f(m)\overline{f(n)}] = \mathbb{1}_{m=n}$, one can think of Steinhaus $f(n)$ as a model for $n^{it}$ with $t \in \mathbb{R}$. This point of view has been fruitfully used over the years, with arguably the first instance being the pioneering work of Bohr [9] (although the $f(p)$ appeared in a different guise there). Given that $\zeta(\tfrac12 + it) \approx \sum_{n \le T} n^{-1/2 - it}$ for large $t \in [T, 2T]$, the above reasoning suggests that for Steinhaus $f(n)$ the sum
$$M_f(T) = \sum_{n \le T} \frac{f(n)}{\sqrt{n}}$$
provides a good model for the zeta function. We investigate various aspects of this sum, starting with the value distribution of $|M_f(T)|$.
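For orientation, the orthogonality relations pin down the second moment of this sum exactly; a short computation, using only $\mathbb{E}[f(m)\overline{f(n)}] = \mathbb{1}_{m=n}$:

```latex
\mathbb{E}\big[ |M_f(T)|^2 \big]
  = \sum_{m, n \le T} \frac{\mathbb{E}[f(m)\overline{f(n)}]}{\sqrt{mn}}
  = \sum_{n \le T} \frac{1}{n}
  = \log T + O(1),
```

so the square-root cancellation scale for $|M_f(T)|$ is $\sqrt{\log T}$, which is the benchmark against which the tail and almost sure bounds below can be measured.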
In the case of the zeta function we have Selberg's famous central limit theorem, which states that for $V = L\sqrt{\tfrac12 \log\log T}$ with $L \in \mathbb{R}$ fixed,
$$\frac{1}{T}\, \mu\big\{ t \in [T, 2T] : \log|\zeta(\tfrac12 + it)| > V \big\} \to \frac{1}{\sqrt{2\pi}} \int_L^\infty e^{-x^2/2}\, dx$$
as $T \to \infty$, where $\mu$ denotes Lebesgue measure. Regarding the uniformity of $V$, Selberg's original proof in fact allowed $V \ll (\log_2 T \log_3 T)^{1/2}$, which was recently improved to $V \ll (\log_2 T)^{3/5-\epsilon}$ by Radziwiłł¹ [28]. It is expected that this asymptotic holds for all $V \ll \log_2 T$ and that beyond this range the distribution must change, if only slightly (see Conjecture 2 of [28]). Jutila [23] has given Gaussian upper bounds in the range $0 \le V \le \log_2 T$ whilst, under the assumption of the Riemann Hypothesis, Soundararajan [30] was able to extend similar bounds into the range $V \ll \log_2 T \log_3 T$. This allowed for near sharp bounds on the moments of the Riemann zeta function. For our sum $M_f(T)$ we prove the following.

Theorem 1. Let $h(T) \to \infty$ arbitrarily slowly and suppose $(\log_2 T)^{1/2} \log_3 T \le V \le \log T/(\log\log T)^{h(T)}$. Then
$$\mathbb{P}\big( \log|M_f(T)| > V \big) = \exp\Big( -(1+o(1))\, \frac{V^2}{\log((\log T)/V)} \Big). \qquad (2)$$
If $V = L\sqrt{\tfrac12 \log\log T}$ with $L > 0$ fixed then
$$\mathbb{P}\big( \log|M_f(T)| > V \big) \ge (1+o(1))\, \frac{1}{\sqrt{2\pi}} \int_L^\infty e^{-x^2/2}\, dx. \qquad (3)$$

¹ As stated, these results differ from those in the cited work by a factor of $\log_2 T$ on account of our different normalisation.
Remark. The range of V in the lower bound (3) can be increased to o(log log T ) by applying large deviation theory in Lemma 8 below (see Lemma 3.1 of [3]). Since (3) is sufficient for our purposes, and there is only a small gap remaining in the range of V , we have left it as is.
Thus, in contrast to the zeta function, we are able to essentially understand the distribution in the range of larger $V$, whilst in the intermediate range the distribution is undetermined. The lower bound (3) suggests that it remains log-normal in this range, which would certainly be in analogy with the zeta function. Here, we remark that for the unweighted sum $\sum_{n \le T} f(n)$, Harper [20] has shown that there definitely is a change in distribution around the intermediate range, going from something with tails of the order $e^{-2V}$ when $1 \le V \le \sqrt{\log\log T}$, to something log-normal thereafter. At any rate, we believe that in the larger range $V \ge \log\log T$, the estimate (2) should indeed reflect the true behaviour of the zeta function. Here we note the factor of $V^{-1}$ in the term $\log((\log T)/V)$ of (2), which becomes significant when $V \ge (\log T)^{\theta}$ with $\theta > 0$.
As a quick corollary to these tail bounds we can derive "likely" bounds for the maxima of independently sampled copies of $M_f(T)$.
Corollary 1. Let $f_1, \ldots, f_N$ be chosen independently. Then for $N = T\log T$ we have, for all $\epsilon > 0$,
$$\mathbb{P}\Big( \max_{1 \le j \le N} |M_{f_j}(T)| \le \exp\big( (1+\epsilon)\tfrac{1}{\sqrt{2}}\sqrt{\log T \log\log T} \big) \Big) = 1 - o(1), \qquad (4)$$
whilst if $\epsilon < 0$ the probability is $o(1)$. If $N = \log T$ then, for all $\epsilon > 0$,
$$\mathbb{P}\Big( \max_{1 \le j \le N} |M_{f_j}(T)| \le (\log T)^{1+\epsilon} \Big) = 1 - o(1), \qquad (5)$$
whilst if $\epsilon < 0$ the probability is $o(1)$.
Since the zeta function at height $T$ oscillates on a scale of roughly $1/\log T$ (which can be seen either by considering its zeros or its approximation by a Dirichlet polynomial), one might expect that by sampling it at $T\log T$ independent points on the interval $[T, 2T]$ one can pick up the maximum. From this point of view (4) represents a model for $\max_{t \in [T,2T]} |\zeta(\tfrac12 + it)|$ and is in agreement with a conjecture of Farmer-Gonek-Hughes [14] which states that
$$\max_{t \in [T,2T]} |\zeta(\tfrac12 + it)| = \exp\Big( (1+o(1))\sqrt{\tfrac12 \log T \log\log T} \Big).$$
Similarly, (5) can be thought of as a model for the short interval maximum $\max_{h \in [0,1]} |\zeta(\tfrac12 + it + ih)|$, $t \in [T,2T]$, and is in agreement with the leading order of a very precise conjecture of Fyodorov-Hiary-Keating [15]. We remark that much work has gone into this latter conjecture, including a proof to leading order, independently by Arguin-Belius-Bourgade-Radziwiłł-Soundararajan [1] and Najnudel [27], and an upper bound to second order by Harper [21].
We shall prove (2) of Theorem 1 by considering the moments of $|M_f(T)|$, whilst for (3), which is just out of reach with moment bounds, we rely on the methods of Harper [19]. The moments were initially considered by Conrey-Gamburd [12] who proved that for fixed $k \in \mathbb{N}$,
$$\mathbb{E}\big[ |M_f(T)|^{2k} \big] \sim c_k (\log T)^{k^2},$$
where $c_k$ is an explicitly given constant. The case of real $k$ was considered by Bondarenko-Heap-Seip [10], with refinements in the low moments case coming from Heap [22] and then Gerspach [16], who gave a fairly complete resolution of the problem by applying ideas from Harper's proof of Helson's conjecture [19]. As a result, we know that
$$\mathbb{E}\big[ |M_f(T)|^{2k} \big] \asymp_k (\log T)^{k^2} \qquad (6)$$
for all real, fixed $k > 0$. Concerning tail bounds, one often requires the moments in a uniform range of $k$. The case of large $k$ was considered in [11], however the viable range of $k$ was somewhat lacking for the lower bounds. Here, we are able to fix this deficiency and prove the following.
Theorem 2. For $10 \le k \le C\log T/\log\log T$ we have
$$\mathbb{E}\big[ |M_f(T)|^{2k} \big] = e^{-k^2\log k - k^2\log_2 k + O(k^2)} (\log T)^{k^2}.$$
We also give some partial results for $k$ in other ranges, including larger $k$ (see Proposition 4) and, by detailing Gerspach's [16] proof for low moments, uniformly small $k$ (see Theorem 5). We remark that the proof of Theorem 2 is fairly elementary and does not require the probabilistic machinery of Harper [20], who proved bounds of the same quality for the unweighted sum $\sum_{n \le T} f(n)$. Our main tool is a hypercontractive inequality due to Weissler [34].
Another motivation for this work was to investigate the problem of almost sure bounds. Due to its connection with partial sums of the Möbius function, almost sure bounds for the sum $\sum_{n \le T} f(n)$, with $f(n)$ a Rademacher random multiplicative function, have been extensively investigated. Improving on the initial work of Wintner, Erdős showed in unpublished work that the almost sure bound in (1) can be improved to $\ll T^{1/2}(\log T)^A$. Halász [17] then gave a significant improvement by proving the bound
$$\sum_{n \le T} f(n) \ll T^{1/2} \exp\big( c\sqrt{\log_2 T \log_3 T} \big) \quad \text{a.s.} \qquad (8)$$
Although the terms $f(n), f(n+1), \ldots$ are not necessarily independent, one might reasonably expect an almost sure bound at the level of the law of the iterated logarithm, which would give $\sqrt{2T\log\log T}$. By carrying out a suggestion of Halász to remove the term $\log_3 T$ from the exponential in (8), Lau-Tenenbaum-Wu [25] were in fact able to prove a result on this level by showing that
$$\sum_{n \le T} f(n) \ll \sqrt{T}\,(\log\log T)^{2+\epsilon} \quad \text{a.s.} \qquad (9)$$
Around the same time, Basquin [7] independently proved the same bound using a connection with sums over smooth numbers and an interesting observation interpreting these sums as martingales.
Regarding omega theorems, the current best is due to Harper [18] who, improving on Halász [17], showed that almost surely for Rademacher $f(n)$. Likely, many of these results have similar counterparts for Steinhaus random multiplicative functions, although perhaps with slightly smaller powers of the double logarithms, since there is more chance of cancellation with Steinhaus variables. Turning to our case, as a first attempt one can apply the Rademacher-Menshov theorem (loosely, this states that if $\sum_{n=1}^\infty (\log n)^2\, \mathbb{E}|X_n|^2 < \infty$, then the series $\sum_{n=1}^\infty X_n$ converges almost surely) to show that $M_f(T) \ll (\log T)^{3/2+\epsilon}$ almost surely. Somewhat surprisingly, the machinery of Basquin [7] and Lau-Tenenbaum-Wu [25] does not improve this by much, since on applying a partial summation argument to (9) we get $M_f(T) \ll (\log T)(\log\log T)^{2+\epsilon}$ almost surely (at least, for Rademacher functions). We are able to give a further improvement over this. In terms of lower bounds, we prove the following.

Thus, we have a considerable gap between our upper and lower bounds. The upper bound of Theorem 3 is consistent with square-root cancellation and represents the behaviour of a typical random sum. Indeed, one of the main inputs in the proof is a bound for the $(2+\epsilon)$th moment. If one could find a way to effectively input lower moments this could probably be improved, however we have not been able to do so. We note that from Chebyshev's inequality and bounds for low moments in (6) we get that, as $T \to \infty$, further suggesting that improvements of Theorem 3 might be possible.
The lower bound of Theorem 4 better displays the multiplicative nature of the problem. It suggests the sum is potentially being dictated by its Euler product, since $\log\big|\prod_{p \le T}(1 - f(p)/\sqrt{p})^{-1}\big| \approx \sum_{p \le T} \operatorname{Re} f(p)/\sqrt{p}$, and by the law of the iterated logarithm [24] this last sum has fluctuations of size roughly $\sqrt{\log\log T}$. In any case, our proof of Theorem 4 certainly relies on a connection with the Euler product. One of the main inputs is that the event $A$ in which $M_f(T) \ge \exp((L + o(1))\sqrt{\log\log T})$ for infinitely many integers $T > 0$ is a tail event, in the sense that any change to a finite set of values $(f(p))_{p \in S}$, with $S$ a finite subset of primes, does not change the outcome. Since the values $(f(p))_p$ are independent, by the Kolmogorov zero-one law $A$ has probability either 0 or 1. By the Gaussian lower bound (3), $A$ must have positive probability, and hence actually has probability 1.
It is interesting to note that, again, the machinery of the bound (10) gives little more than $M_f(T) = O(1)$ almost surely, at least with a direct application.

Acknowledgements. The first author would like to thank the Max Planck Institute for Mathematics for their warm hospitality during a visit in February 2020 (when this project started), and also the PPG/Mat-UFMG and CNPq (grant number 452689/2019-8) for financial support.

Proof of Corollary 1
In this short section we deduce Corollary 1 from Theorem 1. Let us first deal with (4). Set $V = c\sqrt{\log T \log\log T}$ with $c > 0$. By the independence of the trials,
$$\mathbb{P}\Big( \max_{1 \le j \le N} |M_{f_j}(T)| \le e^V \Big) = \Big( 1 - \mathbb{P}\big( |M_f(T)| > e^V \big) \Big)^N.$$
By Theorem 1 we have $\mathbb{P}(|M_f(T)| > e^V) = T^{-2c^2 + o(1)}$, and the result follows on comparing this with $1/N$. A similar proof gives (5).
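The extreme value heuristic behind this deduction is easy to see in a toy model: if $\log X$ has Gaussian tails with variance $\sigma^2$, the maximum of $N$ independent copies concentrates near $\exp(\sigma\sqrt{2\log N})$. A sketch with hypothetical parameters (a lognormal stand-in, not the actual $M_f(T)$):

```python
import math
import random

rng = random.Random(0)

sigma = 1.0   # standard deviation of log X (illustrative choice)
N = 20000     # number of independent trials

# maximum of N independent lognormals exp(sigma * Z), Z ~ N(0,1)
m = max(math.exp(sigma * rng.gauss(0.0, 1.0)) for _ in range(N))

# first-moment prediction for the maximum
predicted = math.exp(sigma * math.sqrt(2 * math.log(N)))

# the observed max agrees with the prediction to leading order in the exponent
ratio = math.log(m) / math.log(predicted)
assert 0.7 < ratio < 1.3
```

In the corollary the role of $\sigma\sqrt{2\log N}$ is played by the optimisation of the tail exponent in (2) against $\log N$.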

Moment bounds
In this section we prove Theorem 2 and give some additional bounds for the moments in other ranges of k. We begin by proving Theorem 2.
3.1. Proof of Theorem 2. The implicit upper bound of Theorem 2 is from [11] and follows from Rankin's trick along with asymptotics for the tail sum $\sum_{p > y} p^{-1-\sigma}$. As mentioned in the introduction, we only need to improve the range of $k$ in the lower bounds. We show that this, in fact, follows from the same essential ingredient, which was a hypercontractive inequality due to Weissler [34]. This can be stated as follows. For $\rho > 0$ and a given random sum $F = \sum_n a(n) f(n)$, write $F_\rho = \sum_n a(n)\rho^{\Omega(n)} f(n)$, where $\Omega(n)$ denotes the number of not-necessarily-distinct prime factors of $n$. Then the following appears in [6, Section 3] (although in a slightly different form).
Lemma 2 (Weissler's inequality). Let $0 < p \le q$ and let $0 \le \rho \le p/q$. Then
$$\big( \mathbb{E}|F_\rho|^q \big)^{1/q} \le \big( \mathbb{E}|F|^p \big)^{1/p}.$$
This was originally proved for power series in one variable on the unit disk by Weissler [34]. Bayart [6] then extended this to multivariable power series using Minkowski's inequality. By the Bohr correspondence, these results apply to Dirichlet polynomials, or in our case, sums of random multiplicative functions.
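The one-variable inequality is easy to test numerically for a fixed polynomial on the unit circle; a sketch with $p = 2$, $q = 4$, $\rho = p/q = 1/2$ and hypothetical coefficients, approximating the expectations by quadrature in the angle:

```python
import cmath
import math

# fixed test polynomial P(z) = sum_j c[j] z^j (illustrative coefficients)
c = [1.0, -0.5, 2.0, 0.25]

def moment(rho, power, steps=20000):
    """Approximate (E |P(rho * z)|^power)^(1/power) for z uniform on the
    unit circle, by averaging over a fine grid of angles."""
    total = 0.0
    for i in range(steps):
        z = rho * cmath.exp(2j * math.pi * i / steps)
        val = sum(cj * z**j for j, cj in enumerate(c))
        total += abs(val) ** power
    return (total / steps) ** (1.0 / power)

lhs = moment(0.5, 4)   # (E |P(z/2)|^4)^{1/4}
rhs = moment(1.0, 2)   # (E |P(z)|^2)^{1/2}
assert lhs <= rhs + 1e-9   # Weissler's inequality with p = 2, q = 4
```

Replacing each power $z^j$ by $f(n)$ with $\Omega(n) = j$ is the Bohr correspondence alluded to above.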
If $0 < k \le 10$ then we may replace $e^{-k^2\log k - k^2\log_2 k - Ak^2}$ by some positive absolute constant $C$.
Proof. By Weissler's inequality with $p = 2k$, $q = 2\lceil k\rceil$ and $\rho = \alpha_k := k/\lceil k\rceil$, we have for real $k > 0$
$$\mathbb{E}\big[ |M_f(T)|^{2k} \big] \ge \Big( \mathbb{E}\Big| \sum_{n \le T} \frac{\alpha_k^{\Omega(n)} f(n)}{\sqrt{n}} \Big|^{2\lceil k\rceil} \Big)^{k/\lceil k\rceil}. \qquad (11)$$
Let $K = \lceil k\rceil$ to ease notation. Then the expectation on the right hand side is given by a restricted sum $\sum^*$, where $*$ denotes that the products $n_1 \cdots n_K$ and $n_{K+1} \cdots n_{2K}$ are restricted to squarefree numbers and $S(Y)$ is the set of $Y$-smooth numbers with $Y \le T$. We proceed to remove the condition $n_j \le T$ in each summation variable.
For a given $\delta > 0$, the tail sum for $n_1$ takes the form $\sum^*$, where in the second line we have used that the condition $n_1 \cdots n_K = n_{K+1} \cdots n_{2K}$ is multiplicative. By symmetry we acquire $2K$ such error terms. After removing the restrictions $n_j \le T$ in the main term we may write the resulting sum as an Euler product whose coefficient of $p^{-1}$ is $K^2\alpha_k^2 = Kk$. Thereby, we obtain a lower bound. In order to demonstrate that the second term is little 'oh' of the main term, we consider the ratio of the two. If $10 \le k \le C\log T/\log\log T$ then we choose $\delta = 1/\log Y$ and $Y = T^{1/ck}$ for some $c$. Then this ratio becomes $\exp(-ck + O(k))$, which is $\le 1/2$ provided $c$ is large enough. If $0 < k \le 10$ then we choose $\delta = 1/\log Y$ and $Y = T^{1/c}$ for some $c$. In this case the ratio is $\exp(-c + O(k))$, which again is $\le 1/2$ provided $c$ is large enough. With these choices we acquire a lower bound in which we have used $\pi(K^2) \ll K^2/\log K$ in the first product. Using this again for the error term in the exponential, when $k \ge 10$ we acquire the desired lower bound, since $Y = T^{1/ck}$ in this case. After raising this to the power $k/K$, the result follows in this range of $k$ by (11). For $0 < k \le 10$ the result follows similarly.

3.2. Larger k.
Proposition 4. When $k \ge c\log T/\log\log T$ we have, for some positive absolute constant $C$,
Proof. First suppose that $k$ is an integer. Then
$$\mathbb{E}\big[ |M_f(T)|^{2k} \big] = \sum_n \frac{d_{k,T}(n)^2}{n},$$
where $d_{k,T}(n) = \sum_{n_1 \cdots n_k = n,\ n_j \le T} 1$. Removing the divisor restriction $n_j \le T$, this is, for any $\sigma > 0$, where in the second inequality we have used that $d_k(n)^2 \le d_{k^2}(n)$ for $k \ge 1$. This last inequality follows by comparison on prime powers and induction, along with the formula $d_k(p^a) = \binom{a+k-1}{a}$. Choosing $\sigma = k/\log T$ and noting that $\zeta(1+\sigma) \ll \max(1/\sigma, 1)$, the result follows for integer $k$. We can then interpolate to non-integral $k$ by using Hölder's inequality, noting that terms of the form $(\log T)^k$ are absorbed into $e^{O(k^2)}$.
3.3. Uniformly small k. For upper bounds on uniformly small moments we make use of the recent progress of Gerspach [16]. His result is stated for fixed $k$, however with a careful reading of the proof one can get uniform bounds. We will give the main details. Interestingly, it appears that there is a slight blow-up of the constant as $k \to 0$. We do not know if this is an artefact of the proof or the result of some deeper change in the distribution around $V \approx \sqrt{\log\log T}$.
Theorem 5. For $1/\sqrt{\log\log T} \le k \le 1$ we have, for some absolute constant $C$,
Outline of modified proof. One can check that the uniform version of Proposition 4 of [16] is given by the inequality where $A, B$ are positive absolute constants and $J = \lfloor \log_3 T \rfloor$. The manipulations of Proposition 5 which lead to the application of Parseval's theorem (e.g. see Theorem 6 below) merely add an extra factor of $B^k$, and so, with a possibly different $A$, we find that the first term of the above is. Now, uniformly for $0 < k \le 1$ we have, where we have used $d_k(p^m) \le k$, which is valid for $m \ge 1$ and $k$ in this range. Therefore, on changing the constant $B$ from before, we arrive at the uniform bound. We now focus on the remaining expectation. Following [16], we break the range of integration down into various sub-ranges. By symmetry in law, the expectation of the integral over $t < 0$ is equal to that over $t > 0$, so we focus on this latter range. We then break this down as where $X = \log(e^j/(j+1))/\log 2$, $Y = \log(e^{-j}\log T)/\log 2$ and $F = F_j(1/2 - 2(j+1)/\log T + it)$ for short. Again by symmetry in law, the expectation of the first integral of (14) is the same as that of the first term of the first sum. Therefore, we concentrate on the ranges in these three sums. Combining uniform versions of Propositions 10, 11 and 12 of [16] we find that for $Z \ge (j+1)/\log T$, these hold. They follow in the same way by applying Lemma 8 of [16], which in fact holds for uniformly small exponents $b$ and $c$ there (see "Euler product result 1" of [20]). Applying (15) we find that the expectation of the $k$th power of (14) is bounded above by. Since $X \le 2j$ and $Y \le 2\log\log T$, applying this in (13) gives, for some absolute constant $C'$. Since the last two terms are of a lower order, this is, and so the result follows.

Tail bounds: Proof of Theorem 1
Theorem 1 consists of two statements. The first gives upper and lower bounds for the distribution in the range $(\log_2 T)^{1/2}\log_3 T \le V \le \log T/(\log\log T)^{h(T)}$, whilst the second gives lower bounds when $V = L\sqrt{\tfrac12\log\log T}$ (small range). We further split the first of these into the ranges $(\log_2 T)^{1/2}\log_3 T \ll V \le \log\log T$ (medium range) and $\log\log T \le V \ll \log T/\log\log T$ (large range). We will deal with these in order, starting with the large range.
4.1. Large range $V$. We begin with upper bounds since this is simpler.

Proof. By Chebyshev's inequality and Theorem 2 we have
$$\mathbb{P}\big( |M_f(T)| \ge e^V \big) \le e^{-2kV}\, \mathbb{E}\big[ |M_f(T)|^{2k} \big] = e^{-k^2\log k - k^2\log_2 k + O(k^2)} (\log T)^{k^2} e^{-2kV},$$
provided $10 \le k \le C\log T/\log\log T$. If $1 \le k \le 10$ then the same bound holds with the factor $e^{-k^2\log k - k^2\log_2 k + O(k^2)}$ replaced by some absolute constant (by (6)). Then for $10\log_2 T \le V \ll \log T/\log\log T$ we may take $k = V/\log((\log T)/V)$, in which case the right hand side simplifies to the desired quantity. When $\log_2 T \le V \le 10\log_2 T$ the same choice of $k$ gives the result.
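Schematically, the choice of $k$ here is the standard optimisation of a Chebyshev bound: if $\log \mathbb{E}|M_f(T)|^{2k}$ is roughly $k^2\log((\log T)/k)$, then

```latex
\inf_{k > 0}\Big( k^2 \log\frac{\log T}{k} - 2kV \Big)
  \;=\; -(1+o(1))\,\frac{V^2}{\log((\log T)/V)},
\qquad \text{attained near } k = \frac{V}{\log((\log T)/V)},
```

which is the source of both the exponent in (2) and the choice of $k$ throughout this section.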
The lower bound is where we gain the slight restriction on the size of $V$ in the large range.
Otherwise, we have for any given fixed $\epsilon > 0$. Then for a given $V$ we wish to show that there exists a $k = k_V$ and $\epsilon > 0$ such that this holds. To motivate our choice of $k$ later, we note that if indeed $\Phi(u) \approx e^{-u^2/\log((\log T)/u)}$ then a quick check shows that such a value of $k$ must occur at $k = V/\log((\log T)/V)$. Consider the upper tail. For this we have, for any $\delta > 0$. Again, we must consider separately the ranges $1 \le k \le 10$ and $10 \le k \le C\log T/\log\log T$ so that the double logarithms in Theorem 2 make sense. We consider the latter range, since the former can be dealt with similarly using the less complicated bounds $\mathbb{E}[|M_f(T)|^{2k}] \asymp (\log T)^{k^2}$. Continuing, by Theorem 2 the above is. The factor in front of the integral is. Therefore, if we choose $\delta = \epsilon$ this has negative leading term in the exponential and hence is $o(1)$. Removing the double logarithm in the above, we get the upper bound. Now consider the lower tail. Applying a similar argument we have. By (7) this is. The factor in front of the integral simplifies to. On setting $k = V/\log((\log T)/V)$ this becomes. Again, choosing $\delta = \epsilon$ this is $o(1)$, although this time with the proviso (19). We have therefore shown this for $k = V/\log((\log T)/V)$ and $\epsilon$ satisfying (19). Since $\Phi$ is a non-increasing function we infer. For the right hand inequality, by Theorem 2 with the above choice of $k$, we have. This gives the second bound (18), and note this is $o(1)$. Then we get, and the first bound (17) follows.

4.2. Medium range.
In the medium range $(\log_2 T)^{1/2}\log_3 T \le V \le \log\log T$ we make use of bounds for low moments. Lemma 3 gives the lower bound (20) for $\mathbb{E}[|M_f(T)|^{2k}]$, uniformly in the range $0 < k \le 1$ for some absolute constant $C > 0$, whilst Theorem 5 gives the uniform bound (21) in the range $1/\sqrt{\log\log T} \le k \le 1$.

Proof. Given the range of $V$, it suffices to prove the bound. For the upper bound, by Chebyshev's inequality and (21) we have. For the lower bound we proceed similarly to Lemma 6. As before, let $0 < k \le 1$ and $\epsilon > 0$ be chosen later. Again we have, for any $\delta > 0$. From (22) and the bounds (20) and (21) this is. The factor in front of the integral is
$$\exp\big( 2\delta(1+\delta^2)k^2\log\log T - 2k\delta V(1+\epsilon) + 2\log_3 T \big),$$
which on choosing $k = V/\log\log T$ and $\delta = \epsilon$ becomes. We have therefore shown that for $k = V/\log\log T$ and $\epsilon \ge 2/\log_3 T$ we have, where the implicit constant may be taken to be 2. Since $\Phi$ is non-increasing and $V \ge (\log_2 T)^{1/2}\log_3 T$, we infer. By (20) with the above choice $k = V/\log\log T$, we have. Choosing $\epsilon = 2/\log_3 T$ we get

4.3. Small range.
We now turn to proving the remaining lower bound (3) of Theorem 1, which states that for $V = (L + o(1))\sqrt{\log\log T}$ with $L > 0$ fixed, the tail probability is bounded below by the Gaussian integral. We make use of Harper's methods [19], following the proof of his Corollary 2 there.
We begin with the equivalent of Lemma 8 of [19]. Letting $\hat{\mathbb{E}}$ denote the conditional expectation with respect to the variables $(f(p))_{p \le \sqrt{T}}$ and $\hat{\mathbb{P}}$ the corresponding conditional probability, this states that if $A$ denotes the event in which, then $\hat{\mathbb{P}}(A) \gg 1$ for any realisation of the $(f(p))_{p \le \sqrt{T}}$. We omit the proof of this since it follows more or less verbatim, the only difference being a factor of $1/\sqrt{n}$ in the sums. Then, since the conditional expectation of the subtracted term is $\ll \frac{\log\log T}{X} + \frac{1}{\log T}$ by the prime number theorem, by Chebyshev's inequality the probability that this subtracted term is $\ge e^{2V}/\sqrt{\log T}$ is $\ll \frac{1}{e^{2V}\sqrt{\log T}}$. Since this is much smaller than our target probability, we can ignore this term.
Returning to the first term, we have, and since, by the prime number theorem, this is, after letting $t \to T/t$. We now note that we may add the condition $n \in S(T)$ in the sum with no change, and then after applying a small shift we find this is. Writing the integral out, we find that the expectation of the subtracted term is. As before, this is seen to give a negligible contribution to the probability. Finally, we apply Parseval's theorem for Dirichlet series.
Theorem 6 (Parseval's theorem, (5.26) of [26]). For a given sequence of complex numbers $(a_n)_{n=1}^\infty$, consider the Dirichlet series $A(s) = \sum_{n=1}^\infty a_n n^{-s}$ and let $\sigma_c$ denote its abscissa of convergence. Then for any $\sigma > \max(0, \sigma_c)$ we have
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} \Big| \frac{A(\sigma + it)}{\sigma + it} \Big|^2 \, dt = \int_1^\infty \Big| \sum_{n \le x} a_n \Big|^2 \frac{dx}{x^{2\sigma + 1}}.$$
Applying this we find that, at this point, we notice a difference to the case covered by Harper. The denominator of the integral on the left can get rather small around $t \approx 0$, which is not the case for the sum $\sum_{n \le T} f(n)$. To pick this up, we lower bound by the integral over the range $[-1/(2\log T), 1/(2\log T)]$. In this way we get the lower bound $\log T/(4(\log_2 T)^2)$ and have thus reduced the problem to the study of the probability. We now proceed similarly with Jensen's inequality, although our ensuing analysis of the leading term is considerably simplified. We have, after applying the Euler product formula.
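Theorem 6 is easy to test numerically for a short Dirichlet polynomial (for which $\sigma_c = -\infty$ and the partial sums stabilise); a sketch with hypothetical coefficients, approximating the $t$-integral by a midpoint rule and evaluating the $x$-integral exactly on each piece:

```python
import math

# finite test sequence a_1, ..., a_4, so A(s) is a Dirichlet polynomial
a = [1.0, -1.0, 2.0, 0.5]
sigma = 1.0

def A(s):
    return sum(an * (n + 1) ** (-s) for n, an in enumerate(a))

# left side: (1/2pi) * integral over t of |A(sigma + it)/(sigma + it)|^2
T, steps = 300.0, 60000
h = 2 * T / steps
lhs = sum(abs(A(complex(sigma, -T + (i + 0.5) * h)) /
              complex(sigma, -T + (i + 0.5) * h)) ** 2
          for i in range(steps)) * h / (2 * math.pi)

# right side: integral over x of |sum_{n<=x} a_n|^2 x^{-2 sigma - 1} dx;
# the partial sums are piecewise constant, so each piece integrates exactly
S = [sum(a[: n + 1]) for n in range(len(a))]  # S[n-1] = a_1 + ... + a_n
rhs = sum(S[n - 1] ** 2 * (n ** (-2 * sigma) - (n + 1) ** (-2 * sigma))
          for n in range(1, len(a))) / (2 * sigma)
rhs += S[-1] ** 2 * len(a) ** (-2 * sigma) / (2 * sigma)  # tail [4, infinity)

assert abs(lhs - rhs) / rhs < 0.05  # agreement up to truncation error
```

The small discrepancy comes entirely from truncating the $t$-integral at $|t| \le 300$.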
Let us remove the second sum in the exponential. First note that, since $\sum_{p \le x} \log p/p \ll \log x$, we can remove the term $p^{-2it}$ at a cost of $O(1)$. Now, by the prime number theorem, which, similarly to before, results in a negligible contribution compared to our target bound. Inputting these developments into (23), we have reduced the problem to lower bounding the probability, where the extra $\log_3 T$ term has come from (24). We can now complete the proof with the following lemma.
where in the last equality we used that $\sin(a+b) = \sin a\cos b + \sin b\cos a$ and $\sin(a-b) = \sin a\cos b - \sin b\cos a$. Define. Thus, we have shown that. Now notice that $(\cos(\theta_p))_p$ are i.i.d. with $\mathbb{E}\cos(\theta_p) = 0$ and $\mathbb{E}\cos(\theta_p)^2 = 1/2$. Then, since $\sin^2(x) = (1 - \cos(2x))/2$, by Chebyshev's estimate $\sum_{p \le x} \log p/p \ll \log x$. Splitting the sum at $p = T^{1/\log\log T}$ we find, by Chebyshev's estimate again and then Mertens' theorem. Since the tail sum is small, we find that $\mathbb{E}\Sigma_T^2 = \tfrac12\log\log T + O(\log_3 T)$. Now observe that each factor in $\Sigma_T$ is bounded by 1, due to the fact that $|\sin(x)| \le |x|$. Hence, by the central limit theorem for triangular arrays (see, for instance, [29], pg. 334, Theorem 2), we have that
$$\frac{\Sigma_T}{\sqrt{\tfrac12\log\log T}} \xrightarrow{d} N(0,1)$$
as $T \to \infty$ (where $\xrightarrow{d}$ means convergence in distribution).
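The variance computation is easy to illustrate by Monte Carlo with a simplified stand-in for $\Sigma_T$ (we take $\sum_{p \le x} \cos(\theta_p)/\sqrt{p}$ with independent uniform phases, whose variance is $\tfrac12\sum_{p \le x} 1/p$; the precise $\Sigma_T$ in the proof differs):

```python
import math
import random

def primes_up_to(n):
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(n**0.5) + 1):
        if is_p[p]:
            for m in range(p * p, n + 1, p):
                is_p[m] = False
    return [p for p in range(2, n + 1) if is_p[p]]

rng = random.Random(2)
ps = primes_up_to(2000)

# Monte Carlo samples of sum_p cos(theta_p)/sqrt(p), theta_p uniform
samples = [sum(math.cos(2 * math.pi * rng.random()) / math.sqrt(p)
               for p in ps)
           for _ in range(4000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
target = 0.5 * sum(1.0 / p for p in ps)  # = (1/2) sum_{p<=x} 1/p

assert abs(mean) < 0.1                       # centred
assert abs(var - target) / target < 0.1      # variance (1/2) log log x + O(1)
```

By Mertens' theorem, `target` here is $\tfrac12\log\log x + O(1)$, matching the normalisation in the central limit theorem above.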

Almost sure bounds
In this section we are going to prove Theorem 3, using the Borel-Cantelli lemma as our main tool. As is typical, this consists of two main steps: a "sparsification" step where the set of points $T$ is discretised to some sparser subset, and then a step where we bound the resultant probabilities (typically via Chebyshev's inequality and moment bounds). The sparsification step of Lau-Tenenbaum-Wu [25, Lemma 2.3] loses a factor of a logarithm, which is crucial for us. Instead, we make use of a theorem of [8] (Lemma 11 below) which involves a finer analysis. Our first two lemmas below provide the necessary moment bounds.

Lemma 9. Let $(a(n))_{n \in \mathbb{N}}$ be a sequence of complex numbers such that $a(n) \ne 0$ only for a finite number of $n$. Then, for any non-negative integer $\ell$, we have that, where $\tau_\ell(n) = \sum_{n_1 \cdots n_\ell = n} 1$.

Omega bounds
In this section we prove Theorem 4. We require the following estimate on y-smooth numbers less than x.
Lemma 12. Let $2 \le y \le x$ and let $\Psi(x, y)$ denote the number of integers less than or equal to $x$, all of whose prime factors are less than or equal to $y$. Then for $y \le \sqrt{\log x}$ we have. In particular, for fixed $y$ we have $\Psi(x, y) \ll_\epsilon x^\epsilon$ for all $\epsilon > 0$.
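For concreteness, $\Psi(x, y)$ can be computed directly by trial division; a minimal sketch:

```python
def psi(x, y):
    """Count the integers n <= x all of whose prime factors are <= y
    (the integer n = 1 is included)."""
    count = 0
    for n in range(1, x + 1):
        m = n
        for p in range(2, y + 1):
            while m % p == 0:
                m //= p
        if m == 1:          # all prime factors were <= y
            count += 1
    return count

# the 3-smooth numbers up to 1000 are exactly the 40 integers 2^a 3^b <= 1000
assert psi(1000, 3) == 40
```

For fixed $y$, counting lattice points $(a_1, \ldots, a_{\pi(y)})$ with $\sum a_i \log p_i \le \log x$ shows $\Psi(x, y) \asymp_y (\log x)^{\pi(y)}$, consistent with the $x^\epsilon$ bound of the lemma.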
Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of independent random variables and let $A$ be an event measurable in the sigma algebra $\sigma(X_1, X_2, \ldots)$. We say that $A$ is a tail event if, for any fixed $y \in \mathbb{N}$, $A$ is independent of $X_1, X_2, \ldots, X_y$. The Kolmogorov zero-one law says that every tail event has probability either 0 or 1. Let $\lambda(T)$ be an increasing function such that $\lambda(T) \to \infty$ as $T \to \infty$. In the next lemma we will show that the event
$$A_\lambda = \big\{ |M_f(T)| \ge \exp((1+o(1))\lambda(T)) \text{ for infinitely many integers } T > 0 \big\}$$
is a tail event with respect to the $(f(p))_p$, i.e. that any change to the values $(f(p))_{p \in S}$, with $S$ a finite subset of primes, will not change the outcome.
Lemma 13. Let $A_\lambda$ be as above. Then $A_\lambda$ is a tail event.
Proof. Let $y > 0$ and let $f_y(n)$ be the multiplicative function such that for each prime $p$ and any power $m \in \mathbb{N}$, $f_y(p^m) = f(p^m)\mathbb{1}_{p > y}$. Let $B_{y,\lambda}$ be the event in which $|M_{f_y}(T)| \ge \exp((1+o(1))\lambda(T))$ for infinitely many integers $T$. We are going to show that $A_\lambda$ and $B_{y,\lambda}$ are the same event; since the values $(f(p))_p$ are independent and $B_{y,\lambda}$ does not depend on the first values $(f(p))_{p \le y}$, we can then conclude that $A_\lambda$ is a tail event.
Firstly, we will show that $B_{y,\lambda} \subset A_\lambda$. Let $g_y(n)$ be the multiplicative function such that at each prime $p$ and any power $m \in \mathbb{N}$, $g_y(p^m) = \mu(p^m) f(p^m)\mathbb{1}_{p \le y}$. Then $f_y = g_y * f$ and hence
$$M_{f_y}(T) = \sum_{n \le T} \frac{g_y(n)}{\sqrt{n}}\, M_f(T/n). \qquad (27)$$
Now observe that the set $\{n \in \mathbb{N} : g_y(n) \ne 0\}$ has at most $2^{\pi(y)}$ elements. Thus, in the event $B_{y,\lambda}$, by the pigeonhole principle applied to (27), we obtain $n \le \prod_{p \le y} p$ such that
$$|M_f(T/n)| \ge \frac{\exp((1+o(1))\lambda(T))}{2^{\pi(y)}} = \exp((1+o(1))\lambda(T))$$
for infinitely many integers $T$. Thus $B_{y,\lambda} \subset A_\lambda$.

Now we will show that $A_\lambda \subset B_{y,\lambda}$. Firstly we partition $A_\lambda = (A_\lambda \cap C_y) \cup (A_\lambda \cap C_y^c)$, where $C_y$ is the event in which $M_{f_y}(T) \ll \exp(2\lambda(T))$. Clearly the event $A_\lambda \cap C_y^c$ is contained in $B_{y,\lambda}$. Now let $h_y(n)$ be the multiplicative function such that at each prime $p$ and any power $m \in \mathbb{N}$, $h_y(p^m) = f(p^m)\mathbb{1}_{p \le y}$. Then $f = h_y * f_y$ and hence, for any $U > 0$, (28) holds. Now we will show that in the event $A_\lambda \cap C_y$, the second sum on the right hand side above is $o(1)$ for a suitable choice of $U$. Let $u_y(n)$ denote the indicator function of $y$-smooth numbers. Then, by partial integration and Lemma 12, this follows. Now, by Lemma 12, the set $\{n \le U : h_y(n) \ne 0\}$ has at most $\Psi(U, y) \ll (\log U)^{\pi(y)} = (10\lambda(T))^{\pi(y)}$ elements. Hence, in the event $A_\lambda \cap C_y$, by the pigeonhole principle applied to (28), we always find infinitely many integers $T$ and $n = n(T) \le U$ such that
$$|M_{f_y}(T/n)| \ge \frac{\sqrt{n}\,\exp((1+o(1))\lambda(T))}{(10\lambda(T))^{\pi(y)}} \ge \exp((1+o(1))\lambda(T)).$$
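The convolution identities $f_y = g_y * f$ and $f = h_y * f_y$ driving this argument are easy to check numerically; a sketch with the Liouville function $\lambda(n)$ as a hypothetical stand-in for a completely multiplicative $f$:

```python
def factor(n):
    """Prime factorisation of n as a list of (prime, exponent) pairs."""
    out, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            out.append((p, e))
        p += 1
    if n > 1:
        out.append((n, 1))
    return out

def liouville(n):  # completely multiplicative stand-in for f
    return (-1) ** sum(e for _, e in factor(n))

N, y = 500, 5

def f_y(n):
    """f with the primes <= y removed: vanishes if some p <= y divides n."""
    fac = factor(n)
    return 0 if any(p <= y for p, _ in fac) else liouville(n)

def g_y(n):
    """g_y(p^m) = mu(p^m) f(p^m) for p <= y, and g_y = 0 off such n."""
    fac = factor(n)
    if any(e > 1 or p > y for p, e in fac):
        return 0  # mu vanishes on non-squarefree n; g_y vanishes for p > y
    return (-1) ** len(fac) * liouville(n)  # mu(n) * f(n)

def convolve(a, b, n):
    """Dirichlet convolution (a * b)(n)."""
    return sum(a(d) * b(n // d) for d in range(1, n + 1) if n % d == 0)

# the identity f_y = g_y * f, checked on 1..N
assert all(convolve(g_y, liouville, n) == f_y(n) for n in range(1, N + 1))
```

The companion identity $f = h_y * f_y$, with $h_y(p^m) = f(p^m)\mathbb{1}_{p \le y}$, can be checked the same way.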
Proof of Theorem 4. We argue as in Lemma 3.2 of [4]. Let $\lambda(T) = L\sqrt{\log\log T}$. Then $A_\lambda$ is a limsup event, and
$$\mathbb{P}(A_\lambda) \ge \limsup_{T \to \infty} \mathbb{P}\big( |M_f(T)| \ge \exp((1+o(1))\lambda(T)) \big) \ge \delta > 0,$$
where in the second inequality above we used the Gaussian lower bound (3). Thus, by the Kolmogorov zero-one law, we conclude that $\mathbb{P}(A_\lambda) = 1$.