The null hypothesis of common jumps in case of irregular and asynchronous observations

This paper proposes novel tests for the absence of jumps in a univariate semimartingale and for the absence of common jumps in a bivariate semimartingale. Our methods rely on ratio statistics of power variations based on irregular observations, sampled at different frequencies. We develop central limit theorems for the statistics under the respective null hypotheses and apply bootstrap procedures to assess the limiting distributions. Further, we define corrected statistics to improve the finite sample performance. Simulations show that the test based on our corrected statistic yields good results and even outperforms existing tests in the case of regular observations.


INTRODUCTION
Drawing inference for jumps based on discrete observations of the underlying process is difficult. Increments between any two observations are superpositions of continuous movements and jumps, and whenever the data comes at relatively low frequencies it is possible that continuous movements in one period of time are large enough to be either attributed to jumps or to cancel out existing jumps in the same time period. The situation becomes even worse in a multivariate context, as the observations typically come in asynchronously and with varying frequencies. An illustration is given in Figure 1, where we have plotted several bivariate processes in continuous time alongside their discrete observations. Even though all of these processes are of different origins, the observed data actually coincides in all four cases. Hence, it is rather difficult to decide based on discrete observations only which of the four models is the most plausible. Still, this is the only thing we can do in practice, and in this work we are interested in the null hypothesis that common jumps of a bivariate process exist. In fact, this question was tackled and solved prominently in Jacod and Todorov (2009), but only under the additional assumption that observations come in at equidistant and synchronous times. Both assumptions are rarely justified in practice, in particular for high-frequency observations, which is why practitioners usually ignore a lot of observations (working, e.g., with one-minute data) in order to artificially construct a sampling scheme which is close enough to justify the assumptions of regularity. Both from a theoretical and a practical point of view this is unsatisfactory, and one would like to understand what happens when the underlying observations come in randomly at irregular and asynchronous times.
It has turned out that the construction of a test in two dimensions is difficult unless one properly understands the univariate situation. For this reason we provide two statistical procedures: First, we want to decide whether or not jumps are present in a realized path of a stochastic process in one dimension, based on discrete irregular observations of the process and under the null hypothesis that jumps exist. Second, we want to decide whether common jumps are present or not in two realized processes, based on irregular and asynchronous observations and under the null hypothesis that there exist common jumps. Thus, this work can not only be understood as a generalization of the results in Jacod and Todorov (2009), but it also complements the univariate discussion from Aït-Sahalia and Jacod (2009). Note also that the opposite null hypothesis of no common jumps in a bivariate process was tackled in Martin and Vetter (2018), but the formal procedures for the two problems are quite different. In both settings the goal is essentially to find statistics whose asymptotic behavior can be controlled under the respective null hypotheses, and this is achieved for test statistics of a different kind.
In the following, let $\tilde X$ be a one-dimensional (1D) process observed at times $\tilde t_{i,n}$, and let $X = (X^{(1)}, X^{(2)})$ be a bivariate process where $X^{(l)}$ is observed at times $t^{(l)}_{i,n}$, $l = 1, 2$, with all observations coming from the time interval $[0, T]$. Then, in a high-frequency setting, for some $k \ge 2$, our tests are based on the ratio statistics
$$\frac{\sum_{i \ge k:\, \tilde t_{i,n} \le T} \big(\tilde X_{\tilde t_{i,n}} - \tilde X_{\tilde t_{i-k,n}}\big)^4}{k \sum_{i \ge 1:\, \tilde t_{i,n} \le T} \big(\tilde X_{\tilde t_{i,n}} - \tilde X_{\tilde t_{i-1,n}}\big)^4}$$
and
$$\frac{\sum_{i,j \ge k:\, t^{(1)}_{i,n} \wedge t^{(2)}_{j,n} \le T} \big(X^{(1)}_{t^{(1)}_{i,n}} - X^{(1)}_{t^{(1)}_{i-k,n}}\big)^2 \big(X^{(2)}_{t^{(2)}_{j,n}} - X^{(2)}_{t^{(2)}_{j-k,n}}\big)^2\, \mathbf 1_{\{(t^{(1)}_{i-k,n},\, t^{(1)}_{i,n}] \cap (t^{(2)}_{j-k,n},\, t^{(2)}_{j,n}] \ne \emptyset\}}}{k^2 \sum_{i,j \ge 1:\, t^{(1)}_{i,n} \wedge t^{(2)}_{j,n} \le T} \big(X^{(1)}_{t^{(1)}_{i,n}} - X^{(1)}_{t^{(1)}_{i-1,n}}\big)^2 \big(X^{(2)}_{t^{(2)}_{j,n}} - X^{(2)}_{t^{(2)}_{j-1,n}}\big)^2\, \mathbf 1_{\{(t^{(1)}_{i-1,n},\, t^{(1)}_{i,n}] \cap (t^{(2)}_{j-1,n},\, t^{(2)}_{j,n}] \ne \emptyset\}}}.$$
The common feature of both statistics is that their asymptotic behavior differs depending on whether, for the first one, $\tilde X$ has jumps in $[0, T]$ or not, or whether, for the second one, $X^{(1)}$ and $X^{(2)}$ have common jumps in $[0, T]$ or not. In this paper, we investigate the asymptotics of these statistics both under the null hypothesis that (common) jumps are present and under the alternative that (common) jumps do not exist. Whereas the limits in probability can be shown to be of a similar form as in the setting of equidistant observation times, it is much harder to prove central limit theorems under the null hypothesis that (common) jumps are present. Limit theorems for jump processes observed at discrete frequencies are rare in the literature, and we know from the earlier works of Bibinger and Vetter (2015) and Martin and Vetter (2018) that one needs additional conditions on the nature of the sampling scheme. Unlike in the setting of equidistant observation times, the limiting variables in the central limit theorem turn out to be not mixed normal but to have a more complicated distribution. We use a bootstrap method similar to the one introduced in Martin and Vetter (2018) to estimate the quantiles of this distribution and to construct a feasible testing procedure. It thus seems that these methods are of universal applicability when dealing with estimation and testing based on asynchronous data.
Finally, we conduct a simulation study to check the finite sample performance of our tests, in which we use the models from Jacod and Todorov (2009) to compare our results with the ones from the setting of equidistant observations. We find that the finite sample performance of our tests is rather poor, especially under the null hypothesis, which is inherent when working with these kinds of statistics and is a problem that occurs for the tests in the equidistant setting as well. This is due to a rather large contribution of terms which vanish asymptotically in the central limit theorem. As an illustration, we have plotted the empirical rejection rates of the bivariate test in Figure 2.
We improve the performance of our tests drastically, however, if we construct an estimator which (partially) corrects for those asymptotically vanishing terms. In particular, as can be seen from Figure 2 as well, our results already look fine for a moderate sample size of 1,600 observations on average. Our method is also superior to tests based on equidistant observations if the available data is asynchronous and needs to be artificially synchronized, so that a lot of data points have to be discarded; compare our results with the ones from Jacod and Todorov (2009) for a sample size of 160 observations. Further, we find that using our corrected statistic we are also able to improve the existing tests from Aït-Sahalia and Jacod (2009), Jacod and Todorov (2009), and Aït-Sahalia and Jacod (2014) in the setting of equidistant observation times.

[FIGURE 2 Empirical rejection rates for the tests from Theorem 7 and Corollary 2. The solid line corresponds to the uncorrected test with 160 equidistant observations, whereas the dashed line is constructed from irregular data with 1,600 observations on average. The dotted line was simulated using the corrected test with 1,600 observations on average. In the irregular setting the observation times were simulated from Poisson sampling and each curve is based on 5,000 simulated paths. Null hypothesis and alternative are given by models II-j and II-d0, respectively; compare Section 4.2 for details.]
The remainder of the paper is organized as follows: As the structure of the results and the formal arguments are simpler in the setting where we test for jumps in a 1D process, we first derive in Section 2 two statistical tests for jumps within a stochastic process. The formal setting and the estimator $\Phi_{k,n}$ are introduced in Section 2.1. In Section 2.2, we derive the consistency of this estimator under both hypotheses, we cover the asymptotics of the estimator in the form of a central limit theorem in Section 2.3, and in Section 2.4 we use a bootstrap method to derive two feasible tests, one using the original statistic $\Phi_{k,n}$ and one using a corrected estimator $\hat\Phi_{k,n}(\varrho)$ based on the central limit theorem. Building on the results from Section 2, we proceed similarly in Section 3 to derive two statistical tests for deciding whether or not two processes jump at a common time. In Section 4, we examine the finite sample properties of the tests derived in Sections 2 and 3 by means of a simulation study, before we conclude in Section 5. The appendix containing the proofs is split into two parts: Appendix A (Section 6) contains the main structure of the proofs, leaving out most technical arguments. Appendix B (Section 7 in Supporting Information), available online, contains all proofs in detail and thus fills the gaps left in Appendix A.

A TEST FOR THE ABSENCE OF JUMPS
In this section we will derive a statistical test based on high-frequency observations which allows us to decide whether an observed path of a process contains jumps or not.

Settings and test statistic
First we specify the mathematical model for the process and the observation times. Let X be a 1D Itô semimartingale on the probability space $(\Omega, \mathcal F, P)$ of the form
$$X_t = X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s \, dW_s + (\delta \mathbf 1_{\{|\delta| \le 1\}}) * (\mu - \nu)_t + (\delta \mathbf 1_{\{|\delta| > 1\}}) * \mu_t. \qquad (1)$$
Here, W denotes a standard Brownian motion, $\mu$ is a Poisson random measure on $\mathbb R_+ \times \mathbb R$ whose predictable compensator satisfies $\nu(ds, dz) = ds \otimes \lambda(dz)$ for some $\sigma$-finite measure $\lambda$ on $\mathbb R$ endowed with the Borel $\sigma$-algebra, b and $\sigma$ are progressively measurable processes, and $\delta$ is a predictable function on $\Omega \times \mathbb R_+ \times \mathbb R$. For a more detailed discussion of the components of X consult Section 2.1 of Jacod and Protter (2012). We write $\Delta X_s = X_s - X_{s-}$ with $X_{s-} = \lim_{t \nearrow s} X_t$ for a possible jump of X in s.
Furthermore, we define a sequence of observation schemes $(\pi_n)_{n \in \mathbb N}$ via $\pi_n = \{t_{i,n} : i \in \mathbb N_0\}$, where the $(t_{i,n})_{i \in \mathbb N_0}$ are increasing sequences of stopping times with $t_{0,n} = 0$. By $|\pi_n|_T = \sup_{i \ge 1} (t_{i,n} \wedge T - t_{i-1,n} \wedge T)$ we denote the mesh of the observation times up to some fixed time horizon $T \ge 0$. Formally we will develop a statistical test which allows us to decide whether a realized path $X(\omega)$ contains jumps in a given time interval $[0, T]$ or not. Specifically, we want to decide based on the observations $(t_{i,n}, X_{t_{i,n}}(\omega))_{i \in \mathbb N_0}$ to which of the following two subsets of $\Omega$ the realization belongs: $\Omega_T^{(j)}$, the set of all $\omega$ for which the path of X up to T has at least one jump, and $\Omega_T^{(c)}$, the set of all $\omega$ for which the path of X is continuous on $[0, T]$.
All our test statistics are based on the increments $\Delta_{i,k,n} X = X_{t_{i,n}} - X_{t_{i-k,n}}$, and we denote by $\mathcal I_{i,k,n} = (t_{i-k,n}, t_{i,n}]$, $i \ge k \ge 1$, $\mathcal I_{i,n} = \mathcal I_{i,1,n}$, the corresponding observation intervals. For convenience we set $\mathcal I_{i,k,n} = \emptyset$ and accordingly $\Delta_{i,k,n} X = 0$ for $i < k$. Further we define for $k \in \mathbb N$ and a function $h : \mathbb R \to \mathbb R$ the functionals
$$V(h, [k], \pi_n)_T = \sum_{i \ge k:\, t_{i,n} \le T} h(\Delta_{i,k,n} X), \qquad V(h, \pi_n)_T = V(h, [1], \pi_n)_T.$$
Considering these functionals for $g : x \mapsto x^4$ and $k \ge 2$ we build the statistic
$$\Phi_{k,n} = \frac{V(g, [k], \pi_n)_T}{k\, V(g, \pi_n)_T},$$
whose asymptotics we use to construct a statistical test for the absence of jumps.
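As a sketch, the ratio statistic can be computed directly from the raw observation times and values. The function name and arguments below are illustrative, and the normalization by k follows the form of the statistic just described.

```python
import numpy as np

def phi_statistic(times, values, k=2, T=1.0):
    """Ratio statistic: fourth powers of all overlapping k-step increments
    over k times the fourth powers of the one-step increments, up to time T."""
    t = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float)
    x = x[t <= T]                                    # observations up to T
    numerator = np.sum((x[k:] - x[:-k]) ** 4)        # all overlapping k-windows
    denominator = k * np.sum((x[1:] - x[:-1]) ** 4)  # one-step increments
    return numerator / denominator
```

For a path whose variation is dominated by jumps, each jump enters k overlapping windows in the numerator but only one increment in the denominator, so the statistic is close to $k/k = 1$; for a continuous path the limit depends on the sampling scheme.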
Remark 1. In the setting of equidistant observation times $t_{i,n} = i/n$ our statistic becomes
$$\Phi_{k,n} = \frac{\sum_{i \ge k:\, i/n \le T} (X_{i/n} - X_{(i-k)/n})^4}{k \sum_{i \ge 1:\, i/n \le T} (X_{i/n} - X_{(i-1)/n})^4}. \qquad (2)$$
On the other hand, in Aït-Sahalia and Jacod (2009) a test is constructed based on the statistic
$$\frac{\sum_{i \ge 1:\, ik/n \le T} (X_{ik/n} - X_{(i-1)k/n})^4}{\sum_{i \ge 1:\, i/n \le T} (X_{i/n} - X_{(i-1)/n})^4}, \qquad (3)$$
where at the lower observation frequency $n/k$ only increments over certain observation intervals $\mathcal I_{i,k,n}$ enter the estimation. Intuitively it seems that using the statistic (2) should be better than using (3), because in (2) we utilize the available data more exhaustively by using all increments at the lower observation frequency. This intuition is confirmed by Proposition 10.19 in Aït-Sahalia and Jacod (2014), where central limit theorems are developed for both (2) and (3) and it is shown that (2) has a smaller asymptotic variance. □

Consistency
In order to derive results on the asymptotic behavior of $\Phi_{k,n}$ we need to impose certain structural assumptions on the Itô semimartingale X and the observation scheme. Furthermore, we introduce the notation
$$G_{k,n}(t) = \frac nk \sum_{i \ge k:\, t_{i,n} \le t} |\mathcal I_{i,k,n}|^2$$
and abbreviate $G_n(t) = G_{1,n}(t)$.

Condition 1.
The process $b_s$ is locally bounded and the process $\sigma_s$ is càdlàg. Furthermore, there exists a locally bounded process $\Gamma_s$ with $|\delta(\omega, s, z)| \le \Gamma_s(\omega) \gamma(z)$ for some deterministic bounded function $\gamma$ which satisfies $\int (\gamma(z)^2 \wedge 1) \lambda(dz) < \infty$, and the process $\sigma$ fulfills $\int_0^T \sigma_s^2 \, ds > 0$ almost surely. Additionally, the following assumptions on the observation scheme hold: (i) The sequence of observation schemes $(\pi_n)_n$ is exogenous, that is, independent of the process X and its components, and fulfills $|\pi_n|_T \stackrel{P}{\to} 0$.
(ii) The functions $G_n(t)$, $G_{k,n}(t)$ converge pointwise on $[0, T]$ in probability to strictly increasing functions $G(t)$, $G_k(t)$.
The structural assumptions on b, $\sigma$, $\delta$ are not very restrictive and occur elsewhere in the literature in similar form. The assumption that $\sigma$ almost surely does not vanish on $[0, T]$ and Condition 1(ii) are only needed to derive the asymptotic behavior of $\Phi_{k,n}$ on $\Omega_T^{(c)}$; $\Phi_{k,n}$ converges on $\Omega_T^{(j)}$ also without these assumptions.
We will see in the proof of Theorem 1 that
$$V(g, [k], \pi_n)_T \stackrel{P}{\to} k \sum_{s \le T} (\Delta X_s)^4 \quad \text{and} \quad V(g, \pi_n)_T \stackrel{P}{\to} \sum_{s \le T} (\Delta X_s)^4,$$
which yields the asymptotic behavior of $\Phi_{k,n}$ on $\Omega_T^{(j)}$. On $\Omega_T^{(c)}$ we have $\sum_{s \le T} (\Delta X_s)^4 = 0$, and we expand the fraction by n to get an asymptotic result. To describe the limit in that case we use the functions $G$ and $G_k$ from Condition 1(ii).
Remark 2. We obtain
$$G(t) - G(s) \le G_k(t) - G_k(s) \le k\big(G(t) - G(s)\big)$$
for all $t \ge s \ge 0$ from the series of elementary inequalities
$$a_1^2 + \dots + a_k^2 \le (a_1 + \dots + a_k)^2 \le k\,(a_1^2 + \dots + a_k^2), \qquad (5)$$
which holds for any $a_1, \dots, a_k \ge 0$, $k \in \mathbb N$. Here, the second inequality follows from the Cauchy-Schwarz inequality. Equality in (5) holds for $a_1 \ge 0$, $a_2 = \dots = a_k = 0$, respectively $a_1 = \dots = a_k > 0$. The relations of $G_n$ and $G_{k,n}$ are preserved in the limit, which yields $1 \le (G_k(t) - G_k(s))/(G(t) - G(s)) \le k$ for all $t \ge s \ge 0$. □
Hence, we get $\Phi_{k,n} \to 1$ on $\Omega_T^{(j)}$, and $\Phi_{k,n}$ converges on $\Omega_T^{(c)}$ to a random variable which is strictly greater than 1 if $G_k(T) > G(T)$ by Remark 2. We will therefore construct a test with critical region $\{\Phi_{k,n} > 1 + c_{k,n}\}$ for an appropriate sequence of decreasing random positive numbers $c_{k,n}$, $n \in \mathbb N$. In the following we illustrate the result from Theorem 1 by looking at two prominent observation schemes: first Poisson sampling, which is truly random and irregular, and second equidistant sampling, for which results exist in the literature and which will serve as a kind of benchmark.
Example 1. Consider the observation scheme where the $t_{i,n} - t_{i-1,n}$ are i.i.d. $\mathrm{Exp}(n\lambda)$-distributed, $\lambda > 0$. We will call this observation scheme Poisson sampling, as the observation times $t_{i,n}$ correspond to the jump times of a Poisson process with intensity $n\lambda$. In this setting, Condition 1(ii) is fulfilled, as shown in Proposition 1 from Hayashi and Yoshida (2008) with $G(t) = 2t/\lambda$ and as proven in Section 7.3 with $G_k(t) = (k+1)t/\lambda$. This yields that the limit under the alternative is $(k+1)/2$. □
Remark 3. In the case of equidistant synchronous observations, that is, $t_{i,n} = i/n$, we obtain $G(t) = t$ and $G_k(t) = kt$, so the limit under the alternative equals k. Hence in this setting $\Phi_{k,n}$ converges to a known deterministic limit on $\Omega_T^{(c)}$ as well, which also allows one to construct a test using $\Phi_{k,n}$ for the null hypothesis of no jumps (compare Section 10.3 in Aït-Sahalia and Jacod (2014)). This is not immediately possible in the irregular setting, unless the law of the generating mechanism is known to the statistician. □
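The limits in Example 1 can be checked empirically. The following sketch simulates Poisson sampling with $\lambda = 1$ and compares the ratio of the normalized squared interval sums with $(k+1)/2$; all names and the normalizations are illustrative choices matching the definitions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_times(n, lam=1.0, T=1.0):
    """Observation times given by a Poisson process with intensity n * lam on [0, T]."""
    gaps = rng.exponential(1.0 / (n * lam), size=int(3 * n * lam * T))
    t = np.cumsum(gaps)
    return np.concatenate(([0.0], t[t <= T]))

n, k = 200_000, 2
t = poisson_times(n)
G_n = n * np.sum(np.diff(t) ** 2)                # approximates G(1) = 2 / lam
G_kn = (n / k) * np.sum((t[k:] - t[:-k]) ** 2)   # approximates G_k(1) = (k + 1) / lam
ratio = G_kn / G_n                               # should be close to (k + 1) / 2 = 1.5
```

With $n$ this large the empirical ratio is within a few percent of $1.5$, which is the limit of the statistic under the alternative for $k = 2$.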

Central limit theorem
In this section, we derive a central limit theorem for $\Phi_{k,n}$ which holds on $\Omega_T^{(j)}$. Denote by $i_n(s)$ the index of the interval $\mathcal I_{i,n}$ associated with s, that is, $s \in \mathcal I_{i_n(s),n}$, and define the quantities $\delta_{k,n,-}(s)$ and $\delta_{k,n,+}(s)$ built from the observation intervals to the left and to the right of s; $\delta_{k,n,-}(s) + \delta_{k,n,+}(s)$ is the $(\pi_n : n \in \mathbb N)$-conditional variance of the limiting contribution of the increments around s. This identity is illustrated in Figure 3. The following condition summarizes the assumptions we need in addition to Condition 1 to derive a central limit theorem.

Condition 2.
The process X and the sequence of observation schemes $(\pi_n)_n$ fulfill Condition 1. Further, the following additional assumptions on the observation schemes hold: (ii) The integral converges, for all bounded continuous functions $f : \mathbb R \to \mathbb R$, $h : \mathbb R^2 \to \mathbb R$, $p = 1, \dots, P$, and any $P \in \mathbb N$. Here, $\Gamma(\cdot, dy)$ is a family of probability measures on $[0, T]$ with uniformly bounded first moments and $\int_0^T \Gamma(s, \{(0, 0)\})\, ds = 0$. □ Part (i) of Condition 2 guarantees that $|\pi_n|_T$ vanishes sufficiently fast, while part (ii) of Condition 2 yields that the $(\delta_{k,n,-}(s), \delta_{k,n,+}(s))$ converge in law in a suitable sense.
Because of the exogeneity of the observation times we may assume in the following that the probability space has the product form $(\Omega, \mathcal F, P) = (\Omega_1 \times \Omega_2, \mathcal F_1 \otimes \mathcal F_2, P_1 \otimes P_2)$, where $\mathcal F_1$ denotes the $\sigma$-algebra generated by X and its components and $\mathcal F_2$ denotes the $\sigma$-algebra generated by the observation scheme $(\pi_n)_n$.
To describe the limit in the upcoming central limit theorem we define the limiting variable in (10), where the sequence $(S_p)_{p \ge 0}$ denotes an enumeration of the jump times of X and the $\delta_k(s) = (\delta_{k,-}(s), \delta_{k,+}(s))$, $s \in [0, T]$, are independent random variables which are distributed according to $\Gamma(s, dy)$. The $U_s$, $s \in [0, T]$, are i.i.d. standard normal random variables. Both the $\delta_k(s)$ and the $U_s$ are independent of X and its components and defined on an extended probability space $(\bar\Omega, \bar{\mathcal F}, \bar P)$. Note that the limiting variable is well-defined because the sum in (10) is almost surely absolutely convergent and independent of the choice of the enumeration $(S_p)_{p \ge 0}$; compare Proposition 4.1.3 in Jacod and Protter (2012).
Here, the limit in (11) is defined on the extended probability space $(\bar\Omega, \bar{\mathcal F}, \bar P)$. Further, the statement of the $\mathcal F$-stable convergence on $\Omega_T^{(j)}$ means that we have
$$E\big[g(Z_n)\, Y\, \mathbf 1_{\Omega_T^{(j)}}\big] \to \bar E\big[g(Z)\, Y\, \mathbf 1_{\Omega_T^{(j)}}\big]$$
for all bounded and continuous functions g and all $\mathcal F$-measurable bounded random variables Y, where $Z_n$ denotes the sequence in (11) and Z its limit. For more background information on stable convergence in law we refer to Jacod and Protter (2012), Jacod and Shiryaev (2003), and Podolskij and Vetter (2010).
Example 2. Condition 2 is fulfilled in the setting of Poisson sampling introduced in Example 1. Part (i) is satisfied by Lemma 8 from Hayashi and Yoshida (2008), which provides bounds on moments of $|\pi_n|_T$ of all orders $q \ge 1$. That part (ii) is fulfilled is proven in Section 7.3.1. □

Testing procedure
In this section we develop a statistical test for the null hypothesis that $t \mapsto X_t(\omega)$ has jumps in $[0, T]$ (i.e., $\omega \in \Omega_T^{(j)}$) against the alternative that $t \mapsto X_t(\omega)$ is continuous on $[0, T]$ (i.e., $\omega \in \Omega_T^{(c)}$). To employ Theorem 2 for this purpose we have to estimate the distribution of the limiting variable from (10) and therefore the distribution of $\delta_k(S_p)$ for the jump times $S_p$. Because the distribution of $\delta_k(S_p)$ depends on the unknown observation scheme around $S_p$, of which we observe only a single realization, we use a bootstrap method to estimate this distribution from the realization of the observation scheme close to $S_p$. For this method to work we need some sort of local homogeneity, which will be guaranteed by Condition 3(i).
To formalize the bootstrap method let $L_n$ and $M_n$ be sequences of integers which tend to infinity. Set $\hat\delta_{k,n,m}(s) = (\hat\delta_{k,n,m,-}(s), \hat\delta_{k,n,m,+}(s))$ for $m = 1, \dots, M_n$, where the random variable $V_{n,m}(s)$ attains values in $\{-L_n, \dots, L_n\}$ with probabilities proportional to the corresponding interval lengths. Here, $(\hat\delta_{k,n,m,-}(s), \hat\delta_{k,n,m,+}(s))$ is chosen from the $(\delta_{k,n,-}(t_{i_n(s)+i,n}), \delta_{k,n,+}(t_{i_n(s)+i,n}))$, $i = -L_n, \dots, L_n$, which make up the $2L_n + 1$ realizations of $(\delta_{k,n,-}(\cdot), \delta_{k,n,+}(\cdot))$ which lie 'closest' to s, with a probability proportional to the interval length $|\mathcal I_{i_n(s)+i,n}|$. This corresponds to the probability with which a random variable which is uniformly distributed on the union of these intervals $\mathcal I_{i_n(s)+i,n}$, $i = -L_n, \dots, L_n$, but otherwise independent of the observation scheme, would fall into the interval $\mathcal I_{i_n(s)+i,n}$. Due to the structure of the predictable compensator $\nu$, the jump times $S_p$ of the Itô semimartingale X are also evenly distributed in time. This explains why we choose such a random variable $V_{n,m}(s)$ for the estimation of the law of $(\delta_{k,n,-}(s), \delta_{k,n,+}(s))$.
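The length-weighted choice of $V_{n,m}(s)$ can be sketched as follows: among the $2L_n + 1$ intervals around the one containing s, an index is drawn with probability proportional to its length, mimicking where a uniformly placed jump time would fall. Boundary indices are simply clipped here, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_neighbour_index(times, i_s, L_n, size=1):
    """Draw interval indices around i_s with probability proportional to the
    interval lengths |I_{i,n}|, where interval i is (t_{i-1}, t_i]."""
    offsets = np.arange(-L_n, L_n + 1)
    idx = np.clip(i_s + offsets, 1, len(times) - 1)  # crude boundary handling
    lengths = times[idx] - times[idx - 1]
    return rng.choice(idx, size=size, p=lengths / lengths.sum())
```

For Poisson sampling the weighting is asymptotically immaterial (compare Example 3 below in the text), but for schemes with dependent consecutive interval lengths it is essential.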
Using the estimators just constructed for realizations of $\delta_k(s)$, we build the following estimators for realizations of the limiting variable from (10). Here, an increment which is large compared to a given threshold is identified as a jump, and the local volatility is estimated from the remaining increments in a window of length $b_n$, for a sequence $(b_n)_n$ with $b_n \to 0$ and $|\pi_n|_T / b_n \to 0$. Further, the $U_{n,i,m}$ are i.i.d. standard normal random variables which are independent of $\mathcal F$ and defined on the extended probability space $(\bar\Omega, \bar{\mathcal F}, \bar P)$.
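The jump/volatility split can be sketched as follows: increments above a threshold of the form $\mathrm{const} \cdot (\Delta t)^{\varpi}$ are treated as jumps, and $\sigma^2$ is estimated from the remaining increments in a window of length $b_n$. The constants below are the illustrative values used later in the simulation section; the function name and interface are our own.

```python
import numpy as np

# Truncation parameters as in the simulation section (illustrative values)
BETA, VARPI = 0.03, 0.49

def local_volatility(times, values, s, b_n):
    """Estimate sigma^2 near s from increments in (s - b_n, s],
    discarding increments flagged as jumps by the threshold BETA * dt**VARPI."""
    t = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float)
    dt = np.diff(t)
    dx = np.diff(x)
    in_window = (t[1:] <= s) & (t[1:] > s - b_n)
    is_small = np.abs(dx) <= BETA * dt ** VARPI   # truncation: keep continuous part
    mask = in_window & is_small
    # realized variance of the retained increments, normalized by elapsed time
    return dx[mask] @ dx[mask] / dt[mask].sum()
```

On a simulated continuous path with a volatility of the magnitude used in Section 4, the estimator recovers $\sigma^2$ up to the usual realized-variance error.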
Condition 3. Assume that the process X and the sequence of observation schemes $(\pi_n)_n$ satisfy Condition 2. Furthermore, let the sequence $(b_n)_n$ fulfill $|\pi_n|_T / b_n \stackrel{P}{\to} 0$ and suppose that $(L_n)_n$ and $(M_n)_n$ are sequences of integers converging to infinity with $L_n / n \to 0$. Additionally, as $n \to \infty$:
(i) The bootstrapped quantities $\hat\delta_{k,n,m}(s_p)$ converge jointly in a suitable sense, for all $\epsilon > 0$ and any $P \in \mathbb N$ and $s_p \in (0, T)$, $p = 1, \dots, P$, with $s_i \ne s_j$ for $i \ne j$.
(ii) The set $\{s \in [0, T] : \sigma_s = 0\}$ is almost surely a Lebesgue null set, and the process $\sigma$ is itself an Itô semimartingale, that is, of the form (1).
(iii) On $\Omega_T^{(c)}$ we have $G_k(T) > G(T)$.
Part (i) of Condition 3 guarantees that the bootstrapped quantities consistently estimate the distribution of $(\delta_{k,-}(s), \delta_{k,+}(s))$ and thereby that the bootstrap quantile $\hat Q_{k,n}(1-\alpha)$ yields a valid estimator of the quantiles of the limiting distribution on $\Omega_T^{(j)}$. It is essentially satisfied whenever the observation schemes satisfy two properties: The distribution of intervals around a time point s is in a suitable sense close to the distribution of intervals around any time point in a neighbourhood of s. We refer to this condition as local homogeneity. The second property regards asymptotic independence, so that the estimation around one time point does not affect estimation around any other time point in the limit. We are confident that part (i) holds when we sample from renewal processes, and we give a formal proof for Poisson sampling.
The other two conditions in Condition 3 are rather mild. Part (ii) is needed for the convergence of the volatility estimators $\hat\sigma(S_p, -)$, $\hat\sigma(S_p, +)$ at jump times $S_p$, and it is satisfied in most volatility models. The same holds for part (iii), which guarantees that $\Phi_{k,n}$ converges under the alternative to a value different from 1, which is the limit under the null hypothesis. It can be seen from (5) in Remark 2 that (iii) holds whenever the lengths of the observation intervals are asymptotically of a similar size.
Theorem 3. Let Condition 3 be fulfilled. The test with critical region $\{\Phi_{k,n} > 1 + c_{k,n}(\alpha)\}$ has asymptotic level $\alpha$ in the sense that we have $\bar P(\Phi_{k,n} > 1 + c_{k,n}(\alpha) \mid D) \to \alpha$ for all $D \subset \Omega_T^{(j)}$ with $P(D) > 0$. The test is consistent in the sense that we have $\bar P(\Phi_{k,n} > 1 + c_{k,n}(\alpha) \mid D) \to 1$ for all $D \subset \Omega_T^{(c)}$ with $P(D) > 0$.
Note that to carry out the test introduced in Theorem 3 the unobservable variable n is not explicitly needed, even though $\sqrt n$ occurs in the definition of $c_{k,n}(\alpha)$. This factor actually cancels as it also enters as a factor in the bootstrap quantile $\hat Q_{k,n}(1 - \alpha)$. What remains is the dependence of $b_n$ and $L_n$ on n, though, but for these auxiliary variables only a rough idea of the magnitude of n usually is sufficient. Similar observations hold for all tests constructed later on as well.
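Given bootstrap draws approximating the limiting distribution, the decision rule itself is a plain quantile comparison. This is a hedged sketch: the names and the explicit scaling are ours, and, as noted above, in the actual procedure the factor $\sqrt n$ cancels between the statistic and the bootstrap quantile.

```python
import numpy as np

def reject_no_jumps(phi, boot_draws, n, alpha=0.05):
    """Reject the null of jumps when sqrt(n) * (phi - 1) exceeds the
    empirical (1 - alpha)-quantile of the bootstrap draws."""
    c = np.quantile(boot_draws, 1.0 - alpha)
    return np.sqrt(n) * (phi - 1.0) > c
```

A rejection here means deciding in favour of the alternative of a continuous path, since under the null of jumps the rescaled statistic stays within the bootstrap quantiles with probability tending to $1 - \alpha$.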
The simulation results in Section 4.1 show that the convergence in (19) is rather slow, because certain terms in $\sqrt n (\Phi_{k,n} - 1)$ which vanish in the limit contribute significantly for small n. Our goal is to diminish this effect by including estimates for those terms in the testing procedure. To this end, note the relation
$$\sqrt n (\Phi_{k,n} - 1) = \frac{\sqrt n \big( V(g, [k], \pi_n)_T - k V(g, \pi_n)_T \big)}{k\, V(g, \pi_n)_T},$$
and the main part in the proof of the central limit theorem therefore is to prove the stable convergence of $\sqrt n (V(g, [k], \pi_n)_T - k V(g, \pi_n)_T)$. This convergence obviously stems from the jumps of X, but $V(g, [k], \pi_n)_T - k V(g, \pi_n)_T$ contains asymptotically vanishing terms as well, which are due to the continuous part and are mostly captured in the small increments. To estimate their contribution we define an estimator using the same truncation as in (15); it can be shown to be consistent for the contribution of the continuous part. We then define for $\varrho \in (0, 1)$ the adjusted estimator $\hat\Phi_{k,n}(\varrho)$, where we partially correct for the contribution of these asymptotically vanishing terms.
Corollary 1. Let $\varrho \in (0, 1)$. If Condition 3 is fulfilled, it holds with the notation from Theorem 3 that the test based on $\hat\Phi_{k,n}(\varrho)$ has asymptotic level $\alpha$ for all $D \subset \Omega_T^{(j)}$ with $P(D) > 0$ and is consistent for all $D \subset \Omega_T^{(c)}$ with $P(D) > 0$.
The closer $\varrho$ is to 1, the faster is the convergence in (21), but also the slower is the convergence in (22). In particular, we cannot fully remove the contribution of these terms, because they actually drive the asymptotics under the alternative. Hence an optimal $\varrho$ should be chosen somewhere in between. Our simulation results in Section 4.1 show that it is possible to pick a $\varrho$ very close to 1 without significantly worsening the power compared to the test from Theorem 3.
Example 3. The assumptions on the observation scheme in Condition 3 are fulfilled in the Poisson setting. That part (iii) is fulfilled has been shown in Example 1, and that part (i) is fulfilled is proven in Section 7.3.2. □
In fact, for our testing procedure to work in the Poisson setting we do not need the weighting from (14); all intervals could also be picked with equal probability. This is due to the fact that the neighbouring interval lengths $|\mathcal I_{i_n(s)+i,n}|$, $i = -(L_n-1), \dots, -1, 1, \dots, L_n-1$, are (asymptotically) independent of $|\mathcal I_{i_n(s),n}|$ and have the same distribution. The weighting is important, however, if the lengths of consecutive intervals are dependent, as illustrated in the following example.
Example 4. Define an observation scheme by $t_{2i,n} = 2i/n$ and $t_{2i+1,n} = (2i + 1 + \theta)/n$, $i \in \mathbb N_0$, with $\theta \in (0, 1)$ (compare Example 33 in Bibinger and Vetter (2015)). Let us consider the case $k = 2$. The observation scheme is illustrated in Figure 4. The interval lengths alternate between $(1+\theta)/n$ and $(1-\theta)/n$, and it can be easily checked that Condition 1 holds with $G(t) = (1 + \theta^2) t$ and $G_2(t) = 2t$. Further, it can be shown similarly as in Bibinger and Vetter (2015) that Condition 2 is fulfilled for a suitable family $\Gamma(s, \cdot)$, $s > 0$. Hence, in order for the distribution of $\hat\delta_{2,n,1}(s)$ to approximate $\Gamma(s, \cdot)$, the variable $i_n(s) + V_{n,m}(s)$ has to pick the intervals of length $(1 + \theta)/n$ with higher probability than those of length $(1 - \theta)/n$, because a uniformly placed time point is more likely to fall into a longer interval.
[FIGURE 4 The sampling scheme from Example 4.]
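The scheme from Example 4 is easy to generate, and the alternating interval lengths $(1+\theta)/n$ and $(1-\theta)/n$ are visible immediately; a uniformly placed time point falls into a long interval with probability $(1+\theta)/2$. The function name below is illustrative.

```python
import numpy as np

def alternating_times(n, theta, T=1.0):
    """Observation scheme t_{2i} = 2i/n, t_{2i+1} = (2i + 1 + theta)/n on [0, T]."""
    i = np.arange(int(np.ceil(n * T / 2)) + 1)
    t = np.empty(2 * len(i))
    t[0::2] = 2 * i / n                  # even observation times
    t[1::2] = (2 * i + 1 + theta) / n    # odd observation times, shifted by theta/n
    return t[t <= T]

t = alternating_times(1000, theta=0.5)
gaps = np.diff(t)
# gaps alternate between (1 + theta)/n = 0.0015 and (1 - theta)/n = 0.0005,
# so the long intervals cover a fraction (1 + theta)/2 = 0.75 of [0, T]
```

This is exactly why the length-weighted bootstrap matters here: uniform sampling of neighbouring intervals would pick long and short intervals equally often, misrepresenting the law of the interval containing a jump time.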

A TEST FOR THE ABSENCE OF COMMON JUMPS
In this section we will derive a statistical test based on high-frequency observations which allows us to decide whether two processes jump at a common time or not. The methods we use are similar to those in Section 2. However, the form of the occurring variables and the proofs will be different because of special effects due to the asynchronicity of the data.

Settings and test statistic
As a model for the stochastic process we again consider an Itô semimartingale $X = (X^{(1)}, X^{(2)})^*$, which is now two-dimensional (2D), defined on the probability space $(\Omega, \mathcal F, P)$ and of a form analogous to (1), where $W = (W^{(1)}, W^{(2)})^*$ is a 2D standard Brownian motion and $\mu$ is a Poisson random measure on $\mathbb R_+ \times \mathbb R^2$ whose predictable compensator satisfies $\nu(ds, dz) = ds \otimes \lambda(dz)$ for some $\sigma$-finite measure $\lambda$ on $\mathbb R^2$ endowed with the Borel $\sigma$-algebra. b is a 2D progressively measurable process, $\sigma$ is a $(2 \times 2)$-dimensional process, and $\delta$ is a 2D predictable process on $\Omega \times \mathbb R_+ \times \mathbb R^2$.
Here, $\sigma^{(1)}_s, \sigma^{(2)}_s \ge 0$ and $\rho_s \in [-1, 1]$ are all univariate progressively measurable processes. The observation scheme here consists of two (in general different) increasing sequences of stopping times $(t^{(1)}_{i,n})_{i \in \mathbb N_0}$ and $(t^{(2)}_{j,n})_{j \in \mathbb N_0}$; by $|\pi_n|_T$ we again denote the mesh of the observation times up to $T \ge 0$. Formally we will develop a statistical test which allows us to decide to which of the following two subsets of $\Omega$ the $\omega$ which generated the observed path $t \mapsto X_t(\omega)$ belongs: $\Omega_T^{(cj)}$, the set of all $\omega$ for which $X^{(1)}$ and $X^{(2)}$ have at least one common jump in $[0, T]$, and $\Omega_T^{(ncj)}$, the set of all $\omega$ for which they have no common jump. We denote the observation intervals of $X^{(l)}$ by $\mathcal I^{(l)}_{i,k,n} = (t^{(l)}_{i-k,n}, t^{(l)}_{i,n}]$ and by $\Delta^{(l)}_{i,k,n} X^{(l)}$ the increments of $X^{(l)}$ over those intervals. As in Section 2.1 we set $\mathcal I^{(l)}_{i,k,n} = \emptyset$ and $\Delta^{(l)}_{i,k,n} X^{(l)} = 0$ for $i < k$. Further we define the following functionals for $k \in \mathbb N$ and $h : \mathbb R^2 \to \mathbb R$:
$$V(h, [k], \pi_n)_T = \sum_{i,j \ge k:\, t^{(1)}_{i,n} \wedge t^{(2)}_{j,n} \le T} h\big(\Delta^{(1)}_{i,k,n} X^{(1)}, \Delta^{(2)}_{j,k,n} X^{(2)}\big)\, \mathbf 1_{\{\mathcal I^{(1)}_{i,k,n} \cap \mathcal I^{(2)}_{j,k,n} \ne \emptyset\}},$$
which are generalizations of the famous estimator for the quadratic covariation based on asynchronous observations from Hayashi and Yoshida (2005). Computing these functionals for $f : (x, y) \mapsto x^2 y^2$ we build the statistic $\Phi_{k,n}$, whose asymptotic behavior for $k \ge 2$ is the foundation for the upcoming statistical test. By $\sum_{s \le T} (\Delta X^{(1)}_s)^2 (\Delta X^{(2)}_s)^2$ we denote the sum of the squared co-jumps of $X^{(1)}$ and $X^{(2)}$.
Remark 4. In the setting of equidistant observation times $t^{(l)}_{i,n} = i/n$, $l = 1, 2$, the statistic $\Phi_{k,n}$ is equal to its synchronous counterpart built from overlapping increments. In Jacod and Todorov (2009), a test for common jumps is constructed based on the statistic (27), where at the lower observation frequency $n/k$ only increments over the non-overlapping intervals $\mathcal I^{(l)}_{ik,k,n}$, $l = 1, 2$, enter the estimation. Further, in Section 14.1 of Aït-Sahalia and Jacod (2014) a test for common jumps based on (28) is discussed. As argued in Remark 1, it seems advantageous to use the statistic (28) over (27). However, in the asynchronous setting it is not clear which observation intervals should be paired, because there is no one-to-one correspondence of observation intervals in one process to observation intervals in the other process. To use the available data as exhaustively as possible we therefore decided to include products of squared increments over all overlapping observation intervals at the lower observation frequency in the numerator of $\Phi_{k,n}$. □
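A sketch of the bivariate functional: with $h(x, y) = xy$ and $k = 1$ it reduces to the Hayashi-Yoshida covariation estimator, and with $h(x, y) = x^2 y^2$ and $k \ge 2$ it is the building block of the test statistic. The $O(N^2)$ double loop is for clarity only, and all names are illustrative.

```python
def overlap(a0, a1, b0, b1):
    """True if the half-open intervals (a0, a1] and (b0, b1] intersect."""
    return a1 > b0 and b1 > a0

def hy_functional(t1, x1, t2, x2, h, k=1, T=1.0):
    """Sum h over all pairs of k-step increments of the two processes
    whose observation intervals overlap, restricted to min(t1_i, t2_j) <= T."""
    total = 0.0
    for i in range(k, len(t1)):
        for j in range(k, len(t2)):
            if min(t1[i], t2[j]) > T:
                continue
            if overlap(t1[i - k], t1[i], t2[j - k], t2[j]):
                total += h(x1[i] - x1[i - k], x2[j] - x2[j - k])
    return total
```

In the synchronous equidistant case only the diagonal pairs $i = j$ overlap for $k = 1$, so with $h(x, y) = xy$ the functional collapses to the usual realized covariance.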

Consistency
In this section we investigate under which conditions $\Phi_{k,n}$ converges to a certain limit. The following structural assumptions, which are similar to those in Condition 1, are needed to obtain an asymptotic result for $\Phi_{k,n}$ on $\Omega_T^{(ncj)}$.
From Theorem 4 we know that $\Phi_{k,n}$ converges in probability to 1 under the null hypothesis, and from Theorem 5 we conclude that $\Phi_{k,n}$ converges stably to a limit which is almost surely different from 1 if the alternative hypothesis holds and some mild additional conditions are satisfied. Hence, we will construct a test with critical region $\{\Phi_{k,n} > 1 + c_{k,n}\}$ for a (possibly random) sequence $(c_{k,n})_{n \in \mathbb N}$. As in Section 2, we will consider the situation where the observation times are generated by Poisson processes as an example for a random and irregular sampling scheme which fulfills the conditions we need for the testing procedure to work.
Example 5. Consider the extension of the setting from Example 1 to two dimensions, where the observation times of $X^{(1)}$ and $X^{(2)}$ originate from the jump times of two independent Poisson processes, that is, the $t^{(l)}_{i,n} - t^{(l)}_{i-1,n}$ are $\mathrm{Exp}(n \lambda_l)$-distributed with $\lambda_l > 0$ and independent for $i, n \ge 1$ and $l = 1, 2$. That Condition 5(i) is fulfilled in this setting follows from (12). The convergences of $G_{1,n}$ and $H_{1,n}$ have been shown in Proposition 1 of Hayashi and Yoshida (2008); the convergences of $G_{k,n}$ and $H_{k,n}$ are proven in Lemma 7. That Condition 5(iii) is fulfilled in the Poisson setting is proven in Section 7.3.1. □

Central limit theorem
Before we start to derive a central limit theorem for $\Phi_{k,n}$ on $\Omega_T^{(cj)}$, we restrict ourselves to the case $k = 2$. This simplifies notation, and from the simulation results in Section 4.1 for the test from Section 2 it also seems optimal to choose k as small as possible.
First we introduce notation to describe the intervals around some time s at which a common jump might occur; these are illustrated in Figure 5. Limits of these variables will occur in the central limit theorem. To ensure convergence of the $Z_n(s)$ we need to impose the following assumption on the observation scheme.

Condition 6.
The process X and the sequence of observation schemes $(\pi_n)_n$ fulfill Conditions 4 and 5(i)-(ii). Further, the integral converges for all bounded continuous functions $f : \mathbb R \to \mathbb R$, $h : \mathbb R^6 \to \mathbb R$ and any $P \in \mathbb N$. Here, $\Gamma(\cdot, dy)$ is a family of probability measures on $[0, T]$ with uniformly bounded first moments.
Using the limit distribution of $Z_n(s)$ implicitly defined in Condition 6, we define the limiting variable. Here, $(S_p)_{p \ge 0}$ is an enumeration of the common jump times of $X^{(1)}$ and $X^{(2)}$, the vector of limiting interval lengths around s is distributed according to $\Gamma(s, \cdot)$, and the vectors $(U_s^{(1),-}, U_s^{(1),+}, U_s^{(2)}, U_s^{(3)}, U_s^{(4)})$ are standard normally distributed and independent for different values of s. Similarly as for (10), we obtain that the infinite sum in the definition of the limiting variable is well-defined. Using this variable we are able to state the following central limit theorem.

Condition 7.
The process X and the sequence of observation schemes $(\pi_n)_n$ fulfill Conditions 5 and 6. $(L_n)_n$ and $(M_n)_n$ are sequences of integers converging to infinity with $L_n / n \to 0$, and the sequence $(b_n)_n$ fulfills $|\pi_n|_T / b_n \stackrel{P}{\to} 0$. Additionally, as $n \to \infty$, the bootstrapped quantities converge for all $\epsilon > 0$ and any $s_p \in (0, T)$, $p = 1, \dots, P$, $P \in \mathbb N$, with $s_i \ne s_j$ for $i \ne j$, analogously to Condition 3(i). Moreover, an analogue of the inequality (5) holds, which can be shown analogously to Remark 2.
Denote by $\Omega_T^{(dj)}$ the subset of $\Omega$ where either $X^{(1)}$ or $X^{(2)}$ has a jump in $[0, T]$ but there exists no common jump. This is the subset of $\Omega_T^{(ncj)}$ on which the limiting variables do not vanish under Condition 7(i)-(ii). The test is defined via a critical region in (31).

Theorem 7. Let Condition 7 be fulfilled. The test defined in (31) has asymptotic level in the sense that we have for all ( ) ⊂ Ω ( ) with P( ( ) ) > 0. Further the test is consistent in the sense that we have for all ⊂ Ω ( ) with P( ) > 0, and even for all ⊂ Ω ( ) with P( ) > 0 if we have 2 ( ) 2, > ( ) almost surely.
Using this expression we then define the adjusted estimator for ∈ (0, 1) by
Corollary 2. Let ∈ (0, 1). If Condition 7 is fulfilled, it holds with the notation of Theorem 7 for all ( ) ⊂ Ω ( ) with P( ( ) ) > 0 and

Similarly as discussed in Section 2.4 after Example 3, we can omit the weighting in (34) to obtain a feasible testing procedure within the 2D Poisson setting as well. However, in the setting where both processes X (1) and X (2) are observed at the observation times introduced in Example 4 with different 1 ≠ 2 , it is easily verified that the weighting in (34) is necessary and leads to a correct estimation of the distribution of ( ) 2, .

SIMULATION RESULTS
We conduct simulation studies to verify the effectiveness of the developed tests and to study their finite sample properties.

Testing for jumps
In our simulation the observation times originate from a Poisson process with intensity n, which corresponds to = 1 in Example 1 and yields on average n observations in the time interval [0, 1].
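Such an observation scheme is straightforward to generate: the waiting times between consecutive observations are i.i.d. exponential with mean 1/n. The following sketch illustrates this; the function name and the safety margin in the number of drawn gaps are our own choices, not from the paper.

```python
import numpy as np

def poisson_times(n, T=1.0, rng=None):
    """Simulate observation times of a Poisson process with intensity n on [0, T].

    Gaps between consecutive observations are i.i.d. Exp(n), so the expected
    number of observations in [0, T] is n * T.
    """
    rng = np.random.default_rng(rng)
    # Draw more exponential gaps than needed with high probability, then cut at T.
    gaps = rng.exponential(scale=1.0 / n, size=int(3 * n * T) + 50)
    times = np.cumsum(gaps)
    return times[times <= T]

times = poisson_times(1600, rng=0)
```

For n = 1,600 this produces on average 1,600 observation times in [0, 1], matching the intermediate frequency used below.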
We simulate from the model where X 0 = 1 and the Poisson measure has the predictable compensator specified below. This model is a 1D version of the model used in Section 6 of Jacod and Todorov (2009). Their test serves as a benchmark for our bivariate result, and for this reason it makes sense to work with their parameter specifications as well. Precisely, we use the parameter setting from Table 1 with σ 2 = 8 × 10 −5 in all cases. We further set n = 100, n = 1,600 and n = 25,600, so in a trading day of 6.5 hr this corresponds to observing X on average every 4 min, every 15 s and every second. Also = 0.03 and = 0.49 are chosen as in Jacod and Todorov (2009); see Remark 5.5 therein for the reasoning behind this choice of the truncation parameters. Specific to our procedure is the choice of b n , L n , and M n , for which we refer to Section 5 of Martin and Vetter (2018). We use = 1∕ √ for the local interval in the estimation of s and = ⌊ln( )⌋, = ⌊10 √ ⌋ in the simulation of thê, , ,− ( ),̂, , ,+ ( ). Generally, our procedure is rather robust to the choice of these three parameters.
The cases I-j to III-j correspond to the presence of jumps of diminishing size. In the cases with smaller jumps we allow for more jumps, such that the overall contribution of the jumps to the quadratic variation is roughly the same in all three cases. The fraction of the quadratic variation which originates from the jumps matches the one estimated from real financial data in Todorov and Tauchen (2011). In all three cases where the model allows for jumps we only use paths in the simulation study on which jumps were realized. In the fourth case, Cont, we consider a purely continuous model.
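Restricting the study to paths on which jumps were realized can be implemented by simple rejection sampling of the jump configuration. The sketch below is a hypothetical illustration with a normal jump-size law; the paper's actual jump-size distribution and intensities are those of Table 1.

```python
import numpy as np

def compound_poisson_with_jump(lam, T=1.0, rng=None, max_tries=1000):
    """Draw a compound-Poisson jump configuration conditional on at least one jump.

    Configurations without realized jumps are discarded, mirroring the
    simulation design in which only paths with jumps enter the study
    under the null hypothesis.
    """
    rng = np.random.default_rng(rng)
    for _ in range(max_tries):
        m = rng.poisson(lam * T)
        if m > 0:
            times = np.sort(rng.uniform(0.0, T, size=m))
            sizes = rng.normal(0.0, 0.1, size=m)  # illustrative jump-size law
            return times, sizes
    raise RuntimeError("no jumps realized")

jump_times, jump_sizes = compound_poisson_with_jump(5.0, rng=3)
```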
We applied the two testing procedures from Theorem 3 and Corollary 1 for k = 2, 3, 5, and the results are displayed in Figures 6 and 7. In Figure 6 the results from Theorem 3 are presented. In the left column the empirical rejection rates are plotted against the theoretical level of the test, and in the right column we show estimated density plots based on the simulated values of Φ ( ) , , . In Figure 7, we present the results from the test in Corollary 1 for = 0.9 in the same way. The density plots here show the estimated density of Φ ( ) , , ( ). Figure 6 can be summarized as follows: • The power of the test from Theorem 3 is very good, in particular for n = 1,600 and n = 25,600.
• The empirical rejection rates match the asymptotic values rather well only at the highest observation frequency n = 25, 600 and only for I-j and II-j.
• We observe over-rejection in all cases. The empirical rates match the asymptotic values better in the cases where there are on average larger jumps.
• The results are better the smaller k is.
• The density plots show the convergence of Φ ( ) , , to 1 in the presence of jumps and to (k + 1)∕2 under the alternative, as predicted in Example 1.
The overall tendencies remain the same for Figure 7, but with a notable difference: The observed rejection rates of the adjusted test from Corollary 1 match the asymptotic values much better than those from Theorem 3, while the power remains good in all cases. In particular, the empirical size of the test is at least reasonable in the case III-j when n = 25,600, while for I-j and II-j we get good results already for n = 1,600. The density plots support this observation.

F I G U R E 6
Simulation results for the test from Theorem 3. The dotted lines correspond to n = 100, the dashed lines to n = 1,600, and the solid lines to n = 25,600. In all cases N = 10,000 paths were simulated.

Under the alternative, Φ ( ) , , ( ) clusters around the value 1 + (1 − )(k − 1)∕2 instead of (k + 1)∕2, but the values are still large enough to be well distinguished from 1. Figure 8 illustrates how the performance of the test from Corollary 1 depends on the choice of the parameter . We investigate the empirical rejection rates for k = 2 in the cases III-j and

F I G U R E 7
Simulation results for the test from Corollary 1. The dotted lines correspond to n = 100, the dashed lines to n = 1,600, and the solid lines to n = 25,600. In all cases N = 10,000 paths were simulated.

Cont with n = 25,600 for the test with level = 5% as a function of . We plot the empirical rejection rate under the null hypothesis III-j, which serves as a proxy for the type-I error of the test, together with one minus the empirical rejection rate under the alternative Cont, which serves as a proxy for the type-II error of the test. The sum of both error proxies then serves as an indicator for the overall performance of the test in dependence of . As expected we observe a decrease in the type-I error as increases and an increase in the type-II error. While the type-I error decreases approximately linearly, the type-II error is equal to zero up to 0.8, then increases slightly and starts to increase steeply at = 0.9. In this example the overall error is minimized for a relatively large value of close to = 0.9, which explains our choice in Figure 7.

F I G U R E 8
This graphic shows for k = 2, = 5%, and n = 25,600 the empirical rejection rate in the case III-j (dotted line) and 1 minus the empirical rejection rate in the case Cont (dashed line) from the Monte Carlo simulation based on Corollary 1 as a function of ∈ [0, 1].
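The trade-off just described amounts to minimizing the sum of the two error proxies over the tuning parameter. A toy sketch with purely illustrative error curves (the numbers below are not from our simulation) shows the mechanics of this choice:

```python
import numpy as np

# Hypothetical Monte Carlo output: error proxies of the adjusted test as a
# function of the tuning parameter rho. The curves are illustrative only.
rho = np.linspace(0.0, 1.0, 11)
type1_proxy = 0.30 * (1.0 - rho)            # rejection rate under the null
type2_proxy = np.where(rho <= 0.8, 0.0,     # 1 - rejection rate under the alternative
                       2.0 * (rho - 0.8))

total_error = type1_proxy + type2_proxy
rho_opt = rho[np.argmin(total_error)]       # minimizer of the overall error proxy
```

With these toy curves the overall error is minimized at a relatively large value of the parameter, mirroring the qualitative shape seen in the Monte Carlo study.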
Further we carried out simulations in the same models, but based on equidistant observation times t i,n = i∕n. In this specific setting the test from Theorem 3 coincides with tests discussed in Aït-Sahalia and Jacod (2009) and in Chapter 10.3 of Aït-Sahalia and Jacod (2014). The simulation results are presented in Figures 9, 10, and 11 in the same fashion as in Figures 6, 7, and 8 for the irregular observations. Our main observations here are twofold: • The tests from Theorem 3 and Corollary 1 based on irregular observations are not significantly worse than those obtained in the simpler setting of equidistant observation times.
• The adjustment technique introduced for Corollary 1 can be used to improve the finite sample performance both of our test based on irregular observations and of existing tests based on equidistant observations.

Testing for disjoint jumps
We simulate according to the model used in Section 6 of Jacod and Todorov (2009), which directly allows us to compare the performance of our approach to the performance of their methods in the case of equidistant and synchronous observations. The model for the process X is given by − 3 3 ( , 3 ), (2) = (2) 2 (2) + 2 ∫ R − 2 2 ( , 2 ) + 3 ∫ R where 0 < l i < h i for i = 1, 2, 3, and the initial values are X 0 = (1, 1) T . We consider the same twelve parameter settings which were discussed in Jacod and Todorov (2009), of which six allow for common jumps and six do not. In the cases where (common) jumps are possible, we only use the simulated paths which contain (common) jumps, that is, we only use paths in the simulation where ([0, 1], R) ≠ 0 whenever i ≠ 0, i = 1, 2, 3. For the parameters we set σ 2 1 = σ 2 2 = 8 × 10 −5 in all scenarios and choose the parameters for the Poisson measures such that the contribution of the jumps to the total variation remains approximately constant and matches estimates from real financial data; see Huang and Tauchen (2005). The parameter settings are summarized in Table 2; compare table 1 in Jacod and Todorov (2009).
The first six cases in Table 2 describe situations where common jumps might be present, the other six cases situations without common jumps. In the cases I-j, II-j, and III-j there exist only common jumps and no disjoint jumps, and the Brownian motions W (1) and W (2) are uncorrelated. In the cases I-m, II-m, and III-m we have a mixed model which allows for common and disjoint jumps, and the Brownian motions are positively correlated. In the cases I-d0, II-d0, and III-d0 the Brownian motions W (1) , W (2) are uncorrelated, while in the cases I-d1, II-d1, and III-d1 the processes are driven by the same Brownian motion W (1) = W (2) . The prefixes I, II, and III indicate an increasing number of jumps present in the observed paths. Since our choice of parameters is such that the overall contribution of the jumps to the quadratic variation is roughly the same in all parameter settings, this corresponds to a decreasing size of the jumps. Hence in the cases I-* we have few large jumps while in the cases III-* we have many small jumps. As a model for the observation times we use the Poisson setting discussed in Examples 1 and 2 for 1 = 2 = 1 and we set T = 1. As in Section 4.1 we choose n = 100, n = 1,600, and n = 25,600, and we set = 0.03 and = 0.49 for all occurring truncations as in Jacod and Todorov (2009). We use = 1∕ √ for the local interval in the estimation of ( ) − , ( ) , and = ⌊ln( )⌋, = ⌊10 √ ⌋ in the simulation of thê, ( ). In Figures 12-15, we display the results from the simulation for the testing procedures from Theorem 7 and Corollary 2. First we plot in Figure 12 for all twelve cases the empirical rejection rates from Theorem 7 in dependence of ∈ [0, 1], as in the plots in the left column of Figure 6. The six plots on the left show the results for the cases where the hypothesis of the existence of common jumps is true.
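In the asynchronous Poisson setting the two observation grids can be generated independently, with intensities proportional to n for each component. A short sketch (function and variable names are our own, not from the paper):

```python
import numpy as np

def async_schemes(n, lam=(1.0, 1.0), T=1.0, rng=None):
    """Two independent Poisson observation schemes with intensities lam[i] * n.

    Models the asynchronous setting: X(1) and X(2) are observed on separate,
    independent random grids which in general share no common time points.
    """
    rng = np.random.default_rng(rng)
    out = []
    for l in lam:
        # Exponential gaps with mean 1 / (l * n); draw a surplus and cut at T.
        gaps = rng.exponential(1.0 / (l * n), size=int(3 * l * n * T) + 50)
        t = np.cumsum(gaps)
        out.append(t[t <= T])
    return out

t1, t2 = async_schemes(1600, lam=(1.0, 2.0), rng=1)
```

Choosing unequal intensities, as in the illustrative call above, corresponds to the setting of Example 4 in which the weighting in (34) becomes necessary.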

TA B L E 2 Parameter settings for the simulation
We reach conclusions similar to those for Figure 6: • In all cases the test has very good power against the alternative of idiosyncratic jumps.
• Whenever common jumps are present, the test overrejects.
• At least for n = 25,600 the observed rejection rates match the asymptotic level quite well. This does not hold in the cases III-j and III-m, that is, when the jumps are on average very small.
Figure 13 shows the corresponding density estimators in all twelve cases. If there are common jumps it is visible that Φ ( ) 2, , converges to 1 as n → ∞, but most of the time the density peaks at a value noticeably larger than 1. This explains the overrejection in Figure 12. Under the alternative of disjoint jumps Φ ( ) 2, , tends to cluster around 1.5, which corresponds to the results obtained in Example 1 for the 1D setting.
Our simulation results indicate that the empirical sizes of the tests from Theorem 7 are always worse than the results in the equidistant setting displayed in figure 4 of Jacod and Todorov (2009), while the power of our test is much better. This effect is partly due to the fact that idiosyncratic jumps, although their contribution is asymptotically negligible, are included in the estimation of the asymptotic variance in the central limit theorem of Jacod and Todorov (2009). Hence their test consistently overestimates the asymptotic variance, which yields lower rejection rates. Further, in the asynchronous setting the asymptotically negligible terms in Φ ( ) 2, , are larger relative to the asymptotically relevant terms than in the setting of synchronous observation times. This increases the rejection rates for our tests. The test from Corollary 2 outperforms the test from Theorem 7 in the simulation study. This can be seen in Figures 14 and 15, which show the results from the Monte Carlo simulation for the test from Corollary 2 with = 0.75 in the same fashion as for Theorem 7 in Figures 12 and 13. Our findings are similar to the univariate case: • The power of the test from Corollary 2 is practically as good as that of the test from Theorem 7.
• Whenever common jumps are present, the empirical rejection rates match the asymptotic level much better than in Figure 12. In the cases I-j, I-m, II-j, II-m we get good results already for n = 1,600, and for III-j, III-m we do so at least for n = 25,600.
Figure 15 shows that the adjusted estimator Φ ( ) 2, , ( ) is much more centered around 1 than Φ ( ) 2, , if there exist common jumps, which can be seen, for example, in the cases III-j and III-m. Further, Φ ( ) 2, , ( ) clusters around a value very close to 1 also if there exist no common jumps. However, in the cases *-d0 and *-d1 the peak of the density still occurs at a value which is noticeably larger than 1. Figure 16 illustrates, similarly as Figure 8 for the test for jumps, the performance of the test from Corollary 2 in dependence of . We choose the cases III-j and III-d0 as representatives for the null hypothesis and the alternative. As expected, both the level and the power of the test decrease as increases. Here we get the lowest overall error for a value of close to 0.75.

F I G U R E 16
This graphic shows for = 5% and n = 1,600 the empirical rejection rate in the case III-j and 1 minus the empirical rejection rate in the case III-d0 from the Monte Carlo simulation based on Corollary 2 as a function of ∈ [0, 1].

CONCLUSIONS
This paper has provided two tests for jumps in the case of irregular observations. The first one regards the univariate setting, where the null hypothesis is that the realization of X exhibits jumps. The second test regards the bivariate setting, and in this case the null hypothesis corresponds to common jumps in X (1) and X (2) . Both test statistics are based on similar ratios of certain functionals, and these ratios converge to 1 under the respective null hypotheses but to quantities distinct from 1 under the alternatives, at least under mild additional assumptions. Whereas the test statistic Φ ( ) , , in the univariate setting is a rather natural generalization of the test statistic proposed in Aït-Sahalia and Jacod (2009) for equidistant observations, the construction of the corresponding statistic Φ ( ) 2, , in the case of bivariate processes is more sophisticated, as asynchronicity of the observations comes into play. In contrast to Jacod and Todorov (2009) we apply a Hayashi-Yoshida type intuition and use increments over overlapping intervals in order to identify common jumps in both components.
As the observations come in irregularly, the asymptotic distributions of Φ ( ) , , and Φ ( ) 2, , depend not only on the underlying processes but also on the nature of the sampling schemes. We prove associated stable central limit theorems in both cases, but we need conditions which guarantee the convergence of the respective asymptotic conditional variances. These assumptions essentially have to be stated locally and ensure the convergence of certain functionals around any finite family of possible jump times. They can be shown to hold in particular for Poisson sampling, which is the prime example of irregular sampling. In order to obtain feasible tests we provide a bootstrap procedure in which we estimate the asymptotic conditional variances in both limit theorems via resampling from similar test statistics around the presumed jump times. This procedure works under an additional local homogeneity condition.
An extensive simulation study shows that the proposed tests have a good performance in terms of power, but do not keep the asymptotic level for small and moderate values of n. For this reason adjusted statistics based on an auxiliary parameter are proposed which try to correct for terms within the test statistics which are negligible to first and second order in the asymptotics, but play a major role in finite samples. These adjustments yield much better empirical levels without practically affecting the power of the test. As our test in the univariate setting coincides with the ones from Aït-Sahalia and Jacod (2009) and Aït-Sahalia and Jacod (2014) in case of equidistant observations, this adjustment also improves the performance of their standard procedure.
Directions for future research are manifold. From a purely theoretical point of view, a full asymptotic analysis of functionals of the form V(h, [k], n ) for an entire class of functions h is important, not just for the specific choice h(x 1 , x 2 ) = (x 1 x 2 ) 2 . With a view on the statistical problem at hand, the comparison of the finite sample properties of our test and the one from Jacod and Todorov (2009) does not yield a clear recommendation which one to choose. More accurate higher-order expansions of the test statistics could help here, and one could also provide a local power analysis for the proposed tests in specific parametric models, for example for a Brownian motion plus a compound Poisson process.

Todorov, V., & Tauchen, G. (2011). Volatility jumps. Journal of Business & Economic Statistics, 29(3), 356-371.

SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of this article.

APPENDIX STRUCTURE OF THE PROOFS
Throughout the proofs we will assume that the processes b s , s , and Γ s are bounded on [0, T]. By Conditions 1 and 4 they are all locally bounded, and a localization procedure shows that the results for bounded processes carry over to the case of locally bounded processes (see, e.g., Section 4.4.1 in Jacod and Protter (2012)).
Furthermore, we introduce the decomposition X t = X 0 + B(q) t + C t + M(q) t + N(q) t of the Itô semimartingale (1) or (23). Here, q is a parameter which controls whether jumps are classified as small jumps or big jumps. We have already applied some of the techniques used in the upcoming proofs in Martin and Vetter (2018), where the null hypothesis of no common jumps was treated via a Hayashi-Yoshida type estimator as well. For these similar parts we keep the discussion brief and add references to Martin and Vetter (2018) for more detailed arguments. Parts of the proofs which are new or specific to the tests introduced in this paper will be discussed in full detail.
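For reference, one standard form of this decomposition for a jump-size cutoff 1/q reads as follows (compare Section 4.4.1 in Jacod and Protter (2012)); the precise drift adjustment absorbed in B(q) depends on the truncation function chosen in (1), so we simply define B(q) as the remainder:

```latex
\begin{aligned}
C_t    &= \int_0^t \sigma_s \,\mathrm{d}W_s, \\
N(q)_t &= \sum_{s \le t} \Delta X_s \, \mathbf{1}_{\{|\Delta X_s| > 1/q\}}, \\
M(q)_t &= \int_0^t \int_{\{|\delta(s,z)| \le 1/q\}} \delta(s,z)\,(\mu-\nu)(\mathrm{d}s,\mathrm{d}z), \\
B(q)_t &= X_t - X_0 - C_t - M(q)_t - N(q)_t.
\end{aligned}
```

Here μ denotes the jump measure of X and ν its compensator; N(q) collects the finitely many big jumps, M(q) is a martingale of compensated small jumps, and B(q) is a drift term of finite variation.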

Proofs for Section 2
Proof of Theorem 1. We obtain the following two representations for Φ ( ) , , .

Step 2. Next we prove for all q > 0 and ∈ N.
To this end note that on the set Ω(n, q, r), where two different jumps of N(q) are further apart than | n | T and any jump time of N(q) is further away from j2 −r than k| n | T for any j ∈ {1, … , ⌊T2 r ⌋}, the desired identity holds. As in the proof of Proposition A.3 of Martin and Vetter (2018) it can be shown that Condition 2(ii) yields the -stable convergence for standard normally distributed random variables (U s,− , U s,+ ) which are independent of  and of the k,− (s), k,+ (s). Using this stable convergence, Proposition 2.2 in Podolskij and Vetter (2010) and the continuous mapping theorem we then obtain the claimed convergence. Because of P(Ω(n, q, r)) → 1 as n → ∞ for any q, r this convergence yields (A8).
Step 3. Finally we need to show (A9) for all > 0. It can be proven using that (̃( , )) ∈N is a sequence of right continuous elementary processes approximating , that Γ(⋅, dy) has uniformly bounded first moments, and that the jump sizes of X and N(q), respectively, are bounded.
Combining the results from Steps 1 to 3 we obtain (A6). ▪

The structure of the proof of Theorem 3, and especially of (A10) therein, is identical to the structure of the proof of Theorem 4.2 and (A.27) in Martin and Vetter (2018).

Proof of Theorem 3. For proving (19) we need to show a convergence which follows from Theorem 2 and from (A10), the latter holding for all > 0 and any ∈ [0, 1]. Hence, it remains to prove (A10), for which we give a proof below. For proving (20) we observe that Φ ( ) , , converges on Ω ( ) to a limit strictly greater than 1 by Theorem 1 and Condition 3(ii). The remaining negligibility result will be shown in Section 7.1. Hence, we obtain (20). ▪

As a prerequisite for the proof of (A10) we need the following lemma, which will be proven in Section 7.1.

Lemma 1. Suppose Conditions 2 and 3(ii) are fulfilled and S p is a stopping time with Δ ≠ 0 almost surely. Then it holds

Proposition 1. Suppose Condition 3 is fulfilled. Then it holds
for any -measurable random variable Υ and all > 0.
The proof of Proposition 1 requires Lemma 1 and will be given in Section 7.1.
Proof. We obtain an expression which is almost surely larger than 0 by Condition 3(ii), and hence (22) follows as in the proof of (20). ▪

Proofs for Section 3
Proof of Theorem 4. From (2.2) in Martin and Vetter (2018) we obtain a representation in terms of Δ (1) , Δ (2) ; the remainder of the argument is similar to the proof of Theorem 3.2 in Martin and Vetter (2018). This yields (30), because by Condition 6 we have ∫ 0 | (1) (2) | > 0, which together with the fact that H 1 is strictly increasing guarantees the positivity of ( ) 1, .

The proof of Theorem 6 has the same structure as the proof of Theorem 2.
Proof of Theorem 6. We prove a convergence from which (32) easily follows.
Step 1. We introduce discretized versions̃( , ),̃( , ) of , C as in Step 1 in the proof of Theorem 2, with the only difference that by (S q,p ) p we now denote an enumeration of the common jump times of N (1) (q) and N (2) (q) instead of the jump times of N(q). Thereby we turn the focus to the common jumps. We then show (A17); the proof of (A17) will be given in Section 7.2.
Step 3. Finally we consider the corresponding convergence for all > 0, which can be proven using that (̃( , )) ∈N is a sequence of right continuous elementary processes approximating , that the first moments of Γ(⋅, dy) are uniformly bounded, and that the jump sizes of X and N(q), respectively, are bounded.
Combining (A17) with the previous steps yields the claim for all > 0 and any ∈ [0, 1]. Hence, it remains to prove (A20). For proving (38) we observe that Φ ( ) 2, , converges on the given F to a random variable which is almost surely different from 1 under Condition 7(iv) by Theorem 5. Note to this end the auxiliary result which will be proven in Section 7.2. ▪

As a prerequisite for the proof of (A20) we need the following lemma, which will be proven in Section 7.2.

Lemma 2. Suppose Conditions 6 and 7(ii) are fulfilled and S p is a stopping time with Δ ≠ 0 almost surely. Then for l = 1, 2 it holds

Proposition 2. Suppose Condition 7 is fulfilled. Then it holds ( ) for any -measurable random variable Υ and all > 0.
The proof of Proposition 2 requires Lemma 2 and will be given in Section 7.2. The proof of (A20) based on Proposition 2 is identical to the proof of (A10) based on Proposition 1 and is therefore omitted here.
Proof of Corollary 2. Using arguments from the proof of Theorem 5 and the proof of (A.26) in Martin and Vetter (2018) we derive (A26) on Ω ( ) . Replacing (32) with (A26) in the proof of (37) then yields (39).
Under the alternative we obtain, using Theorem 5 and (A25), a limit which is almost surely different from zero by Condition 7(iv). We then obtain (40) as in the proof of Theorem 7 because of c ( ) 2, , 1 Ω ( ) = P (1); compare (A21). ▪