## 1. Introduction

Modern remote sensing platforms such as weather radar can provide nearly continuous observations of weather systems. Effectively utilizing such frequent observations, and extracting the maximum amount of their information content for model initialization, poses a great challenge. A common practice for sequential data assimilation (DA) algorithms such as the three-dimensional variational (3DVAR) technique and the ensemble Kalman filter (EnKF; Evensen, 1994) is to group frequent observations into small batches and perform the analyses at frequent intervals through so-called intermittent assimilation cycles (e.g. Hu and Xue, 2007; Dowell and Wicker, 2009). This approach involves frequent stopping and restarting of the prediction model, which can introduce a shock to the prediction system every time a new analysis is performed. In the case of EnKF, writing and reading a full ensemble of states at least twice each cycle also carries a very high data input/output (I/O) cost.

Assimilating radar data at volume-scan or subvolume-scan intervals can be computationally very expensive given the high frequency of the data. One way to reduce this cost is to use longer assimilation cycles, in which all observations taken over a chosen time window are assumed to be valid at the analysis time. This approach is common in assimilating frequent radar data. It can, however, introduce large timing errors when the weather system is evolving rapidly, as in the case of a fast-moving convective storm. Another way to reduce the computational cost is to discard observations that are not close enough to the analysis time (e.g. Hu and Xue, 2007; Zhang *et al.*, 2009). The obvious drawback is that some valuable observations are not used.

A better approach, one that utilizes observations collected over time more fully, is to employ four-dimensional assimilation algorithms. In contrast to three-dimensional algorithms, four-dimensional algorithms use observations distributed over time simultaneously and at the times when they are collected. Sakov *et al.* (2010; S10 hereafter) proposed a generic asynchronous ensemble Kalman filter (AEnKF) that allows for the assimilation of observations taken before, at and after the analysis time. The algorithm is closely related to the ensemble Kalman smoother (EnKS) (Evensen and van Leeuwen, 2000). The four-dimensional local ensemble Kalman filter (4D-LEnKF) of Hunt *et al.* (2004) and the four-dimensional local ensemble transform Kalman filter (4D-LETKF) of Hunt *et al.* (2007) can be considered specific implementations of the AEnKF algorithm. As pointed out by S10, in the case of a perfect, linear model, the analysis ensemble mean and ensemble perturbations in EnKF can be written as a linear combination, or linear transform, of the forecast ensemble perturbations. This transform matrix, calculated from the background forecast ensembles at the observation times, can be used for the assimilation of observations at other times as long as the evolution of the ensemble perturbations is linear (Evensen, 2003). When the transform matrix is used in this way, the Kalman gain in the EnKF formula contains covariances involving ensemble priors at different times; these are therefore referred to as asynchronous covariances. Through the asynchronous covariances between the background states at the observation times and at the analysis time, AEnKF can directly use asynchronous observations to update the model state at the analysis time. In addition, AEnKF can in principle be implemented for different EnKF variants (S10).
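The role of the asynchronous covariances can be illustrated with a toy numerical sketch. The following is not the 4DEnSRF developed in this article, but a minimal, hypothetical example (toy dimensions, a linear observation operator, no localization or inflation) showing how a Kalman gain built from the cross-covariance between the state ensemble at the analysis time and the observation-space prior at an earlier observation time lets observations taken at that earlier time update the state at the analysis time:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 6, 20, 3  # state dim, ensemble size, number of obs (toy sizes)

# Ensemble forecasts valid at the observation time and at the (later) analysis time.
X_obs = rng.normal(size=(n, m))                 # states when the obs were taken
X_ana = X_obs + 0.1 * rng.normal(size=(n, m))   # states at the analysis time

# Linear observation operator (observe the first p state variables) and obs errors.
H = np.zeros((p, n))
H[np.arange(p), np.arange(p)] = 1.0
R = 0.25 * np.eye(p)

# Perturbations about the respective ensemble means.
Xp = X_ana - X_ana.mean(axis=1, keepdims=True)
Yp = H @ (X_obs - X_obs.mean(axis=1, keepdims=True))

# Asynchronous Kalman gain: the cross-covariance Pxy pairs the state at the
# analysis time with the observation-space prior at the observation time.
Pxy = Xp @ Yp.T / (m - 1)
Pyy = Yp @ Yp.T / (m - 1)
K = Pxy @ np.linalg.inv(Pyy + R)

# Update the analysis-time mean with observations taken at the earlier time.
y = np.array([1.0, -0.5, 0.3])
innovation = y - H @ X_obs.mean(axis=1)
x_analysis = X_ana.mean(axis=1) + K @ innovation
```

The key difference from a standard (synchronous) EnKF update is only in `Pxy`: both ensembles there would be valid at the same time, whereas here the gain mixes the two times, which is valid to the extent that the perturbation evolution between them is linear.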

Hunt *et al.* (2004) showed that, for the Lorenz-96 model (Lorenz, 1996), their 4D-LEnKF performs considerably better than both the standard EnKF and an EnKF using time-interpolated data. In Hunt *et al.* (2007), 4D-LETKF is compared with the National Centers for Environmental Prediction (NCEP) spectral statistical interpolation (SSI) 3DVAR system using a T62 model in a perfect-model scenario; they also found 4D-LETKF-based forecasts to be more accurate than those from the SSI analyses. These studies show positive impacts from four-dimensional algorithms even when the linear-model assumption is not strictly valid. More recently, Compo *et al.* (2011) applied the ensemble square root filter (EnSRF; Whitaker and Hamill, 2002; hereafter WH02) to a global reanalysis project that assimilated surface pressure observations only, and mentioned in passing the use of hourly observations not taken at the 6 h analysis times through an extension of the EnSRF algorithm. Their implementation did not appear to apply time localization.

For a storm-scale radar DA problem, the model dynamics and physics are more strongly nonlinear, and some observation operators are also nonlinear. The performance of an AEnKF algorithm in storm-scale applications has yet to be examined. It would be interesting to see how well an asynchronous extension of the serial EnSRF (Whitaker and Hamill, 2002) would work, given that radar DA studies have almost exclusively used the EnSRF algorithm or algorithms that are very similar (e.g. Snyder and Zhang, 2003; Zhang *et al.*, 2004; Tong and Xue, 2005; Xue *et al.*, 2006; Snook *et al.*, 2011).

In this article we develop an AEnKF implementation of the EnSRF, which we refer to as the 4DEnSRF. As a first step towards evaluating the algorithm, we employ observing system simulation experiments (OSSEs) that use simulated radar data. With OSSEs the truth is known, allowing us to assess the performance of the algorithms unambiguously. The OSSE framework also allows us to simulate radar data in different configurations and to perform experiments that would be difficult with real data. We compare the 4DEnSRF with the regular EnSRF. The rest of this article is organized as follows. In section 2 we review the general EnSRF algorithm and then describe our 4DEnSRF algorithm and its implementation. Model settings, radar observation simulation and OSSE configurations are described in section 3. The OSSE results are discussed in section 4, and a summary and conclusions are given in section 5.