A Gaussian-mixture ensemble transform filter

Abstract

We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions. Copyright © 2011 Royal Meteorological Society

1. Introduction

We consider dynamical models given in the form of ordinary differential equations (ODEs):

dx/dt = f(x,t), (1)

with state variable x ∈ ℝ^N. Initial conditions at time t0 are not precisely known and are treated as a random variable instead, i.e. we assume that

x(t0) = x0 ∼ π0,

where π0(x) denotes a given probability density function (PDF). The solution of (1) at time t with initial condition x0 at t0 is denoted by x(t;t0,x0).

The evolution of the initial PDF π0 under the ODE (1) up to a time t > t0 is provided by the continuity equation

∂π/∂t + ∇x·(π f) = 0, (2)

which is also called Liouville's equation in the statistical mechanics literature (Gardiner, 2004). Let us denote the solution of Liouville's equation at observation time t by π(x,t). In other words, the solutions x(t;t0,x0) with x0 ∼ π0 constitute a random variable with PDF π(x,t).

For a chaotic ODE (1), i.e. for an ODE with positive Lyapunov exponents, the PDF π(x,t) will spread out over the whole chaotic attractor as t → ∞. This in turn implies limited solution predictability, in the sense that the time-evolved PDF becomes increasingly independent of the initial PDF π0. Furthermore, even if the initial PDF is nearly Gaussian, with mean x̄0 and small covariance matrix P0, the solution x(t;t0,x̄0) will become increasingly unrepresentative of the expectation value of the underlying random variable it is supposed to represent.

To counteract the divergence of nearby trajectories under chaotic dynamics, we assume that we have uncorrelated measurements y_obs(tj) ∈ ℝ^K at times tj, j ≥ 1, with measurement-error covariance matrix R ∈ ℝ^{K×K}, i.e.

y_obs(tj) − Hx(tj) ∼ N(0,R), (3)

where the notation N(ȳ,R) is used to denote a normal distribution in ℝ^K with mean ȳ and covariance matrix R. The matrix H ∈ ℝ^{K×N} is called the forward operator. The task of combining solutions to (1) with intermittent measurements (3) is called data assimilation in the geophysical literature (Evensen, 2006) and filtering in the statistical literature (Bain and Crisan, 2009).

A first step to perform data assimilation for nonlinear ODEs (1) is to approximate solutions to the associated Liouville equation (2). In this article, we rely exclusively on particle methods (Bain and Crisan, 2009), for which Liouville's equation is naturally approximated by the evolving empirical measure. More precisely, particle or ensemble filters rely on the simultaneous propagation of M independent solutions xi(t), i = 1,…,M, of (1) (Evensen, 2006). We associate the empirical measure

πem(x,t) = Σ_{i=1}^{M} γi δ(x − xi(t)), (4)

with weights γi > 0 satisfying

Σ_{i=1}^{M} γi = 1.

Here δ(·) denotes the Dirac delta function. Hence our statistical model is given by the empirical measure (4) and is parametrized by the particle weights {γi} and the particle locations {xi}. In the absence of measurements, the empirical measure πem with constant weights γi is an exact (weak) solution to Liouville's equation (2) provided the xi(t) values are solutions to the ODE (1). Optimal statistical efficiency is achieved with equal particle weights γi = 1/M.
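For illustration, the propagation step underlying (4) can be sketched in a few lines of Python (a minimal sketch; the right-hand side f, the step size and the time window are placeholders, and f is assumed to be vectorized over the ensemble):

    import numpy as np

    def propagate_ensemble(f, X, t0, t1, dt):
        """Advance the M ensemble members (rows of the (M,N) array X) under
        dx/dt = f(x,t) with forward Euler; the weights gamma_i = 1/M are
        left untouched, as required for the empirical measure (4)."""
        t = t0
        while t < t1:
            h = min(dt, t1 - t)
            X = X + h * f(X, t)   # f acts row-wise on the ensemble
            t += h
        return X

Ensemble-induced expectation values are then computed with the constant weights γi = 1/M, e.g. X.mean(axis=0) for the ensemble mean.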

The assimilation of a measurement at tj leads via Bayes' theorem to a discontinuous change in the statistical model (4). Sequential Monte Carlo methods (Bain and Crisan, 2009) are primarily based on a discontinuous change in the weight factors γi. To avoid a subsequent degeneracy in the particle weights, one resamples or uses other techniques that essentially lead to a redistribution of particle positions xi. See, for example, Bain and Crisan (2009) for more details. The ensemble Kalman filter (EnKF) relies on the alternative idea of replacing the empirical measure (4) by a Gaussian PDF prior to an assimilation step (Evensen, 2006). This approach allows for the application of the Kalman analysis formulae to the ensemble mean and covariance matrix. The final step of an EnKF is the reinterpretation of the Kalman analysis step in terms of modified particle positions while the weights are held constant at γi = 1/M. We call filter algorithms that rely on modified particle/ensemble positions and fixed particle weights ensemble transform filters. A new ensemble transform filter has recently been proposed by Anderson (2010). The filter is based on an appropriate transformation step in observation space and subsequent linear regression of the transformation on to the full state space. The approach developed in this article relies instead on a general methodology for deriving ensemble transform filters as proposed by Reich (2011); see section 2 below for a summary. The same methodology has been developed for continuous-in-time observations by Crisan and Xiong (2010). In this article, we demonstrate how our ensemble transform filter framework can be used to generalize EnKFs to Gaussian-mixture models and Gaussian kernel density estimators. The essential steps are summarized in section 3, while an algorithmic summary of the proposed ensemble Gaussian-mixture filter (EGMF) is provided in section 4. The EGMF can also be viewed as a generalization of the continuous formulation of ensemble square-root filters (Tippett et al., 2003) as provided by Bergemann and Reich (2010a), Bergemann and Reich (2010b) and the EnKF with perturbed observations, as demonstrated by Reich (2011). The article concludes with three numerical examples in section 5. We first demonstrate the properties of the newly proposed EGMF for one-dimensional Brownian dynamics under a double-well potential. This simulation is extended to the associated two-dimensional Langevin dynamics model with only velocities being observed. Finally we consider the three-variable model of Lorenz (1963).

We mention that alternative extensions of the EnKF to Gaussian mixtures have recently been proposed, for example by Smith (2007), Frei and Künsch (2011) and Stordal et al. (2011). However, while the cluster EnKF of Smith (2007) is an example of an ensemble transform filter, it fits the posterior (analysed) ensemble distribution back to a single Gaussian PDF and hence works only partially with a Gaussian mixture. Both the mixture ensemble Kalman filter of Frei and Künsch (2011) and the adaptive Gaussian-mixture filter of Stordal et al. (2011) approximate the model uncertainty by a sum of Gaussian kernels and utilize the ensemble Kalman filter as a particle update step under a single Gaussian kernel. Resampling or reweighting of particles is required to avoid a degeneracy of particle weights due to changing kernel weights. A related filter algorithm based on Gaussian kernel density estimators has previously been considered by Anderson and Anderson (1999).

2. A general framework for ensemble transform filters

Bayes' formula can be interpreted as a discontinuous change of a forecast PDF πf into an analysed PDF πa at each observation time tj. On the other hand, one can find a continuous embedding π(x,s) with respect to a fictitious time s ∈ [0,1] such that π(·,0) = πf and πa = π(·,1). As proposed by Reich (2011), this embedding can be viewed as being induced by a continuity (Liouville) equation

∂π/∂s + ∇x·(π g) = 0, (5)

for an appropriate vector field g(x,s) ∈ ℝ^N. The vector field g is not uniquely determined for a given continuous embedding π(x,s) unless we also require that it is the minimizer of the kinetic energy

∫ π(x,s) g(x,s)^T M g(x,s) dx

over all admissible vector fields v(x,s) ∈ ℝ^N, where M ∈ ℝ^{N×N} is a positive-definite mass matrix (Villani, 2003). Admissibility means that g = v satisfies (5) for given π and ∂π/∂s.

Under these assumptions, a constrained variational principle (Villani, 2003) implies that the desired vector field is given by g = M^{-1}∇xψ, where the potential ψ(x,s) is the solution of the elliptic partial differential equation (PDE)

∇x·(π M^{-1}∇xψ) = π (L − E_π[L]) (6)

for given PDF π, mass matrix M and negative log-likelihood function

L(x) = (1/2) (y_obs − Hx)^T R^{-1} (y_obs − Hx). (7)

Here E_π[f] denotes the expectation value of a function f(x) with respect to a PDF π. We finally replace (5) by

∂π/∂s + ∇x·(π M^{-1}∇xψ) = 0, (8)

with an underlying ODE formulation

dx/ds = M^{-1}∇xψ(x,s), (9)

in fictitious time s ∈ [0,1]. As for the ODE (1) and its associated Liouville equation (2), we may approximate (9) and its associated Liouville equation (8) by an empirical measure of type (4). Furthermore, one and the same empirical measure approximation can now be used for both the ensemble propagation step under the model dynamics (1) and the data-assimilation step (8) using constant and equal weights γi = 1/M. The particle filter approximation is closed by finding an appropriate numerical solution to the elliptic PDE (6). This is the crucial step that will lead to different ensemble transform filter algorithms.

The basic numerical approach to the data-assimilation step within an ensemble transform filter formulation consists, then, of the following sequence of steps.

  • (i) Given a current ensemble of solutions xi(s), i = 1,…,M, one fits a statistical model π̂(x,s).
  • (ii) Solve the elliptic PDE
    ∇x·(π̂ g) = π̂ (L − E_π̂[L]) (10)
    for a vector field g(x,s). The solution is not uniquely determined and an appropriate choice needs to be made. See the discussion above.
  • (iii) Propagate the ensemble members under the ODE
    dxi/ds = g(xi,s). (11)
    We assume that a forecast ensemble of M members xi(0), i = 1,…,M, is available at an observation time tj to provide the initial conditions for the ODE (11). Solutions at s = 1 yield the analysed ensemble members, which are then used as the new initial conditions for (1) at time t = tj, and (1) is solved over [tj,tj+1] up to the next observation point.

If the statistical model is a Gaussian with mean x̄ and covariance matrix P, then the outlined approach leads to a continuous formulation of the ensemble square-root filter analysis step at time tj (Bergemann and Reich, 2010a,b), i.e.

dx/ds = −(1/2) P H^T R^{-1} (Hx + Hx̄ − 2 y_obs) (12)

for s ∈ [0,1]. It follows that M = P^{-1} and

ψ(x) = −(1/4) (Hx + Hx̄ − 2 y_obs)^T R^{-1} (Hx + Hx̄ − 2 y_obs). (13)
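For a scalar state with H = 1, the analysis step (12) can be checked numerically. The following minimal Python sketch (ensemble size, step size and data are arbitrary illustrative choices) integrates (12) with forward Euler from s = 0 to s = 1 and compares the result with the Kalman analysis of the forecast mean and variance:

    import numpy as np

    rng = np.random.default_rng(0)
    M, H, R, y_obs = 1000, 1.0, 1.0, 1.0
    x = rng.normal(0.0, np.sqrt(2.0), M)      # forecast ensemble
    xf, Pf = x.mean(), x.var(ddof=1)          # forecast mean and variance

    ds = 0.01
    for _ in range(int(1.0 / ds)):
        xbar, P = x.mean(), x.var(ddof=1)
        # continuous ensemble square-root analysis step, cf. (12)
        x += -0.5 * ds * P * H / R * (H * x + H * xbar - 2.0 * y_obs)

    K = Pf * H / (H * Pf * H + R)             # Kalman gain for comparison
    print("ensemble:", x.mean(), x.var(ddof=1))
    print("Kalman:  ", xf + K * (y_obs - H * xf), (1.0 - K * H) * Pf)

Up to the forward Euler discretization error, the analysed ensemble moments at s = 1 agree with the Kalman analysis values.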

3. An ensemble transform filter based on Gaussian-mixture statistical models

We now develop an ensemble transform filter algorithm based on an L ≥ 1 component Gaussian-mixture model, i.e.

π(x) = Σ_{l=1}^{L} αl πGauss,l(x), (14)

where πGauss,l(x) denotes the normal distribution in ℝ^N with mean x̄l and covariance matrix Pl. The Gaussian-mixture parameters, i.e. (αl, x̄l, Pl), l = 1,…,L, need to be determined from the ensemble xi, i = 1,…,M, in an appropriate manner, using for example the expectation-maximization (EM) algorithm (Dempster et al., 1977; Smith, 2007); see section 3.3 for more details. Note that Σ_{l=1}^{L} αl = 1 and αl ≥ 0. To simplify notation, we write πl instead of πGauss,l from now on.

An implementation of the associated continuous formulation of the Bayesian analysis step proceeds as follows. To simplify the discussion, we derive our filter formulation for a scalar observation variable, i.e. K = 1, y_obs(tj) − Hx(tj) ∼ N(0,R), and

L(x) = (y_obs − Hx)² / (2R). (15)

The vector-valued case can be treated accordingly provided the error covariance matrix R is diagonal. We first decompose the vector field g(x,s) in (11) into two contributions, i.e.

g(x,s) = uA(x,s) + uB(x,s). (16)

To simplify notation we drop the explicit s dependence in the following calculations. We next also decompose the right-hand side of the elliptic PDE (10) into two contributions:

π (L − E_π[L]) = Σ_{l=1}^{L} αl πl (L − E_{πl}[L]) + Σ_{l=1}^{L} αl πl (E_{πl}[L] − E_π[L]). (17)

We now derive explicit expressions for uA(x) and uB(x).

3.1. Gaussian-mixture Kalman-filter contributions

We define the vector field uA(x) through the equation

∇x·(π uA) = Σ_{l=1}^{L} αl πl (L − E_{πl}[L]), (18)

together with

π uA = Σ_{l=1}^{L} αl πl Pl ∇xψA,l, (19)

which implies that the potentials ψA,l(x), l = 1,…,L, are uniquely determined by

∇x·(πl Pl ∇xψA,l) = πl (L − E_{πl}[L]) (20)

for all l = 1,…,L. It follows that the potentials ψA,l(x) are equivalent to the ensemble Kalman filter potentials for the lth Gaussian component. Hence, using (12) and (18), we obtain

uA(x) = −(1/2) Σ_{l=1}^{L} (αl πl(x)/π(x)) Pl H^T R^{-1} (Hx + Hx̄l − 2 y_obs). (21)

3.2. Gaussian-mixture exchange contributions

The remaining contributions for solving (5) are collected in the vector field

uB(x) = g(x) − uA(x), (22)

which therefore needs to satisfy

∇x·(π uB) = Σ_{l=1}^{L} αl πl (E_{πl}[L] − E_π[L]), (23)

and hence we may set

∇x·(πl uB,l) = πl (E_{πl}[L] − E_π[L]), with π uB = Σ_{l=1}^{L} αl πl uB,l, (24)

for all l = 1,…,L. To find a solution of (24), we introduce functions fl : ℝ → ℝ such that

uB,l(x) = fl(y) Pl H^T / (H Pl H^T), (25)

with y = Hx and ŷl := Hx̄l. Inserting (25), the left-hand side of (24) gives rise to

∇x·(πl uB,l) = (πl(x)/π̃l(y)) d(π̃l fl)/dy, (26)

and (24) simplifies further to the scalar PDE

d(π̃l fl)/dy = π̃l (E_{πl}[L] − E_π[L]). (27)

The PDE (27) can be solved for

fl(y) = (E_{πl}[L] − E_π[L]) / π̃l(y) ∫_0^y π̃l(y′) dy′ (28)

under the condition fl(0) = 0 by explicit quadrature, and one obtains

fl(y) = (E_{πl}[L] − E_π[L]) / (2 π̃l(y)) [erf((y − ŷl)/(√2 σl)) + erf(ŷl/(√2 σl))], (29)

with marginalized PDF

π̃l(y) = (2π σl²)^{-1/2} exp(−(y − ŷl)²/(2σl²)), (30)

where σl² := H Pl H^T, and the standard error function

erf(x) = (2/√π) ∫_0^x exp(−t²) dt. (31)

Note that

∫_{−∞}^{y} π̃l(y′) dy′ = (1/2) [1 + erf((y − ŷl)/(√2 σl))]. (32)

We finally obtain the expression

uB(x) = Σ_{l=1}^{L} (αl πl(x)/π(x)) fl(Hx) Pl H^T / (H Pl H^T) (33)

for the vector field uB(x,s).

The expectation values E_{πl}[L], l = 1,…,L, can be computed analytically, i.e.

E_{πl}[L] = ((y_obs − ŷl)² + σl²) / (2R), (34)

or estimated numerically. Recall that π = Σ_{l=1}^{L} αl πl and therefore

E_π[L] = Σ_{l=1}^{L} αl E_{πl}[L]. (35)

It should be kept in mind that the Gaussian-mixture parameters (αl, x̄l, Pl) can be updated directly using the differential equations

dx̄l/ds = −Pl H^T R^{-1} (Hx̄l − y_obs), (36)
dPl/ds = −Pl H^T R^{-1} H Pl, (37)
dαl/ds = −αl (E_{πl}[L] − E_π[L]), (38)

for l = 1,…,L. Here E_π[L] is chosen such that

Σ_{l=1}^{L} dαl/ds = 0. (39)

Furthermore, uA(x,s) exactly mirrors the update of the Gaussian-mixture means x̄l and covariance matrices Pl, while uB(x,s) mimics the update of the weight factors αl by rearranging the particle positions accordingly.
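A minimal Python sketch of the exchange field, based on the reconstruction of (29) and (33)–(35) above, might read as follows (all names are illustrative; the mixture responsibilities are evaluated here via the marginals π̃l, a simplification taken up in section 3.3):

    import numpy as np
    from math import erf, sqrt

    def u_B(X, alpha, means, covs, H, R, y_obs):
        """Exchange contribution (33) for a scalar observation y = Hx.
        X: (M,N) ensemble; H: (N,) observation row vector."""
        M, N = X.shape
        L = len(alpha)
        y = X @ H
        yhat = means @ H                                 # yhat_l = H xbar_l
        sig2 = np.array([H @ P @ H for P in covs])       # sigma_l^2 = H P_l H^T
        EL = ((y_obs - yhat)**2 + sig2) / (2.0 * R)      # E_{pi_l}[L], cf. (34)
        ELbar = alpha @ EL                               # E_pi[L], cf. (35)
        # marginal PDFs and CDFs of the mixture components, cf. (30) and (32)
        pdf = lambda l: np.exp(-(y - yhat[l])**2 / (2*sig2[l])) / np.sqrt(2*np.pi*sig2[l])
        cdf = lambda l, v: 0.5*(1.0 + np.vectorize(erf)((v - yhat[l]) / sqrt(2.0*sig2[l])))
        beta = np.array([alpha[l]*pdf(l) for l in range(L)]).T
        beta /= beta.sum(axis=1, keepdims=True)          # responsibilities via marginals
        u = np.zeros((M, N))
        for l in range(L):
            f = (EL[l] - ELbar)*(cdf(l, y) - cdf(l, 0.0))/pdf(l)   # f_l, cf. (29)
            u += np.outer(beta[:, l]*f, covs[l] @ H / sig2[l])     # cf. (33)
        return u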

As already alluded to, we can treat each uncorrelated observation separately and sum the individual contributions to uA(x,s) and uB(x,s), respectively, to obtain the desired total vector field (16).

3.3. Implementation aspects

Given a set of ensemble members xi, i = 1,…,M, there are several options for implementing a Gaussian-mixture filter. The trivial case L = 1 leads back to the continuous formulations of Bergemann and Reich (2010a,b). More interestingly, one can choose L ≪ M and estimate the means and covariance matrices for the Gaussian-mixture model using the EM algorithm (Dempster et al., 1977; Smith, 2007). The EM algorithm self-consistently computes the mixture means x̄l and covariance matrices Pl via

x̄l = Σ_{i=1}^{M} βi,l xi / Σ_{i=1}^{M} βi,l,  Pl = Σ_{i=1}^{M} βi,l (xi − x̄l)(xi − x̄l)^T / Σ_{i=1}^{M} βi,l,  αl = (1/M) Σ_{i=1}^{M} βi,l, (40)

for l = 1,…,L using weights βi,l defined by

βi,l = αl πl(xi) / π(xi) = αl πl(xi) / Σ_{k=1}^{L} αk πk(xi). (41)
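For concreteness, one EM iteration of (40)–(41) may be sketched in Python as follows (names are illustrative; the regularization parameter delta anticipates the remedy discussed below):

    import numpy as np

    def em_step(X, alpha, means, covs, delta=0.0):
        # X: (M,N) ensemble; alpha: (L,); means: (L,N); covs: (L,N,N)
        M, N = X.shape
        L = len(alpha)
        beta = np.empty((M, L))
        for l in range(L):
            d = X - means[l]
            quad = np.einsum('ij,jk,ik->i', d, np.linalg.inv(covs[l]), d)
            norm = np.sqrt((2.0*np.pi)**N * np.linalg.det(covs[l]))
            beta[:, l] = alpha[l] * np.exp(-0.5*quad) / norm
        beta /= beta.sum(axis=1, keepdims=True)       # responsibilities, cf. (41)
        w = beta.sum(axis=0)
        alpha_new = w / M                             # updated weights, cf. (40)
        means_new = (beta.T @ X) / w[:, None]         # updated means, cf. (40)
        covs_new = np.empty((L, N, N))
        for l in range(L):
            d = X - means_new[l]
            covs_new[l] = (beta[:, l, None]*d).T @ d / w[l] + delta*np.eye(N)
        return alpha_new, means_new, covs_new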

The EM algorithm can fail to converge and a possible remedy is to add a constant contribution δI to the empirical covariance matrix in (40), with the parameter δ > 0 appropriately chosen. We mention that more refined implementations of the EM algorithm, such as those discussed by Fraley and Raftery (2007), could also be considered. It is also possible to select the number of mixture components adaptively. See, for example, Smith (2007). The resulting vector fields for the ith ensemble member are given by

uA(xi) = −(1/2) Σ_{l=1}^{L} βi,l Pl H^T R^{-1} (Hxi + Hx̄l − 2 y_obs) (42)

and, using (33),

uB(xi) = Σ_{l=1}^{L} βi,l fl(yi) Pl H^T / (H Pl H^T), (43)

with weights βi,l given by (41) and yi = Hxi.

Another option to implement an EGMF is to set the number of mixture components equal to the number of ensemble members, i.e. L = M, and to use a prescribed covariance matrix B for all mixture components, i.e. Pl = B and x̄l = xl, l = 1,…,L. We also give all mixture components equal weights αl = 1/M. In this setting, it is more appropriate to call (14) a kernel density estimator (Wand and Jones, 1995). Then

uA(xi) = −(1/2) Σ_{l=1}^{M} βi,l B H^T R^{-1} (Hxi + Hxl − 2 y_obs) (44)

and

uB(xi) = Σ_{l=1}^{M} βi,l fl(yi) B H^T / (H B H^T), (45)

with weights βi,l given by (41), x̄l = xl and αl = 1/M. The Kalman-filter-like contributions (44) can be replaced by a formulation with perturbed observations (Evensen, 2006; Reich, 2011), which yields

uA(xi) = −Σ_{l=1}^{M} βi,l B H^T R^{-1} (Hxi − y_obs − ηi), (46)

where ηi, i = 1,…,M, are independent, identically distributed Gaussian random numbers with mean zero and variance R. A particular choice is B = cP, where P is the empirical covariance matrix of the ensemble and c > 0 is an appropriate constant. Assuming that the underlying probability density is Gaussian with covariance P, the choice

c = (4 / ((N + 2) M))^{2/(N+4)} (47)

is optimal for large ensemble sizes M in the sense of kernel density estimation (Wand and Jones, 1995). Recall that N denotes the dimension of phase space. The resulting filter is then similar in spirit to the rank histogram filter (RHF) suggested by Anderson (2010), with the RHF increments in observation space being replaced by those generated from a Gaussian kernel density estimator. Another choice is B = P, in which case (45) can be viewed as a correction term to the standard ensemble Kalman filter (46). We will explore the kernel estimator in the numerical experiment of section 5.4.
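As a sketch, the bandwidth choice B = cP with (47) reads in Python (assuming an (M,N) ensemble array):

    import numpy as np

    def kde_bandwidth(X):
        """Return B = c P with the scaling (47), which is asymptotically
        optimal under the assumption of a Gaussian underlying density."""
        M, N = X.shape
        P = np.atleast_2d(np.cov(X.T))        # empirical ensemble covariance
        c = (4.0 / ((N + 2) * M)) ** (2.0 / (N + 4))
        return c * P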

Note that localization, as introduced by Houtekamer and Mitchell (2001) and Hamill et al. (2001), can be combined with (42)–(43) and (44)–(45), respectively, as outlined in Bergemann and Reich (2010a). For example, one could set the covariance matrix B in (44)–(45) equal to the localized ensemble covariance matrix. Localization will be essential for a successful application of the proposed filter formulations to high-dimensional systems (1). The same applies to ensemble inflation (Anderson and Anderson, 1999).

We also note that the computation of the particle-mixture weight factors (41) can become expensive, since it requires the evaluation of the full Gaussian PDF values πl(xi) and hence the inverses of the covariance matrices Pl. This can be avoided by using either only the diagonal part of Pl in πl(xi) (Smith, 2007) or a marginalized density such as (30), i.e. πl(yi), yi := Hxi, instead of the full Gaussian PDF values πl(xi). Some other suitable marginalization could also be performed.

The vector field uB(x,s) is responsible for a transfer of particles between different mixture components, according to the observation-adjusted relative weight αl of each mixture component. These transitions can be rather rapid, implying that ||uB(x,s)|| can become large in magnitude. This can pose numerical difficulties, which we eliminate by limiting the norm of uB(x,s) through a cut-off value ucut. Alternatively, we might want to replace (30) by a PDF that leads to less stiff contributions to the vector field uB(x,s), such as a Student's t-distribution (Schaefer, 1997). Hence a natural approximative PDF is provided by the scaled t-distribution with three degrees of freedom, i.e.

φl(y) = (2/(π σl)) [1 + (y − ŷl)²/σl²]^{-2}. (48)

We also introduce the shorthand zl := (y − ŷl)/σl, with ŷl = Hx̄l and σl² = H Pl H^T as before. A first observation is that

∫ y φl(y) dy = ŷl,  ∫ (y − ŷl)² φl(y) dy = σl², (49)

i.e. the first two moments of φl match those of (30). The second observation is that φl can be integrated explicitly, i.e.

Φl(y) := ∫_{−∞}^{y} φl(y′) dy′ = 1/2 + (1/π) [arctan zl + zl/(1 + zl²)]. (50)

Hence the relation (32) suggests the alternative formulation

fl(y) = (E_{πl}[L] − E_π[L]) (Φl(y) − Φl(0)) / φl(y), (51)

where Φl denotes the cumulative distribution function (50), so that again fl(0) = 0.
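A corresponding Python sketch for the t-distribution variant (48)–(51) is (again with illustrative names; c_l stands for E_{πl}[L] − E_π[L]):

    import numpy as np

    def f_l_student(y, yhat_l, sig_l, c_l):
        """f_l from the scaled t-distribution (48), using the closed-form
        integral (50) and the normalization f_l(0) = 0, cf. (51)."""
        F = lambda z: 0.5 + (np.arctan(z) + z/(1.0 + z**2))/np.pi   # CDF, cf. (50)
        z, z0 = (y - yhat_l)/sig_l, -yhat_l/sig_l
        phi = 2.0/(np.pi*sig_l)*(1.0 + z**2)**(-2)                  # cf. (48)
        return c_l*(F(z) - F(z0))/phi                               # cf. (51)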

The differential equation (16) needs to be approximated by a numerical time-stepping procedure. In this article we use the forward Euler method for simplicity. However, the limited region of stability of an explicit method such as forward Euler implies that the step size Δs needs to be made sufficiently small. This issue has been investigated by Amezcua et al. (2011) for the formulation with L = 1 (standard continuous ensemble Kalman filter formulation) and a diagonally implicit scheme has been proposed that overcomes the stability restrictions of the forward Euler method while introducing negligible computational overhead. The computational cost of a single evaluation of (16) for given mixture components should be slightly lower than for a single EnKF step, since no matrix inversion is required. Additional expenses arise from fitting the mixture components (e.g. using the EM algorithm) and from having to use a number of time steps 1/Δs > 1.

4. Algorithmic summary of the EGMF

Since the proposed methodology for treating nonlinear filter problems is based on an extension of the EnKF approach, we call the new filter the ensemble Gaussian-mixture filter (EGMF). We now provide an algorithmic summary.

First a set of M ensemble members xi(t0) is generated at time t0 according to the initial PDF π0.

In between observations, the ensemble members are propagated under the ODE model (1), i.e.

dxi/dt = f(xi,t) (52)

for i = 1,…,M and t ∈ [tj−1,tj].

At an observation time tj, a Gaussian-mixture model (14) is fitted to the ensemble members xi, i = 1,…,M. One can, for example, use the classic EM algorithm (Dempster et al., 1977; Smith, 2007) for this purpose. In this article we use a simple heuristic to determine the number of components L ∈ {1,2}. An adaptive selection of L is, however, feasible (see e.g. Smith, 2007). Alternatively, one can set L = M and implement the EGMF with a Gaussian kernel density estimator with x̄l = xl, αl = 1/M. The covariance matrix B can be based on the empirical covariance matrix P of the whole ensemble via B = cP, with the constant c > 0 appropriately chosen. At this stage covariance localization can also be applied.

The vector fields uA(x,s) and uB(x,s) are computed according to (42) and (43), respectively (or alternatively use (45)–(46)), for each independent observation, and the resulting contributions are summed up to give a total vector field g(x,s). Next the ensemble members are updated according to (16) for x = xi, i = 1,…,M. Here we use a simple forward Euler discretization with step size Δs chosen sufficiently small. After each time step the Gaussian-mixture components are updated, if necessary, using the EM algorithm. The analysed ensemble members xi(tj) are obtained after 1/Δs time steps as the numerical solutions at s = 1 and provide the new initial conditions for (52), with time t now in the interval [tj,tj+1].
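For orientation, this loop may be sketched in Python as follows (a minimal sketch: em_step and u_B are assumed from the sketches in section 3, responsibilities implements (41), and the cut-off value u_cut is that of section 3.3):

    import numpy as np

    def responsibilities(X, alpha, means, covs):
        # beta[i,l] = alpha_l pi_l(x_i) / pi(x_i), cf. (41)
        M, N = X.shape
        beta = np.empty((M, len(alpha)))
        for l in range(len(alpha)):
            d = X - means[l]
            quad = np.einsum('ij,jk,ik->i', d, np.linalg.inv(covs[l]), d)
            beta[:, l] = alpha[l]*np.exp(-0.5*quad) / \
                np.sqrt((2*np.pi)**N*np.linalg.det(covs[l]))
        return beta / beta.sum(axis=1, keepdims=True)

    def egmf_analysis(X, alpha, means, covs, H, R, y_obs, ds=0.05, u_cut=np.inf):
        """One EGMF analysis step: forward Euler for dx/ds = uA + uB over
        s in [0,1], refitting the mixture after each internal step."""
        for _ in range(int(round(1.0/ds))):
            beta = responsibilities(X, alpha, means, covs)
            uA = np.zeros_like(X)
            for l in range(len(alpha)):
                innov = (X @ H + means[l] @ H - 2.0*y_obs)/R
                uA += -0.5*beta[:, l, None]*np.outer(innov, covs[l] @ H)  # cf. (42)
            uB = u_B(X, alpha, means, covs, H, R, y_obs)                  # cf. (43)
            nrm = np.maximum(np.linalg.norm(uB, axis=1, keepdims=True), 1e-12)
            uB = np.where(nrm > u_cut, uB*(u_cut/nrm), uB)                # cut-off
            X = X + ds*(uA + uB)
            alpha, means, covs = em_step(X, alpha, means, covs)           # refit
        return X, alpha, means, covs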

Ensemble-induced estimates for the expectation value of a function f(x) can be computed via

E_π[f] ≈ (1/M) Σ_{i=1}^{M} f(xi) (53)

without reference to a Gaussian-mixture model.

5. Numerical experiments

In this section we provide results from several numerical simulations and demonstrate the performance of the proposed EGMF in comparison with standard implementations of the EnKF and an implementation of the RHF (Anderson, 2010). We first investigate the Bayesian assimilation step without model equations.

5.1. Single Bayesian assimilation step

We test our formulation first for a single assimilation step, where the prior is a bimodal Gaussian

πprior(x) = α1 π1(x) + α2 π2(x), πl = N(x̄l, σl²), (54)

and the likelihood is

π(y_obs|x) ∝ exp(−(y_obs − x)²/(2R)). (55)

The posterior distribution is again a bimodal Gaussian mixture and can be computed analytically; see Figure 1. Here we demonstrate how an EnKF, the RHF and the proposed EGMF approximate the posterior for ensemble sizes M = 50 and M = 2000 and for xi(0) ∼ πprior, i = 1,…,M (see Figure 2). Both the RHF and the EGMF are capable of reproducing the Bayesian assimilation step correctly for M sufficiently large, while the EnKF leads to a qualitatively incorrect result. The transformation of the ensemble members (particles) under the dynamics (16) is displayed in Figure 3.
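The analytic posterior follows from component-wise conjugacy: each Gaussian component is updated by the standard Kalman formulae and the mixture weights are reweighted by the marginal likelihoods. A scalar Python sketch (the prior parameters of Figures 1–3 are not reproduced here; the values in the usage comment are placeholders):

    import numpy as np

    def mixture_posterior(alpha, means, variances, y_obs, R):
        """Exact Bayes step for a scalar Gaussian-mixture prior (54)
        and Gaussian likelihood (55)."""
        alpha, means, variances = map(np.asarray, (alpha, means, variances))
        # marginal likelihood of y_obs under component l: N(y_obs; m_l, v_l + R)
        ml = np.exp(-0.5*(y_obs - means)**2/(variances + R)) / \
            np.sqrt(2*np.pi*(variances + R))
        w = alpha*ml
        K = variances/(variances + R)        # scalar Kalman gains
        return w/w.sum(), means + K*(y_obs - means), (1.0 - K)*variances

    # e.g. a bimodal prior with placeholder parameters:
    # a, m, v = mixture_posterior([0.5, 0.5], [-2.0, 2.0], [0.5, 0.5], 1.0, 1.0)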

Figure 1.

The prior distribution, the likelihood from a measurement and the resulting posterior distribution. The prior as well as the posterior are bimodal Gaussians.

Figure 2.

Numerically obtained posterior for ensemble sizes (a) M = 50 and (b) M = 2000. Results are shown from the EGMF, RHF and EnKF analysis step. While the EGMF and the RHF converge to the correct posterior distribution, the EnKF leads to qualitatively incorrect results for both ensemble sizes.

Figure 3.

The rearrangement of the particles under the dynamics of the EGMF analysis step. Rapid transitions between the Gaussian-mixture components are induced by the vector field uB.

We now present results from three increasingly complex filtering problems.

5.2. A one-dimensional model problem

As a first numerical example we consider Brownian dynamics under a one-dimensional potential V(x), i.e.

dx = −V′(x) dt + σ dw(t), (56)

where w(t) denotes standard Brownian motion, σ > 0 is the noise amplitude and the potential is given by

V(x): a double-well potential with one well in x < 0 and one in x > 0 (see Figure 4). (57)

The true reference trajectory is started at x(0) = −3.14 (see Figure 5). Measurements of x(t) are collected every 10 time units with two different values of the measurement-error variance R (R = 4 and R = 36).

Figure 4.

The potential energy V used in both the Brownian and Langevin dynamics models.

Figure 5.

The reference solution from which observations are generated by adding Gaussian noise with mean zero and variance R.

The initial PDF is given by the bimodal distribution

π0(x) = α1 π1(x) + α2 π2(x), πl = N(x̄l, σl²), (58)

Depending on the distribution of ensemble members, we use either a single Gaussian (L = 1) or a bi-Gaussian mixture (L = 2) in the EGMF assimilation step. A single Gaussian is used whenever more than 90% of the particles are located near either the right (i.e. xi > 0) or the left (i.e. xi < 0) potential well. The computed variances are kept bounded away from zero to avoid singularities in the EM algorithm. The model equation (56) is discretized with the forward Euler method and time step Δt = 0.1. The total simulation interval is t ∈ [0,100000]. The assimilation equations with (42) and (43) are discretized with the forward Euler method and step size Δs = 0.05. The norm of uB(x,s) is limited to a cut-off value of ucut = 5/Δs.
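The forward Euler (Euler–Maruyama) discretization of (56) used here reads, as a sketch (the double-well gradient shown is a placeholder for illustration only, since the exact definition of V accompanies Figure 4):

    import numpy as np

    rng = np.random.default_rng(1)

    def brownian_step(x, dVdx, sigma, dt):
        # Euler-Maruyama step for dx = -V'(x) dt + sigma dw, cf. (56)
        return x - dVdx(x)*dt + sigma*np.sqrt(dt)*rng.normal(size=np.shape(x))

    # placeholder double-well gradient:
    dVdx = lambda x: x**3 - x
    x = brownian_step(np.array([-3.14]), dVdx, 1.0, 0.1)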

The performance of the EGMF is compared with an EnKF with perturbed observations, an ensemble square-root filter and a RHF. The particle positions are adjusted during each data-assimilation step of a RHF such that the particles maintain equal weights γi = 1/M. The adjustment is made similarly to the proposals of Anderson (2010), except that the posterior is approximated by piecewise constant functions.

For this simple one-dimensional problem, the densities can be propagated directly through a discretization of the associated Fokker–Planck equation. Bayes' theorem then reduces to a multiplication of the prior PDF approximation from the Fokker–Planck discretization with the likelihood at each grid point. We have used a grid with mesh size Δx = 0.125 over x ∈ [−10,10] to provide an independent and accurate filtering result. Periodic boundary conditions are used for the diffusion operator such that the spatial discretization leads to a stochastic matrix. It is found from the numerical simulations that R = 36 leads to a pronounced bimodal behaviour of the solution PDF π; see Figure 6 for two posterior PDF approximations from the Fokker–Planck approach.

Figure 6.

Two posterior PDFs for measurement-error variance R = 36 as obtained from the Fokker–Planck approach. A distinct bimodal behaviour can be observed, which motivates the use of a binary Gaussian-mixture model for the EGMF.

Typical filter behaviours over the time interval t ∈ [0,10000] with regard to the reference trajectory can be found in Figures 7, 8, 9 and 10, respectively, for M = 50 ensemble members. The EGMF and the RHF display a behaviour similar to that from the discretized Fokker–Planck approach, while significantly different results are obtained from the EnKF implementation with perturbed observations. Similar results are obtained for an ensemble square-root filter (not displayed). The EGMF uses a bi-Gaussian approximation in 97% of the assimilation steps for R = 36 and in 47% of the assimilation steps for R = 4.

Figure 7.

Estimated ensemble mean computed from a direct simulation of the assimilation problem using a discretized Fokker–Planck equation for measurement-error variance (a) R = 36 and (b) R = 4.

Figure 8.

Ensemble mean from the EGMF for measurement-error variance (a) R = 36 and (b) R = 4. It can be observed that the EGMF leads to results similar to those from the discrete Fokker–Planck approach (Figure 7).

Figure 9.

Ensemble mean from a RHF for measurement-error variance (a) R = 36 and (b) R = 4. The results are for M = 50 particles. It can be observed that the RHF leads to results similar to those from the discrete Fokker–Planck approach (Figure 7).

Figure 10.

Ensemble mean from an EnKF with perturbed observations for measurement-error variance (a) R = 36 and (b) R = 4. The results are for M = 50 particles. The EnKF does not behave as well as the RHF, the EGMF and the discretized Fokker–Planck approach. Similar results are obtained for an ensemble square-root filter.

We also provide the root-mean-square (RMS) error between the computed mean from the three different filters and the mean computed from the Fokker–Planck approach in Table I for R = 36 and different ensemble sizes M. The RHF converges as M → ∞ to the solution of the Fokker–Planck approach for this one-dimensional model problem. The EGMF provides better results than the EnKF but does not converge, since the limiting distributions are not exactly bi-Gaussian. Note that the EGMF should converge for M → ∞ and the number of mixture components sufficiently large. Note also that the RHF does not converge to the analytic filter solution as M → ∞ in the case of more than one dynamic variable (i.e. N > 1). See also the following example.

Table I. RMS errors for ensemble means obtained from the RHF, EGMF and EnKF compared with the expected value computed by a Fokker–Planck discretization, with error variance R = 36 and M = 20, 50, 100 particles/ensemble members.

            RHF       EGMF      EnKF
  M = 20    0.6551    0.7683    1.0283
  M = 50    0.3717    0.5127    0.8798
  M = 100   0.2691    0.4033    0.8412

5.3. A two-dimensional model problem

We discuss another example from classical mechanics. The evolution of a particle with position q ∈ ℝ and velocity v ∈ ℝ is described by Langevin dynamics (Gardiner, 2004) with equations of motion

dq = v dt,  dv = −V′(q) dt − γ v dt + σ dw(t), (59)

where the potential V(q) is given by

V(q): the double-well potential (57) of section 5.2 (see Figure 4), (60)

the friction coefficient is γ = 0.25, w(t) denotes standard Brownian motion and σ² = 0.35. A reference solution, denoted by (qr(t),vr(t)), is obtained for the initial condition (q0,v0) = (1,1) and a particular realization of w(t).

Let us address the situation in which the reference solution is not directly accessible to us and instead we are only able to observe Q(t) subject to

dQ = vr(t) dt + √c dξ(t), (61)

where ξ(t) again denotes standard Brownian motion and c = 0.2. In other words, we are effectively only able to observe particle velocities.

We now combine the model equations and the observations within the ensemble Kalman–Bucy framework. The ensemble filter relies on the simultaneous propagation of an ensemble of solutions (qi(t),vi(t)), i = 1,…,M. In our experiment we set M = 50. The filter equations for an EGMF with a single Gaussian, i.e. the ensemble Kalman–Bucy filter of Bergemann and Reich (2011), become

dqi = vi dt − Pqv c^{-1} ((vi + v̄)/2 dt − dQ), (62)
dvi = −V′(qi) dt − γ vi dt + σ dwi(t) − Pvv c^{-1} ((vi + v̄)/2 dt − dQ), (63)

with mean

v̄ = (1/M) Σ_{i=1}^{M} vi (64)

and variance/covariance

Pqv = (1/(M−1)) Σ_{i=1}^{M} (qi − q̄)(vi − v̄),  Pvv = (1/(M−1)) Σ_{i=1}^{M} (vi − v̄)², q̄ = (1/M) Σ_{i=1}^{M} qi. (65)

The equations are solved for each ensemble member with different realizations wi(t) of standard Brownian motion and step size Δt = 0.01. The observation interval in (61) is τ = Δt. The extension of the EGMF to a Gaussian mixture with L = 2 is straightforward. One substitutes y = v, R = c/Δt and y_obs = ΔQ(tn)/Δt with

ΔQ(tn) := Q(tn) − Q(tn−1) (66)

into (42) and (43) and sets Δs = 1 in the numerical time-stepping procedure for the assimilation step. The norm of uB(x,s) is limited to a cut-off value of ucut = 0.25/Δs. Assimilation is performed at every model time step. We perform a total of two million time steps/data-assimilation cycles. In the same manner, one can implement a RHF for this problem.
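A sketch of one combined model/assimilation time step for the single-Gaussian (ensemble Kalman–Bucy) case, based on the reconstruction of (62)–(65) above (Python; V′ is a placeholder, the parameter defaults are those quoted in the text):

    import numpy as np

    rng = np.random.default_rng(2)

    def langevin_kbf_step(q, v, dQ, dVdq, gamma=0.25, sigma=np.sqrt(0.35),
                          c=0.2, dt=0.01):
        """Forward Euler step of (59) plus the Kalman-Bucy increment of
        (62)-(63); the observation enters as y_obs = dQ/dt with R = c/dt."""
        M = len(q)
        qb, vb = q.mean(), v.mean()
        Pqv = ((q - qb)*(v - vb)).sum()/(M - 1)   # cf. (65)
        Pvv = ((v - vb)**2).sum()/(M - 1)
        incr = (0.5*(v + vb)*dt - dQ)/c           # cf. (62)-(63)
        F = -dVdq(q)
        q_new = q + v*dt - Pqv*incr
        v_new = v + (F - gamma*v)*dt - Pvv*incr + sigma*np.sqrt(dt)*rng.normal(size=M)
        return q_new, v_new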

The computed ensemble means q̄(t) and v̄(t), in comparison with the reference solution, can be found in Figure 11 for the continuous EGMF (using L = 1 and L = 2 mixture components as appropriate) and the ensemble Kalman–Bucy filter (continuous EGMF with L = 1). The RMS error with respect to the true reference solution is 2.3331 for the ensemble Kalman–Bucy filter and 1.9148 for the EGMF, which amounts to a relative improvement of about 20%. The EGMF uses a bi-Gaussian distribution in about 25% of the assimilation steps. For comparison, we show the results from the RHF in Figure 12 for M = 50 particles. To interpret the behaviour of the RHF, one needs to look at the potential energy function V(q) (see Figure 4). The RHF assimilation scheme apparently pushes the solutions occasionally into the flat side regions of the potential energy curve, resulting in a relatively large RMS error of 3.9375. Qualitatively similar results are obtained for the RHF with M = 800 particles. Recall that we observe velocities and not positions in this example and that the RHF uses the ensemble-generated covariance matrix P to regress filter increments linearly on to state space.

Figure 11.

(a) The reference solution qr(t) and (b) the estimated (ensemble mean) solution from the continuous EGMF over the first quarter of the simulation interval t ∈ [0,20000]. The estimated solution mostly follows the reference solution, with the exception of a number of missed transitions.

Figure 12.

(a) The estimated (ensemble mean) solution from the ensemble Kalman–Bucy filter over a quarter of the simulation interval. The results look qualitatively similar to the results from the EGMF filter. However, in terms of RMS errors, the EGMF outperforms the ensemble Kalman–Bucy filter by about 20% over the complete simulation interval. (b) The estimated (ensemble mean) solution from the RHF shown in (a). The reader should note the enlarged range of the vertical axis. At several instances the filtered solution deviates strongly from the reference solution. To interpret this behaviour we need to have a closer look at the potential energy V(q) (compare with Figure 4). Apparently the RHF interprets the data as corresponding to solutions with positions in the flat side regions of the potential energy function.

5.4. Lorenz-63 model

The three-variable model

dx/dt = σ(y − x),  dy/dt = x(ρ − z) − y,  dz/dt = xy − βz (67)

of Lorenz (1963) is used as a final test for the EGMF method. Only the x variable is observed every 0.20 time units, with an observational error drawn from a normal distribution with mean zero and variance eight. The model time step is Δt = 0.01. A total of 101 000 assimilation steps is performed for each experiment, with only the last 100 000 steps being used for the computation of RMS errors. We have implemented an ensemble square-root filter, a RHF and an EGMF using formulation (45)–(46) with B = cP, P being the empirical covariance matrix of the ensemble. The parameter c is chosen from the interval c ∈ [0.4,1.0]. The number of ensemble members is set to M = 25 and no covariance localization is applied. The internal assimilation step size is Δs = 1/4 and the norm of uB(x,s) is limited to a cut-off value of ucut = 0.125/Δs. We have computed the RMS errors for ensemble inflation factors between 1.0 and 1.3, and only report the optimal results in Figure 13 as a function of c for the EGMF. The overall smallest RMS error is achieved for c = 0.6 with a value of 4.1114. The associated RMS errors are 4.4813 for the ensemble square-root filter and 4.6596 for the RHF. An increase in the number of ensemble members to M = 100 leads to a reduction in the RMS error of the RHF to 4.3276, while the ensemble square-root filter attains its optimal performance for M = 50 with an RMS error of 4.3785. Both values are significantly larger than the optimal RMS error of the EGMF with M = 25. Better performance is observed for the EnKF with perturbed observations and M = 25, for which we obtain a smallest RMS error of 4.1775 which, in fact, is close to the performance of the EGMF with c = 1.
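For completeness, a sketch of the model and observation setup (standard Lorenz-63 parameter values are assumed here; the observation operator picks the first component):

    import numpy as np

    def lorenz63(u, sigma=10.0, rho=28.0, beta=8.0/3.0):
        # right-hand side of (67); standard parameter values assumed
        x, y, z = u
        return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

    # model step dt = 0.01; the x component is observed every 0.20 time units
    # with observation-error variance R = 8, i.e. H = (1, 0, 0)
    dt, u = 0.01, np.array([1.0, 1.0, 1.0])
    for n in range(20):          # 20 model steps = one observation interval
        u = u + dt*lorenz63(u)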

Figure 13.

RMS errors from the EGMF for a range of values of the scaling parameter c and ensemble size M = 25. A series of experiments has been conducted with the Lorenz-63 model for each fixed scaling parameter c ∈ [0.4,1.0] and a range of ensemble inflation factors. We display only the optimal results. The overall smallest RMS error is achieved for c = 0.6 with a value of 4.1114. The corresponding optimal RMS errors are 4.4813 for an ensemble square-root filter and 4.6596 for the RHF. The EnKF with perturbed observations leads to an RMS error of 4.1775, which is only slightly worse than the performance of the EGMF with c = 1.

6. Summary

We have extended the popular EnKF to statistical models provided by Gaussian mixtures. The EGMF is derived using a continuous reformulation of the Bayesian analysis step and consists of a combination of EnKF steps for each mixture component and an exchange term. The exchange term is determined for each measurement by a scalar elliptic PDE, which can be solved analytically. We have demonstrated by means of two numerical examples that the EGMF performs well when bimodal PDFs arise naturally from the model dynamics. The EGMF provides a valuable and easy-to-implement alternative to sequential Monte Carlo methods and other nonlinear filter algorithms. In this article, we have used the standard EM algorithm to assign Gaussian-mixture models to ensemble predictions. More refined methods, such as those discussed by Fraley and Raftery (2007), will be considered in future work, in order to provide a robust and accurate clustering of ensemble predictions. Alternatively, one can implement the EGMF with a Gaussian kernel density estimator. In this case, the empirical covariance matrix of the ensemble can be used as a basis for kernel bandwidth selection (Wand and Jones, 1995). With this choice, the EGMF becomes closely related to the RHF of Anderson (2010). The feasibility of our approach has been demonstrated for the Lorenz-63 model. Further work is required to assess the merits of Gaussian kernel density estimators in comparison with EnKF and RHF implementations for high-dimensional systems. Encouraging results have also been reported by Stordal et al. (2011) for their related adaptive Gaussian-mixture filter applied to the Lorenz-96 model (Lorenz, 1996; Lorenz and Emanuel, 1998).
