Using subdivision surfaces and adaptive surface simplification algorithms for modeling chemical heterogeneities in geophysical flows

Abstract

[1] We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of the triangle edges and on the angles a triangle forms with its neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.

1. Introduction

[2] Convective flows govern much of the dynamics of the Earth. Examples of such flows are convection in the Earth's mantle, convection in magma chambers, and the dynamics of the world oceans. The observed geochemical differences between the mid-ocean ridge basalts and the ocean island basalts imply the existence of chemically distinct reservoirs [e.g., Zindler and Hart, 1986]. The size and location of these reservoirs, however, have remained unclear. The traditional model, where a depleted, well mixed upper layer is separated from an enriched lower mantle, seems unlikely in the light of recent results from seismic tomography and geodynamic modeling (for a recent review see van Keken et al. [2003]). Kellogg et al. [1999] proposed a compositional stratification in the deep mantle as the source region for the observed heterogeneities. While this idea has stirred much attention, most investigators favor a thin, heavy layer near the core-mantle boundary. This layer could coincide with the seismologically observed D″ layer, which exhibits high variations in seismic wave speed. Nowadays these time-dependent flows are often studied by means of three dimensional (3-D) numerical models which solve the equations for the transport of heat and momentum in alternation. These flows are often driven by a temperature difference, but frequently there is also an active or passive chemical component that has to be considered. In magma chambers we observe partial melt zones and chemically driven flows. The dynamics of convective ocean circulation is often influenced not only by the temperature but also by the salt content, a phenomenon termed double-diffusive convection. One characteristic of these flows is that the chemical diffusivity is much lower than the thermal diffusivity. This ratio ranges from a factor of 100 for a heat-salt system in water up to almost infinity for solid state diffusion in the Earth's mantle. The implementation of a chemical component with a very low diffusivity into a numerical model is difficult. The reason for this is the so-called numerical diffusion introduced by Eulerian discretization schemes, which artificially enhance the diffusion of an advected field. Therefore a Lagrangian method is often used to simulate the chemical component. This can be done by means of independent tracer-particles advected with the flow or by interconnected tracers forming a tracer-line (2-D) or tracer-surface (3-D). In the two dimensional case the tracer-line starts out as a small ensemble of interconnected tracers; once the distance between two neighboring tracers exceeds a threshold, a new tracer is inserted into the line, its position interpolated from the positions of the neighboring tracers.

[3] For 2-D flows van Keken et al. [1997] compared three different methods: a field approach, tracer particles, and a tracer-line (also called marker-chain) method for a convective scenario related to the Earth's mantle. They concluded that there is no generally favorable method. While the field approach is computationally very efficient, the chemical heterogeneities disappear due to numerical diffusion after some time, which depends on the grid resolution and the numerical scheme chosen. The tracer approach does not suffer from artificial diffusion but needs a huge number of tracer particles in order to give accurate results when calculating a concentration field from the tracer distribution. The tracer-line method, in contrast, is initially computationally very efficient and also does not suffer from artificial diffusion. However, since the flow has regions where the largest Lyapunov exponents are larger than unity (i.e., nearby tracers diverge exponentially in time), the number of tracers needed, and therefore the computational resources required, increases exponentially with time. The tracer-line method therefore seems best suited when either only a calculation over a short time period is required or the chemical component carries a density contrast which limits the entrainment.

[4] When applying these methods in 3-D numerical simulations the constraints imposed by the limited computational resources are even stricter. For the field approach it is almost impossible to, for example, double the spatial resolution of the computational grid, since this would result in, at best, an increase of computer time by a factor of 16 (eight times as many computational nodes, combined with the accompanying restriction on the length of the time step).

[5] For the tracer method the number of tracers required increases with the spatial resolution in the second horizontal direction. To be able to carry out tracer statistics in 3-D comparable to the 2-D calculations, the number of tracers thus has to be increased by a factor on the order of 10³. Apart from being computationally expensive, this is also often impossible due to the limited memory available. An example of this method in 3-D is given by Tackley [2002], who applied a tracer ratio method described in detail by Tackley and King [2003]. They claimed to be able to avoid statistical noise and found it sufficient to use ∼5 tracers per computational cell. But even with such a low number of tracers their investigations are limited to boxes with an aspect ratio of four. A comparison of their results with the method presented here would certainly be interesting but exceeds the scope of this paper.

[6] This makes the computationally efficient tracer-line (in 3-D, tracer-surface) method very interesting, at least for the certain class of problems mentioned above. Unfortunately the transition from a tracer-line in a 2-D flow to a tracer-surface in a 3-D flow is not as straightforward as with the field approach or with the tracer methods. One has to start with a mesh of tracers with a triangular or quadrilateral connectivity. In such meshes each tracer is typically connected to 4 (quadrilateral) or 6 (triangular) neighboring tracers. This connectivity, also called valence, determines the spline function needed when interpolating the position of a new tracer. Once the new tracer has been inserted into the mesh, the valence of the neighboring tracers changes. Since the interpolating spline function used in this work depends on the valence, we need different interpolation functions for every connectivity occurring in the mesh. The refinement of polygonal meshes, known as subdivision surfaces, is a very active field of research in computer graphics and has produced interpolation schemes for different connectivities.

[7] When a density difference is connected to a chemical component it can act as a restoring force. The governing flow is often spatially heterogeneous, and the spatial location of the heterogeneities varies in time (e.g., the location of an upwelling plume). The restoring force of the density contrast may result in a situation where a highly deformed, and therefore highly refined, region returns to a simple geometry. In order to limit the computational expense we also need a surface simplification algorithm which removes excess elements of the surface.

[8] We thus need two algorithms: one for grid refinement and one for grid simplification. At least the grid simplification has to be adaptive in order to account for the changing flow field.

[9] While the algorithms used for refinement [Catmull and Clark, 1978; Doo, 1978; Dyn et al., 1987] have been known for more than 20 years, they have only recently been put to use in computer graphics in fields such as movie production and computer games. These fields have developed a wealth of different interpolation schemes with different properties. The research in the area of surface simplification has also been motivated by these applications (see Heckbert and Garland [1995] for a review). The ultimate aim of these techniques is the visual appearance of objects, and their mathematical background and geometrical properties are rigorously defined.

[10] In this paper we want to show that these techniques can be successfully applied in geophysical fluid dynamics. In section 2 we give a brief introduction to the field of subdivision surfaces. Section 3 introduces a surface simplification algorithm based on quadric error metrics. In section 4 we discuss a simple application to geophysical flows.

2. Subdivision Surfaces

[11] The basic idea of subdivision is to define a smooth curve (2-D) or surface (3-D) as the limit of a sequence of successive refinements. For a curve connecting a small number of points in a plane this is sketched in Figure 1. On the left side four points are connected through straight line segments. The middle sketch shows a refined version where three points have been added in between the existing four points. On the right-hand side the polygon has been refined one more time. The shape and smoothness of the resulting curve or surface depend on the rules chosen. To construct the curve in Figure 1, each new point is computed as a weighted average of nearby points, two to the left and two to the right, with weights (−1, 9, 9, −1)/16 (ignoring the boundaries). When the process is repeated ad infinitum the resulting curve is C¹ continuous. An important property of this refinement, which matters when considering more complex curves and surfaces, is the local definition of the scheme: far-away points play no role in constructing new points.
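As an illustration, a minimal sketch of one refinement step of this four-point scheme (our own illustrative code; boundary handling is omitted, as above):

    import numpy as np

    def fourpoint_refine(points):
        # One step of the interpolating four-point scheme for a curve.
        # A new point is inserted between each interior pair of old points
        # as a weighted average with weights (-1, 9, 9, -1)/16.
        n = len(points)
        refined = []
        for i in range(1, n - 2):
            refined.append(points[i])
            new = (-points[i - 1] + 9 * points[i]
                   + 9 * points[i + 1] - points[i + 2]) / 16.0
            refined.append(new)
        refined.append(points[n - 2])
        return np.array(refined)

Repeated application of this step converges to the C¹ limit curve described above.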

Figure 1.

Example of subdivision for curves in a plane. In the left figure, four points are connected with straight line segments. To the right, a refined version is shown where three points have been inserted “in between.” After one more step the curve starts to become smooth.

[12] For subdivision surfaces the geometry of the initial mesh is important. Figure 2 shows a refinement for a triangular scheme. A problem arises from the existence of extraordinary vertices, which have a valence not equal to 6 for triangular meshes (Figure 3). Such extraordinary vertices are problematic because the scheme has to define an extra set of rules for every possible valence, and it has to be shown that the properties of the interpolated surface, such as continuity, do not change at those points.

Figure 2.

A triangular subdivision scheme adds new vertices along the edges.

Figure 3.

An extraordinary vertex with valence N = 3 in the middle.

[13] An important characteristic of a subdivision scheme is whether it is “interpolating,” in which case the original points are part of the refined set as in Figure 1, or “approximating,” in which case the original points (the control vertices) are repositioned.

[14] Approximating subdivision schemes for arbitrary topology meshes are typically modifications of spline-based schemes. The algorithms of Doo and Sabin [Doo, 1978; Doo and Sabin, 1978; Sabin, 1976] and Catmull and Clark [1978] are generalizations of quadratic and cubic B-splines, respectively. A generalization of quartic box splines for arbitrary triangulations was given by Loop [1994]. In this scheme each triangle is split into four new triangles at each iteration step. The positions of the new vertices, which are generated by splitting the edges of the triangles, are computed as a weighted average of neighboring vertices as given in Figure 4a. The positions of the old vertices are adjusted by a weighted average of adjacent vertices. The weights used depend on the valence of the vertex. For regular vertices with a valence of six the weights are given in Figure 4b, whereas for extraordinary vertices Loop [1994] suggested the weights given in Figure 4c.

Figure 4.

Subdivision coefficients for a three dimensional box spline as used by the Loop scheme. (a) Coefficients for the vertices which are added by edge-splitting. (b) The coefficients for the repositioning of existing vertices with valence 6. (c) The coefficients for extraordinary vertices with valence k. The choice of β is not unique. Loop [1994] suggests $\beta = \frac{1}{k}\left(\frac{5}{8} - \left(\frac{3}{8} + \frac{1}{4}\cos\frac{2\pi}{k}\right)^{2}\right)$.
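A sketch of these rules (our own illustration, with hypothetical function names), assuming each interior edge is shared by exactly two triangles:

    import numpy as np

    def loop_beta(k):
        # Loop's [1994] suggested weight for a vertex of valence k.
        return (5.0 / 8.0 - (3.0 / 8.0 + 0.25 * np.cos(2.0 * np.pi / k)) ** 2) / k

    def reposition_old_vertex(v, neighbors):
        # Move an existing vertex toward the weighted average of its
        # k neighbors (Figures 4b and 4c).
        k = len(neighbors)
        beta = loop_beta(k)
        return (1.0 - k * beta) * v + beta * np.sum(neighbors, axis=0)

    def new_edge_vertex(v0, v1, vl, vr):
        # Vertex inserted by edge-splitting (Figure 4a): v0 and v1 are the
        # edge endpoints, vl and vr the opposite vertices of the two
        # adjacent triangles (weights 3/8, 3/8, 1/8, 1/8).
        return 0.375 * (v0 + v1) + 0.125 * (vl + vr)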

[15] Interpolating schemes can be defined as modifications of approximating schemes. Nasri [1991] and Halstead et al. [1993] proposed such extensions for the Doo-Sabin and the Catmull-Clark scheme, respectively. Both cases require a linear system of equations for the interpolation constraints to be solved. It remains unclear, however, under which conditions the linear system remains solvable. An alternative approach is offered by subdivision schemes which are interpolating by design, such as the “Butterfly” scheme [Dyn et al., 1990] or the quadrilateral scheme by Kobbelt [1996]. As the Butterfly scheme is interpolating, local, simple to implement, and leads to C¹ surfaces, we will discuss it as an example of subdivision in greater detail. Figure 5 shows the original 8-point stencil from Dyn et al. [1990], where the name derives from the shape of the arrangement of nearby vertices used in the interpolation. The weights used for constructing a new vertex are, based on the labeling in Figure 5:

$$a: \tfrac{1}{2}, \qquad b: \tfrac{1}{8} + 2w, \qquad c: -\tfrac{1}{16} - w.$$
Figure 5.

The solid lines indicate the 8-point stencil for the original Butterfly scheme. The values are calculated for the new vertex denoted by a circle in the middle. The dashed lines give the additional points for the 10-point stencil of the modified Butterfly scheme.

[16] In this case w is a tension parameter, which controls how “tightly” the limit surface is pulled toward the control net. If w is set to −1/16, a simple linear interpolation is carried out and the surface is not smooth. A shortcoming of this scheme is its inapplicability to extraordinary vertices (e.g., point a in Figure 5, which has a valence smaller than 5). The scheme was therefore extended to a 10-point stencil by the same authors [Dyn and Levin, 1994]. The new scheme is similar to the Butterfly scheme with two vertices added (Figure 5). The new weights are:

$$a: \tfrac{1}{2} - w, \qquad b: \tfrac{1}{8} + 2w, \qquad c: -\tfrac{1}{16} - w, \qquad d: w.$$

The total weight of the points is still unity, and the new scheme includes the old scheme as a subset by choosing w = 0. It can be shown that the Butterfly scheme reproduces polynomials up to degree 3. However, this extension does not address the smoothness problem at extraordinary vertices. Zorin and Sweldens [1996a] derived an extension to the scheme allowing it to handle vertices of arbitrary valence; see Zorin and Sweldens [1996b] for the mathematical derivation. If one of the endpoint vertices a is extraordinary, the new vertex is computed as a weighted sum of the extraordinary vertex and its neighbors (see Figure 6 for the stencil). The weights for vertices with valence N = 6 are the same as for the 10-point Butterfly scheme. For other valences the weights are:

$$s_j = \frac{1}{N}\left(\frac{1}{4} + \cos\frac{2\pi j}{N} + \frac{1}{2}\cos\frac{4\pi j}{N}\right), \qquad j = 0, \ldots, N-1 \quad (N \ge 5),$$
with $s_0 = \tfrac{5}{12}$, $s_1 = s_2 = -\tfrac{1}{12}$ for $N = 3$ and $s_0 = \tfrac{3}{8}$, $s_2 = -\tfrac{1}{8}$, $s_1 = s_3 = 0$ for $N = 4$; the extraordinary vertex itself receives the weight $1 - \sum_j s_j = \tfrac{3}{4}$.
Figure 6.

The stencil for an extraordinary vertex in the modified Butterfly scheme.
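A sketch of these extraordinary-vertex weights (our own illustration; for N = 6 the regular 10-point stencil above applies instead):

    import numpy as np

    def butterfly_extraordinary_weights(N):
        # Weights s_j, j = 0..N-1, applied to the neighbors of an
        # extraordinary vertex of valence N; the extraordinary vertex
        # itself receives the remaining weight 1 - sum(s_j) = 3/4.
        if N == 3:
            return np.array([5.0, -1.0, -1.0]) / 12.0
        if N == 4:
            return np.array([3.0 / 8.0, 0.0, -1.0 / 8.0, 0.0])
        j = np.arange(N)
        return (0.25 + np.cos(2 * np.pi * j / N)
                + 0.5 * np.cos(4 * np.pi * j / N)) / N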

[17] Using this subdivision scheme, which allows for arbitrary mesh topology, it is also possible to adaptively refine the mesh in regions of high deformation, which is attractive with respect to applications of this method in computational fluid dynamics. Another possibility of adapting the mesh refinement to the flow structure is to use an adaptive mesh simplification algorithm in alternation with the mesh refinement. Such an adaptive mesh simplification also leads to an irregular mesh with vertices having valences different from 6. This allows for the simplification of once highly refined regions where the surface geometry has later become simpler due to the time-dependent nature of the flow.

[18] In order to clarify the differences between the interpolating Butterfly scheme and the approximating Loop scheme we have plotted the limit surfaces generated by the recursive subdivision of a tetrahedron in Figure 7. For the Butterfly scheme the volume of the resulting body is increased compared to the initial tetrahedron. The approximating Loop scheme yields a contraction of the initial volume. This is, however, only true for convex bodies. The limit surfaces of both schemes are smooth.

Figure 7.

Limit surfaces for the subdivision of a tetrahedron with an initial resolution as indicated by the white mesh. Both subdivision surfaces are close to the initial mesh representation. For the interpolating Butterfly scheme the corner vertices of the tetrahedron are still part of the final subdivision surface. For the approximating Loop scheme all vertices are repositioned. The main difference occurs at the corners, where the Butterfly scheme interpolates around the corner, thus increasing the total volume compared to the initial model. The Loop scheme reduces the volume of the final model by approximating the corners. Note that the behavior generated by the averaging of vertex positions reflects a diffusive process similar to the treatment of a diffusive term in a finite-difference formulation.

3. Surface Simplification Algorithms

[19] With the possibility of creating models with ever greater detail, the simplification of polygonal surfaces has become more and more important. Examples are range data captured by 3-D scanners, isosurfaces extracted from volume data with the “marching cubes” algorithm, terrain data acquired by satellite, and polygonal models in computer-aided geometric design generated by subdivision of parametric surfaces. The simplification of these surfaces makes storage, transmission, computation, and display more efficient. Owing to its importance there have been many different approaches to the subject. Most of them are surveyed by Heckbert and Garland [1995], who categorize the different approaches into three classes: Vertex Decimation, Vertex Clustering, and Iterative Edge Contraction. For our work we have chosen a method proposed by Garland and Heckbert [1997] which uses iterative contraction of vertex pairs as a generalization of edge contraction. Besides its computational efficiency and the high approximation quality of the remaining model, this method is able to join unconnected regions of the model together, a process termed aggregation. For our fluid dynamical application this is of key importance. An example is an entrained patch of chemically heavy material which is returning to the main reservoir.

[20] For the pair contraction algorithm Garland and Heckbert [1997] first select pairs of vertices which could be contracted (Figure 8). A pair $(v_1, v_2)$ can be contracted when it forms an edge in the polygon model or if $\|v_1 - v_2\| < t$, where t is a threshold parameter. The second rule allows for aggregation. Next they define the cost of the contraction for every valid vertex pair. To do this they associate a 4 × 4 matrix Q with every vertex and define the error Δ(v) to be the quadratic form $\Delta(v) = v^{T}Qv$. The definition of Q follows that of Ronfard and Rossignac [1996] and constructs the matrix as a sum of the squared distances of the vertex v to the planes defined by the triangles that meet at the vertex (see Garland and Heckbert [1997] for a derivation). Having this error associated with every vertex, we can calculate the cost of a contraction $(v_1, v_2) \rightarrow \bar{v}$ as the sum of the single errors: $\Delta(\bar{v}) = \bar{v}^{T}(Q_1 + Q_2)\bar{v}$. Before calculating $\Delta(\bar{v})$ we must, however, choose the position of the new vertex $\bar{v}$. This could be done by taking the midpoint of $(v_1, v_2)$. A better method is to calculate the position of $\bar{v}$ where the error $\Delta(\bar{v})$ is a minimum. Since the error function Δ is quadratic, finding its minimum is a linear problem, and we have to solve $\partial\Delta/\partial x = \partial\Delta/\partial y = \partial\Delta/\partial z = 0$.

Figure 8.

Edge contraction of vertices $v_1$ and $v_2$ to form the new vertex $\bar{v}$. The shaded triangles are removed during contraction.

[21] The algorithm can be summarized as follows: (1) compute the Q matrices for all vertices; (2) select valid vertex pairs; (3) calculate the optimal position $\bar{v}$ for every valid vertex pair; (4) calculate the error $\Delta(\bar{v})$ for every potential contraction; (5) place all pairs in a list sorted by increasing error; (6) iteratively contract pairs from the list, starting from the pair with the lowest associated error, and update the costs for all valid pairs involved in the contraction. In addition, Garland and Heckbert [1997] took special care to preserve boundaries and to prevent inversion of the face normals.
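A minimal sketch of the central quantities of this algorithm (our own illustration; mesh bookkeeping, boundary preservation, and the fallback to the midpoint for singular systems are omitted):

    import numpy as np

    def plane_quadric(p0, p1, p2):
        # Fundamental quadric of the plane through triangle (p0, p1, p2):
        # with p = (a, b, c, d), ax + by + cz + d = 0 and unit normal,
        # v^T (p p^T) v is the squared distance of v = (x, y, z, 1)
        # to the plane.
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)
        p = np.append(n, -np.dot(n, p0))
        return np.outer(p, p)

    def contraction_cost(Q1, Q2):
        # Optimal position and cost for contracting a vertex pair whose
        # accumulated quadrics are Q1 and Q2.
        Q = Q1 + Q2
        # Setting the gradient of v^T Q v to zero gives a 3x3 linear system.
        v = np.linalg.solve(Q[:3, :3], -Q[:3, 3])
        vh = np.append(v, 1.0)
        return vh @ Q @ vh, v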

4. Simulating a Passive Component

[22] We will first discuss how these methods can be used to simulate the transport of a scalar field which does not influence the flow and is therefore termed passive. An example of such a field is the distribution of rare elements in the Earth's mantle.

[23] The equations for 3-D thermally driven convection in a highly viscous, incompressible fluid are solved in the primitive variable formulation. For spatial discretization we employed a second order correct finite volume scheme. Time stepping was performed by a second order, fully implicit method. The resulting algebraic system of equations was solved by a multigrid method, using SIMPLER as a smoother. The algorithm for solving the thermal convection problem is described by Trompert and Hansen [1996]. The concentration is defined by a closed, triangular mesh which encloses the chemical component. After every Courant-limited time step of the fluid dynamical model the vertices of the mesh are advanced by solving the set of first order equations

$$\frac{d\mathbf{x}_i}{dt} = \mathbf{u}(\mathbf{x}_i, t)$$

by using a fourth-order-correct Runge-Kutta method, where $\mathbf{x}_i$ is the position of mesh vertex i and $\mathbf{u}$ the fluid velocity. Within the grid cells the velocities are interpolated tri-linearly. The accuracy of this method has been discussed by Schmalzl et al. [1995]. As the initial mesh is advected by the flow field it becomes deformed and stretched. Whether a triangle has to be split in order to maintain the accuracy of the tracer mesh depends on two factors: first, the length of a triangle edge, and second, the curvature of the mesh at this triangle. Topologically it is only possible to split pairs of triangles; splitting a single triangle would result in a T-split which destroys the continuity of the mesh. The curvature of the mesh is calculated by evaluating the sum of the scalar products of the normal vector of the triangle with the normal vectors of the neighboring triangles (Figure 9). For an edge-split we use the criterion:

$$\frac{l_{\mathrm{edge}}}{\mathrm{gridwidth}}\left[\,0.3 + 0.7\left(\frac{1}{4}\sum_{i=1}^{4}\left(1 - \hat{\mathbf{n}}\cdot\hat{\mathbf{n}}_i\right)\right)^{m}\,\right] > 1,$$

where $\hat{\mathbf{n}}$ is the normal vector of the triangle, $\hat{\mathbf{n}}_i$ are the normal vectors of the neighboring triangles, $l_{\mathrm{edge}}$ is the length of the edge under consideration, and gridwidth is the width of the finite-volume cells. The exponent m is a positive, even number which has been chosen such that only higher curvatures influence the splitting behavior. The factors 0.3 and 0.7 are example weights and ensure that the length of a triangle edge becomes at most 3 times the width of the finite-volume cells and at least 0.3 times that width. This behavior is desirable because triangles that are too large may lead to inaccuracies, whereas triangles that are too small are computationally inefficient.
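A sketch of the advection and edge-split test (our own illustration; the `velocity` callable is assumed to interpolate the gridded velocity field tri-linearly, and the weight function follows the reconstruction above rather than the exact original code):

    import numpy as np

    def rk4_advect(x, t, dt, velocity):
        # Fourth-order Runge-Kutta step for dx/dt = u(x, t).
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

    def should_split(l_edge, gridwidth, n_hat, neighbor_normals, m=2):
        # Edge-split criterion: long edges and strong misalignment of
        # neighboring face normals (high curvature) trigger a split.
        c = np.mean([1.0 - np.dot(n_hat, ni) for ni in neighbor_normals])
        return (l_edge / gridwidth) * (0.3 + 0.7 * c ** m) > 1.0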

Figure 9.

Sketch for splitting the edge between two triangles. First the length of the edge and the sum of the scalar products of the normal vectors of the two triangles and of the neighboring triangles are evaluated. If the criterion for an edge-split is fulfilled, a new vertex is inserted at the position of the filled circle by using the Butterfly interpolation scheme.

[24] The routine for mesh refinement is called every fifth time step. The simplification step can be carried out at larger intervals since the excess triangles only influence the performance but not the accuracy.

[25] In Figure 10 we show two snapshots of the advection of a passive sphere in a convective flow. The Rayleigh number has been chosen as 10,000, which results in a stationary flow with a roll pattern. Figure 10a shows the initial sphere and a linear temperature gradient. In Figure 10b the situation after 6 overturns is presented. The initial sphere has been streaked out by the flow field. As time progresses this filamentation due to the differential rotation in the velocity field increases. Since the flow field is effectively two-dimensional and time-independent, the flow is non-chaotic and the distance between initially nearby tracers should increase linearly in time. We therefore consider this a good test case, as opposed to more complex flows which exhibit chaotic behavior. In Figure 11a we show the evolution of the number of triangles versus time. Time is given in the non-dimensional thermal diffusion time as defined by Trompert and Hansen [1996]. The full length of the simulation (time = 1.8) corresponds to 36 overturns. The solid line indicates the number of triangles using the refinement criterion as defined above but without applying the simplification. The number of triangles increases rapidly and almost exponentially with time. The dotted line indicates the number of triangles when a simplification step is carried out every 25 model iterations. The initial model consisting of 3000 triangles is simplified to 1000 triangles. As time progresses the number of triangles increases almost linearly with time. Because the refinement is done every 5 steps but the simplification every 25 steps, the number of triangles oscillates. In Figure 11b the total volume encapsulated by the surface, relative to the start volume in percent, is given. The volume decreases in time, and after 26 overturns the structure has lost about 14% of its initial volume. At this time the filamentation is very fine and we thus find the error acceptable.

Figure 10.

Advection of a passive sphere in a convective flow with a Rayleigh number of Ra = 10000. The left picture shows the initial sphere (green) and the initial temperature distribution as indicated by the color slice. In the right plot the situation after 9 overturns is shown. The initial sphere has been streaked out due to the differential rotation of the convection roll.

Figure 11.

The left plot gives the evolution of the number of vertices for the configuration shown in Figure 10. The solid lines indicate a calculation carried out on a mesh with 32 gridpoints in each direction. The dotted line shows the evolution for a computational mesh with 16 gridpoints in each direction. Both calculations have been carried out with a simplification step. The dashed line gives the values on a grid with 16 gridpoints but without the simplification. The right plot shows the evolution of the volume as an indication for the accuracy of the method. The total length of the simulation (T = 1.8) corresponds to 36 overturns.

5. Simulating an Active Component

[26] To calculate the velocity using the momentum equation, the concentration C(x) is needed as a scalar field rather than a tracer surface. The process of mapping the 3-D surface onto a discretely sampled grid, which we term rastering, is described in this section.

[27] To obtain a field representation of the concentration distribution, a cell value is calculated for each control volume (CV). It ranges from 0 for those CVs lying completely outside the tracer-surface to 1 for those within. Intermediate values are assigned to control volumes that are intersected by the surface. To calculate the values for those boundary volumes, a ray-triangle intersection technique is applied. Each ray in the x-direction is given as

$$\mathbf{r}_{jk}(t) = \mathbf{o}_{jk} + t\,\hat{\mathbf{e}}_x \qquad (1)$$

with

$$\mathbf{o}_{jk} = \left(0,\ \left(j - \tfrac{1}{2}\right)\Delta y,\ \left(k - \tfrac{1}{2}\right)\Delta z\right), \qquad j = 1, \ldots, NY, \quad k = 1, \ldots, NZ,$$

NX, NY, and NZ being the number of control volumes in the x-, y-, and z-direction, respectively, and Δx, Δy, and Δz the size of these control volumes. For each of these rays the intersection points $t_i$ with the tracer-surface are determined. The cell values of control volumes lying between two intersection points, $t_1 \le i\,\Delta x \le t_2$, are determined as follows:

$$C_i = \min\!\left(1,\ \max\!\left(0,\ \frac{\min(t_2,\, i\,\Delta x) - \max(t_1,\, (i-1)\,\Delta x)}{\Delta x}\right)\right),$$

assuming that the boundaries of the computational domain lie outside the tracer-surface by definition. Figure 12 illustrates this calculation: the control volumes i = 2 and i = 4 are intersected by the surface (thick lines). The assigned cell values correspond to the relative position of the surface within the control volume.
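A sketch of this cell-value calculation for one x-directed ray (our own illustration; `ts` holds the sorted intersection parameters, which come in entry/exit pairs because the tracer-surface is closed):

    import numpy as np

    def cell_values_along_ray(ts, NX, dx):
        # Fractional coverage of each of the NX cells along the ray.
        C = np.zeros(NX)
        for t1, t2 in zip(ts[0::2], ts[1::2]):
            for i in range(NX):
                lo, hi = i * dx, (i + 1) * dx
                C[i] += max(0.0, min(t2, hi) - max(t1, lo)) / dx
        return np.clip(C, 0.0, 1.0)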

Figure 12.

Calculation of the cell values. For clarity, only one dimension is shown. t1 and t2 denote the intersection points of the tracer surface with the ray.

[28] The calculation of the $t_i$ is carried out using the algorithm described by Möller and Trumbore [1997] by cycling through all triangles of the tracer-surface for each ray given by equation (1). Afterward the procedure is repeated for rays in the y- and z-directions, and the resulting cell values are normalized to unity.
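A compact version of that intersection test (our own illustration of the Möller-Trumbore algorithm):

    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-12):
        # Returns the ray parameter t of the intersection point,
        # or None if the ray misses the triangle.
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:          # ray (nearly) parallel to triangle plane
            return None
        inv_det = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        return np.dot(e2, q) * inv_det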

[29] The total number of rays can be approximated by 3 × NX², assuming NX = NY = NZ. The whole procedure therefore results in a computational effort proportional to 3 × NX² × NT + τ_OH, with NT being the number of triangles the tracer-surface consists of and τ_OH some overhead that is nearly independent of the number of triangles. Owing to the direct dependency on the number of triangles tested, the total time spent on the rastering process can be reduced dramatically by decreasing NT itself. Therefore a bounding box is calculated for each triangle ahead of the intersection tests. The intersection test is then performed only for those triangles whose bounding box is intersected by the given ray. To illustrate the benefit of this technique, a spherical tracer-surface has been rastered using a resolution of NX = NY = NZ = 128 control volumes. Without the bounding box technique, a total number of about 377 × 10⁶ triangles has to be tested for intersection with the rays given by equation (1). This number drops to about 48 × 10³ using the precalculated bounding boxes. In terms of the computational time spent on rastering, the effort is reduced from 43.0 seconds down to 1.7 seconds for the cases without and with bounding boxes, respectively. The latter number corresponds to the overhead time mentioned above.
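The bounding-box pretest can be sketched as follows (our own illustration; for an x-directed ray only the (y, z) extent of each precalculated axis-aligned box matters):

    def candidate_triangles(ray_y, ray_z, boxes):
        # boxes[i] = (lower_corner, upper_corner) of triangle i's bounding box.
        return [i for i, (lo, hi) in enumerate(boxes)
                if lo[1] <= ray_y <= hi[1] and lo[2] <= ray_z <= hi[2]]

Only the triangles returned by this pre-filter are passed to the full ray-triangle intersection test.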

[30] We have tested the implementation for a flow driven by compositional density differences by investigating a three-dimensional Rayleigh-Taylor instability where a layer of light material underlies a layer of heavier material. The results were in close agreement both with the analytically predicted growth rate for a disturbance that leads to a 2-D flow pattern [Chandrasekhar, 1961] and with numerical studies for fully 3-D flow patterns as presented by Kaus and Podladchikov [2001].

[31] In Figure 13 we show a comparison with experimental and 2-D boundary integral calculations for a rising plume as presented by Manga et al. [1993]. The plume rises from an initial half sphere placed at the stress-free boundary. Different from the calculations of Manga et al. [1993], ours have been carried out in the limit of zero Reynolds number, whereas in the experiments a Reynolds number of Re ≈ 0.01 was employed. An estimate of the difference between our calculations and those presented by Manga et al. [1993], based on the differences in tail width, plume length, and plume head diameter, results in a difference of <5%. For comparison we have also added a calculation where the chemical component has been computed using a field approach. For the same grid resolution as employed for the surface tracking method we observe strong effects of numerical diffusion (Figures 13e–13h).

Figure 13.

The rise of a plume with constant viscosity from a free-slip surface. The dashed bar at the bottom of the upper row indicates the mesh spacing. (a) The plume starts from an initial half sphere of light material. The results are comparable to both the experimental and the numerical solution using a boundary integral method as presented by Manga et al. [1993]. (e–h) The same problem calculated using a field approach. The effects of numerical diffusion are illustrated by the spreading iso-lines.

[32] Since the module for calculating the composition is just an extension of the code presented by Trompert and Hansen [1996], which has been verified and benchmarked extensively, we consider these tests sufficient.

6. Thermochemical Convection

[33] For the Rayleigh-Taylor instability the fluid motion ceases after the potential energy stored in the compositional field has been released. This time-limited motion also leads to a stretching of the interface which can be handled in most cases. If we consider thermochemical convection, however, this is different. The flow is typically time-dependent and thus exhibits regions where the largest Lyapunov exponents are larger than one. This results in initially nearby tracers separating exponentially in time. This is different from the 3-D stationary flows at infinite Prandtl number shown in Figure 11, which do not have these chaotic regions [Schmalzl et al., 1995]. For some cases of interest for the dynamics of the Earth's interior the growth of the interfacial surface can be limited. When the density contrast connected with the composition is larger than the density contrast caused by the temperature variation, the compositionally distinct material forms a layer near one of the horizontal boundaries. This leads to layered convection. The large-scale convective flow modulates this layer, and compositionally distinct material is entrained into the large-scale flow. An example of such a flow is a chemically dense layer at the core-mantle boundary which has been detected by seismology and termed the D″ layer. Different numerical studies with 2-D models [e.g., Kellogg et al., 1999] have investigated this scenario. One of the central questions is connected with the stability of such a dense layer and with its shape. In these models thin streaks of material are entrained by upwelling plumes. We have decided to focus the investigation on the stability and shape of the layer. The thin filaments of chemically dense material must therefore be removed. We do this by carrying out a Loop subdivision step every 25 model iterations. As discussed in section 2, the Loop subdivision has properties similar to a diffusive process, thus removing fine-scale structures from the interface. By doing so we can limit the number of triangles needed. In Figure 14 we present such a model. We consider Rayleigh-Bénard convection of a constant property fluid with a Rayleigh number of Ra = 10⁶, stress-free boundary conditions, and an aspect ratio of two by two by one. The grid resolution of the finite volume mesh is 64 × 64 × 32 with equidistant mesh spacing. A compositionally dense blob of material is inserted into the flow at time T = 0. The influence of the composition on the flow is described by the non-dimensional chemical Rayleigh number, which is Rac = −3 × 10⁶ in this model. The initial blob in Figure 14a sinks to the lower boundary and spreads (Figure 14b). There is a large temperature drop across the thin layer, as indicated by the temperature cross sections in Figure 14c. Material gets entrained in thin filaments and disappears due to the diffusive properties of the Loop subdivision scheme. This results in a material loss of the chemical reservoir over time, as plotted in Figure 15b. We consider the lost volume as entrained into the bulk flow and distribute this mass over the bulk volume. The density difference between the chemical layer and the bulk flow thus decreases as time proceeds. In Figures 14d–14f we see the fully developed thermochemical flow. An interesting point to note is the thickness of the layer relative to the numerical resolution of the finite volume mesh. The mesh spacing is indicated by the color markings along the axes. The thickness of the chemical layer is typically only two mesh spacings. It would be impossible to maintain such a thin layer in an Eulerian approach. Different from the 2-D investigations, the material is not piled up in hill-like structures by the large-scale flow. We attribute this to the greater degree of freedom in the 3-D calculations, where the heavy material has the possibility to flow around upwellings. Another interesting point to note is the sheet-like shape of the entrained material (Figure 14f). The chemically distinct signature of hot spots has often been attributed to the entrainment of material from the core-mantle boundary, whereas more linear surface structures like mid-ocean ridges do not reflect this entrained material. Considering the often sheet-like entrainment, this seems hard to understand. The number of vertices needed for the triangle mesh increases only slightly during the calculation (Figure 15a).

Figure 14.

Thermal convection of a constant viscosity fluid at a Rayleigh number of Ra = 10⁶. The temperature distribution is indicated by the two temperature cross sections. The position of the heavy chemical component is given by the green surface.

Figure 15.

Evolution of the number of vertices needed to represent the chemical component (left) and the volume of the chemical component encapsulated by this surface (right) for the calculation shown in Figure 14.

[34] The real situation in the Earth's mantle is of course more complex. As a last example of the capabilities of the method we present a simulation with the same resolution as above but with a strongly temperature- and depth-dependent viscosity. The viscosity depends exponentially on the temperature [see Trompert and Hansen, 1996], varying by a factor of 10,000, and varies with depth by a factor of 3. The surface Rayleigh number is Ra = 10³, whereas the compositional Rayleigh number is Rac = −3 × 10³ in this model. Figure 16a again shows the initial configuration with a linear temperature gradient and the initial blob of heavy material. In Figure 16b the material, which has been placed lower and is therefore warmer and less viscous, spreads out along the lower boundary. Owing to the strong temperature dependence of the viscosity a stagnant lid develops at the upper boundary (Figure 16c). Across this immobile lid heat can only be transported conductively, which leads to high internal temperatures, as can be seen from the colored temperature cross sections. The chemical component creates a layer at the bottom of the domain. The heat transport across this layer is enhanced by convection within the layer and is thus more efficient than the transport across the upper boundary. The average temperature within the center of the domain is therefore higher than T = 0.5, as shown in the horizontally averaged temperature-depth profile in Figure 16f. Owing to the almost isothermal interior of the cell, the entrainment of material is very similar to the iso-viscous calculation in Figure 14 and also exhibits the sheet-like structure of the entrained material.

Figure 16.

Evolution of a dense chemical component (green surface) in a flow where the viscosity depends strongly on the temperature. After an initial period (a–b), a stagnant lid forms at the top of the domain and a dense layer accumulates at the lower boundary (c–e). Both influence the heat transport and lead to an almost isothermal interior of the cell. (f) The horizontally averaged temperature-depth profile. The entrainment behavior is comparable to the iso-viscous case presented in Figure 14.

7. Conclusion

[35] We have presented a new method for investigating the transport of an active chemical component in a convective flow. Using a triangle mesh we are able to track the evolution of a scalar field with negligible diffusivity in a hydrodynamic code.

[36] When a density difference is connected to this scalar, the concentration field has to be calculated by a triangle-ray intersection test. For a large number of triangles this test dominates the calculation in terms of the computer time needed. By using precalculated bounding boxes we were able to reduce the computational effort significantly.

[37] A problem arises in time-dependent flows, where the surface area increases exponentially with time. This inhibits the use of the method for problems where long time series are required. When a strong density contrast is connected to the chemical component, however, we have shown that small-scale structures can be removed by using an approximating subdivision scheme. In contrast to the numerical diffusion connected with Eulerian schemes, the amount of this artificial diffusion can be controlled easily.

[38] We suggest that the method can be useful in different fields of geophysical fluid dynamics such as meteorology, oceanography, and solid Earth fluid dynamics. As an example application we presented calculations from the field of convection in the Earth's mantle. The calculations aim at explaining the observed strong seismic heterogeneities at the core-mantle boundary by a thin, chemically dense layer. We were able to simulate such a layer which occupied only 2–3 grid cells of the underlying finite-volume grid. While the front-tracking method proposed may not be well suited for all problems, there is certainly a class of problems where it can be used efficiently. With relatively low computational expense it is possible to track the evolution of a scalar component in the flow while avoiding the numerical diffusion connected with Eulerian schemes and the statistical noise of tracer methods. In the future this method may further benefit from the rapid development in computer graphics, such as new subdivision schemes or the hardware implementation of subdivision schemes as presented by Bischoff et al. [2000].

[39] Concerning the investigation of the Earth's mantle, the new tool presented here should enable researchers to numerically model the fate of a thin, geochemically distinct layer in the Earth's mantle in 3-D. So far most calculations have been carried out in two spatial dimensions [e.g., Hansen and Yuen, 1989; Christensen and Hofmann, 1994]. However, comparisons of the mixing properties of 2-D and 3-D thermal convection have indicated significant differences [e.g., Schmalzl et al., 1996; Ferrachat and Ricard, 1998]. Using our new method we presented calculations which revealed that the topography of the dense layer generated by the flow is much smaller than in comparable 2-D scenarios [e.g., Hansen and Yuen, 1989]. These calculations, however, are only examples of the capability of the method, and for the future we plan to carry out a systematic investigation.

Acknowledgments

[40] A. Loddoch is supported by DFG priority programme 1115 “Mars and the terrestrial planets” under grant no. Ha 1765/10-1. We would like to thank P. van Keken, L. Moresi, and Y. Ricard for constructive reviews. Thanks to C. Stein for thoroughly reading the manuscript.
