We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of a triangle edge and on the angles the triangle forms with its neighbors. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady-state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at understanding the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
 Convective flows govern much of the dynamics of the Earth. Examples of such flows are convection in the Earth's mantle, convection in magma chambers, and the dynamics of the world oceans. The observed geochemical differences between mid-ocean ridge basalts and ocean island basalts imply the existence of chemically distinct reservoirs [e.g., Zindler and Hart, 1986]. The size and location of these reservoirs have, however, remained unclear. The traditional model, in which a depleted, well-mixed upper layer is separated from an enriched lower mantle, seems unlikely in the light of recent results from seismic tomography and geodynamic modeling (for a recent review see van Keken et al.). Kellogg et al. proposed a compositional stratification in the deep mantle as the source region for the observed heterogeneities. While this idea has stirred much attention, most investigators favor a thin, heavy layer near the core-mantle boundary. This layer could coincide with the seismologically observed D″ layer, which exhibits strong variations in seismic wave speed. Nowadays these time-dependent flows are often studied by means of three-dimensional (3-D) numerical models which solve the equations for the transport of heat and momentum in an alternating fashion. These flows are often driven by a temperature difference, but frequently there is also an active or passive chemical component that has to be considered. In magma chambers we observe partial melt zones and chemically driven flows. The dynamics of convective ocean circulation is often influenced not only by the temperature but also by the salt content, a phenomenon termed double-diffusive convection. One characteristic of these flows is that the chemical diffusivity is much lower than the thermal diffusivity. The ratio of the two ranges from a factor of about 100 for a heat-salt system in water up to virtually infinity for solid-state diffusion in the Earth's mantle. The implementation of a chemical component with a very low diffusivity into a numerical model is difficult.
The reason for this is the so-called numerical diffusion introduced by Eulerian discretization schemes, which artificially enhances the diffusion of an advected field. Therefore a Lagrangian method is often used to simulate the chemical component. This can be done by means of independent tracer particles advected with the flow or by interconnected tracers forming a tracer-line (2-D) or a tracer-surface (3-D). In the two-dimensional case the tracer-line starts with a small ensemble of interconnected tracers; once the distance between two neighboring tracers exceeds a threshold, a new tracer is inserted into the line, its position being interpolated from the positions of the neighboring tracers.
 For 2-D flows van Keken et al. compared three different methods for a convective scenario related to the Earth's mantle: a field approach, tracer particles, and a tracer-line (also called marker-chain) method. They concluded that there is no generally favorable method. While the field approach is computationally very efficient, the chemical heterogeneities disappear due to numerical diffusion after some time, which depends on the grid resolution and the numerical scheme chosen. The tracer approach does not suffer from artificial diffusion but needs a huge number of tracer particles in order to give accurate results when calculating a concentration field from the tracer distribution. The tracer-line method, in contrast, is initially computationally very efficient and also does not suffer from artificial diffusion. Since the flow, however, has regions where the largest Lyapunov exponents are larger than unity (i.e., nearby tracers diverge exponentially in time), the number of tracers needed, and therefore the computational resources required, increases exponentially with time. The tracer-line method therefore seems best suited when either only a short time period has to be simulated or the chemical component has a density contrast connected with it which limits the entrainment.
 When applying these methods in 3-D numerical simulations the constraints imposed by the limited computational resources are even more strict. For the field approach it is almost impossible to, for example, double the spatial resolution of the computational grid, since this would result in, at best, an increase of computer time by a factor of 16 (eight times as many computational nodes and the restriction on the length of the time step).
 For the tracer method the number of tracers required additionally increases with the spatial resolution in the second horizontal direction. To be able to carry out tracer statistics in 3-D comparable to the 2-D calculations, the number of tracers thus has to be increased by a factor on the order of 10^3. Apart from being computationally expensive, this is also often impossible due to the limited memory available. An example of this method in 3-D is given in [Tackley, 2002]. He applied a tracer ratio method which is described in detail in [Tackley and King, 2003]. They claimed to be able to avoid statistical noise and found it sufficient to use ∼5 tracers per computational cell. But even with such a low number of tracers their investigations are limited to boxes with an aspect ratio of four. A comparison of their results with the method presented here would certainly be interesting but exceeds the scope of this paper.
 This makes the computationally efficient tracer-line (in 3-D, tracer-surface) method very interesting, at least for the class of problems mentioned above. Unfortunately, the transition from a tracer-line in a 2-D flow to a tracer-surface in a 3-D flow is not as straightforward as with the field approach or the tracer methods. One has to start with a mesh of tracers with a triangular or quadrilateral connectivity. In such meshes each tracer is typically connected to 4 (quadrilateral) or 6 (triangular) neighboring tracers. This connectivity, also called valence, determines the spline function needed when interpolating the position of a new tracer. Once the new tracer has been inserted into the mesh, the valence of the neighboring tracers changes. Since the interpolating spline function used in this work depends on the valence of the mesh, we need different interpolation functions for every connectivity occurring in the mesh. The refinement of polygonal meshes is a very active field of research in computer graphics, called subdivision surfaces, which has developed interpolation schemes for different connectivities.
 When a density difference is connected to a chemical component it can act as a restoring force. The governing flow is often spatially heterogeneous, and the spatial location of the heterogeneities varies in time (e.g., the location of an upwelling plume). The restoring force of the density contrast may result in a situation where a highly deformed, and therefore highly refined, region returns to a simple geometry. In order to limit the computational expenses we also need a surface simplification algorithm which removes excess elements of the surface.
 We thus need two algorithms: one for grid refinement and one for grid simplification. At least the grid simplification has to be adaptive in order to account for the changing flow field.
 While the algorithms used for refinement [Catmull and Clark, 1978; Doo, 1978; Dyn et al., 1987] have been known for more than 20 years, they have only recently been used in computer graphics in fields such as movie production and computer games. These applications have driven the development of a wealth of interpolation schemes with different properties. Research in the area of surface simplification has also been motivated by these fields (see Heckbert and Garland for a review). The ultimate aim of these techniques is the visual appearance of objects, and their mathematical background and geometrical properties are rigorously defined.
 In this paper we want to show that these techniques can be successfully applied in geophysical fluid dynamics. In section 2 we give a brief introduction to the field of subdivision surfaces. Section 3 introduces a surface simplification algorithm based on quadric error metrics. In section 4 we discuss a simple application to geophysical flows.
2. Subdivision Surfaces
 The basic idea of subdivision is to define a smooth curve (2-D) or surface (3-D) as the limit of a sequence of successive refinements. For a curve connecting a small number of points in a plane this is sketched in Figure 1. On the left side four points are connected through straight line segments. The middle sketch shows a refined version where three points have been added in between the existing four points. On the right-hand side the polygon has been refined one more time. The shape and smoothness of the resulting curve or surface depend on the rules chosen. To construct the curve in Figure 1, each new point is taken as an average of nearby points: two to the left and two to the right, with weights 1/16 (−1, 9, 9, −1), respectively (ignoring the boundaries). When the process is repeated ad infinitum the resulting curve is C1 continuous. An important property of this refinement, especially when considering more complex curves and surfaces, is the local definition of the scheme: far-away points play no role in constructing new points.
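As a minimal sketch, the refinement rule just described can be written down directly. The periodic indexing (a closed curve) is an assumption made here to sidestep the boundary rules:

```python
def subdivide_curve(points):
    """One step of the interpolating four-point rule: keep the old points and
    insert between each pair of neighbors a weighted average of the four
    nearest points with weights 1/16 * (-1, 9, 9, -1)."""
    n = len(points)
    refined = []
    for i in range(n):
        p0 = points[(i - 1) % n]           # second point to the left
        p1, p2 = points[i], points[(i + 1) % n]
        p3 = points[(i + 2) % n]           # second point to the right
        refined.append(p1)                 # interpolating: old points survive
        refined.append(tuple((-p0[d] + 9 * p1[d] + 9 * p2[d] - p3[d]) / 16.0
                             for d in range(len(p1))))
    return refined
```

Applying `subdivide_curve` repeatedly drives the polygon toward the smooth limit curve sketched in Figure 1.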
 For subdivision surfaces the geometry of the initial mesh is important. Figure 2 shows a refinement for a triangular scheme. A problem arises from the existence of extraordinary vertices, which for triangular meshes have a valence not equal to 6 (Figure 3). Such extraordinary vertices are problematic because the scheme has to define an extra set of rules for valences up to infinity, and it has to be shown that the properties of the interpolated surface, such as continuity, do not change at those points.
 An important characteristic of a subdivision scheme is whether it is “interpolating”, in which case the original points are part of the refined set as in Figure 1, or whether it is an “approximating” scheme, in which the original points (the control vertices) are repositioned and the limit surface does not, in general, pass through them.
 Approximating subdivision schemes for arbitrary topology meshes are typically modifications of spline-based schemes. The algorithms of Doo and Sabin [Doo, 1978; Doo and Sabin, 1978; Sabin, 1976] and Catmull and Clark are generalizations of quadratic and cubic B-splines, respectively. A generalization of box splines to arbitrary triangulations was given by Loop. In this scheme each triangle is split into four new triangles at each iteration step. The positions of the new vertices, which are generated by splitting the edges of the triangles, are computed by a weighted average of neighboring vertices as given in Figure 4a. The positions of the old vertices are adjusted by a weighted average of adjacent vertices. The weights used depend on the valence of the vertex. For regular vertices with a valence of six the weights are given in Figure 4b, whereas for extraordinary vertices Loop suggested the weights given in Figure 4c.
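The two vertex rules can be sketched as follows. The weights (3/8 and 1/8 for a new edge point, and Loop's valence-dependent weight β for relaxing an old vertex, with β = 1/16 in the regular case) are the standard values from the subdivision literature and stand in for Figure 4, which is not reproduced here:

```python
import math

def loop_edge_point(v1, v2, opp1, opp2):
    """New vertex on the edge (v1, v2): weights 3/8 for the edge endpoints
    and 1/8 for the two vertices opposite the edge (cf. Figure 4a)."""
    return tuple(0.375 * (a + b) + 0.125 * (c + d)
                 for a, b, c, d in zip(v1, v2, opp1, opp2))

def loop_beta(n):
    """Loop's weight for a vertex of valence n; equals 1/16 for n = 6,
    which reproduces the regular stencil of Figure 4b."""
    return (0.625 - (0.375 + 0.25 * math.cos(2.0 * math.pi / n)) ** 2) / n

def loop_vertex_point(v, neighbors):
    """Relaxed position of an old vertex of valence n:
    (1 - n * beta) * v + beta * (sum of the n adjacent vertices)."""
    n = len(neighbors)
    b = loop_beta(n)
    return tuple((1.0 - n * b) * v[d] + b * sum(nb[d] for nb in neighbors)
                 for d in range(len(v)))
```

Because every new position is a convex-like combination of nearby vertices, repeated application smooths the mesh, which is the diffusive behavior exploited in section 6.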
 Interpolating schemes can be defined as modifications of approximating schemes. Nasri and Halstead et al. proposed such extensions for the Doo-Sabin and the Catmull-Clark scheme, respectively. Both cases require solving a linear system of equations for the interpolation constraints. It remains unclear, however, under which conditions the linear system remains solvable. An alternative approach is given by subdivision schemes which are interpolating by design, such as the “Butterfly” scheme [Dyn et al., 1990] or the quadrilateral scheme by Kobbelt. As the Butterfly scheme is interpolating, local, simple to implement, and leads to C1 surfaces, we will discuss it as an example of subdivision in greater detail. Figure 5 shows the original 8-point stencil from Dyn et al., where the name is derived from the shape of the arrangement of nearby vertices used in the interpolation. The weights used for constructing a new vertex are, based on the labeling in Figure 5:
 In this case w is a tension parameter which controls how “tightly” the limit surface is pulled toward the control net. If w is set to 0, a simple linear interpolation is carried out and the surface is not smooth. A shortcoming of this scheme is its inapplicability to extraordinary vertices (e.g., point a in Figure 5, which has a valence smaller than 5). The scheme was therefore extended to a 10-point stencil by the same authors [Dyn and Levin, 1994]. The new scheme is similar to the Butterfly scheme with two vertices added (Figure 5). The new weights are:
The total weight of the points is still unity, and the new scheme includes the old scheme as a subset for w = 0. It can be shown that the Butterfly scheme reproduces polynomials up to degree 3. This extension does not, however, address the smoothness problem at extraordinary vertices. Zorin and Sweldens [1996a] derived an extension to the scheme allowing it to handle vertices with arbitrary valence; see Zorin and Sweldens [1996b] for the mathematical derivation. If one of the endpoint vertices a is extraordinary, the new vertex is computed by a weighted sum of the extraordinary vertex and its neighbors (see Figure 6 for the stencil). The weights for vertices with valence N = 6 are the same as for the 10-point Butterfly scheme. For other valences the weights are:
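Since the weight formulas themselves appear only in the original references, the following sketch uses the standard values from the literature: the 8-point Butterfly rule with tension parameter w, and the neighbor weights of Zorin and Sweldens for a new vertex adjacent to an extraordinary end vertex (the extraordinary vertex itself then receives the weight 3/4). The labeling follows Figures 5 and 6 only loosely and is an assumption:

```python
import math

def butterfly_point(a, b, c, d, wings, w=1.0 / 16.0):
    """8-point Butterfly rule: 1/2 (a + b) + 2w (c + d) - w * (sum of the
    four 'wing' vertices). For w = 0 this reduces to the linear midpoint."""
    return tuple(0.5 * (a[k] + b[k]) + 2.0 * w * (c[k] + d[k])
                 - w * sum(p[k] for p in wings) for k in range(len(a)))

def extraordinary_weights(N):
    """Weights s_0 .. s_{N-1} for the neighbors of an extraordinary end
    vertex of valence N; in all cases they sum to 1/4."""
    if N == 3:
        return [5.0 / 12.0, -1.0 / 12.0, -1.0 / 12.0]
    if N == 4:
        return [3.0 / 8.0, 0.0, -1.0 / 8.0, 0.0]
    return [(0.25 + math.cos(2.0 * math.pi * j / N)
             + 0.5 * math.cos(4.0 * math.pi * j / N)) / N for j in range(N)]
```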
 Using this subdivision scheme, which allows for arbitrary mesh topology, it is also possible to adaptively refine the mesh in regions of high deformation, which is interesting with respect to an application of this method in computational fluid dynamics. Another possibility of adapting the mesh refinement to the flow structure is to use an adaptive mesh simplification algorithm alternately with the mesh refinement. Such an adaptive mesh simplification also leads to an irregular mesh with vertices having valences different from 6. This allows for the simplification of once highly refined regions where the surface geometry has later become simpler due to the time-dependent nature of the flow.
 In order to clarify the differences between the interpolating Butterfly scheme and the approximating Loop scheme, we have plotted the limit surfaces generated by the recursive subdivision of a tetrahedron in Figure 7. For the Butterfly scheme the volume of the resulting body is increased compared to the initial tetrahedron. The approximating Loop scheme yields a contraction of the initial volume. This is true, however, only for convex bodies. The limit surfaces of both schemes are smooth.
3. Surface Simplification Algorithms
 With the possibility of creating models with ever greater detail, the simplification of polygon surfaces has become more and more important. Examples are range data captured by 3-D scanners, isosurfaces extracted from volume data with the “marching cubes” algorithm, terrain data acquired by satellite, and polygonal models in computer-aided geometric design generated by subdivision of curved parametric surfaces. The simplification of these surfaces makes storage, transmission, computation, and display more efficient. Owing to its importance there have been many different approaches to the subject. Most of them are surveyed by Heckbert and Garland, who categorize the different approaches into three classes: vertex decimation, vertex clustering, and iterative edge contraction. For our work we have chosen a method proposed by Garland and Heckbert which uses iterative contraction of vertex pairs as a generalization of edge contraction. Besides its computational efficiency and the high quality of the resulting approximation, this method is able to join unconnected regions of the model together, a process termed aggregation. For our fluid dynamical application this is of key importance. An example is an entrained patch of chemically heavy material which is returning into the main reservoir.
 For the pair contraction algorithm Garland and Heckbert first select pairs of vertices which could be contracted (Figure 8). A pair (v1, v2) can be contracted when it forms an edge in the polygon model or if ∥v1 − v2∥ < t, where t is a threshold parameter. The second rule allows for aggregation. Next they define the cost of the contraction for every valid vertex pair. To do this they associate a 4 × 4 matrix Q with every vertex and define the error Δ(v) to be the quadratic form Δ(v) = vTQv. The definition of Q follows that of Ronfard and Rossignac and constructs the matrix from the sum of the squared distances of the vertex v to the planes defined by the triangles that meet at the vertex (see Garland and Heckbert for a derivation). Having this error associated with every vertex, we can calculate the cost of a contraction (v1, v2) → v̄ as the sum of the single errors: Δ(v̄) = v̄T(Q1 + Q2)v̄. Before calculating this cost we must, however, choose the position of the new vertex v̄. This could be done by taking the midpoint of (v1, v2). A better method, however, is to calculate the position of v̄ where the error is a minimum. Since the error function Δ is quadratic, finding its minimum is a linear problem and we have to solve ∂Δ/∂x = ∂Δ/∂y = ∂Δ/∂z = 0.
 The algorithm can be summarized as follows: (1) compute the Q matrices for all vertices; (2) select valid vertex pairs; (3) calculate the optimal position for every valid vertex pair; (4) calculate the error for every potential contraction; (5) place all pairs in a list sorted by increasing error; (6) iteratively contract pairs from the list, starting from the pair which has the lowest error associated, and update the costs for all valid pairs involved in the contraction. In addition, Garland and Heckbert took special care to preserve boundaries and to prevent inversion of the face normals.
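Steps (1), (3), and (4) can be sketched as follows, assuming homogeneous coordinates v = (x, y, z, 1) and unit plane normals; the fallback for a singular system (taking the edge midpoint instead) is only indicated:

```python
import numpy as np

def plane_quadric(p):
    """Fundamental error quadric of one plane p = (a, b, c, d) with
    ax + by + cz + d = 0 and a^2 + b^2 + c^2 = 1."""
    p = np.asarray(p, dtype=float).reshape(4, 1)
    return p @ p.T

def optimal_vertex(Q):
    """Position minimizing v^T Q v over v = (x, y, z, 1): replace the last
    row of Q by (0, 0, 0, 1) and solve against the right-hand side (0, 0, 0, 1)."""
    A = Q.copy()
    A[3, :] = (0.0, 0.0, 0.0, 1.0)
    try:
        return np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    except np.linalg.LinAlgError:
        return None  # singular: a full implementation falls back to the midpoint

def contraction_cost(Q1, Q2):
    """Cost of contracting a vertex pair: vbar^T (Q1 + Q2) vbar evaluated
    at the optimal position vbar."""
    Q = Q1 + Q2
    v = optimal_vertex(Q)
    return (float(v @ Q @ v), v[:3]) if v is not None else (float("inf"), None)
```

Keeping the valid pairs in a priority queue keyed on this cost yields the iterative contraction of step (6).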
4. Simulating a Passive Component
 We will first discuss how these methods can be used to simulate the transport of a scalar field which does not influence the flow and is therefore termed passive. An example of such a field is the distribution of rare elements in the Earth's mantle.
 The equations for 3-D thermally driven convection in a highly viscous, incompressible fluid are solved in the primitive variable formulation. For spatial discretization we employed a second-order correct finite volume scheme. Time stepping was performed by a second-order, fully implicit method. The resulting algebraic system of equations was solved by a multigrid method, using SIMPLER as a smoother. The algorithm for solving the thermal convection problem is described by Trompert and Hansen. The concentration is defined by a closed, triangular mesh which encloses the chemical component. After every Courant-Friedrichs-Lewy limited time step of the fluid dynamical model the vertices of the mesh are advanced by solving the set of first-order equations dx_i/dt = u(x_i, t)
by using a 4th-order correct Runge-Kutta method. Within the grid cells the velocities are interpolated using tri-linear interpolation. The accuracy of this method has been discussed by Schmalzl et al. Once the initial mesh is advected by the flow field it gets deformed and stretched. Whether a triangle has to be split in order to maintain the accuracy of the tracer mesh depends on two factors: first, the length of a triangle edge, and second, the curvature of the mesh at this triangle. Topologically it is only possible to split pairs of triangles; splitting a single triangle would result in a T-split, which destroys the continuity of the mesh. The curvature of the mesh is calculated by evaluating the sum of the scalar products of the normal vector of the triangle with the normal vectors of the neighboring triangles (Figure 9). For an edge-split we use the criterion:
where n is the normal vector of the triangle, ledge is the length of the edge under consideration, and gridwidth is the width of the finite-volume cells. The exponent m is a positive, even number which has been chosen so that only higher curvatures influence the splitting behavior. The factors 0.3 and 0.7 are examples for the weights and ensure that the length of a triangle edge becomes at most 3 times the width of the finite-volume cells and at least 0.3 times that width. This behavior is desirable because triangles that are too large may lead to inaccuracies, whereas triangles that are too small are computationally inefficient.
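Since the criterion is given as an equation in the original, and only its ingredients are described here, the following sketch uses one plausible combination of them: the edge length relative to the grid width, a curvature measure built from 1 − n · n_i, the even exponent m, and the weights 0.3 and 0.7. The exact functional form is an assumption:

```python
def should_split(edge_length, gridwidth, normal, neighbor_normals, m=4):
    """Assumed form of the edge-split test: a weighted sum of normalized edge
    length and a curvature term. A flat patch (curvature 0) is split only for
    edges beyond roughly 3 grid widths; a strongly folded patch is split
    already near one grid width."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # curvature measure: 0 for coplanar neighbor triangles, grows with fold angle
    curv = sum(1.0 - dot(normal, n) for n in neighbor_normals) / len(neighbor_normals)
    return 0.3 * (edge_length / gridwidth) + 0.7 * min(curv, 1.0) ** m > 1.0
```

The even exponent m plays the role described above: for small curvatures the curvature term is negligible and the edge length alone decides the split.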
 The routine for mesh refinement is called every fifth time step. The simplification step can be carried out at larger intervals since the excess triangles only influence the performance but not the accuracy.
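For reference, the vertex advection carried out between these refinement calls is a classical 4th-order Runge-Kutta step for dx/dt = u(x, t); the velocity function u below stands for the tri-linear interpolation of the finite-volume velocities, which is omitted here:

```python
def rk4_step(x, t, dt, u):
    """Advance one mesh vertex x over dt with the classical RK4 scheme;
    u(x, t) returns the interpolated velocity at position x and time t."""
    k1 = u(x, t)
    k2 = u([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], t + 0.5 * dt)
    k3 = u([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], t + 0.5 * dt)
    k4 = u([xi + dt * ki for xi, ki in zip(x, k3)], t + dt)
    return [xi + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```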
 In Figure 10 we show two snapshots of the advection of a passive sphere in a convective flow. The Rayleigh number has been chosen as 10,000, which results in a stationary flow with a roll pattern. Figure 10a shows the initial sphere and a linear temperature gradient. In Figure 10b the situation after 6 overturns is presented. The initial sphere has been stretched out by the flow field. As time progresses this filamentation due to the differential rotation in the velocity field increases. Since the flow field is effectively two-dimensional and time-independent, the flow is non-chaotic and the distance between initially nearby tracers should increase linearly in time. We therefore consider this a good test case as opposed to more complex flows which exhibit chaotic behavior. In Figure 11a we show the evolution of the number of triangles versus time. Time is given in non-dimensional thermal diffusion times as defined in Trompert and Hansen. The full length of the simulation (time = 1.8) corresponds to 36 overturns. The solid line indicates the number of triangles using the refinement criterion as defined above but without applying the simplification. The number of triangles increases rapidly and almost exponentially with time. The dotted line indicates the number of triangles when a simplification step is carried out every 25 model iterations. The initial model consisting of 3000 triangles is simplified to 1000 triangles. As time progresses the number of triangles increases almost linearly with time. Because the refinement is done every 5 steps but the simplification every 25 steps, the number of triangles oscillates. In Figure 11b the total volume encapsulated by the surface, relative to the start volume in percent, is given. The volume decreases in time, and after 26 overturns the structure has lost about 14% of its initial volume. At this time the filamentation is very fine and we thus find the error acceptable.
5. Simulating an Active Component
 To calculate the velocity using the momentum equation the concentration C(x) is needed in terms of a scalar field rather than a tracer surface. The process of mapping the 3-D surface on a discretely sampled grid, which we term rastering, is described in this section.
 To obtain a field representation of the concentration distribution a cell value is calculated for each control volume (CV). It ranges from 0 for those CVs lying completely outside the tracer-surface to 1 for those within. Intermediate values are assigned to control volumes that are intersected by the surface. To calculate the values for those boundary volumes, a ray-triangle-intersection technique is applied: For each ray given as
with NX, NY, NZ being the number of control volumes in the x-, y-, and z-direction, respectively, and Δx, Δy, and Δz the sizes of these control volumes. For each of these rays the intersection points ti with the tracer-surface are determined. The cell values of control volumes lying between two intersection points, t1 ≤ i · Δx ≤ t2, are determined as follows:
assuming that the boundaries of the computational domain are outside the tracer-surface by definition. Figure 12 illustrates this calculation: The control volumes i = 2 and i = 4 are intersected by the surface (thick lines). The assigned cell values correspond to the relative position of the surface within the control volume.
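Since the assignment formula itself is not reproduced here, the following is a hedged sketch of the idea for a single x-ray: the sorted intersection parameters are assumed to alternate between entering and leaving the surface (starting outside, as stated above), and each control volume receives the fraction of its width lying inside. The cell layout, with cell i spanning [i·Δx, (i+1)·Δx), is an assumption:

```python
def cell_values_along_ray(ts, NX, dx):
    """Cell values in [0, 1] for NX control volumes of width dx along one
    x-ray; ts holds the sorted intersection parameters as enter/leave pairs."""
    vals = [0.0] * NX
    for enter, leave in zip(ts[0::2], ts[1::2]):   # intervals inside the surface
        i0, i1 = int(enter // dx), int(leave // dx)
        for i in range(max(i0, 0), min(i1, NX - 1) + 1):
            lo, hi = max(enter, i * dx), min(leave, (i + 1) * dx)
            vals[i] += max(0.0, hi - lo) / dx      # covered fraction of cell i
    return vals
```

A cell fully inside the surface receives 1, a cell cut by the surface the fractional value illustrated in Figure 12.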
 The calculation of the ti is carried out using the algorithm described by Möller and Trumbore by cycling through all triangles of the tracer-surface for each ray given by equation (1). Afterward the procedure is repeated for rays in the y- and z-direction. The resulting cell value is normalized to unity.
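A sketch of the Möller-Trumbore test in its standard form: it returns the ray parameter t at which the ray o + t·d hits the triangle (v0, v1, v2), or None for a miss.

```python
def ray_triangle(o, d, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore ray-triangle intersection via barycentric coordinates."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(o, v0)
    u = dot(tvec, pvec) * inv       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, qvec) * inv      # ray parameter t of the hit
```

Collecting the returned t values over all triangles and sorting them yields the intersection points ti used above.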
 The total number of rays can be approximated by 3 × NX^2, assuming NX = NY = NZ. The whole procedure therefore results in a computational effort proportional to 3 × NX^2 × NT + τOH, with NT being the number of triangles the tracer-surface consists of and τOH some overhead that is nearly independent of the number of triangles. Owing to the direct dependence on the number of triangles tested, the total time spent on the rastering process can be dramatically reduced by decreasing NT itself. Therefore a bounding box is calculated for each triangle prior to the intersection tests. The intersection test is then performed only for those triangles whose bounding box is intersected by the given ray. To illustrate the benefit of this technique, a spherical tracer-surface has been rastered using a resolution of NX = NY = NZ = 128 control volumes. Without the bounding box technique, a total number of about 377 × 10^6 triangles has to be tested for intersection with the rays given by equation (1). This number drops to about 48 × 10^3 using the precalculated bounding boxes. In terms of computational time spent on rastering, the effort is reduced from 43.0 seconds down to 1.7 seconds for the cases without and with bounding boxes, respectively. The latter number corresponds to the overhead time mentioned above.
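The bounding-box pre-test is particularly cheap for the axis-parallel rays used here: for a ray along x at height (y, z), only the triangle's y/z extent has to be checked. A minimal sketch:

```python
def triangle_bbox(v0, v1, v2):
    """Precomputed axis-aligned bounding box of a triangle:
    ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    return tuple((min(v0[k], v1[k], v2[k]), max(v0[k], v1[k], v2[k]))
                 for k in range(3))

def x_ray_may_hit(y, z, box):
    """A ray parallel to the x-axis at (y, z) can intersect the triangle
    only if (y, z) lies inside the box's y- and z-extent."""
    (ymin, ymax), (zmin, zmax) = box[1], box[2]
    return ymin <= y <= ymax and zmin <= z <= zmax
```

Only triangles passing this test are handed to the full ray-triangle intersection.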
 We have tested the implementation for a flow driven by compositional density differences by investigating a three-dimensional Rayleigh-Taylor instability where a layer of light material underlies a layer of heavier material. The results were in close agreement both with the analytically predicted growth rate for a disturbance that leads to a 2-D flow pattern [Chandrasekhar, 1961] and with numerical studies for fully 3-D flow patterns as presented in Kaus and Podladchikov.
 In Figure 13 we show a comparison with experimental and 2-D boundary integral calculations for a rising plume as presented in Manga et al. The plume rises from an initial half sphere placed at the stress-free boundary. Different from the calculations of Manga et al., our calculations have been carried out in the limit of zero Reynolds number, whereas in the experiments a Reynolds number of Re ≈ 0.01 was realized. An estimate of the difference between our calculations and the calculation presented in Manga et al., based on the difference in tail width, plume length, and plume head diameter, results in a difference of <5%. For comparison we have also added a calculation where the chemical component has been computed using a field approach. For the same grid resolution as employed for the surface tracking method we observe strong effects of numerical diffusion (Figures 13e–13h).
 Since the module for calculating the composition is just an extension of the code presented by Trompert and Hansen, which has been verified and benchmarked extensively, we considered these tests sufficient.
6. Thermochemical Convection
 For the Rayleigh-Taylor instability the fluid motion ceases after the potential energy stored in the compositional field has been released. This time-limited motion also leads to a stretching of the interface which can be handled for most cases. If we consider thermochemical convection, however, the situation is different. The flow is typically time-dependent and thus exhibits regions where the largest Lyapunov exponents are larger than one. This results in initially nearby tracers separating exponentially in time. This is different from 3-D stationary flows at infinite Prandtl number, as shown in Figure 11, which do not have these chaotic regions [Schmalzl et al., 1995]. For some cases interesting to the dynamics of the Earth's interior the growth of the interfacial surface can be limited. When the density contrast connected with the composition is larger than the density contrast caused by the temperature variation, the compositionally different material forms a layer near one of the horizontal boundaries. This leads to layered convection. The large-scale convective flow modulates this layer, and compositionally different material is entrained into the large-scale flow. An example of such a flow is a chemically dense layer at the core-mantle boundary, which has been detected by seismology and termed the D″ layer. Different numerical studies with 2-D models [e.g., Kellogg et al., 1999] have investigated this scenario. One of the central questions concerns the stability of such a dense layer and its shape. In these models thin streaks of material are entrained by upwelling plumes. We have decided to focus the investigation on the stability and shape of the layer. The thin filaments of chemically dense material must therefore be removed. We do this by carrying out a Loop subdivision step every 25 model iterations.
As discussed in section 2, the Loop subdivision has properties similar to a diffusive process, thus removing fine-scale structures from the interface. By doing so we can limit the number of triangles needed. In Figure 14 we present such a model. We consider Rayleigh-Bénard convection of a constant-property flow with a Rayleigh number of Ra = 10^6 and stress-free boundary conditions in a domain with an aspect ratio of two by two by one. The grid resolution of the finite volume mesh is 64 × 64 × 32 with equidistant mesh spacing. A compositionally dense blob of material is inserted into the flow at time T = 0. The influence of the composition on the flow is described by the non-dimensional chemical Rayleigh number, which is Rac = −3 × 10^6 in this model. The initial blob in Figure 14a sinks to the lower boundary and spreads (Figure 14b). There is a large temperature drop across the thin layer, as indicated by the temperature cross sections in Figure 14c. Material gets entrained in thin filaments and disappears due to the diffusive properties of the Loop subdivision scheme. This results in a material loss of the chemical reservoir over time, as plotted in Figure 15b. We consider the lost volume as entrained into the bulk flow and distribute this mass over the bulk volume. The density difference between the chemical layer and the bulk flow thus decreases as time proceeds. In Figures 14d–14f we see the fully developed thermochemical flow. An interesting point to note is the thickness of the layer relative to the numerical resolution of the finite volume mesh. The mesh spacing is indicated by the color markings along the axes. The chemical layer is typically only two mesh spacings thick. It would be impossible to maintain such a thin layer in an Eulerian approach. Different from the 2-D investigations, the material is not piled up in hill-like structures by the large-scale flow.
We attribute this to the greater freedom in the 3-D calculations, where the heavy material can flow around upwellings. Another interesting point is the sheet-like shape of the entrained material (Figure 14f). The chemically distinct signature of hot spots has often been attributed to the entrainment of material from the core-mantle boundary, whereas more linear surface structures like mid-ocean ridges do not reflect this entrained material. Considering the often sheet-like entrainment, this seems hard to understand. The number of vertices needed for the triangle mesh increases only slightly during the calculation (Figure 15a).
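The mass bookkeeping for the entrained volume described above can be sketched as follows (a hypothetical helper, assuming the lost volume carries the full initial excess density and is mixed perfectly into the bulk):

```python
def updated_density_contrast(drho0, v0, v_now, v_domain):
    # drho0: initial excess density of the layer over the bulk
    # v0, v_now: initial and current volume of the tracked reservoir
    # v_domain: total volume of the computational domain
    lost = v0 - v_now                # volume entrained since the start
    bulk = v_domain - v_now          # volume it is distributed over
    drho_bulk = drho0 * lost / bulk  # resulting bulk density increase
    return drho0 - drho_bulk         # reduced layer-bulk contrast
```

As long as nothing has been entrained the contrast is unchanged; as the reservoir shrinks, the contrast decays toward zero, which is the behavior described for Figure 15b.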
 The real situation in the Earth's mantle is of course more complex. As a last example of the capabilities of the method we present a simulation with the same resolution as above but with a strongly temperature- and depth-dependent viscosity. The viscosity depends exponentially on temperature [see Trompert and Hansen, 1996], varying by a factor of 10^4 with temperature and by a factor of 3 with depth. The surface Rayleigh number is Ra = 10^3, and the compositional Rayleigh number is Rac = −3 × 10^3 in this model. Figure 16a again shows the initial configuration with a linear temperature gradient and the initial blob of heavy material. In Figure 16b the material, which was placed lower and is therefore warmer and less viscous, spreads out at the lower boundary. Owing to the strong temperature dependence of the viscosity, a stagnant lid develops at the upper boundary (Figure 16c). Across this immobile lid heat can only be transported conductively, which leads to high internal temperatures, as can be seen in the colored temperature cross sections. The chemical component creates a layer at the bottom of the domain. The heat transport across this layer is enhanced by convection within the layer and is thus more efficient than the transport across the upper boundary. The average temperature in the center of the domain is therefore higher than T = 0.5, as shown by the horizontally averaged temperature-depth profile in Figure 16f. Owing to the almost isothermal interior of the cell, the entrainment of material is very similar to that in the isoviscous calculation of Figure 14 and also exhibits the sheet-like structure of the entrained material.
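A viscosity law consistent with the stated contrasts can be written as a Frank-Kamenetskii-type exponential (a sketch under our own normalization, with z = 0 at the bottom and η = 1 at the cold surface; the exact form used by Trompert and Hansen [1996] may differ):

```python
import math

def viscosity(T, z, deta_T=1.0e4, deta_z=3.0):
    # Exponential dependence: a factor deta_T across the temperature
    # range 0..1 and a factor deta_z from top (z = 1) to bottom (z = 0),
    # normalized so that eta = 1 at the cold top surface (T = 0, z = 1).
    return math.exp(-math.log(deta_T) * T + math.log(deta_z) * (1.0 - z))
```

With these defaults, cold material is 10^4 times more viscous than hot material at the same depth, which is what produces the stagnant lid at the upper boundary.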
 We have presented a new method for investigating the transport of an active chemical component in a convective flow. Using a triangle mesh we are able to track the evolution of a scalar field with negligible diffusivity in a hydrodynamic code.
 When a density difference is associated with this scalar, the concentration field has to be calculated by triangle-ray intersection tests. For a large number of triangles these tests dominate the calculation in terms of computer time. By using precalculated bounding boxes we were able to reduce the computational effort significantly.
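A sketch of such a test: a cheap axis-aligned bounding-box (slab) rejection followed by the standard Möller-Trumbore triangle intersection (our own minimal version, not the code used here):

```python
EPS = 1e-12

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_aabb(o, d, lo, hi):
    # Slab test against a precomputed bounding box; skips the more
    # expensive triangle test for most triangles.
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(d[i]) < EPS:
            if o[i] < lo[i] or o[i] > hi[i]:
                return False
        else:
            t1 = (lo[i] - o[i]) / d[i]
            t2 = (hi[i] - o[i]) / d[i]
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:
                return False
    return True

def ray_triangle(o, d, v0, v1, v2):
    # Moller-Trumbore: returns the ray parameter t of the hit, or None.
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < EPS:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    t_vec = sub(o, v0)
    u = dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > EPS else None
```

Counting the intersections of a ray cast from a grid cell with the closed surface then yields the concentration: an odd count means the cell lies inside the chemical reservoir.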
 A problem arises in time-dependent flows, where the surface area increases exponentially with time. This inhibits the use of the method for problems where long time series are required. When a strong density contrast is associated with the chemical component, however, we have shown that small-scale structures can be removed by using an approximating subdivision scheme. In contrast to the numerical diffusion inherent in Eulerian schemes, the amount of this artificial diffusion can be controlled easily.
 We suggest that the method can be useful in different fields of geophysical fluid dynamics such as meteorology, oceanography, and solid Earth fluid dynamics. As an example application we presented calculations from the field of convection in the Earth's mantle. The calculations aim at explaining the observed strong seismic heterogeneities at the core-mantle boundary by a thin, chemically dense layer. We were able to simulate such a layer occupying only 2–3 grid cells of the underlying finite volume grid. While the front-tracking method proposed may not be well suited for all problems, there is certainly a class of problems where it can be used efficiently. With relatively low computational expense it is possible to track the evolution of a scalar component in the flow while avoiding the numerical diffusion connected with Eulerian schemes and the statistical noise of tracer methods. In the future this method may further benefit from rapid developments in computer graphics, such as new subdivision schemes or the hardware implementation of subdivision schemes presented by Bischoff et al.
 Concerning the investigation of the Earth's mantle, the new tool presented here should enable researchers to numerically model the fate of a thin, geochemically distinct layer in the Earth's mantle in 3-D. So far most calculations have been carried out in two spatial dimensions [e.g., Hansen and Yuen, 1989; Christensen and Hofmann, 1994]. However, comparisons of the mixing properties of 2-D and 3-D thermal convection have indicated significant differences [e.g., Schmalzl et al., 1996; Ferrachat and Ricard, 1998]. Using our new method we presented calculations which revealed that the topography of the dense layer generated by the flow is much smaller than in comparable 2-D scenarios [e.g., Hansen and Yuen, 1989]. These calculations, however, are only examples of the capability of the method, and for the future we plan to carry out a systematic investigation.
 A. Loddoch is supported by DFG priority programme 1115 “Mars and the terrestrial planets” under grant no. Ha 1765/10-1. We would like to thank P. van Keken, L. Moresi, and Y. Ricard for constructive reviews. Thanks to C. Stein for thoroughly reading the manuscript.