One of the most widely used numerical modeling techniques in geodynamics for studying the evolution of geomaterials is the “marker-and-cell” technique. In such methods the material lithology is represented by Lagrangian particles (markers), while the continuum equations are solved on a background mesh. Significant research has been devoted to improving the efficiency and scalability of these numerical methods to enable high-resolution simulations to be performed on modest computational resources. In contrast, little attention has been given to developing visualization techniques suitable for interrogating high-resolution 3D particle data sets. We describe an efficient algorithm for performing a volume reconstruction of the lithology field defined via particles (code available upon request from the author). The algorithm generates an Approximate Voronoi Diagram (AVD) which transforms particle data sets into a cell-based, volumetric data set. The volumetric representation enables cross sections of the material configuration to be constructed efficiently and unambiguously, thereby enabling the interior material structure of the simulation results to be analyzed. Examples from geodynamic simulations are used to demonstrate the visual results possible using this visualization technique. Performance comparisons are made between existing implementations of exact and approximate Voronoi diagrams. Overall, the AVD developed herein is found to be highly competitive as a visualization tool for massive particle data sets: it is efficient, has low memory requirements, and can be trivially used in a distributed memory computing environment.
One of the focuses of geodynamics is understanding the long-time (e.g. 100 Myr) evolution of the deformation of rocks in the crust, lithosphere, and/or mantle. Traditionally, these processes have been studied via field-based observations and analogue modeling. In conjunction with these approaches, the use of continuum mechanics to describe the dynamics of such materials, together with numerical methods to approximate the underlying partial differential equations, has become a widely accepted technique within the Earth science community for studying the deformation of geomaterials.
To study the long-time evolution and large deformation of geomaterials, in the geodynamics community the underlying continuum is assumed to be a very viscous, incompressible fluid (i.e. Stokes flow). In considering the evolution of brittle and visco-elastic-plastic material over million-year timescales, we invariably require numerical methods which (i) track material history subjected to large deformations, and (ii) continue to follow the material evolution post failure. To accommodate these requirements, a hybrid Eulerian-Lagrangian (HEL) methodology has been exploited by the geodynamics community [Pracht, 1971; Poliakov and Podladchikov, 1992; Fullsack, 1995; van Keken et al., 1997; Moresi et al., 2003; Gerya and Yuen, 2003; Tackley and King, 2003; Schreurs et al., 2006; Popov and Sobolev, 2008; Thieulot, 2011]. The principle of the methodology is to separate the discretization used to track the deforming material from that used for the flow variables. The velocity and pressure variables associated with the Stokes equations are discretized on an Eulerian grid, while the complete material description is defined via a set of Lagrangian particles. Attributed to each particle with coordinates xp is a material index χ, indicating the particular lithology (or composition) to which the particle belongs. We denote this via χ(xp) = 1, 2, …, NL, where NL is the maximum number of lithologies in the model. All material history variables are tracked by the Lagrangian particles. Extreme deformation of the material is trivially handled by the particulate representation adopted, as there is no connectivity between the particles. Note that in this formulation, material interfaces are not explicitly represented by the particles. An illustration of the HEL methodology is shown in Figure 1.
 Due to advances in both computer hardware and software, together with continued research and educational efforts, two-dimensional numerical modeling of geodynamic processes is now common practice in Earth science. Over the last 15 years, much research has been focused on the development of improving the efficiency and scalability of three-dimensional HEL methodologies. Previously, the ability to perform simulations in three-dimensions was restricted to dedicated research groups with access to specialized, high performance computing (HPC) hardware. However, with the advent of affordable, high performance computer clusters, performing 3D computational geodynamic simulations has become widespread and is no longer restricted to the domain of experts, or researchers based at HPC centers.
One of the crucial observables from the output of multiphase geodynamic systems is the evolution of the material configuration. Over Myr timescales, the material within the system will invariably have experienced extremely large deformation, and thus large amounts of mixing will have occurred. To illustrate the extent to which stirring and mixing occur over different length scales in geodynamic models, we refer to the simulation results in Figure 2. Inspection of Figure 2 shows that the lithological structures (denoted via different colors) which require visualization are inherently more topologically complex than scalar (or vector) fields such as temperature (or velocity). Given the HEL methods currently being adopted in geodynamics, visualization of the lithological structures mandates visualization of particle-based data sets. In contrast to the research efforts devoted to improving 3D modeling capabilities, comparatively little attention in the geodynamics community has been focused on the development of visualization techniques appropriate for high-resolution, three-dimensional particle fields. This issue becomes more critical as the usage of three-dimensional geodynamic models continues to increase. In this paper we aim to address this shortcoming by describing a visualization technique which can transform a high-resolution, 3D particle-based lithology representation into a solid volume representation.
The outline of the paper is as follows. In section 2 we briefly overview surface and volume reconstruction techniques applicable to analyzing volumetric point-based data sets which define material lithology. In section 3 we discuss efficient techniques for generating Approximate Voronoi Diagrams (AVDs) to produce a volumetric representation of the material lithology. In section 4 we demonstrate the type of images one can generate using the AVD technique by analyzing the simulation results of three-dimensional models of continental collision, salt tectonics and a high-resolution synthetic model utilizing one billion particles. Additionally, we profile the performance characteristics of the AVD implementation used to build the volumetric representations of the material configuration. Here the new AVD algorithm and existing Voronoi algorithms from the literature are compared in terms of CPU and memory requirements. Last, in section 5 we provide a summary of the AVD methodology.
The hybrid Eulerian-Lagrangian methods currently being adopted in geodynamics require visualization techniques which can convert the lithological structures defined via np particles into a visual representation which can be easily interpreted without ambiguity. In two dimensions, this is readily achieved by creating an x–y scatterplot using the particle positions and coloring the points according to their lithology. Using only colored points (which need not be rendered as spheres), this approach unambiguously fills the 2D plane (defining the model space) with information defining the current material configuration. For three-dimensional simulations, the scatterplot approach does not yield a meaningful representation of the material configuration. To decipher the complex geometry of the deformed material configuration in 3D, an alternative representation is required. Two possible alternatives are to apply (i) a surface reconstruction or (ii) a volume reconstruction technique to the particle data.
2.1. Surface Reconstruction
In general, a surface reconstruction technique can be used to extract an isosurface (of some particular value) from a 3D point set. An isosurface can be constructed by interpolating the particle data onto a grid, and then applying a standard surface rendering technique, such as marching cubes [Lorensen and Cline, 1987]. Standard scattered data interpolation techniques such as Shepard's method [Shepard, 1968], Nearest Neighbor (NN) interpolation [Sibson, 1980], Moving Least Squares (MLS) [Lancaster and Salkauskas, 1981] or Radial Basis Functions (RBF) [Hardy, 1971] can be used to interpolate the particle field data onto the grid. Specifically, we wish to reconstruct the interface between different lithologies using the volumetric representation of the lithology provided by the particles. However, the field of interest, namely the lithology, does not represent a continuous field, and thus it cannot be directly interpolated. To rectify this, we must first convert each lithology into an independent scalar field ϕi(xp), i = 1, …, NL, which will be interpolated onto the grid. For each lithology i, we define an auxiliary particle field via ϕi(xp) = 1 if χ(xp) = i, and ϕi(xp) = 0 otherwise.
By applying a scattered data interpolation to ϕi, we can obtain a scalar field 0 ≤ ϕi ≤ 1 on the grid. The isosurface ϕi = 1/2 is defined as the interface of lithology i. However, employing such a surface reconstruction technique may be of limited practical use in understanding the material configuration of some lithological interfaces due to the severity of the mixing which may have occurred. For example, in considering the structure in Figure 2 (right), surface reconstruction may be beneficial to visualize the topography, Moho, and crustal boundaries; however such a procedure would provide little insight into the nature of the sedimentary layering (light and dark brown regions), due to the extent of the stretching and shearing which has occurred in these layers.
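The two-step procedure above (build the per-lithology indicator field, then interpolate it onto a grid) can be sketched as follows. This is a minimal illustration: the names `Particle`, `build_phi` and `interp_nn` are ours, and the brute-force nearest-neighbor loop stands in for the scattered-data interpolants cited above.

```python
# Sketch: build the indicator field phi_i from particle data and interpolate
# it onto a regular grid by (brute-force) nearest-neighbor lookup.
from dataclasses import dataclass
from math import dist

@dataclass
class Particle:
    x: tuple   # particle coordinate x_p
    lith: int  # lithology index chi(x_p) in 1..NL

def build_phi(particles, i):
    """phi_i(x_p) = 1 if particle p belongs to lithology i, else 0."""
    return [1.0 if p.lith == i else 0.0 for p in particles]

def interp_nn(particles, phi, grid_pts):
    """Nearest-neighbor interpolation of phi onto grid points (O(n*m) sketch)."""
    out = []
    for g in grid_pts:
        nearest = min(range(len(particles)), key=lambda k: dist(particles[k].x, g))
        out.append(phi[nearest])
    return out

particles = [Particle((0.1, 0.1), 1), Particle((0.9, 0.9), 2)]
phi1 = build_phi(particles, 1)
vals = interp_nn(particles, phi1, [(0.0, 0.0), (1.0, 1.0)])
# Grid points with interpolated value >= 1/2 lie inside lithology 1.
```

In practice one of the smooth interpolants (MLS, RBF) would replace `interp_nn` so that the resulting field is suitable for marching cubes.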
2.2. Volume Reconstruction
In scenarios where the lithological interfaces are too complicated to obtain insight into the material configuration, we instead prefer to use a volume reconstruction technique. To define a volumetric representation of a material configuration described via particles, one can utilize a Voronoi diagram [Voronoi, 1907]. Given a set 𝒮 containing np point coordinates, the Voronoi diagram defines a unique partitioning of space amongst the set of points. The volume associated with a particle s ∈ 𝒮 is referred to as a Voronoi cell, V(s). Every coordinate contained within a Voronoi cell V(s) is closer, in a Euclidean sense, to particle s than to any other particle in 𝒮. Using a Voronoi diagram, the integer field representing the lithology is defined throughout the entire domain without the need for any interpolation.
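The discrete labelling this produces can be sketched by the naive approach of assigning every grid cell the lithology of its nearest particle. This O(M · np) loop is the baseline that the flood-fill AVD of section 3 is designed to avoid; the function name and 2D layout are illustrative, not from the paper.

```python
# Sketch: brute-force discrete Voronoi labelling on a structured grid.
# Each cell takes the lithology of its nearest particle.
from math import dist

def voronoi_labels(particles, liths, mx, my, lo=(0.0, 0.0), hi=(1.0, 1.0)):
    dx = (hi[0] - lo[0]) / mx
    dy = (hi[1] - lo[1]) / my
    labels = []
    for j in range(my):
        for i in range(mx):
            c = (lo[0] + (i + 0.5) * dx, lo[1] + (j + 0.5) * dy)  # cell centroid
            k = min(range(len(particles)), key=lambda p: dist(particles[p], c))
            labels.append(liths[k])
    return labels

# Two particles of different lithology: the left half of the grid is claimed
# by lithology 1, the right half by lithology 2.
labels = voronoi_labels([(0.25, 0.5), (0.75, 0.5)], [1, 2], mx=4, my=1)
```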
An example of the type of lithology map which can be obtained via the Voronoi representation is illustrated in Figure 3. Here the dashed line represents the true interface between the red and blue regions. We note that the Voronoi representation does not produce a lithology partition whose boundaries conform to the true interface. Nevertheless, for the purpose of visualization, the approximate interface location between the regions is acceptable.
 The Voronoi representation allows one to perform either a full 3D volume rendering, or to generate cross sections through the data set. The latter is particularly useful to validate numerical models as this output can be directly compared with field derived geological cross sections. Since the volume reconstruction provides a robust technique for visualizing highly deformed interfaces, we prefer to utilize this methodology for geodynamic simulations.
3. Approximate Voronoi Diagrams: From Points to Volumes
Numerous algorithms and software packages exist for constructing exact Voronoi diagrams, for example: Fortune's sweep-plane algorithm [Fortune, 1987], the incremental Bowyer-Watson algorithm [Bowyer, 1981; Watson, 1981], Triangle [Shewchuk, 2002], Qhull [Barber et al., 1996] and Voro++ [Rycroft, 2009]. From a practical point of view, generating the exact Voronoi diagram in 3D is expensive, both in terms of CPU time and memory requirements (see section 4.4 for more details). There are several downsides associated with computing the exact Voronoi diagram. These include the jump in algorithmic complexity between 2D and 3D implementations; the problem of bounding the Voronoi diagram within a finite domain; and the development of a scalable, parallel algorithm, which is required when large numbers of particles (e.g. np > 10⁶) in three dimensions are used as input. Furthermore, there exist numerous pathological point distributions (special cases) which must be correctly accounted for, thus complicating the implementation. Approximate Voronoi Diagrams (AVDs), in contrast, possess a number of attractive properties:
 1. They are performant on both CPU and GPU architectures.
 2. They are easy to implement in 3D.
3. The algorithms are fast, as most operations are Boolean and, in practice, very few distance comparisons are actually performed.
4. Bounding the Voronoi diagram in a finite domain is straightforward. All that is required is that the AVD cell structure conforms to the geometry of the bounding domain.
As we will demonstrate in section 4, for the purpose of visualizing the material configuration of 3D point data sets, using an approximate Voronoi diagram is sufficient. To be a useful tool for interpreting the output of particle-based geodynamic simulations, one of the primary requirements of the visualization methodology is that it is capable of dealing with “large” particle data sets. In the very near future, we anticipate running finite difference based, staggered grid HEL simulations employing a nodal resolution of ∼1001³, with a typical simulation employing 8 particles per control volume. Thus the present and future definition of “large” is np ∼ 10⁹–10¹¹.
To address the present and future requirements of volume reconstruction from large point cloud data sets, we have developed a new AVD algorithm (from now on referred to as pAVD) which is based on the method described in Velić et al. [2008]. pAVD generates approximate Voronoi diagrams on a structured grid consisting of M = Mx × My × Mz cells. We have tailored the data structures in pAVD to be low memory, efficient and fast in both the limiting cases when M ≫ np (highly resolved Voronoi cells) and when np ∼ M. The newly developed algorithm is detailed in Appendix A.
The pAVD algorithm is designed to run in a parallel, distributed memory computing environment using CPUs. Given m CPUs, we subdivide the AVD mesh into m = mx × my × mz subdomains. On each processor's subdomain, we construct an independent approximate Voronoi diagram, i.e. we choose the bounding box of each AVD to coincide with the boundary of each subdomain. An approximation is made in truncating the AVD across each processor's subdomain boundary; however, this approximation renders the algorithm completely local to each processor and thus no communication is necessary to compute the AVD.
To facilitate flexible post-processing we permit the AVD construction to be performed on different numbers of CPUs compared to the execution of the forward model. This is introduced by allowing AVD subdomains to be further subdivided into chunks of size M̂x × M̂y × M̂z cells, over which we again apply the pAVD algorithm. For example, consider a 1601³ finite difference grid calculation performed on 8 × 8 × 8 = 512 CPUs. During execution, the forward model contains subdomains of size 200³ cells. Suppose as a post-processing task, we require an AVD of 6400³ cells to be constructed using only 64 CPUs. This would naturally map to AVD subdomains of size 1600³ cells. In practice, processing an AVD subdomain of this size on a single CPU may not be possible. Thus we can specify that M̂x = M̂y = M̂z = 400, implying that each of the 64 CPUs would be scheduled to process 64 independent AVDs with a cell resolution of 400³.
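The bookkeeping in the example above can be checked with a short sketch; `chunks_per_cpu` is an illustrative helper of ours, not part of pAVD.

```python
# Sketch of the chunking arithmetic: a global AVD of M cells is split over
# m CPUs, and each CPU's subdomain is further divided into chunks that are
# processed as independent AVDs.

def chunks_per_cpu(M, m, chunk):
    """M, m, chunk: per-axis (x, y, z) tuples. Returns chunks scheduled per CPU."""
    n = 1
    for M_i, m_i, c_i in zip(M, m, chunk):
        sub = M_i // m_i              # subdomain extent along this axis
        assert sub % c_i == 0, "chunk size must tile the subdomain"
        n *= sub // c_i
    return n

# Example from the text: a 6400^3 AVD on 4 x 4 x 4 = 64 CPUs gives 1600^3
# subdomains; with 400^3 chunks, each CPU processes 64 independent AVDs.
n = chunks_per_cpu((6400, 6400, 6400), (4, 4, 4), (400, 400, 400))
```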
We note that efficient GPU implementations of 2D AVD flood filling algorithms exist [Rong and Tan, 2006]; however these rely on hardware-provided 2D textures. At present 3D textures are not provided on GPUs, thus currently limiting the potential of using GPUs for 3D AVD computation. The AVD algorithm developed here is specifically designed to run efficiently on a CPU. This has the advantage that the same code can be utilized “on-the-fly” during the execution of the geodynamic simulation on clusters, or as a stand-alone post-processing tool which can be executed on the same cluster, without having to rely on specialized hardware.
4.1. 3D Subduction-Collision
To demonstrate the applicability of the AVD algorithm described in Appendix A to visualize the material configuration, we consider results from a subduction simulation obtained from the HEL code, i3vis [Gerya and Yuen, 2007; Zhu et al., 2009]. The model results analyzed are part of a numerical study which examines the dynamics and evolution of a subducting slab. The setup of the model consisted of two continental blocks, initially separated by a planar transform fault. The simulation was performed in a rectangular box given by Ω ≡ [0, 1000] × [0, 328] × [0, 200] km³. The numerical grid resolution used consisted of 501 × 165 × 101 nodal points in the x, y, z directions respectively. A total of 67,865,978 Lagrangian particles were used to represent the material properties of the viscous fluid within the domain. The volumetric data representations obtained via the AVD algorithm after 21.03 Myrs of evolution are shown in Figure 5. The details of how the AVD algorithm was applied to generate these images are discussed below.
To analyze this particular data set, the AVD post-processing tool was applied in two phases. In the first phase, a coarse representation of the material configuration was constructed using the entire model space Ω (see dashed black line in Figure 5) and all the Lagrangian particles in the simulation. For this coarse representation, the AVD employed a cell resolution of 500 × 164 × 100, which is comparable to that used by the finite difference grid in the geodynamic model. The coarse representation of the material configuration is used to determine the location of features of interest in the model domain. A clipping plane through the coarse AVD representation (shown in the center of Figure 5) reveals that a region centered on the hinge is the focal point of this data set. Thus, the material configuration at the convergence zone was chosen to be examined in more detail. This region, denoted via Ω′ ≡ [295, 645] × [0, 328] × [0, 200] km³, is outlined by the solid thick yellow line in Figure 5. In the second phase of analysis, the AVD algorithm was only applied to the particles contained inside Ω′, using a cell resolution of 500 × 320 × 200. From this higher resolution representation of the material in the subdomain Ω′, a number of cross sections were generated in each coordinate direction, which are displayed along the perimeter of the image in Figure 5.
In Figure 5, the volume rendering and sections presented were visualized using the open-source software ParaView (http://www.paraview.org). Using a single CPU, visualizing and rendering a rectilinear mesh with up to 1000 × 640 × 400 elements is possible, both in terms of the required computation time and available memory. While ParaView is capable of rendering data sets in parallel, often users only require diagnostics at a given time step to either monitor the progress of the simulation, or for debugging purposes. Consequently, conducting a massive single-CPU rendering of the current time step, or a fully parallel rendering of the model output, is generally unnecessary. To address this issue, as a post-processing task we apply the AVD algorithm to the domain of interest, from which we directly extract image files which describe M*, N*, P* sections in the x, y, z directions respectively. The simplicity of the grid used in representing the AVD allows for the efficient construction of the image files in numerous formats (e.g. PPM, PostScript, JPEG, etc.). Directly extracting the images of the sections enables users to efficiently probe the material configuration without having to resort to using a GUI-based visualization tool. Such functionality is illustrated in Figure 6, where a time series obtained from the subduction study was post-processed using the AVD cross section PostScript analyzer. The lithology map used in the collision models is provided in Figure 7.
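To illustrate how simply such images can be produced from the cell-based representation, the following sketch writes one z-slice of a lithology label volume as a binary PPM file. The color table, file name and indexing convention are our own illustrative choices, not those of the tool described above.

```python
# Sketch: write one x-y cross section of a cell-based lithology volume
# directly to a binary PPM (P6) image, without any GUI-based tool.

def write_ppm_slice(labels, mx, my, mz, k, colors, fname):
    """labels: flat list, index = i + mx*(j + my*k); k: z-slice to extract."""
    with open(fname, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (mx, my))      # PPM header
        for j in range(my):
            for i in range(mx):
                r, g, b = colors[labels[i + mx * (j + my * k)]]
                f.write(bytes((r, g, b)))

colors = {0: (200, 60, 60), 1: (60, 60, 200)}  # lithology index -> RGB
labels = [0, 0, 1, 1]                           # a 2 x 2 x 1 label volume
write_ppm_slice(labels, 2, 2, 1, k=0, colors=colors, fname="slice_z0.ppm")
```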
4.2. Salt Tectonics
 In addition to using the AVD algorithm as a sequential post-processing tool, the AVD methodology described can easily be used to generate volumetric representations in parallel computing environments, either as a parallel post-processing operation, or as part of the parallel computations performed during each time step of the forward model. Here we focus on the latter form of parallel execution.
In Figure 8 we illustrate the volumetric representation which can be obtained by using the AVD in this manner. The forward model is designed to study the morphology of salt domes when subjected to surface processes such as erosion and sedimentation. Each colored layer denotes a different material lithology. The layers are initially stacked in a gravitationally unstable configuration. The grey material at the surface is a low viscosity, zero density material which mimics air. The interface between the grey material and the colored layers represents an approximation of the free surface, along which erosion and sedimentation laws can be applied. The calculation of this forward model was performed using the HEL code, LaMEM [Schmeling et al., 2008; Lechmann et al., 2011]. This parallel simulation was performed using eight CPUs. The subdomain boundaries are denoted via the transparent blue surfaces in Figure 8. In Figure 8 (right), a number of cross sections in the x–y plane are shown. No visual artifacts are apparent in the AVD constructed using bounding boxes defined via the subdomains of the parallel forward model.
4.3. Synthetic 1B Particle Test
To demonstrate the parallel visualization capabilities of pAVD, we performed a synthetic experiment using 1 billion particles. Particles were initially distributed on a regularly spaced, cell-centered lattice consisting of 1000³ voxels. The initial coordinates were randomly shifted by <10% of the cell spacing. The model domain used was Ω ≡ [0, 1]³. To simulate mixing, we defined the following velocity field
which we used to advect the particles. The function g(t) modulates the velocity field, and is defined as
where we take the period T = 1. An AVD was constructed using a cell resolution of 1600³ on 64 CPUs. Selected cross sections at time t = are shown in Figure 9. The different colors represent synthetic lithologies which were prescribed to monitor the deformation and assess any artifacts arising from truncating the AVD across processor subdomains. The synthetic lithologies consisted of two through-going slabs, one located in the x–z plane with thickness 0.2, centered at y = 0.35 (blue shades), and another in the y–z plane with thickness 0.2, centered at x = 0.35 (green shades), together with a sphere of radius 0.23, centered at (0.35, 0.35, 0.35) (pink shades). No artifacts along the three horizontal and vertical planes associated with the processor boundaries are evident in any of the cross sections shown in Figure 9. The total time required to generate this AVD (once the point coordinates were assigned to the current processor) was ∼55 seconds.
In Tables 1 and 2 we report the CPU time and memory (RAM) required by pAVD to process the subduction particle data set. These results consider the sequential execution of the pAVD algorithm used in a post-processing mode. The memory usage reported represents the peak value obtained during the execution. The results presented in Tables 1 and 2 were obtained on a Quad-Core AMD Opteron (Processor 8380) with code compiled using GCC 4.1.2 and level three optimization. The CPU time required by the AVD algorithm is a function of the number of particles and the length of the boundary associated with each approximate Voronoi cell. For a fixed number of points, we observe that increasing the AVD grid resolution increases the overall CPU time required, as the finer resolution naturally causes the length of the boundary of each Voronoi cell to increase. Accordingly, the peak memory usage is seen to increase as the Voronoi cells become more resolved.
Table 1. Performance Characteristics of the AVD Algorithm – I
AVD Resolution Mx × My × Mz
Results utilize the entire particle data set from the collision example, consisting of 67,865,978 particles.
500 × 164 × 100
500 × 164 × 200
1000 × 328 × 200
1000 × 328 × 400
2000 × 656 × 800
Table 2. Performance Characteristics of the AVD Algorithm – II
AVD Resolution Mx × My × Mz
Results utilize a subset of particles from the collision example, consisting of 24,281,221 points. See section 4.1 for details.
125 × 80 × 50
250 × 160 × 100
500 × 320 × 200
1000 × 640 × 400
2000 × 1280 × 800
We note that the performance characteristics of the AVD algorithm are sensitive to the point distribution. In our experiments, the coordinates of the particle data sets are obtained from the evolution of a divergence-free velocity field, i.e. the material is modeled as an incompressible medium. Under this assumption, we are guaranteed a certain degree of regularity in the particle packing fraction (or density of points) within any given region of the model domain.
To assess the competitiveness of the pAVD implementation, we provide CPU time and memory usage comparisons of our algorithm with several exact Voronoi diagram implementations, Voro++ [Rycroft, 2009] and Qhull [Barber et al., 1996], and the CPU-based approximate Voronoi algorithm ffbcAVD [Velić et al., 2008]. These results are summarized in Tables 3 and 4. The timings reported were performed on an 8-core Intel Xeon 2.67 GHz (Nehalem) machine with code compiled using GCC 4.4.3 and level three optimization.
Table 3. Comparison of the pAVD Algorithm With Exact Voronoi Algorithms
We report CPU time (sec) and memory usage (GB) in parentheses. The “X” denotes examples which were omitted from the study as the AVDs had fewer cells than particles.
Due to the intended use of the Voronoi diagram, in the comparison study we are primarily concerned with the performance of the algorithms when large numbers of particles are used. In the following tests, we used a domain Ω ≡ [−1, 1]³ filled with point coordinates defined using the standard C library random number generator, rand(). In comparing exact Voronoi algorithms with approximate Voronoi algorithms, we have to consider what resolution is appropriate for visualization purposes when using an AVD. This is entirely subjective, but practical experience indicates that having 10–400 AVD cells per approximate Voronoi cell is sufficient for visually satisfactory results. Accordingly, this would imply that for the cases of 10⁶ and 10⁷ particles, AVD resolutions of 400³ and 800³ respectively would be sufficient for visualization purposes. We note that for both these cases, the pAVD approach is faster than computing the exact Voronoi diagram; however, the approximate approach uses significantly more memory than Voro++. We also emphasize that the pAVD methodology can be executed in an “embarrassingly” parallel manner, enabling it to be used to process particle data sets of ever-increasing size, whereas the same is not possible for the exact Voronoi algorithms.
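The resolution heuristic above reduces to simple arithmetic, sketched below. The 10–400 cells-per-Voronoi-cell range is the rule of thumb quoted in the text; the helper name and the cube-root rounding are our own.

```python
# Sketch: choose a cubic AVD resolution giving a target number of AVD cells
# per approximate Voronoi cell (10-400 is quoted as visually adequate).

def avd_resolution(np_particles, cells_per_site):
    """Return the per-axis cell count M such that M^3 ~ np * cells_per_site."""
    target = np_particles * cells_per_site
    return round(target ** (1.0 / 3.0))

# The cases quoted in the text: a 400^3 AVD for 10^6 particles corresponds to
# 64 cells per site; 800^3 for 10^7 particles corresponds to ~51 per site.
per_site_1e6 = 400 ** 3 / 10 ** 6
per_site_1e7 = 800 ** 3 / 10 ** 7
m = avd_resolution(10 ** 6, 64)
```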
In comparing pAVD with the algorithm of Velić et al. [2008], we observe that the new algorithm uses ∼6 times less memory when np < M (the “high resolution” Voronoi cell limit), and 3.6 times less when np ∼ M (the “low resolution” Voronoi cell limit). CPU times reveal pAVD is 4.3 times faster in the high resolution limit and ∼1.6 times faster in the low resolution limit.
The advent of using large-scale, high-resolution three-dimensional hybrid particle-grid methods to study geodynamic processes is upon us. Visualizing and interpreting the three-dimensional geometry of the material configuration after severe deformation has occurred is a challenging task when utilizing particle-based methods. To address some of the visualization challenges posed by these methods, we describe how an Approximate Voronoi Diagram (AVD) can be used to reconstruct a volumetric representation of the material lithology. From the volumetric representation, we can efficiently generate a data representation of the material configuration which can be volume rendered, contoured, or from which cross sections can be extracted. The type of volumetric representations possible, and the performance characteristics of the AVD algorithm, were demonstrated by applying the technique to simulation results from models of continental collision, salt tectonics and a high-resolution synthetic model.
The AVD methodology is found to be a competitive volume reconstruction approach (with respect to CPU time) compared with exact Voronoi diagram algorithms and other CPU-based AVD implementations. To address the issue of large-scale particle visualization, we have designed and demonstrated an efficient way to process particle data in batches, which can be conducted in a massively parallel computing environment or on desktop computers.
Appendix A: AVD Algorithm
In the following we describe a procedure to compute an Approximate Voronoi Diagram (AVD). The construction of the AVD is obtained via a type of moving front, or flood fill, algorithm. We note that the algorithm described here is based upon that described in Velić et al. [2008].
We consider the construction of an AVD contained within a bounding box of size [x0, x1] × [y0, y1] × [z0, z1]. The AVD mesh we use is defined by a structured grid employing M = Mx × My × Mz cells in the x, y, z directions respectively. The cell spacing in each coordinate direction (Δx, Δy, Δz) is constant. The set of all cells will be denoted via 𝒞, and the set of particles via 𝒮. The sequence of operations required to build the AVD is defined in Algorithm 1. At every iteration, the claimed region of each particle grows, thus mimicking the propagation of a moving front (see Figure A1).
Initially, each particle s ∈ 𝒮 “claims” the cell c ∈ 𝒞 it resides within. By “claiming” cell c, we denote that c now belongs to the discrete Voronoi cell associated with particle s. Claimed cells are added to the list C. If more than one particle is detected in any given cell c, we elect to keep only the particle closest to the centroid of cell c. Only the particle closest to the centroid will participate in the AVD construction; any other points contained in c will not feature in the Voronoi diagram. In such a situation, one may elect to return an error and request that a higher resolution AVD grid be used. The final length of C defines the initial number of claimed cells Mclaimed. This procedure is referred to as AVD_INITIALIZE().
Algorithm 2 Propagate the moving front for all particles
 There are several aspects of Algorithm 2 which we further expand upon below:
 1. The operation, s ← CELLSITE(c), returns the particle s associated with cell c ∈ 𝒞.
 2. The operation, CLAIMCELL(c) ← s, associates cell c ∈ 𝒞 with the Voronoi cell of particle s ∈ 𝒮.
 3. The operation CELLNEIGHBORS(c) returns the list of neighbor cells of cell c. Neighbor cells of c ∈ 𝒞 are identified as all the cells which share a common edge (in 2D), or face (in 3D), with cell c (see Figure A2 (left)).
 4. A neighboring cell n is associated with a particle s if n has not already been claimed by another particle. However, if n has been claimed by another particle R ∈ 𝒮, then s may claim n if Δns < ΔnR, where ΔnX is the Euclidean distance between the cell centroid of n (xn) and the coordinate of particle X. The process of claiming cells is illustrated in Figure A2.
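The initialization and front-propagation steps described above can be sketched in 2D as follows. This is a minimal stand-in for Algorithms 1 and 2, assuming at most one particle per cell at initialization; the array `owner` plays the role of CELLSITE/CLAIMCELL, and the names and data layout are ours rather than those of the pAVD implementation.

```python
# Sketch: 2D flood-fill AVD on a unit square. Cells are claimed outward from
# each particle; a claimed cell changes owner only when a strictly closer
# particle reaches it (the Delta_ns < Delta_nR test in the text).
from math import dist

def avd_2d(particles, mx, my):
    dx, dy = 1.0 / mx, 1.0 / my
    centroid = lambda c: ((c % mx + 0.5) * dx, (c // mx + 0.5) * dy)
    owner = [-1] * (mx * my)                      # -1 marks an unclaimed cell
    front = []
    for s, p in enumerate(particles):             # AVD_INITIALIZE
        c = int(p[0] / dx) + mx * int(p[1] / dy)
        owner[c] = s
        front.append(c)
    while front:                                  # propagate the moving front
        new_front = []
        for c in front:
            i, j = c % mx, c // mx
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if not (0 <= ni < mx and 0 <= nj < my):
                    continue                      # bounded by the domain
                n = ni + mx * nj
                s = owner[c]
                if owner[n] == -1 or \
                   dist(centroid(n), particles[s]) < dist(centroid(n), particles[owner[n]]):
                    if owner[n] != s:
                        owner[n] = s
                        new_front.append(n)
        front = new_front
    return owner

# Two particles on a 4 x 1 grid: each claims the half of the domain nearer to it.
owner = avd_2d([(0.1, 0.5), (0.9, 0.5)], mx=4, my=1)
```

The strict inequality in the ownership test guarantees termination, since a cell's distance to its owner can only decrease and the cell set is finite.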
 The author wishes to thank Taras Gerya and Thibault Duretz for their valuable input into this work, and for providing access to the model results of the three-dimensional slab break-off study, and the code i3vis. Boris Kaus is also thanked for providing access to LaMEM and the data set used for the salt tectonics example. Reviewers D. Yuen and C. Thieulot are thanked for their comments that helped improve the quality of the original manuscript.