Volume reconstruction of point cloud data sets derived from computational geodynamic simulations

Authors


Abstract

[1] One of the most widely used numerical modeling techniques in geodynamics to study the evolution of geomaterials is the “marker-and-cell” technique. In such methods the material lithology is represented by Lagrangian particles (markers), while the continuum equations are solved on a background mesh. Significant research has been devoted to improving the efficiency and scalability of these numerical methods to enable high-resolution simulations to be performed on modest computational resources. In contrast, little attention has been given to developing visualization techniques suitable for interrogating high-resolution 3D particle data sets. We describe an efficient algorithm for performing a volume reconstruction of the lithology field defined via particles (code available upon request from the author). The algorithm generates an Approximate Voronoi Diagram (AVD) which transforms particle data sets into a cell-based, volumetric data set. The volumetric representation enables cross sections of the material configuration to be constructed efficiently and unambiguously, thereby enabling the interior material structure of the simulation results to be analyzed. Examples from geodynamic simulations are used to demonstrate the visual results possible using this visualization technique. Performance comparisons are made between existing implementations of exact and approximate Voronoi diagrams. Overall, the AVD developed herein is found to be extremely competitive as a visualization tool for massive particle data sets as it is extremely efficient, has low memory requirements and can be trivially used in a distributed memory computing environment.

1. Introduction

[2] One of the focuses of geodynamics is understanding the long time (e.g. 100 Myr) evolution of the deformation of rocks in the crust, lithosphere, and/or mantle. Traditionally, these processes have been studied via field based observations and analogue modeling. In conjunction with these approaches, the use of continuum mechanics to describe the dynamics of such materials, together with numerical methods to approximate the underlying partial differential equations, has also become a widely accepted technique within the Earth science community to study the deformation of geomaterials.

[3] To study the long time evolution, and large deformation, of geomaterials, in the geodynamics community the underlying continuum is assumed to be a very viscous, incompressible fluid (i.e. Stokes flow). In considering the evolution of brittle and visco-elastic-plastic materials over million year timescales, we invariably require numerical methods which (i) track material history subjected to large deformations, and (ii) continue to follow the material evolution post failure. To accommodate these requirements, a hybrid Eulerian-Lagrangian (HEL) methodology has been exploited by the geodynamics community [Pracht, 1971; Poliakov and Podladchikov, 1992; Fullsack, 1995; van Keken et al., 1997; Moresi et al., 2003; Gerya and Yuen, 2003; Tackley and King, 2003; Schreurs et al., 2006; Popov and Sobolev, 2008; Thieulot, 2011]. The principle of the methodology is to separate the discretization used to track the deforming material from that used for the flow variables. The velocity and pressure variables associated with the Stokes equations are discretized on an Eulerian grid, while the complete material description is defined via a set of Lagrangian particles. Attributed to each particle with coordinates xp is a material index χ, indicating the particular lithology (or composition) to which the particle belongs. We denote this via χ(xp) = 1, 2, …, NL, where NL is the maximum number of lithologies in the model. All material history variables are tracked by the Lagrangian particles. Extreme deformation of the material is trivially handled by the particulate representation adopted as there is no connectivity between the particles. Note that in this formulation, material interfaces are not explicitly represented by the particles. An illustration of the HEL methodology is shown in Figure 1.
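For concreteness, the following minimal sketch shows one possible per-particle data layout implied by this description; the struct and field names are purely illustrative and do not correspond to i3vis, LaMEM or any other particular code.

/* Hypothetical layout of a single Lagrangian marker: its coordinates x_p and
   the integer lithology (composition) index chi(x_p) in 1..NL. Real codes
   additionally carry material history variables (e.g. plastic strain).       */
typedef struct {
    double x[3];   /* particle coordinates x_p                  */
    int    chi;    /* lithology index, chi(x_p) in {1, ..., NL} */
} Marker;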

Figure 1.

Hybrid Eulerian (mesh)–Lagrangian (particle) discretization commonly used in computational geodynamics to study the dynamics of viscous fluids. (left) The physical system contains two lithologies, which here define two isotropic fluids, each with a unique viscosity and density. (right) The discrete representation of the problem specifies the material properties associated with each material lithology via a set of particles. A structured mesh is used to discretize the equations describing incompressible Stokes flow.

[4] Due to advances in both computer hardware and software, together with continued research and educational efforts, two-dimensional numerical modeling of geodynamic processes is now common practice in Earth science. Over the last 15 years, much research has focused on improving the efficiency and scalability of three-dimensional HEL methodologies. Previously, the ability to perform simulations in three dimensions was restricted to dedicated research groups with access to specialized, high performance computing (HPC) hardware. However, with the advent of affordable, high performance computer clusters, performing 3D computational geodynamic simulations has become widespread and is no longer restricted to the domain of experts, or researchers based at HPC centers.

[5] One of the crucial observables from the output of multiphase geodynamic systems is the evolution of the material configuration. Over Myr timescales, the material within the system will invariably have experienced extremely large deformation, and thus large amounts of mixing will have occurred. To illustrate the extent to which stirring and mixing occur over different length scales in geodynamic models, we refer to the simulation results in Figure 2. Inspection of Figure 2 shows that the lithological structures (denoted via different colors) which require visualization are inherently more topologically complex than scalar (or vector) fields such as temperature (or velocity). Given the HEL methods currently being adopted in geodynamics, visualization of the lithological structures mandates visualization of particle based data sets. In contrast to the research efforts devoted to improving 3D modeling capabilities, comparatively little attention in the geodynamics community has been focused on the development of visualization techniques which are appropriate for high-resolution, three-dimensional particle fields. This issue becomes more critical as the usage of three-dimensional geodynamic models continues to increase. In this paper we aim to address this shortcoming by describing a visualization technique which can transform a high-resolution, 3D particle based lithology representation into a solid volume representation.

Figure 2.

Severe deformation and mixing in two-dimensional geodynamic models of plume development from a subducting slab. The colors represent different lithologies. Images modified after (left) Gorczyk et al. [2007] and (right) Gerya [2011].

[6] The outline of the paper is as follows. In section 2 we briefly overview surface and volume reconstruction techniques which are applicable for analyzing volumetric point based data sets which define material lithology. In section 3 we discuss efficient techniques for generating Approximate Voronoi Diagrams (AVDs) to produce a volumetric representation of the material lithology. In section 4 we demonstrate the type of images one can generate using the AVD technique by analyzing the simulation results of three-dimensional models of continental collision, salt tectonics and a high resolution synthetic model utilizing one billion particles. Additionally, we profile the performance characteristics of the AVD implementation used to build the volumetric representations of the material configuration, and compare the new AVD algorithm with existing Voronoi algorithms from the literature in terms of CPU time and memory requirements. Lastly, in section 5 we provide a summary of the AVD methodology.

2. Visualization

[7] The hybrid Eulerian-Lagrangian methods currently being adopted in geodynamics require visualization techniques which can convert the lithological structures defined via np particles into a visual representation which can be easily interpreted without ambiguity. In two dimensions, this is readily achieved by creating an x, y scatterplot using the particles' positions and coloring the points according to their lithology. Using only colored points (which need not be rendered as spheres), this approach unambiguously fills the 2D plane (defining the model space) with information defining the current material configuration. For three-dimensional simulations, the scatterplot approach does not yield a meaningful representation of the material configuration. To decipher the complex geometry of the deformed material configuration in 3D, an alternative representation is required. Two possible alternatives are to apply (i) a surface reconstruction or (ii) a volume reconstruction technique to the particle data.

2.1. Surface Reconstruction

[8] In general, a surface reconstruction technique can be used to extract an isosurface (of some particular value) from a 3D point set. An isosurface can be constructed by interpolating the particle data onto a grid, and then applying a standard surface rendering technique, such as marching cubes [Lorensen and Cline, 1987]. Standard scattered data interpolation techniques such as Shepard's method [Shepard, 1968], Nearest Neighbor (NN) interpolation [Sibson, 1980], Moving Least Squares (MLS) [Lancaster and Salkauskas, 1981] or Radial Basis Functions (RBF) [Hardy, 1971] can be used to interpolate the particle field data onto the grid. Specifically, we wish to reconstruct the interface between different lithologies using the volumetric representation of the lithology provided by the particles. However, the field of interest, namely the lithology, does not represent a continuous field, thus it cannot be directly interpolated. To rectify this, we must first convert each lithology into an independent scalar field ϕi(xp), i = 1, …, NL which will be interpolated onto the grid. For each lithology i, we define an auxiliary particle field via

ϕi(xp) = 1 if χ(xp) = i, and ϕi(xp) = 0 otherwise.

By applying a scattered data interpolation to ϕi, we can obtain a scalar field 0 ≤ ϕ̂i ≤ 1 on the grid. The isosurface value ϕ̂i = 1/2 is defined as the interface of lithology i. However, employing such a surface reconstruction technique may be of limited practical use for understanding the material configuration of some lithological interfaces, due to the severity of the mixing which may have occurred. For example, in considering the structure in Figure 2 (right), surface reconstruction may be beneficial to visualize the topography, Moho, and crustal boundaries, however such a procedure would provide little insight into the nature of the sedimentary layering (light and dark brown regions), due to the extent of the stretching and shearing which has occurred in these layers.
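For illustration, the sketch below evaluates the interpolated indicator field of lithology i at a single grid node using a simple inverse-distance (Shepard-type) weighting over the particles; the weighting, the regularization constant and the function names are assumptions made for this example rather than a prescription of any particular implementation. In practice the sum would be restricted to particles within a local search radius.

/* Hedged sketch: interpolate the indicator field of lithology i onto a grid
   node xn[3] using Shepard (inverse-distance) weights over all particles.
   xp[p][3] are particle coordinates, chi[p] their lithology indices.         */
double shepard_phi(int i, const double xn[3],
                   int np, const double (*xp)[3], const int *chi)
{
    double num = 0.0, den = 0.0;
    for (int p = 0; p < np; p++) {
        double dx = xn[0] - xp[p][0];
        double dy = xn[1] - xp[p][1];
        double dz = xn[2] - xp[p][2];
        double r2 = dx*dx + dy*dy + dz*dz + 1.0e-32;  /* guard against r = 0     */
        double w  = 1.0 / r2;                         /* inverse-distance weight */
        num += w * ((chi[p] == i) ? 1.0 : 0.0);       /* phi_i(x_p)              */
        den += w;
    }
    return (den > 0.0) ? num / den : 0.0;  /* in [0,1]; the 1/2 isosurface marks the interface */
}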

2.2. Volume Reconstruction

[9] In scenarios where the lithological interfaces are too complicated to obtain insight into the material configuration, we instead prefer to use a volume reconstruction technique. To define a volumetric representation of a material configuration described via particles, one can utilize a Voronoi diagram [Voronoi, 1907]. Given a set S containing np point coordinates, the Voronoi diagram defines a unique partitioning of space amongst the set of points. The volume associated with a particle s ∈ S is referred to as a Voronoi cell, V(s). Every coordinate contained within a Voronoi cell V(s) is closer, in a Euclidean sense, to particle s than to any other particle in S. Using a Voronoi diagram, the integer field representing the lithology is defined throughout the entire domain without the need for any interpolation.
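To make the idea concrete, a brute-force realization of this partitioning simply assigns each cell of a regular sampling grid the lithology of its nearest particle, as in the sketch below (all names are illustrative). Its O(M · np) cost is exactly what the approximate construction of section 3 avoids.

#include <float.h>

/* Naive Voronoi-based lithology map: each grid cell centroid cx[c][3] is
   assigned the lithology of its closest particle. Cost is O(M * np).        */
void voronoi_lithology_map(long M, const double (*cx)[3],
                           int np, const double (*xp)[3],
                           const int *chi, int *cell_lith)
{
    for (long c = 0; c < M; c++) {
        double dmin = DBL_MAX;
        for (int p = 0; p < np; p++) {
            double dx = cx[c][0] - xp[p][0];
            double dy = cx[c][1] - xp[p][1];
            double dz = cx[c][2] - xp[p][2];
            double d2 = dx*dx + dy*dy + dz*dz;
            if (d2 < dmin) { dmin = d2; cell_lith[c] = chi[p]; }
        }
    }
}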

[10] An example of the type of lithology map which can be obtained via the Voronoi representation is illustrated in Figure 3. Here the dashed line represents the true interface between the red and blue regions. We note that the Voronoi diagram representation does not produce a lithology partition with boundaries which conform to the true interface. Nevertheless, for the purpose of visualization, the approximate interface location between the regions is acceptable.

Figure 3.

(left) Point set and their associated lithology which we differentiate by color. (right) Volume partitioning of the point set via an exact Voronoi diagram. Voronoi cells are colored according to the lithology they represent. The dashed line represents the true boundary between the “red” and “blue” material lithologies.

[11] The Voronoi representation allows one to perform either a full 3D volume rendering, or to generate cross sections through the data set. The latter is particularly useful to validate numerical models as this output can be directly compared with field derived geological cross sections. Since the volume reconstruction provides a robust technique for visualizing highly deformed interfaces, we prefer to utilize this methodology for geodynamic simulations.

3. Approximate Voronoi Diagrams: From Points to Volumes

[12] Numerous algorithms and software exist for constructing exact Voronoi diagrams, for example: Fortune's sweep-plane algorithm [Fortune, 1987], the incremental Bowyer-Watson algorithm [Bowyer, 1981; Watson, 1981], Triangle [Shewchuk, 2002], Qhull [Barber et al., 1996] and Voro++ [Rycroft, 2009]. From a practical point of view, generating the exact Voronoi diagram in 3D is expensive, both in terms of CPU time and memory requirements (see section 4.4 for more details). There are several downsides associated with computing the exact Voronoi diagram. These include the jump in algorithmic complexity between 2D and 3D implementations, the problem of bounding the Voronoi diagram within a finite domain, and the development of a scalable, parallel algorithm, which is required when large numbers of particles (e.g. np > 10^6) in three dimensions are used as input. Furthermore, there exist numerous pathological point distributions (special cases) which must be correctly accounted for, thus complicating the implementation.

[13] To alleviate the practical and technical implementation challenges exact Voronoi diagrams pose, numerous algorithms to compute Approximate Voronoi Diagrams (AVDs) have been developed [Lavender et al., 1992; Vleugels and Overmars, 1995; Vleugels et al., 1996; Teichmann and Teller, 1997; Hoff et al., 2000; Rong and Tan, 2006; Velić et al., 2008]. Examples of approximate Voronoi diagrams computed using different grid resolutions are shown in Figure 4. AVD algorithms possess a number of desirable features:

Figure 4.

Illustration of the AVD construction with 20 particles, using a sequence of successively refined, structured grids. The AVD cell resolution used in each example was (from left to right) 25 × 25, 50 × 50, 100 × 100 and 200 × 200.

[14] 1. They are performant on both CPU and GPU architectures.

[15] 2. They are easy to implement in 3D.

[16] 3. The algorithms are fast as most operations are boolean and in practice, very few distance comparisons are actually performed.

[17] 4. Bounding the Voronoi diagram in a finite domain is straightforward. All that is required is that the AVD cell structure used conforms to the geometry of the bounding domain.

[18] As we will demonstrate in section 4, for the purpose of visualizing the material configuration of 3D point data sets, using an approximate Voronoi diagram is sufficient. To be a useful tool for interpreting the output of particle based geodynamic simulations, one of the primary requirements of the visualization methodology is that it is capable of dealing with “large” particle data sets. In the very near future, we anticipate running finite difference based, staggered grid HEL simulations employing a nodal resolution of ∼1001^3, with a typical simulation employing 8 particles per control volume. Thus the present and future definition of “large” is np ∼ O(10^9 − 10^11).

[19] To address the present and future requirements of volume reconstruction from large point cloud data sets, we have developed a new AVD algorithm (from now on referred to as pAVD) which is based on the method described in Velić et al. [2008]. pAVD generates approximate Voronoi diagrams on a structured grid consisting of M = Mx × My × Mz cells. We have tailored the data structures in pAVD to be low memory, efficient and fast in both of the limiting cases, namely when M ≫ np (highly resolved Voronoi cells) and when np ≫ M. The newly developed algorithm is detailed in Appendix A.

[20] The pAVD algorithm is designed to run in a parallel, distributed memory computing environment using CPUs. Given m CPUs, we subdivide the AVD mesh into m = mx × my × mz subdomains. On each processor's subdomain, we construct an independent approximate Voronoi diagram, i.e. we choose the bounding box of each AVD to coincide with the boundary of each subdomain. An approximation is made in truncating the AVD across each processor's subdomain boundary, however, this approximation renders the algorithm completely local to each processor and thus no communication is necessary to compute the AVD.

[21] To facilitate flexible post-processing, we permit the AVD construction to be performed on a different number of CPUs compared to the execution of the forward model. This is introduced by allowing AVD subdomains to be further subdivided into chunks of size M′x × M′y × M′z, over which we again apply the pAVD algorithm. For example, consider a 1601^3 finite difference grid calculation performed on 8 × 8 × 8 = 512 CPUs. During execution, the forward model contains subdomains of size 200^3 cells. Suppose as a post-processing task, we require an AVD of 6400^3 cells to be constructed using only 64 CPUs. This would naturally map to AVD subdomains of size 1600^3 cells. In practice, processing this size AVD subdomain on a single CPU may not be possible. Thus we can specify that M′x = M′y = M′z = 400, implying that each of the 64 CPUs would be scheduled to process 64 independent AVDs with a cell resolution of 400^3.
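The bookkeeping behind this example is simple integer arithmetic; the sketch below reproduces the numbers quoted above (the variable names are illustrative).

#include <stdio.h>

/* Worked example of the subdomain/chunk decomposition described above. */
int main(void)
{
    int Mglobal = 6400;          /* requested AVD cells per dimension          */
    int ncpu_per_dim = 4;        /* 4 x 4 x 4 = 64 CPUs                        */
    int Msub = Mglobal / ncpu_per_dim;        /* 1600 cells per subdomain side */
    int Mchunk = 400;            /* user-selected chunk size per dimension     */
    int nchunk_per_dim = Msub / Mchunk;       /* 4 chunks per dimension        */
    int nchunks = nchunk_per_dim * nchunk_per_dim * nchunk_per_dim;  /* 64     */

    printf("subdomain: %d^3 cells, %d chunks of %d^3 cells per CPU\n",
           Msub, nchunks, Mchunk);
    return 0;
}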

[22] We note that efficient GPU implementations of 2D AVD flood filling algorithms exist [Rong and Tan, 2006], however these rely on hardware provided 2D textures. At present, 3D textures are not provided on GPUs, thus currently limiting the potential of using GPUs for 3D AVD computation. The AVD algorithm developed here is specifically designed to run efficiently on a CPU. This has the advantage that the same code can be utilized “on-the-fly” during the execution of the geodynamic simulation on clusters, or as a stand-alone post-processing tool which can be executed on the same cluster - without having to rely on specialized hardware.

4. Examples

4.1. 3D Subduction-Collision

[23] To demonstrate the applicability of the AVD algorithm described in Appendix A to visualize the material configuration, we consider results from a subduction simulation obtained from the HEL code i3vis [Gerya and Yuen, 2007; Zhu et al., 2009]. The model results analyzed are part of a numerical study which examines the dynamics and evolution of a subducting slab. The setup of the model consisted of two continental blocks, initially separated by a planar transform fault. The simulation was performed in a rectangular box given by Ω ≡ [0, 1000] × [0, 328] × [0, 200] km^3. The numerical grid resolution used consisted of 501 × 165 × 101 nodal points in the x, y, z directions respectively. A total of 67,865,978 Lagrangian particles were used to represent the material properties of the viscous fluid within the domain. The volumetric data representations obtained via the AVD algorithm after 21.03 Myr of evolution are shown in Figure 5. The details of how the AVD algorithm was applied to generate these images are discussed below.

Figure 5.

Volumetric representation obtained via the Approximate Voronoi Diagram method when applied to analyze particle data from a three-dimensional subduction simulation. The colors define the different rock lithologies present within the simulation. See Figure 7 for the lithology legend. The entire model domain used in the simulation is denoted via the broken black line. The cross sections shown relate to the volume Ω′, whose bounding box is denoted via the thick solid yellow line. Along each coordinate direction x, y, z, ten cross sections are shown. These sections were sampled evenly over each side length of the region Ω′.

[24] To analyze this particular data set, the AVD post-processing tool was applied in two phases. In the first phase, a coarse representation of the material configuration was constructed using the entire model space Ω (see the dashed black line in Figure 5) and all the Lagrangian particles in the simulation. For this coarse representation, the AVD employed a cell resolution of 500 × 164 × 100, which is comparable to that used by the finite difference grid in the geodynamic model. The coarse representation of the material configuration is used to determine the location of features of interest in the model domain. A clipping plane through the coarse AVD representation (shown in the center of Figure 5) reveals that a region centered on the hinge is the focal point of this data set. Thus, the material configuration at the convergence zone was chosen to be examined in more detail. This region, denoted via Ω′ ≡ [295, 645] × [0, 328] × [0, 200] km^3, is outlined by the solid thick yellow line in Figure 5. In the second phase of the analysis, the AVD algorithm was applied only to the particles contained inside Ω′, using a cell resolution of 500 × 320 × 200. From this higher resolution representation of the material in the subdomain Ω′, a number of cross sections were generated in each coordinate direction, which are displayed along the perimeter of the image in Figure 5.

[25] In Figure 5, the volume rendering and sections presented were visualized using the open-source software ParaView (http://www.paraview.org). Using a single CPU, visualizing and rendering a rectilinear mesh with up to 1000 × 640 × 400 elements is possible - both in terms of the required computation time and available memory. While ParaView is capable of rendering data sets in parallel, often users only require diagnostics at a given time step to either monitor the progress of the simulation, or for debugging purposes. Consequently, conducting a massive single CPU rendering of the current time step, or a fully parallel rendering of the model output, is generally unnecessary. To address this issue, as a post-processing task we apply the AVD algorithm to the domain of interest, from which we directly extract image files which describe M*, N*, P* sections in the x, y, z directions respectively. The simplicity of the grid used in representing the AVD allows for the efficient construction of the image files in numerous formats (e.g. PPM, PostScript, JPEG, etc.). Directly extracting the images of the sections enables users to efficiently probe the material configuration without having to resort to using a GUI based visualization tool. Such functionality is illustrated in Figure 6, where a time series obtained from the subduction study was post-processed using the AVD cross section PostScript analyzer. The lithology map used in the collision models is provided in Figure 7.
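To illustrate how little machinery such direct image extraction requires, the sketch below writes a single xy section of an AVD cell-lithology array to a binary PPM file using a user-supplied color palette; the function name, array layout and palette convention are assumptions for this example and not the actual output routine of the code described here.

#include <stdio.h>

/* Hedged sketch: dump the AVD slice k = kslice of an Mx x My x Mz lithology
   array (stored as cell_lith[i + j*Mx + k*Mx*My]) as a binary PPM image.
   palette[l] holds the RGB color assigned to lithology index l.              */
int write_ppm_slice(const char *fname, const int *cell_lith,
                    int Mx, int My, int kslice,
                    const unsigned char palette[][3], int nlith)
{
    FILE *fp = fopen(fname, "wb");
    if (!fp) return -1;
    fprintf(fp, "P6\n%d %d\n255\n", Mx, My);
    for (int j = My - 1; j >= 0; j--) {                 /* write rows top-down  */
        for (int i = 0; i < Mx; i++) {
            int l = cell_lith[i + j*Mx + (long)kslice*Mx*My];
            if (l < 0 || l >= nlith) l = 0;             /* clamp unknown values */
            fwrite(palette[l], 1, 3, fp);
        }
    }
    fclose(fp);
    return 0;
}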

Figure 6.

Simulation results from the i3vis subduction study. Snapshots (columns) in time of (rows) different xy sections. Cross sections were obtained directly from the AVD data structure without using sophisticated visualization software.

Figure 7.

Definition of the lithologies used in the continental collision simulation. Repeated names correspond to materials with identical rheological behavior, but with different material parameters.

4.2. Salt Tectonics

[26] In addition to using the AVD algorithm as a sequential post-processing tool, the AVD methodology described can easily be used to generate volumetric representations in parallel computing environments, either as a parallel post-processing operation, or as part of the parallel computations performed during each time step of the forward model. Here we focus on the latter form of parallel execution.

[27] In Figure 8 we illustrate the volumetric representation which can be obtained by using the AVD in this manner. The forward model is designed to study the morphology of salt domes when subjected to surface processes such as erosion and sedimentation. Each colored layer denotes a different material lithology. The layers are initially stacked in a gravitationally unstable configuration. The grey material at the surface is a low viscosity, zero density material which mimics air. The interface between the grey material and the colored layers represents an approximation of the free surface, along which erosion and sedimentation laws can be applied. The calculation of this forward model was performed using the HEL code LaMEM [Schmeling et al., 2008; Lechmann et al., 2011]. This parallel simulation was performed using eight CPUs. The subdomain boundaries are denoted via the transparent blue surfaces in Figure 8. In Figure 8 (right), a number of cross sections in the xy plane are shown. No visual artifacts are apparent in the AVD constructed using bounding boxes defined via the subdomains of the parallel forward model.

Figure 8.

Salt tectonics simulation computed by LaMEM. (left) Volumetric representation of the different material layers obtained using the AVD algorithm in parallel. The boundaries of each processor's subdomain are denoted by the transparent blue planes. (right) Horizontal cross sections taken at different depths. No visual artifacts in the volumetric map along the processor subdomain boundaries can be observed. The different colors denote material lithology, which here includes: salt (red), overburden (blues and yellows) and the air layer (grey).

4.3. Synthetic 1B Particle Test

[28] To demonstrate the parallel visualization capabilities of pAVD, we performed a synthetic experiment using 1 billion particles. Particles were initially distributed on a regularly spaced, cell centered lattice consisting of 1000^3 voxels. The initial coordinates were randomly shifted by <10% of the cell spacing. The model domain used was Ω ≡ [0, 1]^3. To simulate mixing, we defined the following velocity field

display math

which we used to advect the particles. The function g(t) modulates the velocity field, and is defined as

display math

where we take the period T = 1. An AVD was constructed using a cell resolution of 1600^3 on 64 CPUs. Selected cross sections at time t = inline image are shown in Figure 9. The different colors represent synthetic lithologies which were prescribed to monitor the deformation and to assess any artifacts which arise from truncating the AVD across processor subdomains. The synthetic lithologies consisted of two throughgoing slabs: one located in the xz plane with thickness 0.2, centered at y = 0.35 (blue shades); another in the yz plane with thickness 0.2, centered at x = 0.35 (green shades); and a sphere of radius 0.23, centered at (0.35, 0.35, 0.35) (pink shades). No artifacts along the three horizontal and vertical planes associated with the processor boundaries are evident in any of the cross sections shown in Figure 9. The total time required to generate this AVD (once the point coordinates were assigned to the current processor) was ∼55 seconds.
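Purely for illustration of the advection step, the sketch below moves the particles with a forward-Euler update through a generic divergence-free (ABC-type) cellular flow modulated by a periodic function g(t), with coordinates wrapped back into the unit cube; both the stand-in flow and the assumed form of g(t) are illustrative substitutes and are not the field defined above or used to produce Figure 9.

#include <math.h>

#define PI 3.14159265358979323846

/* Illustrative sketch only: the velocity field and modulation g(t) used for
   Figure 9 are not reproduced here. As a stand-in we advect the particles
   through a divergence-free ABC-type flow, modulated by a periodic g(t) with
   period T, and wrap the coordinates back into the unit cube.                */
static double g_mod(double t, double T) { return cos(2.0 * PI * t / T); }  /* assumed form */

void advect(int np, double (*xp)[3], double t, double dt, double T)
{
    double g = g_mod(t, T);
    for (int p = 0; p < np; p++) {
        double x = xp[p][0], y = xp[p][1], z = xp[p][2];
        double vx = sin(2.0*PI*z) + cos(2.0*PI*y);   /* ABC flow (A=B=C=1):  */
        double vy = sin(2.0*PI*x) + cos(2.0*PI*z);   /* divergence-free, 3D  */
        double vz = sin(2.0*PI*y) + cos(2.0*PI*x);
        xp[p][0] = fmod(x + dt*g*vx + 1.0, 1.0);     /* periodic wrap into   */
        xp[p][1] = fmod(y + dt*g*vy + 1.0, 1.0);     /* the unit cube        */
        xp[p][2] = fmod(z + dt*g*vz + 1.0, 1.0);
    }
}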

Figure 9.

Synthetic test utilizing 1 billion particles. Cross sections shown (from left to right) are taken at z = 0.375, y = 0.625 and x = 0.625 respectively.

4.4. Performance

[29] In Tables 1 and 2 we report the CPU time and memory (RAM) required by pAVD to process the subduction particle data set. These results consider the sequential execution of the pAVD algorithm used in a post-processing mode. The memory usage reported represents the peak value obtained during the execution. The results presented in Tables 1 and 2 were obtained on a Quad-Core AMD Opteron (Processor 8380) with code compiled using GCC 4.1.2 and level three optimization. The CPU time required by the AVD algorithm is a function of the number of particles and the length of the boundary associated with each approximate Voronoi cell. For a fixed number of points, we observe that increasing the AVD grid resolution increases the overall CPU time required, as the finer resolution naturally causes the length of the boundary of each Voronoi cell (measured in AVD cells) to increase. Accordingly, the peak memory usage is seen to increase as the Voronoi cells become more resolved.

Table 1. Performance Characteristics of the AVD Algorithm–I^a

AVD Resolution (Mx × My × Mz) | Time (sec) | Mem. (GB)
500 × 164 × 100   | 8.42e+00 | 2.9
500 × 164 × 200   | 1.80e+01 | 3.0
1000 × 328 × 200  | 6.67e+01 | 3.8
1000 × 328 × 400  | 1.36e+02 | 5.7
2000 × 656 × 800  | 8.56e+02 | 29.1

^a Results utilize the entire particle data set from the collision example, consisting of 67,865,978 particles.
Table 2. Performance Characteristics of the AVD Algorithm–II^a

AVD Resolution (Mx × My × Mz) | Time (sec) | Mem. (GB)
125 × 80 × 50     | <1.0e-04 | 0.98
250 × 160 × 100   | 3.44e+00 | 1.04
500 × 320 × 200   | 2.83e+01 | 1.57
1000 × 640 × 400  | 1.92e+02 | 7.50
2000 × 1280 × 800 | 1.18e+03 | 45.08

^a Results utilize a subset of particles from the collision example, consisting of 24,281,221 points. See section 4.1 for details.

[30] We note that the performance characteristics of the AVD algorithm are sensitive to the point distribution. In our experiments, the coordinates of the particle data sets are obtained from the evolution of a divergence-free velocity field - i.e. the material is modeled as an incompressible medium. Under this assumption, we are guaranteed a certain degree of regularity in the particle packing fraction (or density of points) within any given region of the model domain.

[31] To assess the competitiveness of the pAVD implementation, we provide CPU time and memory usage comparisons of our algorithm with several exact Voronoi diagram implementations, Voro++ [Rycroft, 2009] and Qhull [Barber et al., 1996], and with the CPU based approximate Voronoi algorithm ffbcAVD [Velić et al., 2008]. These results are summarized in Tables 3 and 4. The timings reported were performed on an 8-core Intel Xeon 2.67GHz (Nehalem) machine with code compiled using GCC 4.4.3 and level three optimization.

Table 3. Comparison of the pAVD Algorithm With Exact Voronoi Algorithms

Method       | AVD Resolution | np   | Time (sec) | Mem. (GB)
Qhull–2011.2 | –              | 10^6 | 4.98e+01   | 2.00
Qhull–2011.2 | –              | 10^7 | 5.45e+02   | 20.00
Voro++–0.4.4 | –              | 10^6 | 3.89e+01   | 0.04
Voro++–0.4.4 | –              | 10^7 | 1.38e+03   | 0.56
pAVD         | 400^3          | 10^6 | 1.52e+01   | 0.91
pAVD         | 800^3          | 10^6 | 8.44e+01   | 5.61
pAVD         | 1200^3         | 10^6 | 3.78e+02   | 17.19
pAVD         | 400^3          | 10^7 | 3.03e+01   | 1.40
pAVD         | 800^3          | 10^7 | 1.46e+02   | 7.62
pAVD         | 1200^3         | 10^7 | 2.19e+03   | 21.23
Table 4. Comparison of the pAVD Algorithm With the AVD Algorithm of Velić et al. [2008] (ffbcAVD)^a

Method / AVD Resolution | np = 10^1        | np = 10^4        | np = 10^7
ffbcAVD
100^3                   | 1.01e-01 (0.051) | 3.39e-01 (0.064) | X
200^3                   | 1.19e+00 (0.40)  | 2.01e+00 (0.43)  | X
400^3                   | 1.35e+01 (3.13)  | 1.39e+01 (3.22)  | 8.38e+01 (7.74)
800^3                   | 1.88e+02 (24.78) | 1.17e+02 (25.15) | 2.27e+02 (27.58)
pAVD
100^3                   | 4.21e-02 (0.009) | 1.75e-01 (0.014) | X
200^3                   | 4.40e-01 (0.067) | 1.03e+00 (0.09)  | X
400^3                   | 4.53e+00 (0.53)  | 6.72e+00 (0.60)  | 3.03e+01 (1.40)
800^3                   | 4.30e+01 (4.15)  | 6.74e+01 (4.43)  | 1.46e+02 (7.62)
1200^3                  | 1.54e+02 (13.95) | 1.85e+02 (14.59) | 2.19e+03 (21.23)

^a We report CPU time (sec) and memory usage (GB) in parentheses. The “X” denotes examples which were omitted from the study as the AVDs had fewer cells than particles.

[32] Due to the intended use of the Voronoi diagram, in the comparison study we are primarily concerned with the performance of the algorithms when large numbers of particles are used. In the following tests, we used a domain Ω ≡ [−1, 1]^3 filled with point coordinates defined using the standard C library random number generator, rand(). In comparing exact Voronoi algorithms with approximate Voronoi algorithms, we have to consider what resolution is appropriate for visualization purposes when using an AVD. This is entirely subjective, but practical experience indicates that having 10–400 AVD cells per approximate Voronoi cell is sufficient for visually satisfactory results. Accordingly, this would imply that for the case of 10^6 and 10^7 particles, an AVD resolution of 400^3 and 800^3 respectively would be sufficient for visualization purposes. We note that for both these cases, the pAVD approach is faster than computing the exact Voronoi diagram, however the approximate approach uses significantly more memory than Voro++. We also emphasize that the pAVD methodology can be executed in an “embarrassingly” parallel manner, enabling it to be used to process ever-increasing particle data sets - whereas the same is not possible for the exact Voronoi algorithms.
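As a back-of-the-envelope check, the quoted resolutions correspond to roughly 400^3/10^6 ≈ 64 and 800^3/10^7 ≈ 51 AVD cells per approximate Voronoi cell, well inside the 10–400 range. A small sketch converting a particle count and a target cells-per-Voronoi-cell ratio into a per-dimension AVD resolution (the rounding choice is arbitrary) follows.

#include <math.h>
#include <stdio.h>

/* Rule-of-thumb AVD sizing: request 'cells_per_voronoi' AVD cells for each
   approximate Voronoi cell, i.e. M = np * cells_per_voronoi cells in total.  */
long avd_resolution_per_dim(double np, double cells_per_voronoi)
{
    return lround(cbrt(np * cells_per_voronoi));
}

int main(void)
{
    printf("np = 1e6, 64 cells per Voronoi cell -> %ld cells per dimension\n",
           avd_resolution_per_dim(1.0e6, 64.0));    /* 400 */
    printf("np = 1e7, 51.2 cells per Voronoi cell -> %ld cells per dimension\n",
           avd_resolution_per_dim(1.0e7, 51.2));    /* 800 */
    return 0;
}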

[33] In comparing pAVD with the algorithm of Velić et al. [2008], we observe that the new algorithm uses ∼6 times less memory when np < M (the “high resolution” Voronoi cell limit), and 3.6 times less when np ≫ M (the “low resolution” Voronoi cell limit). The CPU times reveal that pAVD is 4.3 times faster in the “high resolution” limit and ∼1.6 times faster in the “low resolution” limit.

5. Summary

[34] The era of large scale, high resolution, three-dimensional hybrid particle-grid based methods for studying geodynamic processes is upon us. Visualizing and interpreting the three-dimensional geometry of the material configuration after severe deformation has occurred is a challenging task when utilizing particle based methods. To address some of the visualization challenges posed by these methods, we have described how an Approximate Voronoi Diagram (AVD) can be used to reconstruct a volumetric representation of the material lithology. From the volumetric representation, we can efficiently generate a data representation of the material configuration which can be volume rendered, contoured, or from which cross sections can be extracted. The type of volumetric representations possible, and the performance characteristics of the AVD algorithm, were demonstrated by applying the technique to simulation results from models of continental collision, salt tectonics and a high resolution synthetic model.

[35] The AVD methodology is found to be a competitive volume reconstruction approach (with respect to CPU time) compared with exact Voronoi diagrams or other CPU based AVD implementations. To address the issue of large scale particle visualization, we have designed and demonstrated an efficient way to process particle data in batches, which can be conducted in a massively parallel computing environment, or on desktop computers.

Appendix A: AVD Algorithm

[36] In the following we describe a procedure to compute an Approximate Voronoi Diagram (AVD). The construction of the AVD is obtained via a type of moving front, or flood fill algorithm. We note that the algorithm described here is based upon that described in Velić et al. [2008].

[37] We consider the construction of an AVD contained within a bounding box of size [x0, x1] × [y0, y1] × [z0, z1]. The AVD mesh we use is defined by a structured grid employing M = Mx × My × Mz cells in the x, y, z directions respectively. The cell spacing in each coordinate direction (Δx, Δy, Δz) is constant. The set of all cells will be denoted via C. The set of particles is denoted via S. The sequence of operations required to build the AVD is defined in Algorithm 1. At every iteration, the claimed region of each particle grows, thus mimicking the propagation of a moving front (see Figure A1).

Figure A1.

The moving front observed in constructing the AVD. The particles are denoted by the red circles. The white regions represent unclaimed cells. Shown are the cells claimed at several iterations (increasing from left to right) during the AVD algorithm.

[38] Algorithm 1 Construct Approximate Voronoi Diagram

1: procedure AVD_CONSTRUCT(C, S)
2:   Mclaimed := 0
3:   AVD_INITIALIZE(C, S, LC, Mclaimed)
4:   while Mclaimed < M do
5:     AVD_UPDATEANDCLAIMCELLS(LC, C, S, Mclaimed)
6:   end while
7: end procedure

[39] Initially, each particle s ∈ S “claims” the cell c ∈ C it resides within. By “claiming” cell c, we denote that c now belongs to the discrete Voronoi cell associated with particle s. Claimed cells are added to the list LC. If more than one particle is detected in any given cell c, we elect to keep only the particle closest to the centroid of cell c. Only the particle closest to the centroid will participate in the AVD construction - any other points contained in c will not feature in the Voronoi diagram. Under such a situation, one may elect to return an error and request that a higher resolution AVD grid be used. The final length of LC defines the initial number of claimed cells, Mclaimed. This procedure is referred to as AVD_INITIALIZE().
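A minimal sketch of AVD_INITIALIZE for a uniform grid over the bounding box, applying the centroid-distance rule described above when a cell contains more than one particle, is given below; the array layouts and names are illustrative and not those of the actual implementation.

/* Hedged sketch of AVD_INITIALIZE: each particle claims the AVD cell it lies
   in. cell_site[c] holds the index of the claiming particle (-1 = unclaimed).
   If several particles fall in one cell, the one closest to the cell centroid
   keeps it. Returns the initial number of claimed cells, Mclaimed.           */
long avd_initialize(int Mx, int My, int Mz,
                    const double origin[3], const double dx[3],
                    long np, const double (*xp)[3],
                    long *cell_site, long *claimed_list)
{
    long Mclaimed = 0;
    for (long c = 0; c < (long)Mx * My * Mz; c++) cell_site[c] = -1;

    for (long p = 0; p < np; p++) {
        int i = (int)((xp[p][0] - origin[0]) / dx[0]);
        int j = (int)((xp[p][1] - origin[1]) / dx[1]);
        int k = (int)((xp[p][2] - origin[2]) / dx[2]);
        if (i < 0 || i >= Mx || j < 0 || j >= My || k < 0 || k >= Mz) continue;
        long c = i + (long)j * Mx + (long)k * Mx * My;

        if (cell_site[c] == -1) {              /* cell previously unclaimed    */
            cell_site[c] = p;
            claimed_list[Mclaimed++] = c;
        } else {                               /* duplicate: keep the particle */
            long q = cell_site[c];             /* closest to the cell centroid */
            double cc[3] = { origin[0] + (i + 0.5) * dx[0],
                             origin[1] + (j + 0.5) * dx[1],
                             origin[2] + (k + 0.5) * dx[2] };
            double dp = 0.0, dq = 0.0;
            for (int d = 0; d < 3; d++) {
                dp += (xp[p][d] - cc[d]) * (xp[p][d] - cc[d]);
                dq += (xp[q][d] - cc[d]) * (xp[q][d] - cc[d]);
            }
            if (dp < dq) cell_site[c] = p;
        }
    }
    return Mclaimed;
}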

[40] Algorithm 2 Propagate the moving front for all particles

1: procedure AVD_UPDATEANDCLAIMCELLS(LC, C, S, Mclaimed)
2:   Lt := NULL
3:   for all c ∈ LC do
4:     s ← CELLSITE(c)
5:     Ln ← CELLNEIGHBORS(c)
6:     for all n ∈ Ln do
7:       R ← CELLSITE(n)
8:       if R = UNCLAIMED then
9:         Insert n into Lt
10:        CLAIMCELL(n) ← s
11:        Mclaimed ← Mclaimed + 1
12:      else if R ≠ s then
13:        ΔnR = ‖x̄n − xR‖
14:        Δns = ‖x̄n − xs‖
15:        if Δns < ΔnR then
16:          Insert n into Lt
17:          CLAIMCELL(n) ← s
18:        end if
19:      end if
20:    end for
21:  end for
22:  LC := Lt
23: end procedure

[64] There are several aspects of Algorithm 2 which we further expand upon below:

[65] 1. The operation, s ← CELLSITE(c), returns the particle s associated with cell c ∈ C.

[66] 2. The operation, CLAIMCELL(c) ← s, associates cell c ∈ C with the Voronoi cell of particle s ∈ S.

[67] 3. The operation CELLNEIGHBORS(c) returns the list of neighbor cells of cell c. Neighbor cells of c ∈ C are identified as all the cells which share a common edge (in 2D), or face (in 3D), with cell c (see Figure A2 (left)).

Figure A2.

(left) Neighbors of cell c (shown in grey) used by the AVD algorithm. (right) Growth of the Voronoi cell associated with point s ∈ S, and the interaction with a neighbor particle R ∈ S. Cell n ∈ C was previously claimed by R, but it also appears in the new neighbor list of c ∈ C. A distance comparison is required to determine which particle (s or R) will claim n. The thick dotted line denotes all the neighbor cells which particle s will attempt to claim.

[68] 4. Neighboring cells n are associated with a particle s if n has not already been claimed by another particle. However, if n has been claimed by another particle R ∈ S, then s may claim n if Δns < ΔnR, where ΔnX is the Euclidean distance between the cell centroid of n (x̄n) and the coordinate of particle X. The process of claiming cells is illustrated in Figure A2; a compact sketch of this claiming rule is given below.
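The following compact sketch implements only this claiming decision for a single neighbor cell n; list insertion and the Mclaimed counter of Algorithm 2 are left to the caller, and all names are illustrative.

#include <stdbool.h>

/* Hedged sketch of the claiming rule of Algorithm 2: particle s claims the
   neighbor cell n if n is unclaimed, or if s is closer to the centroid of n
   than the particle R that currently owns n. Returns true if s claims n.    */
bool try_claim(long s, long n, long *cell_site,
               const double (*xp)[3], const double cell_centroid[3])
{
    long R = cell_site[n];
    if (R == -1) { cell_site[n] = s; return true; }   /* UNCLAIMED       */
    if (R == s)  return false;                        /* already ours     */

    double ds = 0.0, dR = 0.0;
    for (int d = 0; d < 3; d++) {
        double es = cell_centroid[d] - xp[s][d];
        double eR = cell_centroid[d] - xp[R][d];
        ds += es * es;
        dR += eR * eR;
    }
    if (ds < dR) { cell_site[n] = s; return true; }   /* s is closer: claim n */
    return false;
}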

Acknowledgments

[69] The author wishes to thank Taras Gerya and Thibault Duretz for their valuable input into this work, and for providing access to the model results of the three-dimensional slab break-off study, and the code i3vis. Boris Kaus is also thanked for providing access to LaMEM and the data set used for the salt tectonics example. Reviewers D. Yuen and C. Thieulot are thanked for their comments that helped improve the quality of the original manuscript.
