DynEarthSol2D: An efficient unstructured finite element method to study long-term tectonic deformation

Authors

  • E. Choi (corresponding author)
    1. Institute for Geophysics, Jackson School of Geosciences, University of Texas, Austin, Texas, USA
    • Now at Center for Earthquake Research and Information, University of Memphis, Memphis, Tennessee, USA
  • E. Tan
    1. Institute of Earth Sciences, Academia Sinica, Taipei, Taiwan
  • L. L. Lavier
    1. Institute for Geophysics, Jackson School of Geosciences, University of Texas, Austin, Texas, USA
    2. Department of Geological Sciences, Jackson School of Geosciences, University of Texas, Austin, Texas, USA
  • V. M. Calo
    1. King Abdullah University of Science and Technology (KAUST), Center for Numerical Porous Media, Applied Mathematics & Computational Science, and Earth Science & Engineering, Thuwal, Saudi Arabia

Corresponding author: E. Choi, Center for Earthquake Research and Information, University of Memphis, 3890 Central Ave., Memphis, TN 38152, USA. E-mail: echoi2@memphis.edu

Abstract

[1] Many tectonic problems require treating the lithosphere as a compressible elastic material, which can also flow viscously or break in a brittle fashion depending on the applied stress level and the temperature conditions. We present a flexible methodology to address the resulting complex material response, which imposes severe challenges on the discretization and rheological models used. This robust, adaptive, two-dimensional, finite element method solves the momentum balance and the heat equation in Lagrangian form using unstructured meshes. An implementation of this methodology, named DynEarthSol2D, is released to the public with the publication of this paper (available at http://bitbucket.org/tan2/dynearthsol2). The solver uses contingent mesh adaptivity in places where shear strain is focused (localization) and a conservative mapping assisted by marker particles to preserve phase and facies boundaries during remeshing. We detail the solver and verify it in a number of benchmark problems against analytic and numerical solutions from the literature. These results allow us to verify and validate our software framework and show that its performance is improved by an order of magnitude compared with an earlier implementation of the Fast Lagrangian Analysis of Continua algorithm.

1 Introduction

1.1 Overview of Numerical Techniques for Long-Term Tectonic Modeling

[2] Numerical simulation of the long-term (>10,000 years) evolution of geological structures at the crustal to lithospheric scales is denoted long-term tectonic modeling (LTM). Challenges arise in LTM because the geological structures of interest involve localized deformation at plate boundaries, such as narrow fault and shear zones (e.g., ≤1 km), within a domain of a much larger scale (e.g., ≥100 km). To mechanically describe materials exhibiting strain localization, nonlinear rheologies are commonly used, which include power-law viscosity [e.g., Montési, 2003; Behn et al., 2002], damage [e.g., Lyakhovsky et al., 1993, 2012], or strain-weakening plasticity based on Mohr-Coulomb models for frictional materials [e.g., Poliakov et al., 1994; Poliakov and Buck, 1998]. The materials that compose the Earth's lithosphere are brittle when temperature and confining pressure are low but exhibit ductility when any of these thermodynamic variables is high [e.g., Jaeger and Cook, 1976; Scholz, 1988]. LTM should address these severe transitions as well as variable stiffness within each of the brittle and ductile regimes. Simulation techniques for LTM must also account for large amounts of deformation, whether localized or distributed.

[3] Assessing the predictive power of new rheologies is another fundamental issue in LTM since no agreement has been reached in the scientific community on the first-order structure of the Earth's continental lithosphere. For example, the geodynamics community is presently addressing whether loads are supported by a strong crust and mantle lithosphere separated by a weak fluid-like lower crust [e.g., Watts and Burov, 2003] or by a strong upper crustal layer sitting on top of weaker lower crust and mantle lithosphere [e.g., Scholz, 2002; Bourne et al., 1998]. Many studies focused on lithospheric rheology have shown that the metamorphic and magmatic reactions involving hydrous fluids as well as the polymineralic nature of rocks can control the rheological behavior of the lithosphere through time and space [e.g., Handy, 1990; Lavier and Manatschal, 2006; Ranalli, 1997]. This rheological evolution is not taken into account by the monomineralic rheological flow laws usually used in geodynamic modeling [e.g., Ranalli, 1995; Kohlstedt et al., 1995]. LTM simulators may be used as test beds for new rheological models for the flow of rocks, as exemplified in studies of rifting [e.g., Huismans and Beaumont, 2002; Lavier and Manatschal, 2006]. In the long run, for LTM simulations to be able to answer these open questions about the structure of the continental lithosphere, preference will be given to numerical techniques that can implement these complex nonlinear rheologies with ease.

[4] The available numerical techniques for LTM can be largely divided into two groups. The first group models the material response as elastoviscoplastic (EVP), where the brittle behavior is modeled by strain-weakening elastoplasticity and/or damage, while the ductile behavior is modeled by Maxwell viscoelasticity [e.g., Albert et al., 2000; Poliakov and Buck, 1998; Gerya and Yuen, 2007; Popov and Sobolev, 2008]. This material description naturally represents elastic compressibility, strain weakening, and confining pressure dependence. The second group uses viscoplastic (VP) models, ignoring the elastic response of the Earth's lithosphere entirely. These models treat lithospheric motion as a viscous flow where the material response is represented using a nonlinear viscosity. The lithosphere is modeled as a high-viscosity region where the brittle behavior is mimicked by a yield stress varying with an internal variable [e.g., Tackley, 2000; Čížková et al., 2002; Billen and Hirth, 2007; Fullsack, 1995; van Hunen et al., 2001; Braun et al., 2008; Dabrowski et al., 2008]. There are also hybrid models which consider the deviatoric components of elasticity on top of the viscoplastic model [hence, partial EVP (pEVP)] [e.g., Gerya and Yuen, 2007; Moresi et al., 2003; OzBench et al., 2008; Kaus, 2010].

[5] Whether the motion is described in a Lagrangian or an Eulerian framework can be another classification criterion for existing LTM techniques, although this distinction is not always clear. It is important to track free surfaces or phase boundaries in the physical domain, for instance, to study surface processes or interactions between the lithosphere and the mantle [e.g., Kaus et al., 2010; Duretz et al., 2011]. These tasks are an inherent part of the Lagrangian framework in the sense that no extra operations are needed. However, the large deformation involved in LTM requires remeshing in the Lagrangian framework because the numerical approximation degrades as the mesh distortion becomes severe. Similar limitations are faced by arbitrary Lagrangian-Eulerian methods [e.g., Fullsack, 1995; Moresi et al., 2003; Braun et al., 2008; Popov and Sobolev, 2008]. This remeshing process needs to be handled with care. Remeshing violates the fundamental premise of the Lagrangian description of motion, namely that the material points are attached to the deforming mesh. Practically, remeshing causes numerical diffusion of the advected variables, as in a naively implemented advection in the Eulerian framework. The numerical diffusion, however, can be efficiently remedied by introducing particles and letting them carry phase information as well as history-dependent variables [2005; 2011; 2012; 2008; 1995; 2003; 2007; 2008; 2010]. The role that particles play and the computational complexity vary among different algorithms. Although it avoids remeshing, the Eulerian framework usually needs extra operations involving particles or level sets to define and keep sharp internal and external boundaries [e.g., Gerya and Yuen, 2007; Braun et al., 2008]. There is also a wealth of literature describing Eulerian formulations of elastoplasticity [Duddu et al., 2009, 2012; Plohr and Sharp, 1988, 1992; Demarco and Dvorkin, 2005; Trangenstein and Colella, 1991; Trangenstein, 1994; Miller and Colella, 2001, 2002]. These methodologies avoid remeshing but need to solve a larger system of equations at each time step when compared with standard Lagrangian formulations, and they still need special interface tracking algorithms.

[6] An inspection of the available techniques used in LTM reveals a mixed use of explicit and implicit constitutive updates. The implementation of new rheologies is relatively straightforward when an explicit constitutive update is used because it does not involve subiteration within a time step even for nonlinear constitutive models. As a result, a numerical technique that adopts explicit updates can simulate both linear and nonlinear rheologies with equal ease. This is particularly true in the case of strain-weakening elastic-plastic models, where the constitutive update is more complicated than in the effective viscosity approach often used in VP or pEVP models. This is a desirable feature of a numerical technique for LTM which, as explained earlier, should work as a test bed for new rheologies. This validation process can be applied to newly proposed models on benchmark problems as well as to established models when new data becomes available. A drawback of an explicit constitutive update is that the time step size is restricted by stability requirements. In contrast, implicit constitutive updates may involve numerically expensive iterations, and the implementation of a novel nonlinear rheology can be a challenge by itself because finding a tangent stiffness operator is not always straightforward. Nevertheless, the resulting time-stepping techniques control the time step size by accuracy requirements, not stability; thus, significantly larger time steps can be taken in the models adopting implicit constitutive updates.

1.2 Need for Compressible Elasticity in Long-Term Tectonic Modeling

[7] A fundamental difference between the available implementations of the EVP approach and the others lies in how they account for elastic (reversible) deformation. The VP approach, for instance, ignores elasticity by assuming that elastic stresses can be relaxed by flow mechanisms such as creep over a Maxwell timescale (e.g., ∼1 My for a shear modulus of 30 GPa and a viscosity of 10^24 Pa·s). Nevertheless, there is a broad range of situations in which elastic stresses are important in the overall force balance. Well-known examples include the bending of oceanic lithosphere when subducted or loaded by an island chain. These model problems can be described in the framework of the thin-plate (slender-body) approximations. In particular, the bending of an oceanic lithosphere is accurately represented by the bending of an infinite or semi-infinite thin elastic plate [e.g., Watts, 2001; Turcotte and Schubert, 2002].

[8] Aware of the need to account for elastic stresses, some researchers came up with the pEVP models, where only the deviatoric component of the elastic stresses is brought into the incompressible VP model [e.g., Moresi et al., 2003; OzBench et al., 2008; Kaus, 2010]. Nevertheless, the imposition of the incompressibility constraint is still too restrictive for arbitrary motions, leading to simulation responses that are overly stiff for some loading patterns.

[9] Additionally, the volumetric component of deformation, elastic or inelastic, is important in many practical instances, as evidenced, for example, by the measured Poisson's ratio (ν) of common rocks in the vicinity of 0.3 rather than 0.5 [e.g., King and Christensen, 1996]; nonisochoric phase transformations like that of peridotite to serpentine in the hydrated mantle [e.g., Hyndman and Peacock, 2003; Hetényi et al., 2011]; permanent volume change during brittle deformation of rocks [e.g., Brace et al., 1966]; and thermal expansion and contraction of rocks [e.g., Korenaga, 2007; Choi et al., 2008; Schrank et al., 2012].

[10] To demonstrate the need for nonisochoric deformation in LTM, we analyze the flexure of a finite-length elastic plate as an example. Through a back-of-the-envelope calculation, we can show that an elastically incompressible plate would be overly stiff compared to a reasonably compressible counterpart. In the thin-plate theory, the maximum displacement (w_max) of a fixed-length plate under force loading is inversely proportional to the flexural rigidity (D) [Turcotte and Schubert, 2002]:

$$w_{\max} \propto \frac{1}{D}$$

D is defined as EH³/[12(1−ν²)], where E is the Young's modulus, H is the plate's thickness, and ν is Poisson's ratio. The following holds for two plates with different Poisson's ratios if everything else is the same:

$$\frac{w_{\max,1}}{w_{\max,2}} = \frac{D_2}{D_1} = \frac{1-\nu_1^2}{1-\nu_2^2}$$

According to this relationship, elastic flexure can be underestimated in the incompressible (ν = 0.5) case to be only about 80% of the flexure for ν = 0.25, a more relevant value for rock modeling, since (1 − 0.5²)/(1 − 0.25²) = 0.75/0.9375 = 0.8.

[11] In the case of an infinitely long thin plate, the difference in flexure due to different Poisson's ratios would be less noticeable because the flexure scales with D^(−1/4) [Turcotte and Schubert, 2002]. However, the assumption of an infinitely long plate does not always hold. Compounded with rheological complexity, such as layered structures and poorly understood polymineralic rheologies, continental plates have finite width and length delimited by major boundary faults. Thus, assuming the lithosphere to be elastically incompressible in LTM carries a potential error beyond those associated with numerical approximation.

[12] These considerations, together with the facts that the Lagrangian framework handles free boundaries naturally and that an explicit constitutive update allows for an easy implementation of complex rheologies, provide the critical motivation for developing a new LTM solver based on the explicit Lagrangian EVP approach that allows for unstructured meshing, which is the fundamental contribution reported in this work.

1.3 Need for a New Explicit Lagrangian Elastoviscoplastic Solver

[13] The combination of an explicit constitutive update, the Lagrangian reference frame, and the EVP material model has been implemented in a family of codes following the Fast Lagrangian Analysis of Continua (FLAC) algorithm [Cundall and Board, 1988]. Termed geoFLAC hereafter, these specific implementations of the generic FLAC algorithm [e.g., Poliakov et al., 1993] require a structured quadrilateral mesh, which severely limits the meshing flexibility needed for adequately capturing strain localization with a locally refined mesh. Additionally, each quadrilateral is decomposed into two sets of overlapping linear triangles, which guarantees a symmetrical response to loading but leads to redundant computations. GeoFLAC uses an explicit scheme for the time integration of the momentum equation in the dynamic form as well as for the constitutive update. All of these features bring both advantages and disadvantages and thus deserve critical assessment when inherited. For instance, the explicit time integration and stress update require small time step sizes to ensure stability, increasing the computational cost of the solution. On the other hand, the explicit schemes allow for the simple implementation of arbitrarily complex nonlinear rheologies. In spite of this ambivalence, we put more weight on the relative ease of implementing rheologies, which are almost always nonlinear in LTM. As another example, we believe that the structured quadrilateral mesh of geoFLAC can be replaced with other types of mesh for improved flexibility and performance.

[14] Through such critical evaluations of the FLAC algorithm and its implementation, we distilled a new code, DynEarthSol2D, as an extension and simplification of the geoFLAC algorithm for the EVP material model. An implementation of this methodology is released to the public with the publication of this paper and is named DynEarthSol2D (available for download at http://bitbucket.org/tan2/dynearthsol2). The most notable improvement is the removal of the restrictions on meshing. As a result, we can solve problems on unstructured triangular meshes while keeping the simple explicit material update that made geoFLAC dominant in the field. The use of state-of-the-art mesh generation tools for triangulations allows for the following: (i) adaptive mesh refinement in regions of highly localized deformation, (ii) maintenance of high mesh quality by adjusting nodal connectivity, (iii) simple mesh refinement and unrefinement to keep the size of the computational problem in a narrow band without seriously compromising the quality of the simulation results, and (iv) easier and more faithful tracking of curvilinear boundaries, such as the Moho.

[15] The rest of the paper is structured as follows. We first describe the key components of the proposed algorithm in detail, including the newly adopted techniques like the conservative mapping via a local supermesh construction. Results from relevant benchmark tests are presented next to verify our implementation as well as to demonstrate the algorithm's versatility and excellent performance. Finally, conclusions and future work are discussed.

2 Methods

2.1 Equation of Motion

[16] The equation of motion (linear momentum balance) solved by DynEarthSol2D takes the following full dynamic form:

$$\rho\,\dot{\mathbf{u}} = \nabla\cdot\boldsymbol{\sigma} + \rho\,\mathbf{g} \tag{1}$$

where ρ is the material density, u is the velocity vector, σ is the total (Cauchy) stress tensor, and g is the acceleration of gravity. The dot above u indicates the total time derivative, while boldface indicates a vector or tensor. The spatial gradient is denoted by ∇, the inner product between vectors is denoted by ·, and ∇· represents the divergence operator. This equation must be complemented with appropriate boundary conditions. (In this section, we assume that the boundary conditions define a well-posed problem. In section 'Discussion', the boundary conditions are detailed for each benchmark problem.) The motion is described using a Lagrangian formulation.

[17] The momentum equation is discretized using a two-dimensional (2-D), unstructured grid based on triangles. The position x, velocity u, acceleration a, force f, and temperature T are defined on linear (P1) elements, while other physical quantities (e.g., stress σ and strain ε) and material properties (e.g., density ρ and viscosity η) are piecewise constant over the elements. Equation (1) is multiplied by a weighting function, and the product is integrated over the domain. After integrating by parts and applying the Gauss theorem, we obtain the following equation for the acceleration a_a of every node a:

$$\mathbf{a}_a = \frac{\mathbf{f}_a}{m_a} \tag{2}$$

where m_a is the nodal mass given by

$$m_a = \sum_e \frac{\rho^f\,\Omega_e}{M} \tag{3}$$

Ω_e is the area (volume in 3-D) of the element e, N_a^e is the linear shape function associated with the node a in the element e, and M is the number of apexes of an element (M = 3 for 2-D triangles and M = 4 for 3-D tetrahedra). The summation should be understood as running over all the elements having node a as an apex. A fictitious density, ρ^f, instead of the true density, ρ, is used in the definition of m_a in (3). Additionally, row-sum mass lumping is applied to obtain a diagonal mass matrix in (3). We discuss the definition of ρ^f in section 'Mass Scaling'. The total force f_a is composed of three parts: the internal, boundary, and external forces. The internal force, f_a^int, is defined as follows:

$$\mathbf{f}_a^{\mathrm{int}} = -\sum_e \Omega_e\,\boldsymbol{\sigma}_e\cdot\nabla N_a^e \tag{4}$$

Neumann boundary conditions are tractions prescribed on the surface of the body. These tractions yield a boundary force denoted f_a^bnd:

$$\mathbf{f}_a^{\mathrm{bnd}} = \sum_s \frac{L_s}{2}\,\boldsymbol{\sigma}_s\cdot\mathbf{n} \tag{5}$$

The summation is over the boundary segments s, each of which has a length L_s (surface area in 3-D), the outward unit normal vector n, and a prescribed (constant) stress σ_s on the Neumann boundary. The external force, f_a^ext, is given by the following:

$$\mathbf{f}_a^{\mathrm{ext}} = \sum_e \frac{\rho\,\Omega_e}{M}\,\mathbf{g} \tag{6}$$

[18] When deriving the equations above, we utilize the fact that ρ^f, ρ, ∇N_a^e, σ, and g are constants on each element, together with the following identities:

$$\int_{\Omega_e} N_a^e \, d\Omega = \frac{\Omega_e}{M}, \qquad \int_{\Omega_e} \nabla N_a^e \, d\Omega = \Omega_e\,\nabla N_a^e \tag{7}$$

We are interested in tectonic deformation, which can be properly simulated in a quasi-static fashion. Thus, we apply a technique called “dynamic relaxation,” which enables us to achieve static equilibrium from the dynamic momentum equation by damping out the inertial force. Additionally, using “mass scaling,” we substitute the true density with a fictitious scaled density that allows us to increase the size of the admissible stable time steps in the explicit time integration scheme. That is, using the resulting “scaled” acceleration and velocity, we compute an instantaneous velocity and position of each node in the mesh, which updates the model geometry at each time step. Each of these modifications is detailed in the following sections.
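To make the overall flow concrete before the individual pieces are detailed, the following is a minimal sketch, in Python/NumPy, of one explicit time step built from equations (2), (8), (36), and (37). All function and variable names (explicit_step, assemble_forces, damp, apply_velocity_bc) are illustrative assumptions, not DynEarthSol2D's actual API.

```python
import numpy as np

def explicit_step(x, u, mass, dt, assemble_forces, damp, apply_velocity_bc=None):
    """One explicit, dynamically relaxed Lagrangian step (sketch).

    x, u : (n_nodes, ndim) arrays of nodal positions and velocities.
    mass : (n_nodes,) lumped, fictitiously scaled nodal masses (eq. (3)).
    assemble_forces : callable returning the (n_nodes, ndim) total nodal
        force (internal + boundary + external) for the current state.
    damp : callable applying the dynamic-relaxation damping of eq. (8).
    """
    f = assemble_forces(x, u)            # internal + boundary + external forces
    f = damp(f, u)                       # damp/amplify against velocity, eq. (8)
    a = f / mass[:, None]                # nodal acceleration, eq. (2)
    u = u + a * dt                       # velocity update, eq. (36)
    if apply_velocity_bc is not None:
        u = apply_velocity_bc(u)         # overwrite prescribed components
    x = x + u * dt                       # move the Lagrangian mesh, eq. (37)
    return x, u
```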

2.1.1 Dynamic Relaxation

[19] Given that our focus lies in LTM, high-frequency vibrations are not relevant to the overall deformation pattern. A strong and efficient damping is necessary to achieve quasi-static solutions of the dynamic equation. Complementarily, force amplification might be needed to accelerate the transient process toward equilibrium. Therefore, we either damp or amplify the total net force in the discretized nodal momentum equation (2) according to the direction of velocity [Cundall, 1989]:

$$f_i^{\mathrm{damped}} = f_i - \alpha\,\mathrm{sgn}(u_i)\,\lvert f_i \rvert \tag{8}$$

where the subscript i denotes the ith component of a vector, sgn denotes the signum function, and α is the damping (or amplification) coefficient. The motivation for this choice of damping/amplification is the simple observation that in an underdamped oscillator, the direction of force is always opposite to the velocity direction, while in an overdamped system, the direction of the force is parallel to the velocity direction. We found that this choice of damping/amplification accomplishes the design goals satisfactorily (i.e., robustly and economically).
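A minimal sketch of this damping rule, under the reconstruction of equation (8) above; the default coefficient value is an illustrative assumption.

```python
import numpy as np

def damp_forces(f, u, coeff=0.8):
    """FLAC-style local damping, eq. (8), applied componentwise.

    f, u : (n_nodes, ndim) net nodal forces and velocities.
    Force components aligned with the velocity (which feed kinetic energy
    into the node) are reduced by the fraction coeff, while components
    opposing the velocity are amplified, so oscillations are drained quickly.
    """
    return f - coeff * np.sign(u) * np.abs(f)
```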

2.1.2 Mass Scaling

[20] The Courant-Friedrichs-Lewy condition imposes a fundamental limit on the time step size for an explicit time-marching scheme. In the explicit EVP approach used in DynEarthSol2D, the p-wave velocity sets the largest possible time step size. For instance, using relevant parameters for lithospheric modeling, a p-wave speed of ∼10^3 m/s and an element size of ∼10^3 m yield a stable time step size of ∼1 s. With this stringent upper limit for the time step size, a typical LTM simulation would take an excessively large number of time steps to reach the targeted amount of deformation (e.g., O(10^13) steps for 1 Myr of model time).

[21] To overcome this drawback, a mass scaling technique is applied. We adjust each nodal mass (density) to achieve a stable time step size that is orders of magnitude larger than the one allowed by the physical density, while the fictitious increase in mass keeps the inertial forces small compared with the other forces at play in these simulations. The time step size increases when the elastic wave speed, u^elastic, is made comparable to the tectonic speed, u^tectonic (∼10^−9 m/s). We achieve this time step size increase by scaling the density as follows:

$$\rho^f = \frac{K_s}{\left(c_1\, u^{\mathrm{tectonic}}\right)^2} \tag{9}$$

where Ks is the bulk modulus of the material, ρ^f is the fictitious scaled density, and c1 is a constant. When c1 is too small, that is, when the density is scaled up too high, dynamic instabilities might occur. In this case, the fictitious elastic wave is too slow to relax the stress back to quasi-equilibrium; therefore, the kinetic energy becomes too large, breaking the assumption of the quasi-static state [e.g., Chung et al., 1998]. When the density scaling is insufficient (i.e., c1 is too large), the simulation becomes too time consuming. As c1 approaches 10^12, the fictitious density approaches the material (true) density. The optimal value of c1 depends on the rheology parameters, resolution, and domain size. We find that c1 in the range of 10^4 to 10^8 is adequate for our simulation targets. Unfortunately, the choice of c1 is currently empirical. We are working to devise a consistent way of finding the optimal value of c1.
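As a sketch of the scaling arithmetic, assuming the form of equation (9) reconstructed above, the following computes a fictitious density and the resulting CFL-limited time step; all numbers are illustrative.

```python
import numpy as np

def scaled_density(Ks, u_tectonic, c1):
    """Fictitious density per the reconstructed eq. (9)."""
    return Ks / (c1 * u_tectonic) ** 2

def stable_dt(Ks, rho_f, dx, safety=0.5):
    """CFL-limited step from the slowed elastic wave speed sqrt(Ks/rho_f)."""
    return safety * dx / np.sqrt(Ks / rho_f)

# Illustrative numbers: Ks = 100 GPa, u_tectonic = 1e-9 m/s, 1 km elements.
rho_f = scaled_density(1e11, 1e-9, c1=1e6)
print(stable_dt(1e11, rho_f, dx=1e3))   # stable step size in seconds
```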

2.2 Nodal Mixed Discretization

[22] The linear triangular elements used in DynEarthSol2D are known to suffer volumetric locking when subject to incompressible deformations [e.g., Hughes, 2000]. Since incompressible plastic or viscous flow is often needed in LTM, we adopt an antivolumetric locking correction based on the nodal mixed discretization (NMD) methodology [2006; 2009].

[23] The strain rate of element e, ε̇_e, is computed from the velocity:

$$\dot{\varepsilon}_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) \tag{10}$$

where i, j are the spatial indices. The strain rate tensor can be decomposed into the deviatoric and the isotropic parts:

$$\dot{\boldsymbol{\varepsilon}} = \mathrm{dev}(\dot{\boldsymbol{\varepsilon}}) + \frac{\mathrm{tr}(\dot{\boldsymbol{\varepsilon}})}{D}\,\mathbf{I} \tag{11}$$

where dev(·) represents an operator returning the deviatoric tensor, tr(·) is an operator returning the trace of the tensor, D is the number of diagonal terms of the tensor (two for the 2-D case and three for the 3-D and plane strain cases), and I is an appropriate identity tensor. (When the plane strain description is used, ε_yy = 0 and ε̇_yy = 0, but σ_yy can be nonzero and must be included in the calculation.)

[24] The basic idea is to average the volumetric strain rate over a group of neighboring elements and then replace each element's volumetric strain rate with the averaged one. The NMD method first assigns an area (volume in 3-D) average of the trace of ε̇_e to each node a:

$$\dot{\varepsilon}^{\mathrm{vol}}_a = \frac{\sum_e \mathrm{tr}(\dot{\boldsymbol{\varepsilon}}_e)\,\Omega_e}{\sum_e \Omega_e} \tag{12}$$

Then, the nodal field ε̇^vol_a is interpolated back to the elements to retrieve an averaged volumetric strain rate for an element e:

$$\overline{\mathrm{tr}(\dot{\boldsymbol{\varepsilon}}_e)} = \frac{1}{M}\sum_{a=1}^{M} \dot{\varepsilon}^{\mathrm{vol}}_a \tag{13}$$

where, as before, M is the number of apexes in an element. Finally, the averaged volumetric strain rate of an element is used to modify the original strain rate tensor. The antilocking modification replaces the isotropic part with the averaged one:

$$\dot{\boldsymbol{\varepsilon}}' = \mathrm{dev}(\dot{\boldsymbol{\varepsilon}}) + \frac{\overline{\mathrm{tr}(\dot{\boldsymbol{\varepsilon}}_e)}}{D}\,\mathbf{I} \tag{14}$$

[25] This modified strain rate tensor substitutes for the original strain rate tensor when updating the strain tensor and in the constitutive update. For the sake of brevity, we drop the prime and use ε̇ to refer to the modified strain rate tensor from now on.
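A compact sketch of the NMD correction of equations (12)-(14); the data layout (element connectivity and per-element strain-rate tensors) is an illustrative assumption.

```python
import numpy as np

def nmd_strain_rate(edot, conn, volume, n_nodes, D=2):
    """edot: (n_elem, D, D) element strain rates; conn: (n_elem, M) node
    indices; volume: (n_elem,) element areas/volumes.
    Returns the locking-corrected strain rates of eq. (14)."""
    tr = np.trace(edot, axis1=1, axis2=2)                  # tr(edot_e)
    num = np.zeros(n_nodes)
    den = np.zeros(n_nodes)
    for e, nodes in enumerate(conn):                       # eq. (12): nodal average
        num[nodes] += tr[e] * volume[e]
        den[nodes] += volume[e]
    nodal = num / den
    tr_bar = nodal[conn].mean(axis=1)                      # eq. (13): back to elements
    dev = edot - tr[:, None, None] / D * np.eye(D)         # deviatoric part
    return dev + tr_bar[:, None, None] / D * np.eye(D)     # eq. (14)
```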

[26] The strain tensor ε is accumulated as

$$\boldsymbol{\varepsilon}^{t+\Delta t} = \boldsymbol{\varepsilon}^{t} + \dot{\boldsymbol{\varepsilon}}\,\Delta t \tag{15}$$

2.3 Constitutive Update

[27] The stress tensor is updated using the strain rate and strain tensors according to an appropriate constitutive relationship. Since the stress update calculations are performed at the element level, we drop the subscript e to simplify notation. The EVP material model is approximated by a composite rheology which uses viscoelastic and elastoplastic submodels. With the bulk modulus Ks, shear modulus G, viscosity η, cohesion C, and internal friction angle φ, we calculate the viscoelastic stress σve and the elastoplastic stress σep.

[28] The viscoelastic stress increment Δσ^ve is calculated assuming a linear Maxwell material, where the total deviatoric strain increment Δε is composed of the elastic and the viscous components, while the deviatoric stress increment is identical for both components:

$$\Delta\boldsymbol{\varepsilon} = \Delta\boldsymbol{\varepsilon}^{\mathrm{elastic}} + \Delta\boldsymbol{\varepsilon}^{\mathrm{viscous}} = \frac{\Delta\boldsymbol{\sigma}^{ve}}{2G} + \frac{\boldsymbol{\sigma}^{ve}}{2\eta}\,\Delta t \tag{16}$$

Substituting Δε with ε^{t+Δt} − ε^t, Δσ^ve with σ^{ve,t+Δt} − σ^{ve,t}, and σ^ve with the midpoint value (σ^{ve,t+Δt} + σ^{ve,t})/2, the equation above reduces to the following:

$$\mathrm{dev}(\boldsymbol{\sigma}^{ve})^{t+\Delta t} = \left[\left(1 - \frac{G\,\Delta t}{2\eta}\right)\mathrm{dev}(\boldsymbol{\sigma}^{ve})^{t} + 2G\,\mathrm{dev}(\Delta\boldsymbol{\varepsilon})\right]\bigg/\left(1 + \frac{G\,\Delta t}{2\eta}\right) \tag{17}$$

The isotropic stress components are updated based on the volume change. As a result, the viscoelastic stress is the following:

$$\boldsymbol{\sigma}^{ve,t+\Delta t} = \mathrm{dev}(\boldsymbol{\sigma}^{ve})^{t+\Delta t} + \left[\frac{\mathrm{tr}(\boldsymbol{\sigma}^{t})}{D} + K_s\,\mathrm{tr}(\Delta\boldsymbol{\varepsilon})\right]\mathbf{I} \tag{18}$$
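The following is a minimal sketch of the viscoelastic update, using the midpoint form of equation (17) reconstructed above and the isotropic update of equation (18); it is a sketch under those reconstructions, not DynEarthSol2D's actual routine.

```python
import numpy as np

def maxwell_update(sigma, deps, dt, G, eta, Ks, D=2):
    """Viscoelastic (Maxwell) stress update, eqs. (16)-(18).

    sigma : (D, D) stress at time t; deps : (D, D) strain increment."""
    I = np.eye(D)
    s_dev = sigma - np.trace(sigma) / D * I        # deviatoric stress
    de_dev = deps - np.trace(deps) / D * I         # deviatoric strain increment
    f = G * dt / (2.0 * eta)
    s_dev_new = ((1.0 - f) * s_dev + 2.0 * G * de_dev) / (1.0 + f)   # eq. (17)
    iso_new = np.trace(sigma) / D + Ks * np.trace(deps)              # eq. (18)
    return s_dev_new + iso_new * I
```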

[29] The elastoplastic stress σ^ep is computed using linear elasticity and the Mohr-Coulomb (MC) failure criterion with a general (associative or nonassociative) flow rule. Following a standard operator-splitting scheme [e.g., Lubliner, 1990; Simo and Hughes, 2004; Wilkins, 1964], an elastic trial stress σ^el is first calculated as

$$\boldsymbol{\sigma}^{el} = \boldsymbol{\sigma}^{t} + \left(K_s - \tfrac{2}{3}G\right)\mathrm{tr}(\Delta\boldsymbol{\varepsilon})\,\mathbf{I} + 2G\,\Delta\boldsymbol{\varepsilon} \tag{19}$$

If the elastic trial stress, σ^el, is on or within the yield surface, where f is the yield function, then the stress does not need a plastic correction, and σ^ep is set equal to σ^el. However, if σ^el is outside the yield surface, then we project it onto the yield surface using a return-mapping algorithm [Simo and Hughes, 2004].

[30] In the case of a Mohr-Coulomb material, it is convenient to express the yield function for shear failure in terms of principal stresses:

$$f^s(\sigma_1, \sigma_3) = \sigma_1 - N_\phi\,\sigma_3 + 2C\sqrt{N_\phi} \tag{20}$$

where σ1 and σ3 are the maximal and minimal compressive principal stresses with the sign convention that tension is positive (i.e., σ1 ≤ σ2 ≤ σ3), C is the material's cohesion, N_φ = (1 + sinφ)/(1 − sinφ), and φ is an internal friction angle (<90°). The yield function for tensile failure is defined as

$$f^t(\sigma_3) = \sigma_3 - \sigma^t \tag{21}$$

where σ^t is the tension cut-off. If a value for the tension cut-off is given as a parameter, then the smaller of the theoretical limit (C/tanφ) and the given value is assigned to σ^t. This comparison is required because the theoretical limit is not constant in the strain-weakening case, where the material cohesion, C, and the friction angle, φ, may change.

[31] To guarantee a unique decision on the mode of yielding (shear versus tensile), we define an additional function, f^h(σ1, σ3), which bisects the obtuse angle made by the two yield functions on the σ1-σ3 plane, as

$$f^h(\sigma_1, \sigma_3) = \sigma_3 - \sigma^t + \alpha^P\left(\sigma_1 - \sigma^P\right) \tag{22}$$

where α^P = √(1 + N_φ²) + N_φ and σ^P = σ^t N_φ − 2C√N_φ.

Once yielding is identified, that is, f^s(σ^el_1, σ^el_3) < 0 or f^t(σ^el_3) > 0, the mode of failure (shear or tensile) is decided based on the value of f^h; in other words, shear failure occurs if f^h(σ^el_1, σ^el_3) < 0, and tensile failure occurs otherwise.
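A short sketch of the yield test and failure-mode decision of equations (20)-(22); the bisector function f^h follows the reconstruction above and should be read as an assumption.

```python
import numpy as np

def mc_mode(s1, s3, C, phi_deg, sigma_t):
    """Return 'elastic', 'shear', or 'tensile' for the trial (elastic)
    principal stresses s1 <= s3, tension positive."""
    phi = np.radians(phi_deg)
    Nphi = (1 + np.sin(phi)) / (1 - np.sin(phi))
    fs = s1 - Nphi * s3 + 2 * C * np.sqrt(Nphi)      # eq. (20): shear yield if < 0
    ft = s3 - sigma_t                                # eq. (21): tensile yield if > 0
    if fs >= 0 and ft <= 0:
        return 'elastic'
    alpha = np.sqrt(1 + Nphi**2) + Nphi              # bisector slope (assumed form)
    sigma_p = sigma_t * Nphi - 2 * C * np.sqrt(Nphi)
    fh = s3 - sigma_t + alpha * (s1 - sigma_p)       # eq. (22)
    return 'shear' if fh < 0 else 'tensile'
```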

[32] The flow rule for frictional materials is in general nonassociative, that is, the direction of plastic flow in the principal stress space is not the same as the direction of the vector normal to the yield surface. As in the definitions of the yield functions, the plastic flow potential for shear failure in the Mohr-Coulomb model can be defined as

$$g^s(\sigma_1, \sigma_3) = \sigma_1 - N_\psi\,\sigma_3 \tag{23}$$

where ψ is the dilation angle and N_ψ = (1 + sinψ)/(1 − sinψ). Likewise, the tensile flow potential is given as

$$g^t(\sigma_3) = \sigma_3 \tag{24}$$

[33] In the presence of plasticity, the total strain increment Δε is given by

$$\Delta\boldsymbol{\varepsilon} = \Delta\boldsymbol{\varepsilon}^{el} + \Delta\boldsymbol{\varepsilon}^{pl} \tag{25}$$

where Δεeland Δεpl are the elastic and plastic strain increments, respectively. The plastic strain increment is normal to the flow potential surface and can be written as

$$\Delta\boldsymbol{\varepsilon}^{pl} = \beta\,\frac{\partial g}{\partial \boldsymbol{\sigma}} \tag{26}$$

where β is the plastic flow magnitude. β is computed by requiring that the updated stress state lie on the yield surface,

$$f\!\left(\boldsymbol{\sigma}^{el} - \beta\,\mathbf{E}\,\frac{\partial g}{\partial \boldsymbol{\sigma}}\right) = 0 \tag{27}$$

[34] In the principal component representation, σ_A = E_AB ε_B, where σ_A and ε_A are the principal stress and strain, respectively, and E is the corresponding elastic moduli matrix with the following components:

$$E_{AB} = \begin{cases} K_s + \frac{4}{3}G, & A = B \\ K_s - \frac{2}{3}G, & A \neq B \end{cases}$$

By applying the consistency condition (27) and using Δε^pl_B = β ∂g/∂σ_B (in the principal component representation), we obtain the following formulae for β:

$$\beta^s = \frac{f^s(\sigma^{el}_1, \sigma^{el}_3)}{\left(E_{11} - E_{13} N_\psi\right) - \left(E_{31} - E_{33} N_\psi\right) N_\phi} \tag{28}$$

and

$$\beta^t = \frac{f^t(\sigma^{el}_3)}{E_{33}} \tag{29}$$

Likewise, ∂g/∂σ takes different forms according to the failure mode:

$$\frac{\partial g^s}{\partial \boldsymbol{\sigma}} = \left(1,\ 0,\ -N_\psi\right) \tag{30}$$

and

$$\frac{\partial g^t}{\partial \boldsymbol{\sigma}} = \left(0,\ 0,\ 1\right) \tag{31}$$

[35] Once Δε^pl is computed as in (26) using (28) and (30) or (29) and (31), σ^ep is updated as

$$\sigma^{ep}_A = \sigma^{el}_A - \beta\,E_{AB}\,\frac{\partial g}{\partial \sigma_B} \tag{32}$$

in the principal component representation and transformed back to the original coordinate system.

[36] After the viscoelastic stress σ^ve and the elastoplastic stress σ^ep are evaluated, we compute the second invariant (J2) of the deviatoric components of each. If the viscoelastic stress has the smaller second invariant, then σ^ve is used as the updated stress; otherwise, σ^ep is used.
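A sketch of the plastic corrector in the principal representation, equations (26)-(32), together with the J2 measure used to select between σ^ve and σ^ep. Only the shear branch is shown; the tensile branch is analogous. The formula for β follows the reconstruction of equation (28) above.

```python
import numpy as np

def shear_return(s_el, Ks, G, C, phi_deg, psi_deg):
    """Return-map trial principal stresses s_el = [s1, s2, s3] (s1 <= s3)
    onto the Mohr-Coulomb shear yield surface, eqs. (26)-(32)."""
    a1, a2 = Ks + 4 * G / 3, Ks - 2 * G / 3          # E diagonal / off-diagonal
    sphi = np.sin(np.radians(phi_deg))
    spsi = np.sin(np.radians(psi_deg))
    Nphi = (1 + sphi) / (1 - sphi)
    Npsi = (1 + spsi) / (1 - spsi)
    fs = s_el[0] - Nphi * s_el[2] + 2 * C * np.sqrt(Nphi)        # eq. (20)
    beta = fs / ((a1 - a2 * Npsi) - (a2 - a1 * Npsi) * Nphi)     # eq. (28)
    dgds = np.array([1.0, 0.0, -Npsi])                           # eq. (30)
    E = np.full((3, 3), a2) + np.diag([a1 - a2] * 3)             # moduli matrix
    return s_el - beta * (E @ dgds)                              # eq. (32)

def second_invariant(s):
    """J2 of the deviatoric part of a principal stress triple s."""
    dev = s - s.mean()
    return 0.5 * np.dot(dev, dev)
```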

[37] The fundamental deformation measures in DynEarthSol2D are strain rates. Thus, the stress update by a rate-independent constitutive model, like the elastoplastic one, needs to be considered as the time integration of the rate form of the corresponding stress. Since a stress rate is not frame-indifferent in general, an objective (or corotational) stress rate needs to be constructed and integrated instead. Among the possible objective rates, the Jaumann stress rate is our choice for DynEarthSol2D because of its simplicity.

[38] The Jaumann stress rate, σ^∇, is defined as

$$\overset{\nabla}{\boldsymbol{\sigma}} = \dot{\boldsymbol{\sigma}} - \boldsymbol{\omega}\,\boldsymbol{\sigma} + \boldsymbol{\sigma}\,\boldsymbol{\omega} \tag{33}$$

where ω is the spin tensor, defined as

$$\omega_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\right) \tag{34}$$

Based on this definition, the objectively updated stress, σ^new, is

$$\boldsymbol{\sigma}^{\mathrm{new}} = \boldsymbol{\sigma}^{t+\Delta t} + \left(\boldsymbol{\omega}\,\boldsymbol{\sigma}^{t+\Delta t} - \boldsymbol{\sigma}^{t+\Delta t}\,\boldsymbol{\omega}\right)\Delta t \tag{35}$$

where σ^{t+Δt} is the updated stress, equal to either σ^ve or σ^ep depending on which has the lower value of J2.
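A minimal sketch of this rotation correction, under the reconstruction of equations (33)-(35) above.

```python
import numpy as np

def jaumann_rotate(sigma, grad_u, dt):
    """Rotate the updated stress by the spin accumulated over the step.

    sigma : (D, D) updated stress (the J2-selected sigma at t + dt).
    grad_u : (D, D) velocity gradient du_i/dx_j."""
    w = 0.5 * (grad_u - grad_u.T)                    # eq. (34): spin tensor
    return sigma + (w @ sigma - sigma @ w) * dt      # eq. (35)
```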

2.4 Velocity and Displacement Update

[39] The velocity is updated with the damped acceleration, but subject to the prescribed velocity boundary conditions, that is:

$$\mathbf{u}_a^{t+\Delta t} = \mathbf{u}_a^{t} + \mathbf{a}_a^{\mathrm{damped}}\,\Delta t \tag{36}$$

The position x_a of the node a is updated by the following:

$$\mathbf{x}_a^{t+\Delta t} = \mathbf{x}_a^{t} + \mathbf{u}_a^{t+\Delta t}\,\Delta t \tag{37}$$

[40] Since the mesh is changed, the shape function derivatives ∇N_a^e and the element volumes Ω_e are updated every time step.

2.5 Modeling Thermal Evolution

[41] Thermal evolution of the lithosphere is often one of the key components of long-term tectonics and is modeled by solving the following heat equation:

$$\rho c_p\,\dot{T} = \nabla\cdot\left(k\,\nabla T\right) \tag{38}$$

where T is the temperature field, while c_p and k are the heat capacity and the thermal conductivity of the lithospheric material. Multiplying both sides by a weighting function and integrating by parts over the domain, we get

$$m^T_a\,\dot{T}_a = -\sum_e \sum_{b=1}^{M} K^e_{ab}\,T_b + \sum_s \frac{L_s}{2}\,q_s \tag{39}$$

where the diffusion matrix

$$K^e_{ab} = k\,\Omega_e\,\nabla N^e_a \cdot \nabla N^e_b \tag{40}$$

is evaluated at the barycenter of each element since we use constant strain triangles (linear finite elements on simplexes). The lumped thermal capacitance (mass) is given by

$$m^T_a = \sum_e \frac{\rho\,c_p\,\Omega_e}{M} \tag{41}$$

and qs is the prescribed boundary heat flux on a segment s. Then, the temperature is updated explicitly as follows:

$$T_a^{t+\Delta t} = T_a^{t} + \dot{T}_a\,\Delta t \tag{42}$$

The stability condition for the explicit integration of temperature is usually satisfied by the time step size determined by the scaled wave speed, but if a stable time step size for heat diffusion is smaller, then it becomes the global time step size.
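A sketch of the explicit thermal update of equations (39)-(42), assembled element by element with a lumped thermal mass; boundary heat fluxes are omitted (zero-flux boundaries), and the data layout is an illustrative assumption.

```python
import numpy as np

def thermal_step(T, conn, volume, grad_N, rho, cp, k, dt):
    """T: (n_nodes,) nodal temperatures; conn: (n_elem, M) node indices;
    grad_N: (n_elem, M, ndim) shape-function gradients; returns updated T."""
    n_nodes = T.shape[0]
    rhs = np.zeros(n_nodes)
    mT = np.zeros(n_nodes)
    M = conn.shape[1]
    for e, nodes in enumerate(conn):
        Ke = k * volume[e] * grad_N[e] @ grad_N[e].T    # eq. (40): M x M block
        rhs[nodes] += -Ke @ T[nodes]                    # eq. (39), zero-flux bnds
        mT[nodes] += rho * cp * volume[e] / M           # eq. (41): lumped mass
    return T + dt * rhs / mT                            # eq. (42)
```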

2.6 Remeshing

[42] We assess the mesh quality at fixed temporal intervals and use specific quality measures to decide whether to keep using the present mesh or to remesh. For example, if the smallest angle of an element is less than a prescribed value, then we remesh. A group of nodes in the deformed mesh is removed if any of several criteria is met. For instance, if the deformed or displaced boundary is restored to the initial configuration, then some nodes may be left outside the boundaries of the new domain. Internal nodes, if surrounded only by small elements, may be removed from the point set to be remeshed. Once all criteria are enforced, a final list of nodes is collected. These nodes are provided to the Triangle library [Shewchuk, 1996] to construct a new triangulation of the domain. At this stage, new nodes might be inserted into the mesh, or the mesh topology might be changed through edge-flipping during the triangulation (Figure 1). This type of remeshing has been proposed as a way of solving large deformation problems in the Lagrangian framework [1994]. After the new mesh is created, the boundary conditions, the derivatives of the shape functions, and the mass matrix have to be recalculated.

Figure 1.

Images showing how edge-flipping as well as conservative mapping of a color-coded piecewise constant field is performed during remeshing. On the “After” image, white solid and dashed lines indicate example pairs of flipped and original edges, and the magenta-colored dots are nodes newly created during remeshing.

[43] When most of the deformation is focused in and around a few deformation zones like shear bands, most of the elements outside of the zones deform only slightly and thus mostly remain unaffected by remeshing. The high degree of similarity between the new and old meshes makes projecting the fields of variables between the meshes very easy. For nodes and elements unaffected by remeshing, which are the majority, a simple injection suffices. That is, the data of the nodes and elements of the old mesh are mapped onto the nodes and elements which are collocated with them in the new mesh.

[44] When deformation is not localized but distributed over a broad region of the domain, remeshing might result in a new mesh that is very different from the old one. Then, an intermesh mapping of variables becomes necessary. For data associated with nodes (e.g., velocity and temperature), we use linear interpolation of the data from the old mesh to evaluate the field at the new nodal locations. For data associated with elements (e.g., strain and stress), we use a conservative mapping described in the following. Given an old mesh P and a new mesh Q, the mapping is defined as follows:

$$\int_{\Omega_Q} \chi' \, d\Omega = \int_{\Omega_P} \chi \, d\Omega \tag{43}$$

where χ is the field on the old mesh, χ′ is the mapped field on the new mesh, and Ω_P and Ω_Q are the domains of the old and new meshes, respectively. In the finite element discretization, the above global conservative mapping boils down to the following local operation:

$$\int_{\Omega_q} \chi' \, d\Omega = \sum_p \int_{\Omega_p \cap \Omega_q} \chi \, d\Omega \tag{44}$$

where q is an element in the new mesh Q. For the linear elements used in DynEarthSol2D, variables to be mapped such as strain and stress are constant within an element. Then, the conservative mapping is further simplified to the following:

$$\chi'_q = \frac{1}{\Omega_q} \sum_p \chi_p\,\Omega_{pq} \tag{45}$$

where p is an element in the old mesh P with a nonempty intersection with q, χ_p is the value of χ on the element p, and Ω_pq is the overlapping volume of elements p and q (Figure 2). Algorithms for efficiently finding the set of p and computing Ω_pq by locally constructing a supermesh that contains both q and its intersections with the p's as subsets have been proposed [Farrell and Maddison, 2011]. We make use of these algorithms as implemented in the open source library fluidity [2011].
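A sketch of the piecewise-constant conservative remap of equation (45), assuming the element-pair intersection areas Ω_pq have already been computed, for example by the local supermesh construction mentioned above; the data layout is an illustrative assumption.

```python
import numpy as np

def remap_constant_field(chi_old, intersections, volume_new):
    """chi_old: (n_old,) element values on the old mesh.
    intersections: list over new elements q of (p_indices, areas) pairs,
        where areas[i] is the overlap Omega_pq of old element p_indices[i]
        with the new element q.
    volume_new: (n_new,) element areas of the new mesh."""
    chi_new = np.empty(len(intersections))
    for q, (p_idx, areas) in enumerate(intersections):
        chi_new[q] = np.dot(chi_old[p_idx], areas) / volume_new[q]  # eq. (45)
    return chi_new
```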

Figure 2.

A new element after remeshing (q) and a set of elements in an old mesh (dotted lines) that have a nonzero intersectional area with q. As an example, an old element (p) and its intersection with q are also shown.

[45] This globally conservative remapping still induces artificial smoothing in fields that are discontinuous at a subelement level. The field of material phases is the most notable example in our case. To avoid this numerical diffusion, we generate particles that fill each element and let them carry the information of such fields [2005; 2011; 2012; 2008; 1995; 2003; 2007; 2008; 2010]. The particles, with phase identities set in the old mesh, are located within a containing element of the new mesh after remeshing. A material property needed at a new element's quadrature point is computed by arithmetically or harmonically averaging the corresponding values of the particles that represent each phase.
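A sketch of this marker-based averaging step; the choice between arithmetic and harmonic means per property, and the data layout, are illustrative assumptions.

```python
import numpy as np

def element_property(marker_phases, phase_property, harmonic=False):
    """marker_phases: integer phase ids of the markers inside one element;
    phase_property: (n_phase,) property value per phase (e.g., density)."""
    vals = phase_property[marker_phases]
    if harmonic:                       # e.g., for viscosity-like properties
        return len(vals) / np.sum(1.0 / vals)
    return vals.mean()                 # e.g., for density-like properties
```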

3 Discussion

3.1 Benchmarks

[46] To demonstrate the validity of our numerical method, we solve a series of problems for which exact solutions are known or approximate solutions are available, either reported in the literature or acquired from an independent code, so that we can quantitatively compare these solutions with those obtained with DynEarthSol2D. The solved problems include the following:

  1. [47] Flexure of a finite-length elastic plate;

  2. [48] Thermal diffusion of half-space cooling plate;

  3. [49] Stress build-up in Maxwell viscoelastic material;

  4. [50] Rayleigh-Taylor instability;

  5. [51] Mohr-Coulomb oedometer test; and

  6. [52] Modes of core complex formation.

3.1.1 Flexure of a Finite-Length Elastic Plate

[53] An elastic plate is loaded in a fashion often encountered in LTM [e.g., Albert et al., 2000]: a 50 km long and 5 km thick elastic plate is pushed up from below at one end by a buoyant body (Figure 3a). The plate's shear modulus is 30 GPa, and its density is 2700 kg/m³. The plate floats over a low-viscosity (10^17 Pa·s) layer. The buoyant body is a 5 × 5 km square block with a reduced density (70% of the plate's density) but with the same elastic stiffness. The buoyant block is placed at the base of the right end of the plate, generating an upward gravitational load.

Figure 3.

Effect of elastic compressibility on the prediction of an elastic thin plate subject to an uplifting load. (a) Model setup for a finite length elastic layer subjected to a finite length buoyant load applied on the bottom. (b) Profiles of mean-subtracted surface topography.

[54] The total relief generated by the load converges to 306 and 370 m for the nearly incompressible (ν = 0.495) and the compressible (ν = 0.25) cases, respectively, when the resolution is at least 146 m (Figure 3b). Furthermore, we acquired a converged total relief of 308 m at a resolution of at least 140 m with the marker-in-cell pEVP approach [Kaus, 2010] and completely incompressible elasticity (i.e., ν = 0.5) (Figure 3b). The reliefs for the (nearly) incompressible cases are consistently about 83% of those for the compressible cases. These results indicate that ignoring compressibility in elasticity renders the plate overly stiff in bending, which is a locking phenomenon due to the material description, not the numerical discretization.

3.1.2 Thermal Diffusion of Half-Space Cooling Plate

[55] This simple case tests the implementation of the discrete system defined by equations (39)-(42). The initial temperature profile is based on the half-space cooling of a 1 Myr old plate with ρ = 1000 kg/m³, c_p = 1000 J/kg/K, k = 1 W/m/K, surface temperature T0 = 0°C, and mantle temperature Tm = 1300°C. The analytical solution for the temperature profile [Turcotte and Schubert, 2002] is given as follows:

$$T(z, t) = T_0 + \left(T_m - T_0\right)\,\mathrm{erf}\!\left(\frac{z}{2\sqrt{\kappa t}}\right), \qquad \kappa = \frac{k}{\rho c_p}$$

The temperature profiles at 5 Myrs and 15 Myrs from DynEarthSol2D are compared with the analytical solution in Figure 4, and they show excellent agreement.

Figure 4.

Thermal diffusion of half space cooling plate. The temperature profiles in the analytical solution at 1, 5, and 15 Myrs are plotted in solid lines. The results from DynEarthSol2D are plotted in circles.

3.1.3 Stress Build-up in Maxwell Viscoelastic Material

[56] We demonstrate the validity of the dynamic relaxation and mass scaling used in our numerical method by computing a quasi-static stress buildup process with Maxwell rheology. This test is based on the first test example in Gerya and Yuen [2007]. An incompressible, viscoelastic (Maxwell) medium is subject to pure shear at a constant strain rate. The analytical solution for the deviatoric stress is given by the following:

$$\sigma_{xx} = 2\,\eta\,\dot{\varepsilon}_{xx}\left(1 - e^{-G t/\eta}\right)$$

The numerical result is obtained on a grid of 251 nodes and 441 elements. The parameters used in the calculation are the following: a constant applied strain rate, η = 10^22 Pa·s, G = 10^10 Pa, Ks = 10^12 Pa (to approximate incompressibility), g = 0 m/s², and c1 = 10^−6. The comparison (Figure 5) of the numerical result against the analytical solution establishes the validity of our numerical method and the accuracy of the Maxwell rheology implementation.

Figure 5.

Stress build-up in Maxwell viscoelastic material. (a) Model setup. (b) The curve shows the evolution of stress buildup as a function of time. The numerical results in the inset are plotted less frequently.

3.1.4 Rayleigh-Taylor Instability

[57] To further test the Maxwell viscoelastic rheology in the limit of purely viscous flow, we benchmark our solver by modeling a Rayleigh-Taylor instability for two fluids with identical viscosity, following the study of van Keken et al. [1997] (referred to as "VK" hereafter). The viscous convection is a Stokes problem, where the inertial term in equation (1) is negligible and force equilibrium is achieved at any instant of time. We use this problem to demonstrate the efficacy of our damping in reaching static equilibrium, especially after remeshing.

[58] Figure 6a shows the model setup, which includes the domain height (H) of 10 km, the domain aspect ratio of 0.9142, and the boundary conditions. The lower layer takes up 20% of the domain volume and is 10% less dense than the upper layer, which has a density of 3000 kg/m³. The initial geometry of the interface between the layers is perturbed by a cosine function with an amplitude of 2% of H. The viscosity is uniform throughout and equal to 10^17 Pa·s.

Figure 6.

Rayleigh-Taylor instability. (a) Model setup. Snapshots of the density at dimensionless times of (b) 500, (c) 1000, and (d) 1500. (e) Plot of Vrms versus dimensionless time, t. The resolution is about 0.6 km. The results are in good agreement with published benchmarks [1997; 2002].

[59] Since the original benchmark test assumed a purely viscous material, we modify the elastic and viscous properties following Poliakov et al. [1993] to minimize the influence of the elastic component in the Maxwell model. Key nondimensional numbers are the Deborah number (De), defined as the ratio of the relaxation time to the advection time (see Table 1 for the precise expression), and the Reynolds number (Re), which relates inertial to viscous forces. In our simulations, we prescribe Re to scale the fictitious density, as it relates density and velocity. We set Re to 0.01 to ensure compatibility with the original viscous flow [1993; 2002]. With the parameters listed in Table 1, we get De = 0.01, which ensures that the material in the model behaves viscously [Poliakov et al., 1993]. Time and velocity are nondimensionalized by the characteristic advection time, τ_a (see Table 1 for the definition), and the characteristic velocity, H/τ_a.

Table 1. Model Parameters for the Rayleigh-Taylor Instability

| Parameter                  | Symbol | Value                    |
| Reference density          | ρ0     | 3000 kg/m³               |
| Density difference         | Δρ     | 300 kg/m³                |
| Gravitational acceleration | g      | 10 m/s²                  |
| Bulk modulus               | K      | 50 GPa                   |
| Shear modulus              | G      | 50 GPa                   |
| Viscosity                  | η      | 10^17 Pa·s               |
| Length scale               | H      | 10 km                    |
| Relaxation time scale      | τr     | η/G = 3×10^8 s           |
| Advection time scale       | τa     | η/(ΔρgH) = 3×10^10 s     |
| Deborah number             | De     | τr/τa = 0.01             |
| Reynolds number            | Re     | ρf H²/(τa η) = 0.01      |

[60] Snapshots from a DynEarthSol2D model with ∼67 m resolution are shown at nondimensional times of 500, 1000, and 1500 (Figures 6b-6d). They exhibit a good match with those in the original benchmark tests of VK. The root-mean-square velocity (Vrms) is defined as

$$V_{\mathrm{rms}} = \sqrt{\frac{1}{\Omega}\int_{\Omega} \lVert \mathbf{u} \rVert^2 \, d\Omega}$$

where ‖u‖ is the L2 norm of the velocity vector u, and Ω is the area of the domain. The temporal variation of Vrms is shown in Figure 6e and is consistent with the results of VK. From DynEarthSol2D, the maximum Vrms and the dimensionless time at which it occurs are 0.003106 and 215.3, respectively. A coarser resolution model (∼100 m) gave 0.003101 and 215.28, so the relative error between the solutions at these two resolutions is less than 0.1%. The "best" model in VK gave 0.003091 and 207.05. While the maximum Vrms values are close to each other, the corresponding time from DynEarthSol2D differs by about 4% relative to the best model. However, this difference is still within the range shown by the various models tested in VK. The sudden dips of Vrms occur right after remeshing, when the interpolated stress on the new mesh is not in equilibrium. However, our efficient damping quickly restores the force balance without overshoots.

[61] The remeshing scheme described in section 'Remeshing', along with the use of markers to carry compositional variables, allows us to model extremely large mesh deformation without artificial diffusion, as exemplified in Figure 7. In this figure, the horizontal stress (σxx, top) and the density (bottom) fields are transferred from a mesh which tracked the material interface (labeled "Before") to one which does not capture the interface explicitly (labeled "After"). As shown in Figure 7, the new mesh has excellent quality, while the blurring of the material interface is limited to the elements cut by the interface.

Figure 7.

σxx and density fields before and after the first remeshing in the Rayleigh-Taylor instability model with about 1 km resolution. The white lines in the “After” images denote the original phase boundary before remeshing. The thick-lined box in the inset shows the location of the zoomed-in part of the domain.

3.1.5 Plastic Oedometer Test

[62] In an oedometer test, a square block (1 × 1 m) of Mohr-Coulomb (MC) plastic material is compressed (Figure 8a). The compression is driven by a constant velocity, 10^−5 m/s, applied on one side, while the other sides can slip freely. Due to symmetry, σ1, the most compressive principal stress, coincides with σxx, while the other two principal stresses, σ2 and σ3, are equal. As a result, starting from the initial unstressed state, the stress developing in the block follows a trajectory like the one shown in Figure 8b. Under these assumptions, the numerical solutions obtained using DynEarthSol2D can be compared with the analytic solution for the stresses (Appendix A). The post-yielding portion of the stress trajectory corresponds to one of the edges of the angular MC yield envelope. Since the surface normal is not uniquely determined there, this situation poses a challenge for numerical simulations of Mohr-Coulomb plasticity. Thus, this test is also useful for verifying that DynEarthSol2D is not affected by the nonsmooth geometry of the Mohr-Coulomb yield surface. The rest of the model parameters we use are listed in Table 2.

Table 2. Model Parameters for the Oedometer Test

| Parameter      | Symbol | Value     |
| Bulk modulus   | K      | 200 MPa   |
| Shear modulus  | G      | 200 MPa   |
| Cohesion       | C      | 1 MPa     |
| Friction angle | φ      | 10°       |
| Dilation angle | ψ      | 0 or 10°  |
Figure 8.

Plastic oedometer test. (a) Model setup. (b) A schematic trajectory of stress (red line) expected from the oedometer test in the principal stress space. A Mohr-Coulomb failure envelope is also shown. (c) |σxx| versus x-displacement curves from analytic and numerical solutions by DynEarthSol2D.

[63] In Figure 8c, σxx is plotted as a function of the displacement in x. The DynEarthSol2D simulation results are in excellent agreement with the analytic solution. Brittle deformation of rocks is often accompanied by permanent volume change, which is modeled using a nonzero dilation angle. DynEarthSol2D can accurately and robustly model the behavior of a Mohr-Coulomb material for arbitrary dilation angles.

3.1.6 Normal Fault Evolution

[64] To test the applicability of DynEarthSol2D to LTM, we simulate the evolution of normal faults, which was systematically investigated by Lavier et al. [2000]. This exercise is particularly interesting since it tests the remeshing capability and efficiency of our solver as well as the consistency of its solutions with previous studies. Localization of plastic strain induced by strain weakening is considered a proxy for faults. Strain softening is realized by reducing the cohesion linearly from its initial value Ci to its final value Cf as the plastic shear strain increases from zero to εc.

[65] In the first model, a fault is created in an extending Mohr-Coulomb elastic-plastic layer, which is initially 100 km long and 10 km thick (Figure 9a). Both sides of the layer are pulled at a constant velocity of 0.3 cm/yr away from the center. The bottom boundary is supported by a Winkler foundation. Given a constant compensation depth z_comp (usually the initial depth of the domain) and a constant hydrostatic pressure P_comp at z_comp, this bottom boundary condition applies a restoring force f_s^bnd in equation (5) to a segment s of the bottom boundary with mean vertical coordinate z_s:

$$\mathbf{f}^{\mathrm{bnd}}_s = -\left[P_{\mathrm{comp}} - \rho g\left(z_s - z_{\mathrm{comp}}\right)\right] L_s\,\mathbf{n}$$

The layer is initially in a lithostatic stress state, and a small weak inhomogeneity is placed at the bottom center of the layer. The values of the elastic and plastic properties are set to be the same as those in Lavier et al. [2000] for the case of the large offset fault (Table 3). This experiment reproduces the formation of a core complex by a rolling hinge mechanism [Buck, 1988].

Table 3. Model Parameters for the Normal Fault Evolution Tests (values given as core complex mode / multiple fault mode)

| Parameter               | Symbol | Value        |
| Bulk modulus            | K      | 50 GPa       |
| Shear modulus           | G      | 30 GPa       |
| Density                 | ρ      | 2700 kg/m³   |
| Layer thickness         | H      | 10 / 20 km   |
| Initial cohesion        | Ci     | 44 / 20 MPa  |
| Final cohesion          | Cf     | 4 / 0.1 MPa  |
| Critical plastic strain | εc     | 1.2 / 0.5    |
| Friction angle          | φ      | 30 / 30°     |
| Dilation angle          | ψ      | 0 / 0°       |
Figure 9.

Normal fault evolution: Core complex formation. (a) Model setup for the large offset normal fault. Plastic strain distributions after extension of (b) 3, (c) 10, and (d) 21.6 km. Below is the evolution of the mesh as a function of the amount of extension.

[66] A Lagrangian description of the motion makes it trivial to keep track of an evolving free surface. An accurate treatment of the body forces that arise from the development of topography is critical to the evolution of a localized zone of deformation at the shear zones. To avoid unrealistically sharp relief, however, a diffusion-type smoothing is applied to the surface topography [1991]. In this study, we use a "diffusivity" of 10^−7 m²/s.

[67] DynEarthSol2D uses an unstructured mesh with nonuniform resolution, which is refined in the proximity of the shear zone. This refinement is achieved using the remeshing scheme, as illustrated in Figures 9b-9d. The fault initially forms as a zone of high plastic strain with a dip angle of about 50° and offsets by around 2 km after 3 km of horizontal extension (Figure 9b). After 10 km of extension, the exposed part of the fault rotates to a dip angle of less than 40° and accumulates about 8 km of slip (Figure 9c). Even after 21.6 km of extension, the main fault remains active and has accumulated an offset almost twice the thickness of the layer (∼18 km).

[68] The results are consistent with those in Lavier et al. [2000] but are acquired at a higher resolution and at a much smaller computational cost. The model runs are about 10 times faster than the geoFLAC runs that resolved faults with a similar resolution. The most significant factor contributing to the improved performance is the nonuniform initial mesh, which allows a high resolution only where faults are expected to form. Although not a rigorous comparison of performance, the results are very encouraging. This improved performance is possible due to the use of a locally refined nonuniform mesh, which is dynamically refined to preserve the mesh quality in spite of the large geometrical distortion of the physical domain (Figures 9b-9d).

[69] To resolve the mesh resolution dependence of localization, Lavier et al. [2000] set the cohesion to be a function of the critical fault offset (Δx_c) rather than of the plastic strain. Δx_c is defined as the critical plastic strain, at which the minimum cohesion is reached, multiplied by the width of a shear band. In the case of a uniform resolution, the width of a shear band can be identified, on an empirical basis, with the width of three elements. However, such a static definition of the critical fault offset might not be appropriate for a mesh being dynamically refined. In this benchmark, we continued to use a static value of the critical fault offset, based on the element size of the initial mesh. As shown above, our new models produced results consistent with those from the reference study by Lavier et al. [2000]. Mesh refinement in this benchmark occurs only after elements in a fault zone are sheared by a large enough amount, which in turn is proportional to the accumulating fault offset. So the rate of element size reduction through remeshing must be comparable to the rate of plastic strain accumulation. This condition seems to keep the effect of dynamic mesh refinement on strain weakening at an insignificant level. Although a further investigation is needed, we speculate that results would be different if the element size were reduced much faster than a given static rate of strain weakening.

[70] Buck [1993] showed that if the thickness of the layer is greater than 20 km for 20 MPa of initial cohesion, multiple faults should form during the initial evolution of the model. This contrasting behavior compared to the case of a 10 km thick layer originates from the increased bending resistance to the rolling hinge in the thick layer. We set up a second benchmark model in this category, in which a 200 × 20 km layer is extended as in the previous simulation. The parameters of this model are also listed in Table 3. Figure 10 shows that the extension of the thick layer results in the formation of multiple faults, which is consistent with the theoretical prediction as well as with a previous numerical model (Plate 1c in Lavier et al. [2000]). Mesh adaptivity concentrates where the faults form and offset.

Figure 10.

Normal fault evolution: Multiple faults. Plastic strain distributions after extension of (a) 2.5, (b) 7.5, and (c) 12.5 km. Below is the evolution of the mesh as a function of the amount of extension. The layer thickness is 20 km, and strain-weakening parameters are different from those of the large offset fault mode (Table 3).

3.2 Remarks on Benchmarks

[71] We present two kinds of benchmarks that allow us to verify and validate our solver. The first kind compares numerical solutions with analytic ones for the Maxwell viscoelastic stress buildup and for Mohr-Coulomb plasticity. The second kind verifies qualitatively the consistency of solutions from our new code with published numerical solutions for the problems of viscous chemical convection [van Keken et al., 1997] and of normal fault evolution [Lavier et al., 2000]. The latter type of benchmark is critical because there is no known analytic solution for the large deformation of EVP materials in LTM. The importance of putting forward a set of relevant and reliable benchmarks cannot be overstated. Community efforts [e.g., Chapter 16 of Gerya, 2010, and references therein] are necessary to propose these benchmarks and to require new solver implementations to report their performance on these benchmarking tests. That is, the predictive power of elastoviscoplastic as well as of viscoplastic solvers on the agreed-upon benchmarks is fundamental to allow for reproducible science. The final goal is to avoid misinterpretation of modeling behavior due to erroneous implementations of the models.

3.3 Remarks on Performance

[72] DynEarthSol2D currently runs only serially. A 3-D version is under development and will be parallelized via OpenMP. All the benchmarks presented above were run on one core of an Intel Core2 Quad Q9650 CPU with an operating frequency of 3.00 GHz. The largest benchmark is the highest resolution (∼146 m) model for the flexure of a finite-length elastic plate, which has 22,772 nodes and 44,897 elements. The CPU time taken for 4.32 million time steps is 176,961 s; thus, the average CPU time per time step is about 41 ms.

[73] We reported in the previous section that DynEarthSol2D is about 10 times faster than geoFLAC. This apparent boost in performance is due to the a priori mesh adaptation. For instance, the models for the normal fault evolution started with a mesh refined around the region known to deform actively, achieving the desired resolution for faults while using far fewer elements than a uniform-resolution mesh. Dynamic refinement increases the total number of elements but is also regulated by coarsening.

[74] Although we have improved on the performance of existing geoFLAC-based EVP codes, ours may not be the fastest possible numerical algorithm. For instance, the partial EVP marker-in-cell finite difference algorithm of Gerya [2007] shows great performance, especially when combined with a multigrid solver [2010]. Moreover, it is possible to implement the full EVP approach in this algorithm because it is not hardwired to solve the incompressible Stokes equation. However, this potential has yet to be realized in an actual implementation. Also, we believe that, in addition to the relative ease of rheological implementation, the algorithms adopted by DynEarthSol2D deliver unique advantages, such as the flexibility to develop models with complex geometries, since the solver is based on an adaptive unstructured mesh. Additionally, the use of markers allows us to reproduce complex inclusions of facies without explicitly meshing their interfaces.

4 Conclusions

[75] In this paper, we report a new Lagrangian elastoviscoplastic solver and its implementation in DynEarthSol2D for the long-term tectonic deformation of the lithosphere. This new solver combines the flexibility of the explicit finite element formulation on an unstructured simplicial mesh with conservative remeshing aided by Lagrangian particles and adaptive meshing with edge-flipping. DynEarthSol2D can easily handle all the conventional rheologies of interest for lithospheric deformation, including true elastic, Maxwell viscoelastic, purely viscous, and elastoplastic with associative and nonassociative plastic flow. The explicit formulation of the constitutive update makes the potential addition of other, more complicated rheologies straightforward. We showed that this type of formulation more accurately renders the effects of elastic compressibility. Therefore, the amplitude of critical geologic features, such as rift flank uplifts, bending at subduction zones, and folding in mountain belts, is better predicted. On top of the reliability shown by the benchmark results, DynEarthSol2D can be more efficient than geoFLAC through a priori and dynamic mesh refinement. Written in standard Fortran 90, this new code also allows for easy maintenance and portability. In the future, we will use a three-dimensional simplicial mesh generator to extend this algorithm to three dimensions. Our goal is to achieve the same degree of flexibility, robustness, and efficiency for complex three-dimensional simulations of lithospheric evolution.

Appendix A: Analytic Solution to the Mohr-Coulomb Oedometer Test

[76] In the setting depicted by Figure 8a, the components of the strain increments are given by

$$\Delta\varepsilon_{xx} = -\frac{v_x\,\Delta t}{L}, \qquad \Delta\varepsilon_{yy} = \Delta\varepsilon_{zz} = 0$$

where v_x is the x component of the velocity applied on the moving surface, L is the length of the side of the square domain, and v_x Δt ≪ L is assumed. The corresponding stress increments in the elastic regime are

$$\Delta\sigma_{xx} = (\lambda + 2G)\,\Delta\varepsilon_{xx}, \qquad \Delta\sigma_{yy} = \Delta\sigma_{zz} = \lambda\,\Delta\varepsilon_{xx} \tag{46}$$

[77] where the Lamé constant λ = K_s − (2/3)G is used to simplify the equations. Moreover, the stresses at any time t before yielding are given as

$$\sigma_{xx} = -(\lambda + 2G)\,\frac{v_x t}{L}, \qquad \sigma_{yy} = \sigma_{zz} = -\lambda\,\frac{v_x t}{L}$$

while at yielding, the stresses defined above make the following two yield functions simultaneously zero:

$$f^{s,1}(\sigma_{xx}, \sigma_{yy}) = \sigma_{xx} - N_\phi\,\sigma_{yy} + 2C\sqrt{N_\phi}, \qquad f^{s,2}(\sigma_{xx}, \sigma_{zz}) = \sigma_{xx} - N_\phi\,\sigma_{zz} + 2C\sqrt{N_\phi}$$

where the parameters are the same as those in equation (20). However, due to the inherent symmetry (i.e., σyy = σzz), it suffices to consider only one of the two yield functions. Therefore, from f^{s,1}(σxx, σyy) = 0, for instance, we get the following expression for the time when yielding starts:

$$t_{\mathrm{yield}} = \frac{2C\sqrt{N_\phi}\,L}{\left[(\lambda + 2G) - \lambda N_\phi\right] v_x}$$

[78] By the same token, the plastic flow needs to be computed from two flow potentials,

$$g^{s,1} = \sigma_{xx} - N_\psi\,\sigma_{yy}, \qquad g^{s,2} = \sigma_{xx} - N_\psi\,\sigma_{zz}$$

Then, plastic strain increments are given as

$$\Delta\varepsilon^{pl}_{xx} = 2\beta, \qquad \Delta\varepsilon^{pl}_{yy} = \Delta\varepsilon^{pl}_{zz} = -\beta N_\psi$$

where the inherent symmetry is utilized again to identify β1 with β2, both denoted as β. Collecting the previous results, we conclude that the stress increments after yielding are the following:

$$\Delta\sigma_{xx} = \lambda\,\mathrm{tr}(\Delta\boldsymbol{\varepsilon}^{el}) + 2G\,\Delta\varepsilon^{el}_{xx}, \qquad \Delta\sigma_{yy} = \Delta\sigma_{zz} = \lambda\,\mathrm{tr}(\Delta\boldsymbol{\varepsilon}^{el}) + 2G\,\Delta\varepsilon^{el}_{yy}$$

By substituting the expressions for the elastic strain increments into the above equations,

$$\Delta\varepsilon^{el}_{xx} = \Delta\varepsilon_{xx} - 2\beta, \qquad \Delta\varepsilon^{el}_{yy} = \Delta\varepsilon^{el}_{zz} = \beta N_\psi$$

we get

$$\Delta\sigma_{xx} = (\lambda + 2G)\left(\Delta\varepsilon_{xx} - 2\beta\right) + 2\lambda\beta N_\psi, \qquad \Delta\sigma_{yy} = \Delta\sigma_{zz} = \lambda\left(\Delta\varepsilon_{xx} - 2\beta\right) + 2\lambda\beta N_\psi + 2G\beta N_\psi \tag{47}$$

Now, we solve the following incremental consistency condition for β:

$$\Delta f^{s,1} = \Delta\sigma_{xx} - N_\phi\,\Delta\sigma_{yy} = 0$$

Thus, the closed-form expression for β may be given as

$$\beta = \frac{\left[(\lambda + 2G) - \lambda N_\phi\right]\Delta\varepsilon_{xx}}{2\lambda\left(1 - N_\psi\right)\left(1 - N_\phi\right) + 4G + 2G N_\phi N_\psi} \tag{48}$$

[79] Once β is known from (48), the pre-yielding and post-yielding stresses, (46) and (47), are completely determined and can be plotted as functions of the displacement in x, as shown in Figure 8c.
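As a numerical companion to this appendix, the following short Python script evaluates the reconstructed pre- and post-yielding expressions above with the Table 2 parameters (ψ = 10° case); the displacement range is an illustrative assumption.

```python
import numpy as np

K, G = 200e6, 200e6                    # bulk and shear moduli (Pa), Table 2
C, phi, psi = 1e6, 10.0, 10.0          # cohesion (Pa), friction/dilation angles
lam = K - 2 * G / 3                    # Lame constant
Nphi = (1 + np.sin(np.radians(phi))) / (1 - np.sin(np.radians(phi)))
Npsi = (1 + np.sin(np.radians(psi))) / (1 - np.sin(np.radians(psi)))

L = 1.0                                # side length (m)
disp = np.linspace(0, 0.02, 401)       # x-displacement (m); range illustrative
exx = -disp / L                        # total strain (compression negative)

# Elastic branch, eq. (46), and the yield strain (from f_s1 = 0)
e_yield = -2 * C * np.sqrt(Nphi) / ((lam + 2 * G) - lam * Nphi)
sxx = np.where(exx > e_yield, (lam + 2 * G) * exx, 0.0)

# Plastic branch: beta per unit strain increment (eq. (48)), then integrate
denom = 2 * lam * (1 - Npsi) * (1 - Nphi) + 4 * G + 2 * G * Nphi * Npsi
beta_rate = ((lam + 2 * G) - lam * Nphi) / denom       # d(beta)/d(exx)
dsxx_rate = (lam + 2 * G) * (1 - 2 * beta_rate) + 2 * lam * Npsi * beta_rate
sxx_yield = (lam + 2 * G) * e_yield
plastic = exx <= e_yield
sxx[plastic] = sxx_yield + dsxx_rate * (exx[plastic] - e_yield)  # eq. (47)
print(abs(sxx[-1]))                    # |sigma_xx| at the final displacement
```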

Acknowledgments

[80] The authors would like to acknowledge the useful discussions we had with Nathan Collier from NumPor. EC would like to thank Kenni Dinesen Pettersen for generously providing his particle-in-cell code. ET is supported by Grant 101-2116-M-001-038-MY3 from the National Science Council, Taiwan. This work was supported in part by the Academic Excellence Alliance program award to Luc L. Lavier from King Abdullah University of Science and Technology (KAUST) Global Collaborative Research under the title “3-D numerical modeling of the tectonic and thermal evolution of continental rifting.” This is UTIG contribution 2563.