Dynamic fragmentation is a rapid and catastrophic failure of a material. During this process, the nucleation, propagation, branching, and coalescence of cracks result in the formation of fragments. The numerical modeling of this phenomenon is challenging because it requires high-performance computational capabilities. For this purpose, the finite-element method with dynamic insertion of cohesive elements was chosen. This paper describes the parallel implementation of its fundamental algorithms in the C++ open-source library Akantu. Moreover, a numerical instability that can cause the loss of energy conservation is illustrated, together with possible remedies. Finally, the method is applied to the dynamic fragmentation of a hollow sphere subjected to uniform radial expansion. This article is protected by copyright. All rights reserved.

We propose a new method to obtain contact forces in non-smooth contact problems between arbitrarily shaped bodies discretized by the finite element method (FEM). Contact forces are calculated by a contact algorithm between pairs of particles of smoothed particle hydrodynamics (SPH), a meshfree method, and are then applied to each colliding body. This approach has the advantage that accurate contact forces can be obtained during an accelerated collision, without jump problems over a discrete time increment. It can also be applied straightforwardly to any contact configuration, such as point-to-point, point-to-line, and point-to-surface contact between complex-shaped and deformable bodies. To describe this method, an impulse-based method, a unilateral contact method, and the SPH method are first introduced. The procedure of the proposed method is then described in detail. Finally, the accuracy of the proposed method is verified through conservation of momentum in three contact examples.

A new smoothed finite element method (S-FEM) with tetrahedral elements for finite strain analysis of nearly incompressible solids is proposed. The proposed method is basically a combination of the F-bar method and edge-based S-FEM with tetrahedral elements (ES-FEM-T4) and is named “F-barES-FEM-T4”. F-barES-FEM-T4 inherits the accuracy and shear locking-free property of ES-FEM-T4. At the same time, it also inherits the volumetric locking-free property of the F-bar method. The isovolumetric part of the deformation gradient (*F*^{iso}) is derived from the *F* of ES-FEM-T4, whereas the volumetric part (

The FFT-based homogenization method of Moulinec and Suquet has recently emerged as a powerful tool for computing the macroscopic response of complex microstructures for elastic and inelastic problems. In this work, we generalize the method to problems discretized by trilinear hexahedral elements on Cartesian grids and to physically nonlinear elasticity problems. We present an implementation of the basic scheme that reduces the memory requirements by a factor of four, and of the conjugate gradient scheme that reduces the necessary storage by a factor of nine, compared to a naive implementation. For benchmark problems in linear elasticity, the solver exhibits mesh- and contrast-independent convergence behavior and enables the computational homogenization of complex structures, for instance those arising from CT imaging techniques. There exist 3D microstructures involving pores and defects for which the original FFT-based homogenization scheme does not converge; in contrast, convergence is ensured for the proposed scheme. Also, the solution fields are devoid of the spurious oscillations and checkerboarding artifacts associated with conventional schemes. We demonstrate the power of the approach by computing the elastoplastic response of a long-fiber-reinforced thermoplastic material with 172 × 10^{6} (displacement) degrees of freedom.
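The basic scheme referenced above can be illustrated in a few lines on a one-dimensional scalar (conductivity) analogue, where the periodic Lippmann-Schwinger equation is solved by fixed-point iteration with the Green operator applied in Fourier space. The function name, the reference-medium choice, and the two-phase laminate test are our own illustration under stated assumptions, not the authors' elasticity implementation:

```python
import numpy as np

def basic_scheme_1d(k, E=1.0, k0=None, tol=1e-10, maxit=500):
    """Moulinec-Suquet basic scheme, 1-D scalar (conductivity) analogue.

    k  : pointwise conductivities on a periodic grid
    E  : prescribed mean gradient
    k0 : reference-medium conductivity (midpoint rule if omitted)
    Returns the converged local gradient field e(x)."""
    n = len(k)
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())    # common reference choice
    e = np.full(n, E)                      # initial guess: uniform field
    for _ in range(maxit):
        tau = k * e - k0 * e               # polarization stress (flux)
        tau_hat = np.fft.fft(tau)
        e_hat = -tau_hat / k0              # Green operator: Gamma0_hat = 1/k0 in 1-D
        e_hat[0] = n * E                   # enforce the prescribed mean (zero frequency)
        e_new = np.fft.ifft(e_hat).real
        if np.linalg.norm(e_new - e) < tol * np.linalg.norm(e):
            e = e_new
            break
        e = e_new
    return e

# two-phase laminate: the exact effective conductivity is the harmonic mean
k = np.where(np.arange(64) < 32, 1.0, 10.0)
e = basic_scheme_1d(k)
k_eff = np.mean(k * e)      # <q>/<e> with <e> = 1
print(k_eff)                # ≈ 2/(1/1 + 1/10) ≈ 1.8182
```

At convergence the flux k·e is spatially constant, which is the 1-D equilibrium condition the scheme enforces.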

In this study, a new mean-strain 10-node tetrahedral element (T10MS) is developed using energy-sampling stabilization. The proposed 10-node tetrahedron is composed of several four-node linear tetrahedral elements, four tetrahedra in the corners and four tetrahedra that tile the central octahedron in three possible sets of four-node linear tetrahedra, corresponding to three different choices for the internal diagonal. The assumed strains are calculated from mean “basis function gradients”. The energy-sampling technique introduced previously for removing zero-energy modes in the mean-strain hexahedron is adapted for the present element: the stabilization energy is evaluated on the four corner tetrahedra. The proposed element naturally leads to a lumped mass matrix, and does not have unphysical low-energy vibration modes. For simplicity, we limit our developments to linear elasticity with compressible and nearly incompressible material. The numerical tests demonstrate that the present element performs well compared to the classical 10-node tetrahedral elements for shell and plate structures, and nearly incompressible materials.

In nonlinear model order reduction, hyper reduction designates the process of approximating a projection-based reduced-order operator on a reduced mesh, using a numerical algorithm whose computational complexity scales with the small size of the Projection-based Reduced-Order Model (PROM). Usually, the reduced mesh is constructed by sampling the large-scale mesh associated with the high-dimensional model underlying the PROM. The sampling process itself is governed by the minimization of the size of the reduced mesh for which the hyper reduction method of interest delivers the desired accuracy for a chosen set of training reduced-order quantities. Because such a construction procedure is combinatorially hard, its key objective function is conveniently substituted with a convex approximation. Nevertheless, for large-scale meshes, the resulting mesh sampling procedure remains computationally intensive. In this paper, three different convex approximations that promote sparsity in the solution are considered for constructing reduced meshes that are suitable for hyper reduction, and paired with appropriate active set algorithms for solving the resulting minimization problems. These algorithms are equipped with carefully designed parallel computational kernels in order to accelerate the overall process of mesh sampling for hyper reduction, and therefore achieve practicality for realistic, large-scale, nonlinear structural dynamics problems. Conclusions are also offered as to what algorithm is most suitable for constructing a reduced mesh for the purpose of hyper reduction.

We propose a method to couple the smoothed particle hydrodynamics (SPH) and finite element (FEM) methods for nonlinear transient fluid-structure interaction simulations, adopting different time-steps in the fluid and solid sub-domains. These developments were motivated by the need to simulate highly nonlinear and sudden phenomena that require explicit time integrators on both sub-domains (explicit Newmark for the solid and second-order Runge-Kutta for the fluid). However, because of the critical time-step required for the stability of the explicit time integrators, it becomes important to be able to integrate each sub-domain with a different time-step while respecting the features that a previously developed mono-time-step coupling algorithm offered. For this purpose, a dual-Schur decomposition method originally proposed for structural dynamics was considered, allowing time integrators of the Newmark family with different time-steps to be coupled through Lagrange multipliers.

This paper considers stochastic hybrid stress quadrilateral finite element analysis of plane elasticity equations with stochastic Young's modulus and stochastic loads. First, we apply the Karhunen-Loève expansion to the stochastic Young's modulus and the stochastic loads so as to turn the original problem into a system containing a finite number of deterministic parameters. Then we treat the stochastic field and the spatial field by *k*-version/*p*-version finite element methods and a hybrid stress quadrilateral finite element method, respectively. We derive a priori error estimates that are uniform with respect to the Lamé constant *λ* ∈ (0, +∞). Finally, we provide some numerical results.

This article presents a family of variational integrators from a continuous time point of view. A general procedure for deriving symplectic integration schemes preserving an energy-like quantity is shown, which is based on the principle of virtual work. The framework is extended to incorporate holonomic constraints without using additional regularization. In addition, it is related to well-known partitioned Runge-Kutta methods and to other variational integration schemes. As an example, a concrete integration scheme is derived for the planar pendulum using both polar and Cartesian coordinates.
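The hallmark behavior of the class of schemes discussed above can be seen with the Störmer-Verlet method, itself a classical variational integrator, applied to the planar pendulum in polar coordinates: the energy error stays bounded instead of drifting. This is a generic textbook sketch with illustrative parameter values, not the paper's derived scheme:

```python
import numpy as np

def simulate(theta0, omega0, h, nsteps, g=9.81, l=1.0):
    """Stoermer-Verlet (leapfrog) integration of the planar pendulum
    theta'' = -(g/l) sin(theta). Being variational/symplectic, the scheme
    keeps the energy error bounded with no secular growth."""
    theta, omega = theta0, omega0
    a = -(g / l) * np.sin(theta)
    energies = np.empty(nsteps)
    for k in range(nsteps):
        omega_half = omega + 0.5 * h * a      # half kick
        theta = theta + h * omega_half        # drift
        a = -(g / l) * np.sin(theta)
        omega = omega_half + 0.5 * h * a      # half kick
        # energy per unit mass: (l^2 omega^2)/2 + g l (1 - cos theta)
        energies[k] = 0.5 * l**2 * omega**2 + g * l * (1.0 - np.cos(theta))
    return theta, omega, energies

theta, omega, energies = simulate(theta0=1.0, omega0=0.0, h=0.01, nsteps=10000)
drift = np.max(np.abs(energies - energies[0]))
print(drift)   # small bounded oscillation over 10,000 steps
```

A non-symplectic integrator such as explicit Euler would instead show a steadily growing energy over the same interval.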

In this work, a novel time marching procedure is presented, in which adaptive time integration parameters are computed locally, allowing different spatial and temporal distributions. As a consequence, a more versatile technique arises, enabling enhanced performance. The main features of the proposed technique are: (i) it is simple; (ii) it has guaranteed stability; (iii) it is an efficient non-iterative adaptive single-step procedure; (iv) it provides enhanced accuracy; (v) it enables advanced controllable algorithmic dissipation in the higher modes; (vi) it is truly self-starting; (vii) it is entirely automatic, requiring no input parameter from the user. The proposed technique is thus very complete, providing the main positive attributes expected of an effective time marching procedure. Numerical results are presented throughout the paper, illustrating the excellent performance of the method.

The treatments of heterogeneities and periodic boundary conditions are explored to properly perform isogeometric analysis (IGA) based on NURBS basis functions when solving homogenization problems for heterogeneous media with omni-directional periodicity and composite plates with in-plane periodicity. Since the treatment of combinations of different materials in IGA models is not trivial, especially for periodicity constraints, the first priority is to clearly specify the points at issue in the numerical modeling, or equivalently the mesh generation, for IG homogenization analysis (IGHA). The most awkward, but important, issue is how to generate patches for the NURBS representation of the geometry of a rectangular parallelepiped unit cell so as to realize appropriate deformations in consideration of the convex-hull property of IGA. The issue arises from the introduction of overlapping control points located at angular points in the heterogeneous unit cell, which must satisfy multiple-point constraint (MPC) conditions associated with periodic boundary conditions (PBCs). Although two measures are conceivable, we suggest the use of multiple patches along with double MPCs that impose PBCs and the continuity conditions between different patches simultaneously. Several numerical examples of numerical material and plate tests are presented to demonstrate the validity of the proposed strategy of IG modeling for IGHA.

A large amount of research in computational mechanics has shifted toward atomistic simulations. This trend is due, on one hand, to the increased demand to perform computations at the nano-scale and, on the other hand, to the rather simple application of pairwise potentials in modeling the interactions between atoms of a given crystal. The Cauchy-Born (CB) hypothesis has been used effectively to model the behavior of crystals under different loading conditions, with comparisons against molecular dynamics simulations showing desirable agreement between the results. A number of research works have been devoted to the validity of the CB hypothesis and its application beyond the elastic limit. However, the range of application of the CB hypothesis is limited, and it remains questionable whether it is still applicable beyond its validity limit. In this paper, a multi-scale technique is developed for modeling plastic deformations in nano-scale materials. The deformation gradient is decomposed into plastic and elastic parts, i.e. F = F^{p}F^{e}. This decomposition is in contrast to the conventional decomposition, F = F^{e}F^{p}, generally encountered in continuum and crystal plasticity. It is shown that the former decomposition is more appropriate for the problem dealt with in this work. Inspired by crystal plasticity, the plastic part is determined from the slip on potential slip systems. Based on the assumption that the CB hypothesis remains valid for homogeneous deformations, the elastic deformation gradient resulting from the aforementioned decomposition is employed in conjunction with the CB hypothesis to update the state variables for FCC crystals. The assumption of homogeneity of the elastic deformation gradient is justified by the fact that elastic deformations are considerably smaller than the plastic deformations.
The computational algorithms are derived in detail, and numerical simulations are presented through several examples to demonstrate the capability of the proposed computational algorithm in modeling gold crystals under different loading conditions.
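The reversed decomposition F = F^{p}F^{e} can be made concrete with a small numerical sketch. The slip system, slip increment, and elastic stretch below are illustrative values chosen by us, not taken from the paper:

```python
import numpy as np

# Illustration of the decomposition F = F^p F^e used above
# (note the reversed order relative to the usual F = F^e F^p).
s = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # slip direction (illustrative, FCC-like)
m = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)  # slip-plane normal, with s . m = 0
gamma = 0.05                                  # plastic slip increment (hypothetical)
Fp = np.eye(3) + gamma * np.outer(s, m)       # simple shear on the slip system
Fe = np.eye(3) * 1.01                         # small homogeneous elastic stretch
F = Fp @ Fe                                   # total deformation gradient

# Given F and the slip activity, the elastic part follows by inversion:
Fe_rec = np.linalg.inv(Fp) @ F
print(np.allclose(Fe_rec, Fe))                # True
```

Because the slip direction lies in the slip plane (s · m = 0), det F^{p} = 1, i.e. the plastic part is volume-preserving, consistent with slip-based plasticity.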

We present a framework to efficiently solve large deformation contact problems with nearly incompressible materials by implementing adaptive re-meshing. Specifically, nodally integrated elements are employed to avoid mesh locking when linear triangular or tetrahedral elements are used to facilitate mesh re-generation. Solution variables in the bulk and on contact surfaces are transferred between meshes such that accuracy is maintained and re-equilibration on the new mesh is possible. In particular, the displacement transfer in the bulk is accomplished through a constrained least squares (LS) problem based on nodal integration, while the transfer of contact tractions relies on parallel transport. Finally, a residual-based error indicator is chosen to drive adaptive mesh refinement. The proposed strategies are applicable to both two- and three-dimensional analysis and are demonstrated to be robust by a number of numerical examples.

This contribution presents a novel approach to structural shape optimization that relies on an embedding domain discretization technique. The evolving shape design is embedded within a uniform finite element background mesh, which is then used for the solution of the physical state problem throughout the course of the optimization. We consider a boundary tracking procedure based on adaptive mesh refinement to distinguish between interior elements, exterior elements, and elements intersected by the physical domain boundary. A selective domain integration procedure is employed to account for the geometric mismatch between the uniform embedding domain discretization and the evolving structural component. Thereby, we avoid the need to provide a finite element mesh that conforms to the structural component for every design iteration, as is the case for a standard Lagrangian approach to structural shape optimization. Still, we adopt an explicit shape parametrization that allows for direct manipulation of boundary vertices during the design evolution process. In order to avoid irregular and impracticable design updates, we employ a geometric regularization technique to obtain feasible descent directions throughout the course of the optimization.

It was observed in [1,2] that the strain smoothing technique over higher-order elements and arbitrary polytopes yields less accurate solutions than other techniques such as the conventional polygonal finite element method. In this work, we propose a linear strain smoothing scheme that improves the accuracy of linear and quadratic approximations over convex polytopes. The main idea is to subdivide the polytope into simplicial subcells and use a linear smoothing function in each subcell to compute the strain. This new strain is then used in the computation of the stiffness matrix. The convergence properties and accuracy of the proposed scheme are discussed by solving a few benchmark problems. Numerical results show that the proposed linear strain smoothing scheme enables polytope-based approximations to deliver the same optimal convergence rate as traditional quadrilateral and hexahedral approximations. The accuracy is also improved, and all the methods tested pass the patch test to machine precision.
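The subcell idea can be sketched for a scalar field on a convex polygon: subdivide at the centroid, compute the constant gradient of the linear interpolant on each triangular subcell, and area-average. Reproducing the gradient of a linear field exactly is the essence of passing the patch test; the pentagon and field below are our own toy example, not the paper's vector-valued strain operator:

```python
import numpy as np

def smoothed_gradient(verts, u):
    """Area-weighted ('smoothed') gradient of a nodal field over a convex
    polygon, computed by subdividing into triangles at the centroid -- a
    scalar sketch of the simplicial-subcell smoothing idea."""
    verts = np.asarray(verts, float)
    n = len(verts)
    c = verts.mean(axis=0)               # centroid of the polygon vertices
    uc = np.mean(u)                      # value at centroid (exact for linear fields)
    grad = np.zeros(2)
    area_tot = 0.0
    for k in range(n):
        p0, p1 = verts[k], verts[(k + 1) % n]
        u0, u1 = u[k], u[(k + 1) % n]
        # constant gradient g of the linear interpolant on triangle (p0, p1, c):
        # g . (p1 - p0) = u1 - u0  and  g . (c - p0) = uc - u0
        M = np.column_stack([p1 - p0, c - p0])
        g = np.linalg.solve(M.T, np.array([u1 - u0, uc - u0]))
        area = 0.5 * abs(np.linalg.det(M))
        grad += area * g                 # area-weighted accumulation
        area_tot += area
    return grad / area_tot

# a linear field u = 2x + 3y is reproduced exactly on a pentagon
poly = [(0, 0), (2, 0), (2.5, 1.5), (1, 2.5), (-0.5, 1)]
u = [2 * x + 3 * y for x, y in poly]
print(smoothed_gradient(poly, u))        # ≈ [2. 3.]
```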

This work proposes novel stability-preserving model order reduction approaches for vibro-acoustic finite element models. As most past research on these systems has focused on noise attenuation in the frequency domain, stability-preserving properties were of low priority. However, as interest in time-domain auralization and (model-based) active noise control increases, stability-preserving model order reduction techniques are becoming indispensable. The original finite element models for vibro-acoustic simulation are well established, but carry too high a computational load for these applications. This work therefore proposes two new global approaches for the generation of stable reduced-order models. Based on proven conditions for stability preservation under one-sided projection, a reformulation of the displacement-fluid velocity potential (*u* − *ϕ*) formulation is proposed. In contrast to the regular formulation, the proposed approach leads to a new asymmetric structure for the system matrices that is proven to preserve stability under one-sided projection. The second approach starts from a displacement-pressure (*u* − *p*) description in which the system-level projection space is decoupled for the two domains, for which we also prove the preservation of stability. Two numerical validation cases demonstrate the inadequacy of straightforward model order reduction of typical vibro-acoustic models for time-domain simulation and compare the performance of the proposed approaches. Both proposed approaches effectively preserve the stability of the original system.

The paper deals with two main advantages of the mixed (stress and displacement) format with respect to the more commonly used displacement format in the analysis of slender elastic structures: i) the smaller error in the extrapolations usually employed in the solution strategies of nonlinear problems; ii) the lower polynomial dependence of the problem equations on the finite element degrees of freedom when solid finite elements are used. The smaller extrapolation error yields a lower number of iterations and larger step lengths in path-following analysis, and greater accuracy in the Koiter asymptotic method. To focus on the origin of the phenomenon, the two formats are derived for the same finite element interpolation. The reduced polynomial dependence improves the Koiter asymptotic strategy in terms of computational efficiency, accuracy, and simplicity.

This paper presents a novel class of preconditioners for the iterative solution of the sequence of symmetric positive definite linear systems arising from the numerical discretization of transient parabolic and self-adjoint partial differential equations. The preconditioners are obtained by nesting appropriate projections of reduced-order models into the classical iteration of the preconditioned conjugate gradient (PCG) method. The main idea is to employ the reduced-order solver to project the residual associated with the conjugate gradient iterations onto the space spanned by the reduced bases. This approach is particularly appealing for transient systems, where the full-model solution has to be computed at each time step. In these cases the natural reduced space is the one generated by full-model solutions at previous time steps. As the size of the projection space increases, the proposed methodology greatly reduces the system condition number and the number of PCG iterations at every time step. The cost of applying the preconditioner increases linearly with the size of the projection basis, so a trade-off must be found to effectively reduce the PCG computational cost. The quality and efficiency of the proposed approach are finally tested in the solution of groundwater flow models.
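The nesting idea can be mimicked in a few lines: augment a simple preconditioner with a Galerkin correction on a small basis W. In the paper, W collects full-model solutions from previous time steps; as a self-contained stand-in we use the lowest sine modes of a 1-D diffusion matrix, which play the same role of capturing the slow components. All names and sizes here are our own illustration:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=2000):
    """Preconditioned conjugate gradients; M_inv is a callable r -> z."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1-D diffusion stiffness matrix (SPD), a stand-in for the transient systems
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)

# "Reduced basis" W: lowest sine modes, playing the role of stored
# full-model solutions from previous time steps (an assumption of this sketch).
x_grid = np.arange(1, n + 1) / (n + 1)
W = np.column_stack([np.sin(j * np.pi * x_grid) for j in range(1, 11)])
G = W.T @ (A @ W)                     # small reduced (Galerkin) matrix

def rom_preconditioner(r):
    # residual projection onto span(W) plus identity smoothing; SPD overall
    y = np.linalg.solve(G, W.T @ r)
    return r + W @ y

x_plain, it_plain = pcg(A, b, lambda r: r)
x_rom, it_rom = pcg(A, b, rom_preconditioner)
print(it_plain, it_rom)               # ROM-augmented PCG needs fewer iterations
```

The correction shifts the smallest eigenvalues of the preconditioned operator away from zero, which is what cuts the condition number and the iteration count.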

Among all 3D 8-node hexahedral solid elements in current finite element libraries, the ‘best’ ones can produce good results for bending problems using coarse regular meshes. However, once the mesh is distorted, the accuracy drops dramatically, and how to solve this problem remains an outstanding challenge. This paper develops an 8-node, 24-DOF (three conventional DOFs per node) hexahedral element based on the virtual work principle, in which two different sets of displacement fields are employed simultaneously to formulate an unsymmetric element stiffness matrix. The first set simply utilizes the formulations of the traditional 8-node tri-linear isoparametric element, while the second set mainly employs analytical trial functions in terms of 3D oblique coordinates (*R*, *S*, *T*). The resulting element, denoted US-ATFH8, contains no adjustable factor and can be used for both isotropic and anisotropic cases. Numerical examples show that it strictly passes both the first-order (constant stress/strain) patch test and the second-order patch test for pure bending, removes volume locking, and is invariant under coordinate rotation. In particular, it is insensitive to various severe mesh distortions.

Contact and fracture in the material point method (MPM) require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. A new method is presented in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. This approach avoids the need to construct and evolve explicit cracks or contact surfaces, and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional “self-contact” between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Numerical simulations of dynamic crack propagation are presented to demonstrate the accuracy of the self-contact approach.

This paper presents a new efficient monolithic finite element solution scheme to treat the set of PDEs governing a two-dimensional, biphasic, saturated TPM model with intrinsically coupled and incompressible solid and fluid constituents for infinitesimal and large elastic deformation. Our approach, which inherits some of its techniques from CFD, is characterized by the following aspects: (1) it only performs operator evaluations, with no additional Gâteaux derivatives; in particular, the time-consuming computation of the material tangent matrix is not involved; (2) it solves the nonlinear dynamic problem with no restriction on the strength of the coupling; (3) it is more efficient than the linear **uvp** solver discussed in previous works; (4) it requires weaker derivatives, and hence lower-order finite elements can be tested; (5) the boundary conditions are reduced, solution independent, and more convenient to apply than in the old **uvp** formulation.

For the purpose of validation and comparison, prototypical simulations including analytical solutions are carried out, and at the end an adaptive time stepping (ATS) procedure is introduced to handle the rapid changes in the number of nonlinear iterations that may occur.

We present three new sets of *C*^{1} hierarchical high-order tensor-product bases for conforming finite elements. The first basis is a high-order extension of the Bogner-Fox-Schmit basis. The edge and face functions are constructed using a combination of cubic Hermite and Jacobi polynomials with *C*^{1} global continuity on the common edges of elements. The second basis uses the tensor product of fifth-order Hermite polynomials and high-order functions, and achieves global *C*^{1} continuity for meshes of quadrilaterals and *C*^{2} continuity at the element vertices. The third basis, for triangles, is also constructed using the tensor product of one-dimensional functions defined in barycentric coordinates. It also has global *C*^{1} continuity on edges and *C*^{2} continuity at vertices. A patch test is applied to the three considered elements. Projection and plate problems with smooth fabricated solutions are solved, and the performance of the *h*- and *p*-refinements is evaluated by comparing the approximation errors in the *L*_{2}- and energy norms. A plate with a singularity is then studied, and *h*- and *p*-refinements are analysed. Finally, a transient problem with implicit time integration is considered. The results show exponential convergence rates with increasing polynomial order for triangular and quadrilateral meshes of non-distorted and distorted elements.
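The Kronecker-delta structure that underpins the C^1 continuity of such bases can be checked directly for the one-dimensional cubic Hermite functions from which Bogner-Fox-Schmit-type tensor products are built. This is the generic textbook construction, not the paper's higher-order extension:

```python
import numpy as np

def hermite_cubic(t):
    """Cubic Hermite shape functions on [0, 1], ordered as
    (value-left, slope-left, value-right, slope-right)."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return np.array([h00, h10, h01, h11])

def hermite_cubic_d(t):
    """First derivatives of the four Hermite shape functions."""
    return np.array([6 * t**2 - 6 * t,
                     3 * t**2 - 4 * t + 1,
                     -6 * t**2 + 6 * t,
                     3 * t**2 - 2 * t])

# Kronecker-delta properties: each function "owns" exactly one value or
# slope dof at an endpoint, so sharing those dofs across an element
# interface yields C1 continuity.
print(hermite_cubic(0.0))    # [1. 0. 0. 0.]
print(hermite_cubic(1.0))    # [0. 0. 1. 0.]
print(hermite_cubic_d(0.0))  # [0. 1. 0. 0.]
print(hermite_cubic_d(1.0))  # [0. 0. 0. 1.]
```

The two value functions also form a partition of unity (h00 + h01 = 1), so constants are reproduced exactly, the 1-D analogue of passing the patch test.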

The paper is concerned with the modeling of the planar motion of a horizontal sheet of metal in a rolling mill. Inhomogeneous velocity profiles, with which the material is generated at one roll stand and enters the next one, lead to the time evolution of the deformation of the metal strip.

We propose a novel kinematic description in which the axial coordinate is an Eulerian one, while the transverse motion of the sheet is modeled in a Lagrangian framework. The material volume travels across a finite element mesh whose boundaries are in contact with the roll stands. The concise mathematical formulation of the method differs from the classical form of the arbitrary Lagrangian-Eulerian approach with a rate form of the constitutive relations.

The undeformed state of the strip is incompatible owing to the varying time rate of the generation of material. We treat this phenomenon by introducing the notion of intrinsic strains, which are mathematically described using the multiplicative decomposition of the deformation gradient.

We present the approach here for quasistatic simulations of in-plane elastoplastic deformations of the strip. A practically relevant problem with two strip segments and three roll stands is studied in a numerical example.

Couple stress formulations have received much attention lately due to their ability to explain cases in which the classical theory of elasticity fails to describe the mechanical behavior adequately. Such cases may include size-dependent stiffness, high stress gradients, and the response of materials with microstructure. Here, a new mixed Lagrangian formulation is developed for the elastodynamic response within the consistent size-dependent skew-symmetric couple stress theory. With a specific choice of mixed variables, the formulation can be written with only C^{0} continuity requirements, without the need to introduce a Lagrange multiplier or a penalty method. Furthermore, this formulation permits, for the first time, the determination of natural frequencies within the consistent couple stress theory. Details of the strong form of the equilibrium equations, the constitutive relations, the boundary conditions, and the corresponding weak form are provided. In addition, the discrete forms are also discussed, with two approaches for reducing variables. Several simple two-dimensional computational example problems are then examined, along with a brief investigation of the effect of couple stress on natural frequencies, which exhibit size-dependence for most, but not all, modes.

In this paper, a simple locking-free triangular plate element, labeled here as MTP9 (Mindlin type triangular plate element with nine degrees of freedom), is presented. The element employs an incompatible approximation with nine d.o.f. (degrees of freedom) independent of the nodes and the shape of the triangle to define the displacements *u*/*v*/*w* (which is similar to a general solid element) along the *x*/*y*/*z* axes. It is free of shear locking, has a proper rank, and provides stable solutions for thick and thin plates. Moreover, the paper provides a new way to develop simple and efficient locking-free thick-thin plate/shell elements. A variety of numerical examples demonstrate the convergence, accuracy and robustness of the present element MTP9.

This work introduces a semi-analytical formulation for the simulation and modeling of curved structures based on the Scaled Boundary Finite Element Method (SBFEM). This approach adapts the fundamental idea of the SBFEM concept to scale a boundary to describe a geometry. Until now, scaling in SBFEM has exclusively been performed along a straight coordinate which enlarges, shrinks or shifts a given boundary. In this novel approach, scaling is based on a polar or cylindrical coordinate system such that a boundary is shifted along a curved scaling direction. The derived formulations are used to compute the static and dynamic stiffness matrices of homogeneous curved structures. The resulting elements can be coupled to general SBFEM or FEM domains. For elastodynamic problems computations are performed in the frequency domain. Results of this work are validated using the Global Matrix Method and standard Finite Element analysis. Copyright © 2016 John Wiley & Sons, Ltd.

The variational approach to fracture is effective for simulating the nucleation and propagation of complex crack patterns, but is computationally demanding. The model is a strongly nonlinear non-convex variational inequality that demands the resolution of small length scales. The current standard algorithm for its solution, alternate minimization, is robust but converges slowly and demands the solution of large, ill-conditioned linear subproblems. In this paper, we propose several advances in the numerical solution of this model that improve its computational efficiency. We reformulate alternate minimization as a nonlinear Gauss-Seidel iteration and employ over-relaxation to accelerate its convergence; we compose this accelerated alternate minimization with Newton's method, to further reduce the time to solution; and we formulate efficient preconditioners for the solution of the linear subproblems arising in both alternate minimization and in Newton's method. We investigate the improvements in efficiency on several examples from the literature; the new solver is 5–6× faster on a majority of the test cases considered.
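The effect of over-relaxing an alternate-minimization (nonlinear Gauss-Seidel) iteration can be seen on a toy coupled quadratic; the energy below is our stand-in for the fracture functional, and the relaxation factor is the classical SOR-optimal choice for the resulting linear sweep, not the paper's tuned value:

```python
import numpy as np

def alt_min(rho=0.95, omega=1.0, tol=1e-10, maxit=10000):
    """Alternate minimization of the coupled quadratic
    f(u, v) = u^2/2 + v^2/2 - rho*u*v - u   (a biconvex stand-in),
    viewed as Gauss-Seidel with over-relaxation factor omega.
    Returns the number of sweeps to reach the exact minimizer."""
    u = v = 0.0
    u_star = 1.0 / (1.0 - rho**2)        # exact minimizer: u - rho*v = 1
    v_star = rho / (1.0 - rho**2)        #                  v - rho*u = 0
    for k in range(1, maxit + 1):
        u_new = rho * v + 1.0            # argmin over u with v fixed
        u = u + omega * (u_new - u)      # over-relaxation step
        v_new = rho * u                  # argmin over v with u fixed
        v = v + omega * (v_new - v)
        if max(abs(u - u_star), abs(v - v_star)) < tol:
            return k
    return maxit

rho = 0.95
it_plain = alt_min(rho=rho, omega=1.0)
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))  # classical SOR-optimal factor
it_or = alt_min(rho=rho, omega=omega_opt)
print(it_plain, it_or)   # over-relaxation converges in far fewer sweeps
```

For strong coupling (rho near 1), plain alternate minimization contracts at rate rho^2 per sweep, while the over-relaxed iteration contracts at rate omega_opt − 1, which is the same slow-convergence/acceleration trade-off the paper exploits on the fracture energy.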

Accurate sizing functions are crucial for the efficient generation of high-quality meshes, but defining the sizing function is often the bottleneck in complicated mesh generation tasks due to the tedious user interaction involved. We present a novel algorithm to automatically create high-quality sizing functions for surface mesh generation. First, the tessellation of a CAD model is taken as the background mesh, in which an initial sizing function is defined by considering geometrical factors and user-specified parameters. Then, a convex nonlinear programming problem is formulated and solved efficiently to obtain a smoothed sizing function that corresponds to a mesh satisfying the necessary gradient constraint conditions and containing a significantly reduced number of elements. Finally, this sizing function is applied in an advancing front mesher. With the aid of a walk-through algorithm, an efficient sizing-value query scheme is developed. Meshing experiments on some very complicated geometric models are presented to demonstrate that the proposed sizing-function approach enables accurate and fully automatic surface mesh generation.
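The gradient constraint that the smoothing step enforces can be illustrated with the classical Dijkstra-style gradient-limiting sweep. This is a deliberate simplification of the paper's convex-programming formulation (the function name and the 1-D example are ours):

```python
import heapq
import math

def limit_sizing(nodes, edges, h, g=0.2):
    """Limit a nodal sizing function so that |h_j - h_i| <= g * dist(i, j)
    along every background-mesh edge, sweeping from the smallest sizes
    outward (Dijkstra-style, with lazy deletion of stale heap entries)."""
    adj = {i: [] for i in range(len(nodes))}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    h = list(h)
    heap = [(hi, i) for i, hi in enumerate(h)]
    heapq.heapify(heap)
    while heap:
        hi, i = heapq.heappop(heap)
        if hi > h[i]:
            continue                       # stale entry; node already tightened
        for j in adj[i]:
            bound = h[i] + g * math.dist(nodes[i], nodes[j])
            if h[j] > bound + 1e-12:
                h[j] = bound               # tighten to satisfy the constraint
                heapq.heappush(heap, (h[j], j))
    return h

# four collinear nodes with a small user-specified size at the first one
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3)]
print(limit_sizing(nodes, edges, [0.1, 1.0, 1.0, 1.0]))  # ≈ [0.1, 0.3, 0.5, 0.7]
```

The sweep only ever decreases sizing values, so user-specified minima are preserved while the g-bounded growth away from them keeps neighboring element sizes compatible.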

This paper builds on recent work developed by the authors for the numerical analysis of large strain solid dynamics, by introducing an upwind cell centred hexahedral Finite Volume framework implemented within the open source code OpenFOAM [http://www.openfoam.com]. In Lee, Gil and Bonet [1], a first order hyperbolic system of conservation laws was introduced in terms of the linear momentum and the deformation gradient tensor of the system, leading to excellent behaviour in two dimensional bending dominated nearly incompressible scenarios. The main aim of this paper is the extension of this algorithm into three dimensions, its tailor-made implementation into OpenFOAM and the enhancement of the formulation with three key novelties. First, the introduction of two different strategies in order to ensure the satisfaction of the underlying involutions of the system, that is, that the deformation gradient tensor must be curl-free throughout the deformation process. Second, the use of a discrete angular momentum projection algorithm and a monolithic Total Variation Diminishing Runge-Kutta time integrator combined in order to guarantee the conservation of angular momentum. Third, and for comparison purposes, an adapted Total Lagrangian version of the Hyperelastic-GLACE nodal scheme of Kluth and Després [2] is presented. A series of challenging numerical examples are examined in order to assess the robustness and accuracy of the proposed algorithm, benchmarking it against an ample spectrum of alternative numerical strategies developed by the authors in recent publications. Copyright © 2016 John Wiley & Sons, Ltd.

This paper describes a novel methodology that combines smoothed discrete particle hydrodynamics (SDPH) and the finite volume method (FVM) to enhance the effective performance in solving problems of gas-particle multiphase flow. To describe the collision and fluctuation of particles, the method also introduces a new parameter, the granular temperature, according to the kinetic theory of granular flow. The coupled SDPH–FVM framework has been established, in which the drag force and pressure gradient act on the SDPH particles and the momentum sources of the drag force are added back onto the FVM mesh. The proposed technique is a coupled discrete–continuum method based on the two-fluid model. For the discrete phase, SDPH is developed from smoothed particle hydrodynamics (SPH): the properties of SPH are redefined and new physical quantities are added to the traditional SPH parameters, making SDPH better suited to representing particle characteristics. For the continuum phase, FVM is employed to discretize the continuum flow field on a stationary grid by capturing fluid characteristics. The coupled method exhibits strong efficiency and accuracy in several 2-D numerical simulations. This article is protected by copyright. All rights reserved.

In this paper we develop a dual-horizon peridynamics (DH-PD) formulation that naturally includes varying horizon sizes and completely solves the “ghost force” issue. To this end, the concept of the dual horizon is introduced to account for the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills the balances of both linear momentum and angular momentum exactly. Neither the “partial stress tensor” nor the “slice” technique is needed to ameliorate the ghost force issue of [1]. We show that traditional peridynamics can be derived as a special case of the present DH-PD. All three peridynamic formulations, namely bond-based, ordinary state-based and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. Our DH-PD formulation allows for *h*-adaptivity and can be implemented in any existing peridynamics code with minimal changes. A simple adaptive refinement procedure is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method. This article is protected by copyright. All rights reserved.
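
The balance of linear momentum under unequal horizons can be seen in a small 1-D bond-based sketch: each particle applies a direct force through its own horizon and receives the reaction of every bond it owns, which is the dual-horizon idea in its simplest form (the micromodulus `c`, the non-uniform horizons, and the 1-D setting are all illustrative).

```python
import numpy as np

def dual_horizon_forces(x, u, horizon, c):
    """1-D bond-based dual-horizon PD sketch: a bond (i -> j) exists when
    j lies inside i's horizon; its force acts on i directly and on j as a
    reaction, so the total force vanishes even for non-uniform horizons."""
    f = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(len(x)):
            xi = x[j] - x[i]
            if i == j or abs(xi) >= horizon[i]:
                continue
            eta = u[j] - u[i]
            s = (abs(xi + eta) - abs(xi)) / abs(xi)   # bond stretch
            fij = c[i] * s * np.sign(xi + eta)        # direct force on i
            f[i] += fij
            f[j] -= fij                               # dual-horizon reaction
    return f
```

Summing the force array over all particles returns zero to machine precision even though the horizons (and hence the one-sided neighbor sets) are deliberately unbalanced.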

A shear-flexible isogeometric Reissner–Mindlin shell formulation using non-uniform rational B-splines basis functions is introduced, which is used for the demonstration of a coupling approach for multiple non-conforming patches. The six degrees of freedom formulation uses the exact surface normal vectors and curvature. The shell formulation is implemented in an isogeometric analysis framework for computation of structures composed of multiple geometric entities. To enable local model refinement as well as coupling of non-matching domains, a conservative multi-patch approach using Lagrange multipliers for structured non-uniform rational B-splines patches is presented. Here, an additional border frame mesh is used to couple geometries during structural analyses. This frame interface approach avoids the problem of excessive constraints when multiple patches are coupled at one point. First, the shell formulation is verified with several reference cases. Then the influence of the frame interface discretization and frame penalty stiffness on the smoothness of the results is investigated. The effects of the perturbed Lagrangian method in combination with the frame interface approach are shown. In addition, results of models with T-joint interface connections and perpendicular stiffener patches are presented. Copyright © 2016 John Wiley & Sons, Ltd.

A non-gradient-based approach for topology optimization using a genetic algorithm is proposed in this paper. The genetic algorithm is assisted by the Kriging surrogate model to reduce the computational cost required for function evaluation. To validate the non-gradient-based topology optimization method in flow problems, this research focuses on two single-objective optimization problems, where the objective functions are to minimize pressure loss and to maximize heat transfer of flow channels, and one multi-objective optimization problem, which combines the two earlier single-objective problems. The shape of the flow channels is represented by the level set function. The pressure loss and the heat transfer performance of the channels are evaluated by the Building-Cube Method code, a Cartesian-mesh CFD solver. In the single-objective problems, the topologies obtained with the proposed method agreed with those of a previous study, and in the multi-objective problem global exploration of non-dominated solutions was achieved. © 2016 The Authors International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd

Reproducing kernel particle method (RKPM) has been applied to many large deformation problems. RKPM relies on polynomial reproducing conditions to yield desired accuracy and convergence properties but requires appropriate kernel support coverage of neighboring nodes to ensure kernel stability. This kernel stability condition is difficult to achieve for problems with large particle motion such as the fragment-impact processes that commonly exist in extreme events. A new reproducing kernel formulation with ‘quasi-linear’ reproducing conditions is introduced. In this approach, the first-order polynomial reproducing conditions are approximately enforced to yield a nonsingular moment matrix. With proper error control of the first-order completeness, nearly second-order convergence rate in L2 norm can be achieved while maintaining kernel stability. The effectiveness of this quasi-linear RKPM formulation is demonstrated by modeling several extremely large deformation and fragment-impact problems. Copyright © 2016 John Wiley & Sons, Ltd.
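
The role of the moment matrix and of the linear reproducing conditions can be sketched in 1-D with standard linear RK shape functions; the paper's quasi-linear variant relaxes the exact enforcement below so that the moment matrix stays nonsingular under poor kernel coverage (the kernel and support size here are illustrative choices).

```python
import numpy as np

def rk_shape_functions(x, nodes, a):
    """1-D reproducing kernel shape functions with linear basis H = [1, z]:
    Psi_I(x) = H(0)^T M(x)^{-1} H(x_I - x) w(x_I - x), where M is the
    moment matrix; M is nonsingular only with enough nodes in support."""
    z = nodes - x
    w = np.where(np.abs(z) < a, (1.0 - np.abs(z) / a)**3, 0.0)  # kernel
    H = np.vstack([np.ones_like(z), z])       # linear basis at each node
    M = (w * H) @ H.T                         # 2x2 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))   # = M^{-1} H(0)
    return w * (b @ H)
```

With adequate coverage the resulting shape functions satisfy partition of unity and reproduce the linear field exactly, which is precisely the property the quasi-linear formulation trades off (approximately) against kernel stability.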

We propose a formulation for linear elastic fracture mechanics in which the stress intensity factors are found directly from the solution vector of an extended boundary element method formulation. The enrichment is embedded in the boundary element method formulation, rather than adding new degrees of freedom for each enriched node. Therefore, a very limited number of new degrees of freedom is added to the problem, which contributes to preserving the conditioning of the linear system of equations. The Stroh formalism is used to provide boundary element method fundamental solutions for any degree of anisotropy, and these are used for both conventional and enriched degrees of freedom. Several numerical examples are shown with benchmark solutions to validate the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

The fractional step method (FSM) is an efficient solution technique for the particle finite element method, a Lagrangian-based approach to simulate fluid–structure interaction (FSI). Despite various refinements, the applicability of the FSM has been limited to low viscosity flow and FSI simulations with a small number of equations along the fluid–structure interface. To overcome these limitations, while incorporating nonlinear response in the structural domain, an FSM that unifies structural and fluid response in the discrete governing equations is developed using the quasi-incompressible formulation. With this approach, fluid and structural particles do not need to be treated separately, and both domains are unified in the same system of equations. Thus, the equations along the fluid–structure interface do not need to be segregated from the fluid and structural domains. Numerical examples compare the unified FSM with the non-unified FSM and show that the proposed method overcomes the slow convergence of the non-unified FSM for high values of viscosity. As opposed to the non-unified FSM, the number of iterations required for convergence with the unified FSM becomes independent of viscosity and time step, and the simulation run time does not depend on the size of the FSI interface. Copyright © 2016 John Wiley & Sons, Ltd.

Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models (ROMs) can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based ROMs, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full-order model is recovered to within discretization error. A parallel version of the resulting method can be used on supercomputers to generate proper orthogonal decomposition-based ROMs, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space. Copyright © 2016 John Wiley & Sons, Ltd.
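
The single-pass idea can be sketched with a rank-adaptive incremental SVD that folds in one snapshot at a time, so the full snapshot matrix is never stored (a Brand-style update; the tolerance and truncation rule below are illustrative, and the paper's time-adaptive snapshot selection is omitted).

```python
import numpy as np

def incremental_pod(snapshots, tol=1e-10):
    """Single-pass incremental SVD: maintain a thin factorization (U, S)
    and update it snapshot by snapshot, growing the rank only when a
    snapshot has a significant out-of-basis component."""
    U, S = None, None
    for c in snapshots:
        c = np.asarray(c, dtype=float).ravel()
        if U is None:
            s = np.linalg.norm(c)
            U, S = (c / s)[:, None], np.array([s])
            continue
        d = U.T @ c                    # in-basis coefficients
        r = c - U @ d                  # out-of-basis residual
        rho = np.linalg.norm(r)
        k = len(S)
        if rho > tol:                  # rank grows by one
            Ue = np.hstack([U, (r / rho)[:, None]])
            K = np.zeros((k + 1, k + 1))
            K[:k, :k], K[:k, k], K[k, k] = np.diag(S), d, rho
        else:                          # snapshot lies in current basis
            Ue = U
            K = np.hstack([np.diag(S), d[:, None]])
        Uk, S, _ = np.linalg.svd(K, full_matrices=False)
        U = Ue @ Uk
        keep = S > tol * S[0]          # drop negligible modes
        U, S = U[:, keep], S[keep]
    return U, S
```

On an exactly low-rank snapshot set, the streaming factorization recovers the same dominant singular values and subspace as a batch SVD without ever forming the full matrix.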

The identification of the geological structure from seismic data is formulated as an inverse problem. The properties and the shape of the rock formations in the subsoil are described by material and geometric parameters, which are taken as input data for a predictive model. Here, the model is based on the Helmholtz equation, describing the acoustic response of the system for a given wave length. Thus, the inverse problem consists in identifying the values of these parameters such that the output of the model best agrees with the observations. Solving this optimization problem requires multiple queries to the model with different values of the parameters. Reduced order models are especially well suited to significantly reduce the computational overhead of the multiple evaluations of the model.

In particular, the proper generalized decomposition produces a solution explicitly stating the parametric dependence, where the parameters play the same role as the physical coordinates. A proper generalized decomposition solver is devised to inexpensively explore the parametric space along the iterative process. This exploration of the parametric space is in fact seen as a post-process of the generalized solution. The approach adopted demonstrates its viability when tested in two illustrative examples. Copyright © 2016 John Wiley & Sons, Ltd.
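
The core enrichment step, computing one separated mode by an alternating fixed point, can be sketched on a function sampled on a tensor grid, where the parameter p plays the same role as the coordinate x (a generic greedy rank-one sketch under those assumptions, not the paper's Helmholtz solver).

```python
import numpy as np

def pgd_modes(F, n_modes, n_iter=100):
    """Greedy separated representation F(x, p) ~ sum_k X_k(x) P_k(p).
    Each mode is found by alternating one-dimensional least-squares
    solves (the PGD fixed point), then subtracted from the residual."""
    R = F.astype(float).copy()
    modes = []
    for _ in range(n_modes):
        P = np.ones(R.shape[1])
        for _ in range(n_iter):
            X = R @ P / (P @ P)      # best X for frozen P
            P = R.T @ X / (X @ X)    # best P for frozen X
        modes.append((X, P))
        R = R - np.outer(X, P)       # enrichment: remove captured mode
    return modes, R
```

For a function that is exactly a sum of two separable terms, two enrichment sweeps reduce the residual to round-off, and evaluating the surrogate for any new parameter value is then just a weighted sum of the stored parametric modes.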

This paper proposes an efficient metamodeling approach for uncertainty quantification of complex systems based on the Gaussian process model (GPM). The proposed GPM-based method is able to efficiently and accurately calculate the mean and variance of model outputs with uncertain parameters specified by arbitrary probability distributions. Because of the use of the GPM, closed-form expressions of the mean and variance can be derived by decomposing high-dimensional integrals into one-dimensional integrals. This paper details how to efficiently compute these one-dimensional integrals. When the parameters are either uniformly or normally distributed, the one-dimensional integrals can be evaluated analytically; when the parameters follow other distributions, this paper adopts the effective Gaussian quadrature technique for their fast computation. As a result, the developed GPM method is able to calculate the mean and variance of model outputs in an efficient manner independent of the parameter distributions. The proposed GPM method is applied to a collection of examples, and its accuracy and efficiency are compared with Monte Carlo simulation, which is used as the benchmark solution. Results show that the proposed GPM method is feasible and reliable for efficient uncertainty quantification of complex systems in terms of computational accuracy and efficiency. Copyright © 2016 John Wiley & Sons, Ltd.
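
The dimension-by-dimension factorization is the key trick: with a squared-exponential kernel and independent inputs, the high-dimensional expectation of the GP posterior mean splits into products of one-dimensional integrals, each of which can be handled by Gauss-Hermite quadrature when no closed form applies. The kernel, data, and names below are illustrative, not the paper's test cases.

```python
import numpy as np

def gp_mean_expectation(X, alpha, ell, mu, sigma, n_gh=20):
    """E[m(x)] for m(x) = sum_i alpha_i prod_d exp(-(x_d - X_id)^2 / (2 ell^2))
    with independent x_d ~ N(mu_d, sigma_d^2): the D-dim integral factorizes,
    and each 1-D factor is computed by Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite_e.hermegauss(n_gh)  # weight exp(-t^2/2)
    w = w / w.sum()                                  # expectation over N(0,1)
    E = np.ones(X.shape[0])
    for d in range(X.shape[1]):
        xd = mu[d] + sigma[d] * t                    # nodes mapped to N(mu, s^2)
        k1d = np.exp(-(xd[:, None] - X[None, :, d])**2 / (2.0 * ell**2))
        E *= w @ k1d                                 # 1-D expectation, per point
    return alpha @ E
```

For normally distributed inputs the same 1-D integral also has the closed form ell/sqrt(ell^2 + sigma_d^2) * exp(-(mu_d - X_id)^2 / (2 (ell^2 + sigma_d^2))), which the quadrature reproduces to high accuracy.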

A nonparametric probabilistic approach for modeling uncertainties in projection-based, nonlinear, reduced-order models is presented. When experimental data are available, this approach can also quantify uncertainties in the associated high-dimensional models. The main underlying idea is twofold. First, to substitute the deterministic reduced-order basis (ROB) with a stochastic counterpart. Second, to construct the probability measure of the stochastic reduced-order basis (SROB) on a subset of a compact Stiefel manifold in order to preserve some important properties of a ROB. The stochastic modeling is performed so that the probability distribution of the constructed SROB depends on a small number of hyperparameters. These are determined by solving a reduced-order statistical inverse problem. The mathematical properties of this novel approach for quantifying model uncertainties are analyzed through theoretical developments and numerical simulations. Its potential is demonstrated through several example problems from computational structural dynamics. Copyright © 2016 John Wiley & Sons, Ltd.

By separation of scales and the homogenization of a flow through porous media, a two-scale problem arises where a Darcy-type flow is present on the macroscale and a Stokes flow on the subscale. In this paper, the problem is given as the minimization of a potential. Additional constraints imposing periodicity in a weak sense are added using Lagrange multipliers. In particular, the upper and lower energy bounds for the corresponding strongly periodic problem are produced, quantifying the accuracy of the weakly periodic boundary conditions. A numerical example demonstrates the evaluation of energy bounds and the performance of weakly periodic boundary conditions on a representative volume element. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd

In this paper, a four-node quadrilateral flat shell element is proposed for geometrically nonlinear analysis based on an updated Lagrangian formulation with the co-rotational kinematics concept. The flat shell element combines the membrane element with drilling degrees of freedom and the plate element with shear deformation. By means of these linearized elements, a simplified nonlinear analysis procedure allowing for warping of the flat shell element and large rotations is proposed. The tangent stiffness matrix and the internal force recovery are formulated in this paper. Several classic benchmark examples are presented to validate the accuracy and efficiency of the proposed element for practical engineering analysis of shell structures. Copyright © 2016 John Wiley & Sons, Ltd.

It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits ‘fictitious’ mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method, which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. We present results, which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions. Copyright © 2016 John Wiley & Sons, Ltd.
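
A generic smooth threshold on the simplex with an analytic Jacobian can be sketched as a normalized power map; this is an illustrative substitute with the same qualitative behavior, not the paper's thresholding function.

```python
import numpy as np

def threshold(v, p=4.0):
    """Smooth threshold of phase volume fractions v (a point on a simplex):
    z_i = v_i^p / sum_j v_j^p pushes mixtures toward pure phases while
    keeping sum(z) = 1 and smooth derivatives."""
    vp = np.power(v, p)
    return vp / vp.sum(axis=-1, keepdims=True)

def threshold_jac(v, p=4.0):
    """Analytic Jacobian dz_i/dv_j, as needed by gradient-based optimizers."""
    vp = np.power(v, p)
    s = vp.sum()
    z = vp / s
    dvp = p * np.power(v, p - 1)
    return np.diag(dvp) / s - np.outer(z, dvp) / s
```

The Jacobian can be checked against central finite differences, which is essentially the sensitivity-analysis consistency test the abstract alludes to.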

This work discusses a discontinuous Galerkin (DG) discretization for two-phase flows. The fluid interface is represented by a level set, and the DG approximation space is adapted such that jumps and kinks in pressure and velocity fields can be approximated sharply. This adaptation of the original DG space, which can be performed ‘on-the-fly’ for arbitrary interface shapes, is referred to as extended discontinuous Galerkin. By combining this ansatz with a special quadrature technique, one can regain spectral convergence properties for low-regularity solutions, which is demonstrated by numerical examples. This work focuses on the aspects of spatial discretization, and special emphasis is placed on how to overcome problems related to quadrature, small cut cells, and the condition number of linear systems. Temporal discretization will be discussed in future works. Copyright © 2016 John Wiley & Sons, Ltd.

Digital imaging technologies such as X-ray scans and ultrasound provide a convenient and non-invasive way to capture high-resolution images. The colour intensity of digital images provides information on the geometrical features and material distribution which can be utilised for stress analysis. The proposed approach employs an automatic and robust algorithm to generate quadtree (2D) or octree (3D) meshes from digital images. The use of polygonal elements (2D) or polyhedral elements (3D) constructed by the scaled boundary finite element method avoids the issue of hanging nodes (mesh incompatibility) commonly encountered by finite elements on quadtree or octree meshes. The computational effort is reduced by considering the small number of cell patterns occurring in a quadtree or an octree mesh. Examples with analytical solutions in 2D and 3D are provided to show the validity of the approach. Other examples including the analysis of 2D and 3D microstructures of concrete specimens as well as of a domain containing multiple spherical holes are presented to demonstrate the versatility and the simplicity of the proposed technique. Copyright © 2016 John Wiley & Sons, Ltd.
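
The image-to-quadtree step can be sketched with a recursive intensity-range splitting rule; the tolerance and minimum cell size below are illustrative, and the paper additionally builds scaled boundary polygon/polyhedron cells on the resulting leaves, which is omitted here.

```python
import numpy as np

def quadtree(img, x0=0, y0=0, size=None, tol=10, min_size=2):
    """Recursively split a square grayscale image into quadtree leaves:
    a cell is split while its intensity range exceeds tol and it is
    larger than min_size. Returns the leaf cells as (x, y, size)."""
    if size is None:
        size = img.shape[0]            # assume a square, power-of-two image
    cell = img[y0:y0 + size, x0:x0 + size]
    if size <= min_size or cell.max() - cell.min() <= tol:
        return [(x0, y0, size)]        # homogeneous enough: keep as leaf
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree(img, x0 + dx, y0 + dy, h, tol, min_size)
    return leaves
```

An image split into two uniform halves yields exactly four leaves (one refinement level), while a uniform image stays a single cell, and in both cases the leaves tile the image.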

A new isogeometric density-based approach for the topology optimization of multi-material structures is presented. In this method, the density fields of multiple material phases are represented using the isogeometric non-uniform rational B-spline-based parameterization leading to exact modeling of the geometry, removing numerical artifacts and full analytical computation of sensitivities in a cost-effective manner. An extension of the perimeter control technique is introduced where restrictions are imposed on the perimeters of density fields of all phases. Consequently, one can control not only the complexity of the optimal design but also the minimal length scales of all material phases. This leads to optimal designs with significantly enhanced manufacturability and comparable performance. Unlike the common element-wise or nodal-based density representations, owing to higher order continuity of density fields in this method, their gradients required for perimeter control restrictions are calculated exactly without additional computational cost. The problem is formulated with constraints on either (1) volume fractions of different material phases or (2) the total mass of the structure. The proposed method is applied for the minimal compliance design of two-dimensional structures consisting of multiple distinct materials as well as functionally graded ones. Copyright © 2016 John Wiley & Sons, Ltd.

We propose a numerical method for a fluid–structure interaction problem. The material of the structure is homogeneous and isotropic, and it can be described by the compressible neo-Hookean constitutive equation, while the fluid is governed by the Navier–Stokes equations. Our study does not use a turbulence model. An updated Lagrangian method is used for the structure, and the fluid equations are written in Arbitrary Lagrangian–Eulerian coordinates. One global moving mesh is employed for the fluid–structure domain, where the fluid–structure interface is an ‘interior boundary’ of the global mesh. At each time step, we solve a monolithic system of unknown velocity and pressure defined on the global mesh. The continuity of velocity at the interface is automatically satisfied, while the continuity of stress does not appear explicitly in the monolithic fluid–structure system. This method is very fast because at each time step we solve only one linear system, obtained by linearizing the structure around its previous position in the updated Lagrangian formulation and by employing a linear convection term for the fluid. Numerical results are presented. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, we present a generalized prismatic hybrid meshing method for viscous flow simulations. One major difficulty in implementing a robust prismatic hybrid meshing tool is handling boundary layer mesh collisions, which normally requires an extra data structure (e.g. a quadtree in two dimensions and an octree in three dimensions). The proposed method overcomes this difficulty via a heuristic approach, relying only on constrained Delaunay triangulation/tetrahedralization (CDT); no extra data structures are required. Geometrical reasoning is used to approximate the maximum marching distance of each point by walking through the CDT. This is combined with post-processing of marching vectors and distances and the prohibition of multilevel differences to form an automatic and robust mechanism for removing boundary layer mesh collisions. Benefiting from the maturity of CDT techniques, the proposed method is robust, efficient and simple to implement. Its capability is demonstrated by generating quality prismatic hybrid meshes for industrial models with complex geometries. The proposed method should considerably reduce the effort required to implement a robust hybrid prismatic mesh generator for viscous flow simulations. © 2016 The Authors. International Journal for Numerical Methods in Engineering. Published by John Wiley & Sons Ltd.

A computational framework for assisting in the development of novel textiles is presented. Electronic textiles are key in the rapidly growing field of wearable electronics for both consumer and military uses. There are two main challenges in the modeling of electronic textiles: the discretization of the textile microstructure and the interaction between electromagnetic and mechanical fields. A director-based beam formulation with an assumed electrical current is used to discretize the fabric at the level of individual fibrils. The open-source package FEniCS was used to implement the finite element model. Contact integrals were added into the FEniCS framework so that multiphysics contact laws can be incorporated in the same framework, leveraging the code generation and automated differentiation capabilities of FEniCS to produce the tangents needed by the implicit solution method. The computational model is used to construct a representative volume element of a plain woven textile and to determine its mechanical, thermal, and electrical properties. Dynamic relaxation is used to solve the mechanical fields, while the electrical and thermal fields are solved statically for a given mechanical state. The simulated electrical responses are fit to a simplified Kirchhoff network model to determine effective resistances of the textile. Copyright © 2016 John Wiley & Sons, Ltd.

This paper develops and demonstrates a strategy for identifying the lack of dynamic interaction, or coupling, between potential design modifications to a vibrating base structure. Such decoupling may be exploited to efficiently compare the performance of competing engineering designs. In particular, it is shown how different design modifications may be represented as the addition or removal of substructures. When these substructures are uncoupled according to the metric developed here, the computational cost of determining the optimal system design can be greatly reduced. For example, if a designer considers seven possible modifications and wishes to examine all possible combinations of the modifications, 128 possible structures must be analyzed. However, if all modifications are dynamically uncoupled, significant computational effort need only be spent on eight of the possible structures in order to generate the responses for all remaining designs. Example problems demonstrate this cost reduction and illustrate cases where dynamic uncoupling occurs. Copyright © 2016 John Wiley & Sons, Ltd.

This article revisits a topology optimization design approach for micro-manufacturing and extends it to optical microlithography with partially coherent illumination. The solution is based on a combination of two technologies: topology optimization and proximity error correction in microlithography/nanolithography. The key steps include (i) modeling the physical inputs of the fabrication process, including the ultraviolet light illumination source and the mask, as the design variables in optimization and (ii) applying physical filtering and Heaviside projection for topology optimization, which correspond to the aerial image formation and the pattern development processes, respectively. The proposed approach results in an effective source and a binary design mask, which can be sent directly to fabrication without additional post-processing steps for proximity error correction. Meanwhile, the performance of the device is optimized and robust with respect to process variations, such as dose/photo-resist variations and lens defocus. A compliant micro-gripper design example is considered to demonstrate the applicability of this approach. Copyright © 2016 John Wiley & Sons, Ltd.

A new higher-order accurate method is proposed that combines the advantages of the classical *p*-version of the FEM on body-fitted meshes with embedded domain methods. A background mesh composed of higher-order Lagrange elements is used. Boundaries and interfaces are described implicitly by the level set method and may lie within elements. In the elements cut by the boundaries or interfaces, an automatic decomposition into higher-order accurate sub-elements is realized. To this end, the zero level sets are detected and meshed in a first step, which is called reconstruction. Then, based on the topological situation in the cut element, higher-order sub-elements are mapped to the two sides of the boundary or interface. The quality of the reconstruction and the mapping largely determines the properties of the resulting, automatically generated conforming mesh. It is found that optimal convergence rates are possible although the resulting sub-elements are not always well-shaped. Copyright © 2016 John Wiley & Sons, Ltd.

The contribution of this work is the implementation of a new elastic solution method for thick laminated composites and sandwich structures based on a generalized unified formulation using finite elements. A quadrilateral four-node element was developed and evaluated using an in-house finite element program. The C1 continuity requirements are fulfilled for the transverse displacement field variable. This method is tagged as Caliri's generalized formulation. The proposed solution method yielded coherent results, with deviations as low as 0.05% for a static simply supported symmetric laminate and 0.5% for the modal analyses of a soft-core sandwich structure. Copyright © 2016 John Wiley & Sons, Ltd.

A robust and efficient strategy is proposed to simulate mechanical problems involving cohesive fractures. This class of problems is characterized by a global structural behavior that is strongly affected by localized nonlinearities at relatively small-sized critical regions. The proposed approach is based on the division of a simulation into a suitable number of sub-simulations where adaptive mesh refinement is performed only once based on refinement window(s) around crack front process zone(s). The initialization of Newton-Raphson nonlinear iterations at the start of each sub-simulation is accomplished by solving a linear problem based on a secant stiffness, rather than a volume mapping of nonlinear solutions between meshes. The secant stiffness is evaluated using material state information stored/read on crack surface facets which are employed to explicitly represent the geometry of the discontinuity surface independently of the volume mesh within the generalized finite element method framework. Moreover, a simplified version of the algorithm is proposed for its straightforward implementation into existing commercial software. Data transfer between sub-simulations is not required in the simplified strategy. The computational efficiency, accuracy, and robustness of the proposed strategies are demonstrated by an application to cohesive fracture simulations in 3-D. Copyright © 2016 John Wiley & Sons, Ltd.

A new nonconforming brick element is introduced, which has only 13 DOFs locally. Its vector-valued version, together with a discontinuous pressure approximation, generates an inf-sup stable finite element pair of order 2 for the Stokes problem in the energy norm. Copyright © 2016 John Wiley & Sons, Ltd.

Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Because it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. The main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting. Copyright © 2016 John Wiley & Sons, Ltd.

In this work, a mixed variational formulation to simulate quasi-incompressible electro-active or magneto-active polymers immersed in the surrounding free space is presented. A novel domain decomposition is used to disconnect the primary coupled problem and the arbitrary free-space mesh update problem. Exploiting this decomposition, we describe a block-iterative approach to solving the linearised multiphysics problem, and a physically and geometrically based, three-parameter method to update the free space mesh. Several application-driven example problems are implemented to demonstrate the robustness of the mixed formulation for both electro-elastic and magneto-elastic problems involving both finite deformations and quasi-incompressible media. Copyright © 2016 John Wiley & Sons, Ltd.

Uniform grid solvers of the periodic Lippmann–Schwinger equation have been introduced by Moulinec and Suquet for the numerical homogenization of heterogeneous materials. Based on the fast Fourier transform, these methods use the strain as main unknown and usually do not produce displacement fields. While this is generally not perceived as a restriction for homogenization purposes, some tasks might require kinematically admissible displacement fields. In this paper, we show how the numerical solution to the periodic Lippmann–Schwinger equation can be post-processed to reconstruct a displacement field. Our procedure applies to any variant of the Moulinec–Suquet solver. The reconstruction is formulated as an auxiliary elastic equilibrium problem of a homogeneous material, which is solved with displacement-based finite elements. Taking advantage of periodicity, uniformity of the grid and homogeneity of the material, the resulting linear system is formulated and solved efficiently in Fourier space. The cost of our procedure is lower than that of one iteration of the Lippmann–Schwinger solver. Two applications are proposed, in two and three dimensions. In the first application, the reconstructed displacement field is used to compute a rigorous upper bound on the effective shear modulus. In the second application, the quality of the reconstruction is assessed quantitatively. Copyright © 2016 John Wiley & Sons, Ltd.
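
The flavour of such a reconstruction can be conveyed by a 1D periodic analogue (our own toy setting, not the paper's finite-element-based procedure): given a strain field on a uniform grid, a compatible zero-mean displacement is obtained by one division and one inverse FFT.

```python
import numpy as np

def reconstruct_displacement_1d(strain, L=2.0 * np.pi):
    """Reconstruct a periodic displacement from its strain on a uniform
    grid by solving the 1D compatibility equation u' = eps in Fourier
    space (a toy analogue of the post-processing step)."""
    n = strain.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # wave numbers
    eps_hat = np.fft.fft(strain)
    u_hat = np.zeros_like(eps_hat)
    nonzero = k != 0
    u_hat[nonzero] = eps_hat[nonzero] / (1j * k[nonzero])  # integrate
    # the k = 0 mode (rigid translation) is fixed to zero mean
    return np.fft.ifft(u_hat).real

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u_exact = np.sin(x)        # periodic displacement
strain = np.cos(x)         # its exact strain u'
u_rec = reconstruct_displacement_1d(strain)
err = np.max(np.abs(u_rec - u_exact))
```

For a band-limited field the reconstruction is exact to machine precision, mirroring the spectral accuracy of the Fourier-space solve in the paper.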

Mechanical computations in multiphase domains raise numerous difficulties from the generation of the initial mesh to its adaptation throughout the simulation. All alternatives to mesh adaptation, such as level-set methods, have the well-known drawback of inducing volume conservation issues. In this paper, a moving mesh method is coupled to a topological mesh adaptation technique in order to track moving and deforming interfaces in multiphase simulations, with a robust control of mesh quality. Level-set functions are used as intermediaries to enhance the mesh adaptation technique with a volume conservation constraint, which is compatible both with implicit and with body-fitted interfaces. Results show that this method has the same advantage of permitting important displacements, deformations, and topological changes (coalescence of interfaces, for example) as a standard level-set method, while volume diffusion is drastically reduced. Copyright © 2016 John Wiley & Sons, Ltd.

This study concerns the development of a numerical methodology for initializing immersed interface-based CFD solvers using complex computer-aided design (CAD) geometry. CFD solvers with higher-order discretization stencils require larger stencil widths, which become problematic in regions of space where insufficient mesh resolution is available. This problem becomes especially challenging when convoluted triangulated surface meshes generated from complex solid models are used to initialize the cut-cells. A pragmatic balance between desired local geometry resolution and numerical accuracy is often required to find a practical solution. Here, a robust iterative fill algorithm is presented that balances geometry resolution with numerical accuracy (*via* stencil size). Several examples are presented to illustrate the use of this initialization procedure that employs both the original CAD generated triangulated surface mesh, along with a level set representation of the surface to initialize cut-cells and boundary proximity measures for creation of CFD stencils. Convergence error analysis of surface area and enclosed volumes is first presented to show the effects of fill on the geometry as a function of desired stencil size and grid resolution. The algorithm is then applied to geometrically complex problems using large eddy simulation. Two problems are considered. The first is flow around the Eiffel Tower. The second is a combustion swirler in the context of a design problem. Copyright © 2016 John Wiley & Sons, Ltd.

A comprehensive study of the two-sub-step composite implicit time integration scheme for structural dynamics is presented in this paper. A framework is proposed for the convergence and accuracy analysis of the generalized composite scheme. The local truncation errors of the acceleration, velocity, and displacement are evaluated in a rigorous procedure. The presented and proved accuracy condition enables the displacement, velocity, and acceleration to achieve second-order accuracy simultaneously, which avoids the drawback that the acceleration accuracy may not reach second order. The different influences of numerical frequencies and time step on the accuracy of displacement, velocity, and acceleration are clarified. The numerical dissipation and dispersion and the initial magnitude errors are investigated physically; these measure the errors from the algorithmic amplification matrix's eigenvalues and eigenvectors, respectively. Loaded and physically undamped/damped cases are naturally accounted for. An optimal algorithm, the Bathe composite method (Bathe and Baig, 2005; Bathe, 2007; Bathe and Noh, 2012), is revealed, with unconditional stability, no overshooting in displacement, velocity, and acceleration, and excellent performance compared with many other algorithms. The proposed framework can also be used for the accuracy analysis and design of other multi-sub-step composite schemes and single-step methods under physical damping and/or loading. Copyright © 2016 John Wiley & Sons, Ltd.
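
As an illustration of a two-sub-step composite scheme, the sketch below implements the standard Bathe method (trapezoidal rule on the first half step, three-point backward difference on the second) for an undamped single-DOF oscillator. The setup and tolerances are ours, not the paper's; the update formulas follow from enforcing equilibrium at each sub-step.

```python
import math

def bathe_step(u, v, a, dt, omega2):
    """One step of the two-sub-step Bathe composite scheme for
    u'' = -omega2 * u, with equal sub-steps of dt/2."""
    h = dt / 4.0
    # sub-step 1: trapezoidal rule on [t, t + dt/2]
    u1 = (u + 2.0 * h * v + h * h * a) / (1.0 + h * h * omega2)
    a1 = -omega2 * u1
    v1 = v + h * (a + a1)
    # sub-step 2: three-point backward difference on [t, t+dt/2, t+dt],
    # with equilibrium a2 = -omega2 * u2 enforced at t + dt
    u2 = ((4.0 * v1 - v) / dt + (12.0 * u1 - 3.0 * u) / dt**2) \
         / (9.0 / dt**2 + omega2)
    v2 = (u - 4.0 * u1 + 3.0 * u2) / dt
    a2 = -omega2 * u2
    return u2, v2, a2

omega = 2.0 * math.pi           # natural frequency, period T = 1
dt, steps = 1.0 / 200.0, 200    # integrate one full period
u, v, a = 1.0, 0.0, -omega**2   # u(0) = 1, consistent acceleration
for _ in range(steps):
    u, v, a = bathe_step(u, v, a, dt, omega**2)
err = abs(u - 1.0)              # exact solution returns to u = 1
```

With 200 steps per period the displacement returns to its initial value to within the scheme's second-order accuracy.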

This contribution presents a numerical strategy to evaluate the effective properties of image-based microstructures in the case of random material properties. The method relies on three points: (1) a high-order fictitious domain method; (2) an accurate spectral stochastic model; and (3) an efficient model-reduction method based on the proper generalized decomposition in order to decrease the computational cost introduced by the stochastic model. A feedback procedure is proposed for an automatic estimation of the random effective properties with a given confidence. Numerical verifications highlight the convergence properties of the method for both deterministic and stochastic models. The method is finally applied to a real 3D bone microstructure where the empirical probability density function of the effective behaviour could be obtained. Copyright © 2016 John Wiley & Sons, Ltd.

In this contribution, a novel method for the fail-safe optimal design of structures is proposed: a coupled approach combining optimization by a genetic algorithm, structural analysis in the framework of fracture mechanics, and uncertainty analysis. The idea of fail-safe structures is to keep their functionality and integrity even under damage conditions, for example, a local failure of substructures. In the present work, a design concept of a substructure exhibiting a damage-accumulating function due to the application of crack arresters is introduced. If such a substructure is integrated within a system of coupled substructures, it will accumulate the damage arising from the change of boundary conditions induced by the failure of certain neighbouring structural elements and hinder further damage escalation. The investigation of failure of the damage-accumulating substructure is introduced within a finite element framework by a combination of discrete fracturing and configurational mechanics based criteria. In order to design a structure which will fail safely according to a predefined scenario, uncertainties are taken into account. The developed approach optimizes the configuration of crack arresters within the damage-accumulating substructure so that the uncertain crack propagation is hindered and only a local failure of this element occurs. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, a multibody meshfree framework is proposed for the simulation of granular materials undergoing deformations at the grain scale. This framework relies on an implicit solution of the mechanical problem, using a weak form written on the domain defined by all the grains composing the granular sample. Several technical choices, related to the displacement field interpolation, the contact modelling, and the integration scheme used to solve the dynamic equations, are explained in detail. A first implementation is proposed, under the acronym Multibody ELement-free Open code for DYnamic simulation (MELODY), and is made available for free download. Two numerical examples are provided to show the convergence and the capability of the method. Copyright © 2016 John Wiley & Sons, Ltd.

Parametric and implicit methods are traditionally thought to be two unrelated approaches in structural shape optimization. The parametric method works as a Lagrangian approach and often uses the parametric boundary representation (B-rep) of curves/surfaces, for example, Bezier and B-splines, in combination with the conformal mesh of a finite element model, while the implicit method relies upon level-set functions, that is, implicit functions for the B-rep, and works as an Eulerian approach in combination with a fixed mesh within the scope of the extended finite element method or the finite cell method. The original contribution of this work is the unification of both methods. First, a new shape optimization method is proposed by combining the features of the parametric and implicit B-reps. Shape changes of the structural boundary are governed by the parametric B-rep on the fixed mesh to maintain the merit in computer-aided design modeling and avoid laborious remeshing. Second, analytical shape design sensitivity is formulated for the parametric B-rep in the framework of the fixed mesh of the finite cell method by means of the Hamilton–Jacobi equation. Numerical examples are solved to illustrate the unified methodology. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies the behaviour of the error incurred when numerically integrating the elasto-plastic mechanical relationships of a constitutive model for soils using an explicit substepping formulation with automatic error control. The correct update of all the variables involved in the numerical integration of the incremental stress–strain relationships is central to the computational performance of the integration scheme, and, although often missed in the literature, all variables (including specific volume) should be explicitly considered in the algorithmic formulation. This is demonstrated in the paper by studying, in the context of the Cam clay formulations for saturated soils, the influence that the updating of the specific volume has on the accuracy of the numerical solution. The fact that the variation of the local error with the size of the integrated strain depends on the order of local accuracy of the numerical method is also used in the paper to propose a simple and powerful strategy to verify the correctness of the implemented mathematical formulation. Copyright © 2016 John Wiley & Sons, Ltd.
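
A generic version of explicit substepping with automatic error control, in the spirit of such modified-Euler schemes but applied here to a scalar ODE rather than the Cam clay relations, can be sketched as follows. The tolerance and the step-control constants (0.9, 0.1, 1.1) are illustrative choices of ours.

```python
import math

def substep_integrate(f, y0, de, tol=1e-8):
    """Integrate dy/de = f(y) over a strain increment de with modified-
    Euler substepping: the gap between the first- and second-order
    estimates drives automatic substep-size control."""
    y, T, dT = y0, 0.0, 1.0          # T is the pseudo-time in (0, 1]
    while T < 1.0:
        dT = min(dT, 1.0 - T)
        d1 = f(y) * dT * de                      # Euler estimate
        d2 = f(y + d1) * dT * de                 # corrected slope
        y_new = y + 0.5 * (d1 + d2)              # modified Euler
        rel_err = abs(d2 - d1) / (2.0 * max(abs(y_new), 1e-12))
        if rel_err <= tol:                       # accept the substep
            y, T = y_new, T + dT
        # grow or shrink the next substep, with conventional safeguards
        q = max(0.1, min(1.1, 0.9 * math.sqrt(tol / max(rel_err, 1e-16))))
        dT *= q
    return y

# dy/de = y over a unit increment should give y0 * e
y = substep_integrate(lambda y: y, 1.0, 1.0)
err = abs(y - math.e)
```

Because the local error estimate scales with the square of the substep size, halving the tolerance roughly scales the accepted substeps by 1/sqrt(2), the behaviour the paper exploits to verify the order of accuracy of an implementation.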

An efficient method is proposed to estimate the first-order global sensitivity indices based on failure probability and variance by using maximum entropy theory and the Nataf transformation. The computational cost of the proposed method is quite small, and the method efficiently overcomes the 'curse of dimensionality' thanks to a dimension-reduction technique. Ideas for the estimation of higher-order sensitivity indices are discussed. Copyright © 2016 John Wiley & Sons, Ltd.

This paper describes a novel method for mapping between basis representations of a field variable over a domain in the context of numerical modelling and inverse problems. In the numerical solution of inverse problems, a continuous scalar or vector field over a domain may be represented in different finite-dimensional basis approximations, such as an unstructured mesh basis for the numerical solution of the forward problem and a regular grid basis for the representation of the solution of the inverse problem. Mapping between the basis representations is generally lossy, and the objective of the mapping procedure is to minimise the errors incurred. We present in this paper a mapping mechanism that is based on a minimisation of the *L*^{2} or *H*^{1} norm of the difference between the two basis representations. We provide examples of mapping in 2D and 3D problems, between an unstructured mesh basis representative of an FEM approximation and different types of structured basis, including piecewise-constant and linear pixel bases, and a blob basis as a representation of the inverse basis. A comparison with results from a simple sampling-based mapping algorithm shows the superior performance of the method proposed here. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd.
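
For the simplest target basis, piecewise-constant pixels, minimising the *L*^{2} norm of the difference reduces to taking exact cell averages of the source field. The 1D sketch below (our own toy setting, not the paper's 2D/3D implementation) maps a piecewise-linear FE function onto a regular pixel grid this way.

```python
import numpy as np

def l2_project_to_pixels(nodes, values, n_pix):
    """L2-project a 1D piecewise-linear FE function (nodes, nodal values)
    onto n_pix equal pixels on the same interval: for a piecewise-constant
    basis the L2-optimal coefficient is the exact cell average."""
    a, b = nodes[0], nodes[-1]
    edges = np.linspace(a, b, n_pix + 1)
    coeff = np.empty(n_pix)
    for i in range(n_pix):
        # break points: pixel edges plus any source nodes inside the pixel
        inner = nodes[(nodes > edges[i]) & (nodes < edges[i + 1])]
        pts = np.concatenate(([edges[i]], inner, [edges[i + 1]]))
        vals = np.interp(pts, nodes, values)
        # the trapezoidal rule is exact for a piecewise-linear integrand
        integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(pts))
        coeff[i] = integral / (edges[i + 1] - edges[i])
    return coeff

# f(x) = x on an irregular mesh; the pixel averages are the midpoints
nodes = np.array([0.0, 0.3, 0.45, 0.8, 1.0])
pix = l2_project_to_pixels(nodes, nodes.copy(), 4)
```

A naive sampling-based mapping (evaluating the source at pixel centres) would coincide with this result only for linear fields; for general fields the averaged projection is the one that minimises the *L*^{2} error, which is the comparison the paper draws.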

In finite element analysis of volume coupled multiphysics, different meshes for the involved physical fields are often highly desirable in terms of solution accuracy and computational costs. We present a general methodology for volumetric coupling of different meshes within a monolithic solution scheme. A straightforward collocation approach is compared to a mortar-based method for nodal information transfer. For the latter, dual shape functions based on the biorthogonality concept are used to build the projection matrices, thus further reducing the evaluation costs. We give a detailed explanation of the integration scheme and the construction of dual shape functions for general first-order and second-order Lagrangian finite elements within the mortar method, as well as an analysis of the conservation properties of the projection operators. Moreover, possible incompatibilities due to different geometric approximations of curved boundaries are discussed. Numerical examples demonstrate the flexibility of the presented mortar approach for arbitrary finite element combinations in two and three dimensions and its applicability to different multiphysics coupling scenarios. Copyright © 2016 John Wiley & Sons, Ltd.

This paper presents an eight-node nonlinear solid-shell element for static problems. The main goal of this work is to develop a solid-shell formulation with improved membrane response compared with the previous solid-shell element (MOS2013). The assumed natural strain concept is implemented to account for the transverse shear and thickness strains to circumvent the curvature thickness and transverse shear locking problems. The enhanced assumed strain approach, based on the Hu–Washizu variational principle with six enhanced assumed strain degrees of freedom, is applied. Five extra degrees of freedom are applied to the in-plane strains to improve the membrane response and one to the thickness strain to alleviate the volumetric and Poisson's thickness locking problems. The ensuing element performs well in both in-plane and out-of-plane responses, besides being simple to implement. The element formulation yields exact solutions for both the membrane and bending patch tests. The formulation is extended to the geometrically nonlinear regime using the corotational approach. Numerical results from benchmarks show the robustness of the formulation in geometrically linear and nonlinear problems. Copyright © 2016 John Wiley & Sons, Ltd.

Reliability analysis with both aleatory and epistemic uncertainties is investigated in this paper. The aleatory uncertainties are described with random variables, and the epistemic uncertainties are tackled with evidence theory. Several methods have been proposed to estimate the bounds of the failure probability; however, the existing methods suffer from the dimensionality challenge of epistemic variables. To overcome this challenge, a random-set based Monte Carlo simulation (RS-MCS) method derived from the theory of random sets is offered. Nevertheless, RS-MCS is also computationally expensive, so an active learning Kriging (ALK) model that correctly predicts only the sign of the performance function is introduced and closely integrated with RS-MCS. The proposed method, termed ALK-RS-MCS, accurately predicts the bounds of the failure probability using as few function calls as possible. Moreover, in ALK-RS-MCS, an optimization method based on the Karush–Kuhn–Tucker conditions is proposed to make the estimation of the failure probability interval more efficient based on the Kriging model. The efficiency and accuracy of the proposed approach are demonstrated with four examples. Copyright © 2016 John Wiley & Sons, Ltd.

This paper deals with the numerical analysis of instabilities for elastic-plastic materials undergoing large deformations in non-isothermal conditions. The considered isotropic model is fully thermomechanically coupled and includes temperature-induced softening, which is another source of strain localization next to geometrical effects. Due to the complexity of the model, the symbolic-numerical tool *Ace* is used for the preparation of user-supplied subroutines for the finite element method. The computational verification is performed using two benchmark tests: necking of a circular bar in tension and shear banding of an elongated rectangular plate in plane strain conditions. Attention is focused on the mesh dependence of the numerical results and the regularizing effect of heat conduction. The research reveals that the conductivity influences the shear band width and the ductility of the material response; however, for the adiabatic case, the results are discretization sensitive, and another regularization is needed. A new gradient-enhanced thermomechanical model is therefore developed that introduces an internal length parameter governing the size of the shear band caused by thermal softening. The numerical verification of the non-local model is performed for the adiabatic case. Subsequently, the simultaneous application of the gradient enhancement and heat conduction in the model is analyzed, which reproduces an evolving shear band. Copyright © 2016 John Wiley & Sons, Ltd.

Computational aspects of a recently developed gradient elasticity model are discussed in this paper. This model includes the (Aifantis) strain gradient term along with two higher-order acceleration terms (micro-inertia contributions). It has been demonstrated that the presence of these three gradient terms enables one to capture the dispersive wave propagation with great accuracy. In this paper, the discretisation details of this model are thoroughly investigated, including both discretisation in time and in space. Firstly, the critical time step is derived that is relevant for conditionally stable time integrators. Secondly, recommendations on how to choose the numerical parameters, primarily the element size and time step, are given by comparing the dispersion behaviour of the original higher-order continuum with that of the discretised medium. In so doing, the accuracy of the discretised model can be assessed a priori depending on the selected discretisation parameters for given length-scales. A set of guidelines can therefore be established to select optimal discretisation parameters that balance computational efficiency and numerical accuracy. These guidelines are then verified numerically by examining the wave propagation in a one-dimensional bar as well as in a two-dimensional example. Copyright © 2016 John Wiley & Sons, Ltd.

In this article, a black-box higher-order fast multipole method for solving boundary integral equations on parametric surfaces in three spatial dimensions is proposed. Such piecewise smooth surfaces are the topic of recent studies in isogeometric analysis. Due to the exact surface representation, the rate of convergence of higher-order methods is not limited by approximation errors of the surface. An element-wise clustering strategy yields a balanced cluster tree and an efficient numerical integration scheme for the underlying Galerkin method. By performing the interpolation for the fast multipole method directly on the reference domain, the cost complexity in the polynomial degree is reduced by one order. This gain is independent of the application of either ℋ- or ℋ²-matrices. In fact, several simplifications in the construction of ℋ²-matrices are pointed out, which are a by-product of the surface representation. Extensive numerical examples are provided in order to quantify and qualify the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

A differential quadrature hierarchical finite element method (DQHFEM) is proposed by expressing the hierarchical finite element method matrices in a similar form as in the differential quadrature finite element method and by introducing an interpolation basis on the boundary of hierarchical finite element method elements. The DQHFEM is similar to the fixed-interface mode synthesis method, but the DQHFEM does not need modal analysis. The DQHFEM with non-uniform rational B-spline elements was shown to serve a similar purpose as isogeometric analysis. Three key points that determine the accuracy, efficiency and convergence of the DQHFEM are addressed, namely: (1) the Gauss–Lobatto–Legendre points should be used as nodes; (2) the recursion formula should be used to compute high-order orthogonal polynomials; and (3) the separation-of-variables feature of the basis should be used to save computational cost. Numerical comparison and convergence studies of the DQHFEM were carried out by comparing the DQHFEM results for vibration and bending of Mindlin plates with available exact or highly accurate approximate results in the literature. The DQHFEM can produce highly accurate results using only a few sampling points. Meanwhile, the order of the DQHFEM can be as high as needed for high-frequency vibration analysis. Copyright © 2016 John Wiley & Sons, Ltd.
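
The first two of the three key points, Gauss–Lobatto–Legendre nodes and the recursion formula for high-order orthogonal polynomials, can be sketched in a few lines. The Newton iteration below is a standard construction (the interior GLL nodes are the roots of P_n'), not code from the paper.

```python
import math

def legendre_and_derivative(n, x):
    """Evaluate P_n(x) and P_n'(x) by the stable three-term recursions."""
    p0, p1 = 1.0, x          # P_0, P_1
    d0, d1 = 0.0, 1.0        # P_0', P_1'
    for k in range(2, n + 1):
        p2 = ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
        d2 = d0 + (2 * k - 1) * p1      # P_k' = P_{k-2}' + (2k-1) P_{k-1}
        p0, p1 = p1, p2
        d0, d1 = d1, d2
    return p1, d1

def gll_nodes(n):
    """Gauss-Lobatto-Legendre nodes: +/-1 and the roots of P_n', found
    by Newton's method (P_n'' obtained from the Legendre ODE)."""
    nodes = [-1.0]
    for i in range(1, n):
        x = -math.cos(math.pi * i / n)   # Chebyshev-Lobatto initial guess
        for _ in range(50):
            p, dp = legendre_and_derivative(n, x)
            d2p = (2.0 * x * dp - n * (n + 1) * p) / (1.0 - x * x)
            step = dp / d2p
            x -= step
            if abs(step) < 1e-14:
                break
        nodes.append(x)
    nodes.append(1.0)
    return nodes

pts = gll_nodes(4)   # known values: -1, -sqrt(3/7), 0, sqrt(3/7), 1
```

The recursion avoids the catastrophic cancellation that plagues closed-form high-order polynomial evaluation, which is why the paper singles it out as a prerequisite for accuracy at high element orders.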

This manuscript presents the development of novel high-order complete shape functions over star-convex polygons based on the scaled boundary finite element method. The boundary of a polygon is discretised using one-dimensional high-order shape functions. Within the domain, the shape functions are analytically formulated from the equilibrium conditions of a polygon. These standard scaled boundary shape functions are augmented by introducing additional bubble functions, which renders them high-order complete up to the order of the line elements on the polygon boundary. The bubble functions are also semi-analytical and preserve the displacement compatibility between adjacent polygons. They are derived from the scaled boundary formulation by incorporating body force modes. Higher-order interpolations can be conveniently formulated by simultaneously increasing the order of the shape functions on the polygon boundary and the order of the body force mode. The resulting stiffness and mass matrices are integrated numerically along the boundary using standard integration rules and analytically along the radial coordinate within the domain. The bubble functions improve the convergence rate of the scaled boundary finite element method in modal analyses and for problems with non-zero body forces. Numerical examples demonstrate the accuracy and convergence of the developed approach. Copyright © 2016 John Wiley & Sons, Ltd.

In this contribution, we propose a dynamic gradient damage model as a phase-field approach for studying brutal fracture phenomena in quasi-brittle materials under impact-type loading conditions. Several existing approaches to account for the tension–compression asymmetry of fracture behavior of materials are reviewed. A better understanding of these models is provided through a uniaxial traction experiment. We then give an efficient numerical implementation of the model in an explicit dynamics context. Simulations results obtained with parallel computing are discussed both from a computational and physical point of view. Different damage constitutive laws and tension–compression asymmetry formulations are compared with respect to their aptitude to approximate brittle fracture. Copyright © 2016 John Wiley & Sons, Ltd.

Many challenging engineering and scientific problems involve the response of nonlinear solid materials to high-rate dynamic loading. Accompanying hydrodynamic effects are crucial, where shock-driven pressures may dominate material response. In this work, a hydrodynamic meshfree formulation is developed under the Lagrangian reproducing kernel particle method framework. The volumetric stress divergence is enhanced using a Rankine–Hugoniot-enriched Riemann solution that introduces the essential physics; oscillation control is introduced through appropriate state and field variable approximations that define the Riemann problem initial conditions. Consequently, non-physical numerical parameters and length scales required in the traditional artificial viscosity technique for shock modeling are avoided here. Several numerical examples are provided to verify the formulation accuracy across a range of shock loading conditions. Copyright © 2016 John Wiley & Sons, Ltd.

The production of new composite laminates with variable stiffness within the surface of plies was enabled by tow-placement machines. Because of the variation of stiffness, these materials are called variable stiffness composite laminates (VSCL). Recently, many attempts have been made to investigate their structural behaviour. In this contribution, a first-order shear deformation theory is selected to model the multilayered composite laminates. The adopted theory is enhanced by the extended finite element method (XFEM) to describe discontinuities at the element level at any interface of interest. To predict the location of the delamination onset, a traction–separation law is developed that is consistent with the XFEM topology. An exponential softening behaviour is implemented within the interface to model the delamination growth in a mixed-mode direction. In order to solve the non-linear equations of the delamination propagation, an arc-length method is applied. The effect of the curvilinear fibre orientation on the location of the delamination onset is investigated. Subsequently, the structural response of the laminates is computed. Owing to the simplicity of the new XFEM-based approach and the low computational cost of calculating the stiffness of VSCL, the method is able to determine the structural response of VSCL with less computational effort. Copyright © 2016 John Wiley & Sons, Ltd.

Modal derivative is an approach to compute a reduced basis for model order reduction of large-scale nonlinear systems that typically stem from the discretization of partial differential equations. In this way, a complex nonlinear simulation model can be integrated into an optimization problem or the design of a controller, based on the resulting small-scale state-space model. We investigate the approximation properties of modal derivatives analytically and thus lay a theoretical foundation of their use in model order reduction, which has been missing so far. Concentrating on the application field of structural mechanics and structural dynamics, we show that the concept of modal derivatives can also be applied as nonlinear extension of the Craig–Bampton family of methods for substructuring. We furthermore generalize the approach from a pure projection scheme to a novel reduced-order modeling method that replaces all nonlinear terms by quadratic expressions in the reduced state variables. This complexity reduction leads to a frequency-preserving nonlinear quadratic state-space model. Numerical examples with carefully chosen nonlinear model problems and three-dimensional nonlinear elasticity confirm the analytical properties of the modal derivative reduction and show the potential of the proposed novel complexity reduction methods, along with the current limitations. Copyright © 2016 John Wiley & Sons, Ltd.

Computational modelling of fracture has been attempted in the past with a range of numerical approaches including finite element, extended finite element and meshless methods. The cracking particle method (CPM) of Rabczuk is a pragmatic alternative to explicit modelling of crack surfaces in which a crack is represented by a set of cracking particles that can be easily updated when the crack propagates. The change of cracking angle is recorded in discrete segments of broken lines, which makes this methodology suitable to model discontinuous cracks. In this paper, a new CPM is presented that improves on two counts: firstly, crack path curvature modelling is improved by the use of bilinear segments centred at each particle and secondly, efficiency for larger problems is improved via an adaptive process of both refinement and recovery. The system stiffness is calculated and stored in local matrices, so only a small influenced domain should be recalculated for each step while the remainder can be read directly from storage, which greatly reduces the computational expense. The methodology is applied to several 2D crack problems, and good agreement to analytical solutions and previous work is obtained. Copyright © 2016 John Wiley & Sons, Ltd.

This paper describes a selective mass scaling method designed for the analysis of wave propagation problems in nearly incompressible materials. The incompressibility of such materials leads to a high compressional wave speed, which makes the time step extremely small in explicit time integration methods. The proposed selective mass scaling method selects the eigenfrequencies related to volumetric deformation modes and decreases them, while keeping the shear eigenmodes unchanged. The time step is then no longer limited by the compressional wave speed but by the shear wave speed. A significant reduction of CPU time is obtained with good accuracy for transient problems in small strains on free or largely prestressed media. Copyright © 2016 John Wiley & Sons, Ltd.

This paper presents a 3D formulation for quasi-kinematic limit analysis, which is based on a radial point interpolation meshless method and numerical optimization. The velocity field is interpolated using radial point interpolation shape functions, and the resulting optimization problem is cast as a standard second-order cone programming problem. Because the essential boundary conditions can only be guaranteed at the positions of the nodes when using radial point interpolation, the results obtained with the proposed approach are not rigorous upper-bound solutions. This paper aims to improve the computational efficiency of 3D upper-bound limit analysis; large problems, with tens of thousands of nodes, can be solved efficiently. Five numerical examples are given to confirm the effectiveness of the proposed approach with the von Mises yield criterion: an internally pressurized cylinder; a cantilever beam; a double-notched tensile specimen; and strip, square and rectangular footings. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies the numerical errors accrued in particle simulations; analysis of these errors was also used to verify the correctness of key results. The truncation errors, which are larger than the round-off errors by orders of magnitude, have a superlinear relationship with both the simulation time step and the interparticle collision speed. This remains the case regardless of the simulation details, including the chosen contact model, particle size distribution, particle density or stiffness. Hence, the total errors can usually be reduced by choosing a smaller time step. Increasing the polydispersity in a simulation by including smaller particles necessitates choosing a smaller time step to maintain simulation stability and reduces the truncation errors in most cases. The truncation errors are increased by the dissipation of energy by frictional sliding or by the inclusion of damping in the system. The number of contacts affects the accuracy, and one can deduce that, because 2D simulations contain fewer interparticle contacts than the equivalent 3D simulations, they accrue lower simulation errors. Copyright © 2016 John Wiley & Sons, Ltd.
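
The basic claim, that truncation error shrinks as the time step does, can be demonstrated with a toy single-particle contact. The sketch below (our own setup, with arbitrary stiffness and speed, integrated with velocity-Verlet against a linear-spring wall) measures the maximum relative energy error during a bounce for two time steps.

```python
def bounce_energy_drift(dt, k=1.0e4, m=1.0, v0=1.0, gap=0.05):
    """Particle approaching a linear-spring wall (overlap when x > 0),
    integrated with velocity-Verlet; returns the maximum relative
    deviation of the total energy from its initial value."""
    def force(x):
        return -k * x if x > 0.0 else 0.0

    x, v = -gap, v0
    e0 = 0.5 * m * v0 ** 2
    a = force(x) / m
    drift = 0.0
    for _ in range(int(3.0 * gap / (v0 * dt))):
        x += v * dt + 0.5 * a * dt ** 2
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
        e = 0.5 * m * v ** 2 + (0.5 * k * x ** 2 if x > 0.0 else 0.0)
        drift = max(drift, abs(e - e0) / e0)
        if x < -gap:          # the particle has rebounded past its start
            break
    return drift

coarse = bounce_energy_drift(1.0e-3)   # ~31 steps resolve the contact
fine = bounce_energy_drift(2.0e-4)     # ~157 steps resolve the contact
```

The energy error scales roughly with the square of the ratio of time step to contact duration, so refining the step by a factor of five reduces the drift by well over an order of magnitude, consistent with the superlinear dependence described above.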

A computational framework for scale-bridging in multi-scale simulations is presented. The framework enables the seamless combination of at-scale models into highly dynamic hierarchies to build a multi-scale model. Its centerpiece is formulated as a standalone module capable of fully asynchronous operation. We assess its feasibility and performance for a two-scale model applied to two challenging test problems from impact physics. We find that the computational cost associated with using the framework may, as expected, become substantial. However, the framework can effortlessly combine at-scale models to render complex multi-scale models. The main source of the framework's computational inefficiency is poor load balancing of the lower-scale model evaluation. We demonstrate that the load balancing can be efficiently addressed by recourse to conventional load-balancing strategies. Copyright © 2016 John Wiley & Sons, Ltd.

An optimization framework is developed for surface texture design in hydrodynamic lubrication. The microscopic model of the lubrication interface is based on the Reynolds equation, and the macroscopic response is characterized through homogenization. The microscale setting assumes a unilateral periodic texture but implicitly accounts for the bilateral motion of the surfaces. The surface texture in a unit cell is described indirectly through the film thickness, which is allowed to vary between prescribed minimum and maximum values according to a morphology variable distribution that is obtained through the filtering of a design variable. The design and morphology variables are discretized using either element-wise constant values or through first-order elements. In addition to sharp textures, which are characterized by pillars and holes that induce sudden transitions between extreme film thickness values, the framework can also attain a variety of non-standard smoothly varying surface textures with a macroscopically isotropic or anisotropic response. Copyright © 2016 John Wiley & Sons, Ltd.
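The indirect film-thickness description can be sketched as follows: a design variable field is filtered into a morphology variable, which then interpolates the film thickness between its bounds. The 1D snippet below is a hypothetical illustration; the hat-function filter, the linear interpolation, and all numerical values are assumptions, not the paper's formulation.

```python
import numpy as np

def film_thickness(design, radius, h_min, h_max):
    """Filter an element-wise design field into a morphology variable,
    then map it linearly to a film thickness between h_min and h_max."""
    design = np.asarray(design, dtype=float)
    n = len(design)
    rho = np.empty(n)                      # filtered morphology variable
    for i in range(n):
        j = np.arange(max(0, i - radius), min(n, i + radius + 1))
        w = radius + 1 - np.abs(j - i)     # linear (hat) filter weights
        rho[i] = np.dot(w, design[j]) / w.sum()
    return h_min + rho * (h_max - h_min)

# A sharp 0/1 design is smoothed into a gradual texture:
h = film_thickness([0, 0, 0, 1, 1, 1, 0, 0, 0], radius=2, h_min=0.1, h_max=1.0)
```

The filter radius controls how sharp a pillar or hole transition can be: a zero radius recovers the sharp textures mentioned in the abstract, while larger radii produce the smoothly varying ones.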

The macroscopic behavior of materials is affected by their inner micro-structure. Elementary considerations based on the arrangement and the physical and mechanical features of the micro-structure may lead to the formulation of elastoplastic constitutive laws involving hardening/softening mechanisms and non-associative properties. In order to model the non-linear behavior of micro-structured materials, the classical theory of time-independent multisurface plasticity is herein extended to Cosserat continua. Plastic relative strains and curvatures are accounted for by means of a robust, quadratically convergent projection algorithm, specifically formulated for non-associative and hardening/softening plasticity. Some important limitations of the classical implementation of the algorithm for multisurface plasticity prevent its application to arbitrary plastic surfaces and loading conditions. These limitations are addressed in this paper, and a robust solution strategy based on the singular value decomposition technique is proposed. The projection algorithm is then implemented into a finite element formulation for Cosserat continua. A specific finite element, developed for micropolar plates, is considered. The element is validated through illustrative examples and applications, showing good performance. Copyright © 2016 John Wiley & Sons, Ltd.
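The role of the singular value decomposition can be shown on a toy subproblem. When several active yield surfaces become linearly dependent, the local system of the projection step is rank-deficient; a truncated-SVD pseudoinverse solve (a generic sketch, not the authors' algorithm) still returns the minimum-norm solution where a plain solve would break down.

```python
import numpy as np

def svd_solve(A, b, tol=1e-10):
    """Minimum-norm least-squares solution of A x = b, robust to the
    rank deficiency caused by linearly dependent active yield surfaces."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)  # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ b))

# Two identical (linearly dependent) constraint rows: np.linalg.solve would fail.
A = np.array([[1.0, 0.0], [1.0, 0.0]])
x = svd_solve(A, np.array([1.0, 1.0]))
```

In multisurface plasticity this situation typically arises at corners of the yield surface, where redundant active constraints would otherwise stall the return mapping.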

The main feature of the geometrically exact theory of contact is that all objects necessary for computation (the weak form and residual, the linearized weak form, and the tangent matrices) are given in a covariant closed form in the local coordinate system corresponding to the geometry of the contact pairs. This makes it easy to construct computational algorithms for normal and tangential follower forces as an inverse contact algorithm. In this case, because the follower forces are given and do not change in the local coordinate system, all contact objects have to be modified to account for the definition of follower forces instead of constitutive relationships for the contact interfaces. A further feature is that the tangent matrices for both the normal and the tangential parts, split into rotational and curvature parts, are symmetric for any order of approximation. The following numerical examples are selected in the current article to illustrate the effectiveness of the implementation: (1) modeling of pure bending with a moment applied either as a pair of single forces or as distributed follower forces (pressure) in both 2D and 3D cases; (2) modeling of the inflation of a plate by distributed normal follower forces (pressure); and (3) modeling of the twisting of a beam with a rectangular cross-section by the tangential follower forces algorithm. Copyright © 2016 John Wiley & Sons, Ltd.

This article deals with the computation of guaranteed lower bounds of the error in the framework of finite element and domain decomposition methods. In addition to a fully parallel computation, the proposed lower bounds separate the algebraic error (due to the use of a domain decomposition iterative solver) from the discretization error (due to the finite element), which enables the steering of the iterative solver by the discretization error. These lower bounds are also used to improve the goal-oriented error estimation in a substructured context. Assessments on 2D static linear mechanics problems illustrate the relevance of the separation of the sources of error and the lower bounds' independence from the substructuring. We also steer the iterative solver by an objective of precision on a quantity of interest. This strategy consists of a sequence of solves and takes advantage of adaptive remeshing and recycling of search directions. Copyright © 2016 John Wiley & Sons, Ltd.

A new boundary element formulation is developed to analyze two-dimensional size-dependent piezoelectric response in isotropic dielectric materials. The model is based on the recently developed consistent couple stress theory, in which the couple-stress tensor is skew-symmetric. For isotropic materials, there is no classical piezoelectricity, and the size-dependent piezoelectricity or flexoelectricity effect is solely the result of coupling of polarization to the skew-symmetric mean curvature tensor. As a result, the size-dependent effect is specified by one characteristic length scale parameter *l*, and the electromechanical effect is specified by one flexoelectric coefficient *f*. Interestingly, in this size-dependent multi-physics model, the governing equations are decoupled. However, the problem is coupled, because of the existence of a flexoelectric effect in the boundary couple-traction and normal electric displacement. We discuss the boundary integral formulation and numerical implementation of this size-dependent piezoelectric boundary element method, which provides a boundary-only formulation involving displacements, rotations, force-tractions, couple-tractions, electric potential, and normal electric displacement as primary variables. Afterwards, we apply the resulting BEM formulation to several computational problems to confirm the validity of the numerical implementation and to explore the physics of the flexoelectric coupling. Copyright © 2016 John Wiley & Sons, Ltd.

When using particle methods to simulate water–air flows with compressible air pockets, a major challenge is to deal with the large differences in physical properties (e.g., density and viscosity) between water and air. In addition, the accurate modeling of air compressibility is essential. To this end, a new two-phase strategy is proposed to simulate incompressible and compressible fluids simultaneously without iterations between the solvers for incompressible and compressible flows. Water is modeled by the recently developed 2-phase Consistent Particle Method for incompressible flows. For air modeling, a new compressible solver is proposed based on the ideal gas law and thermodynamics. The formulation avoids the problem of determining the actual sound speed, which depends on the temperature and is therefore not necessarily constant. In addition, the compressible air solver is seamlessly integrated with the incompressible 2-phase Consistent Particle Method solver because both use the same predictor–corrector scheme to solve the governing equations. The performance of the proposed method is demonstrated by three benchmark problems as well as an experimental study of sloshing impact with entrapped air pockets in an oscillating tank. Copyright © 2016 John Wiley & Sons, Ltd.
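The thermodynamic route to air compressibility can be sketched with the ideal-gas relations alone, with no explicit sound speed. The snippet below is a hypothetical minimal illustration: the adiabatic assumption, γ = 1.4, and the reference state are my choices, not necessarily the paper's.

```python
def air_pocket_state(V, V0, p0=101325.0, T0=293.15, gamma=1.4):
    """Pressure and temperature of a trapped ideal-gas pocket after an
    adiabatic volume change (p V^gamma = const, T V^(gamma-1) = const)."""
    p = p0 * (V0 / V) ** gamma
    T = T0 * (V0 / V) ** (gamma - 1.0)
    return p, T

# Halving the pocket volume raises both pressure and temperature:
p, T = air_pocket_state(V=0.5, V0=1.0)
```

Updating the pressure directly from the volume of the entrapped pocket sidesteps the temperature-dependent sound speed that an acoustic formulation would require.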

No abstract is available for this article.

No abstract is available for this article.

In this paper, we define the spurious kinematic modes of hybrid equilibrium 2D and 3D simplicial elements of general degree and present the results of studies on the stability of star patches of such elements. The approach used in these studies is based on first establishing the kinematic properties of a pair of elements that share an interface. This approach is introduced by its application to star patches of hybrid equilibrium triangular plate elements for modelling membrane and plate bending problems and then generalised to 3D continua. It is then shown how the existence of spurious kinematic modes depends on the topological and geometrical properties of a patch, as well as on the degree of the polynomial approximation functions of stress and displacement. Copyright © 2015 John Wiley & Sons, Ltd.

The present work investigates the shape optimization of bimaterial structures. The problem is formulated using a level set description of the geometry and the extended finite element method (XFEM) to enable an easy treatment of complex geometries. A key issue comes from the sensitivity analysis of the structural responses with respect to the design parameters ruling the boundaries. Even though the approach does not imply any mesh modification, the study shows that shape modifications lead to difficulties when the perturbation of the level sets modifies the set of extended finite elements. To circumvent the problem, an analytical sensitivity analysis of the structural system is developed. Differences between the sensitivity analyses using FEM and XFEM are highlighted. To conduct the sensitivity analysis, an efficient approach to evaluate the so-called velocity field is developed within the XFEM domain. The proposed approach determines a continuous velocity field in a boundary layer around the zero level set using a local finite element approximation. The analytical sensitivity analysis is validated against finite differences and a semi-analytical approach. Finally, our shape optimization tool for bimaterial structures is illustrated by revisiting the classical problem of the shape of soft and stiff inclusions in plates. Copyright © 2015 John Wiley & Sons, Ltd.
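Validating an analytical sensitivity against finite differences, as done in this work, amounts to comparing the derivative with a small central-difference stencil. A generic sketch follows; the function, step size, and design variable are placeholders, not the paper's problem.

```python
def fd_sensitivity(f, x, h=1e-6):
    """Central finite-difference approximation of df/dx, the standard
    reference against which an analytical sensitivity is checked."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Example check on a known derivative: d(x^3)/dx at x = 2 is 12.
d = fd_sensitivity(lambda x: x**3, 2.0)
```

In a level set/XFEM setting, the analytical sensitivity additionally requires the velocity field described in the abstract; the finite-difference reference simply perturbs a design parameter and re-solves, which makes it trustworthy but expensive.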

In this paper, based on the general stress–strain relationship, displacement and stress boundary-domain integral equations are established for a single medium with varying material properties. From the established integral equations, single interface integral equations are derived for solving general multi-medium mechanics problems by making use of the variation feature of the material properties. The displacement and stress interface integral equations derived in this paper can be applied to solve non-homogeneous, anisotropic, and non-linear multi-medium problems in a unified way. By imposing some assumptions on the derived integral equations, detailed expressions for some specific mechanics problems are deduced, and a few numerical examples are given to demonstrate the correctness and robustness of the derived displacement and stress interface integral equations. Copyright © 2015 John Wiley & Sons, Ltd.