High‐order discontinuous Galerkin method for time‐domain electromagnetics on geometry‐independent Cartesian meshes

In this work we present the Cartesian grid discontinuous Galerkin (cgDG) finite element method, a novel numerical technique that combines the high accuracy and efficiency of a high-order discontinuous Galerkin discretization with the simplicity and hierarchical structure of a geometry-independent Cartesian mesh. The elements that intersect the boundary of the physical domain require special treatment to minimize their effect on the performance of the algorithm. We consider an exact representation of the geometry of the domain boundary, avoiding nonphysical artifacts. We also define a stabilization procedure that eliminates the step-size restriction of the time-marching scheme caused by extreme cut patterns: the unstable degrees of freedom are eliminated and the supporting regions of their shape functions are reassigned to neighboring elements. A subdomain matching algorithm and an a posteriori enrichment strategy are presented. Combining these techniques, we obtain a final spatial discretization that preserves the stability and accuracy of the standard body-fitted discretization. The method is validated through a series of numerical tests and successfully applied to problems of increasing complexity in the context of electromagnetic scattering.


INTRODUCTION
introduces excessive numerical dispersion and/or dissipation, which limits its scope of application to simulations where the electromagnetic wave propagates along just a few wavelengths. High-order methods are a powerful alternative to Yee's algorithm. The polynomial approximation of the geometry allows the boundary to be captured with greater precision, which reduces the effect of nonphysical diffraction. Likewise, increasing the degree of the approximation reduces the dispersion and dissipation errors inherent to the numerical method.
Among all the available high-order methods, FE techniques based on the discontinuous Galerkin (DG) formulation have gained increasing interest since their appearance in the 1970s. 5,6 These strategies exhibit some desirable features that alternative approaches lack. Unlike high-order FV or FD methods, the order of accuracy of the approximation depends on trial functions that are locally defined on each element. In comparison with standard FE methods, the order of the approximation can be easily adapted at the element level regardless of the discretization used in its surroundings. When combined with explicit time-marching schemes, the mass matrix arising from the weak formulation of the problem is block diagonal, which means that the global system can be split into a set of element-by-element problems that can be computed in parallel. 7,8 Taking advantage of the multicore architectures of modern CPUs, this feature significantly speeds up the computations. Finally, the use of numerical fluxes provides an easy way to stabilize the numerical scheme for convection-dominated problems.
The combination of DG and explicit Runge-Kutta methods leads to algorithms with a high convergence rate in both space and time. 6 However, explicit methods obey the Courant-Friedrichs-Lewy (CFL) 9 condition, which limits the maximum size of the time step that can be used. Furthermore, the overall CFL condition of a numerical approach depends on local properties of the discretization. In meshes with very different element sizes, due to local h-refinement, the smallest element defines the maximum size of the time step. In the literature, a variety of methods that seek to minimize the effect of this dependency have been proposed. Strong-stability-preserving Runge-Kutta methods 10,11 or multistep Runge-Kutta methods [12][13][14] are some examples. As an alternative, we may consider implicit methods. 15,16 These time integrators do not suffer from instability issues even for large step sizes, but the price to pay is the solution of a global system of equations: the matrix of the system to be solved at each time step is no longer block diagonal. Hybrid methods, called locally implicit Runge-Kutta methods, that seek to combine the best of both worlds have also been proposed. [17][18][19][20] The accurate representation of the geometry is critical in the framework of high-order methods since it can become the limiting factor in terms of accuracy. Previous research shows that the approximate representation of the geometry must be at least of the same order as the solution in order to maintain the convergence rate of the method. 21 In this regard, the use of isoparametric approaches with curved elements is the most widespread. However, it requires the generation of curvilinear high-order meshes, which is still challenging for complex geometries in three dimensions. 22 The use of exact boundary representation has shown advantages in this context, 23,24 but the automatic generation of meshes for NEFEM is even more challenging and, nowadays, it is only available for 2D geometries. 25

The mesh generation problem can be overcome by using nonconforming methods, in which the analysis mesh is computed starting from a simple and easy-to-mesh shape that contains the problem domain. The result is a geometry-independent discretization of the domain of interest. Among the available nonconforming methods we find the fictitious domain methods (FDM), 26,27 which capture the physical domain of the problem by assigning different properties to each point depending on the domain in which it lies. The Cartesian grid finite element method (cgFEM) 28 is an example of FDM whose main feature is the use of a mesh made up of regular quadrilateral (2D) or regular hexahedral (3D) elements. This spatial discretization presents very convenient characteristics from the computational point of view. The hierarchy of the mesh can be used to project information among meshes through h-refinement processes, preventing redundant calculations. Moreover, the structure of the mesh simplifies tasks related to locating points in elements or finding connectivities between entities of the mesh. The similarity of the elements leads to a major reduction in computational time and memory requirements, as the integrals of all elements with the same integrand can be computed by scaling those obtained in a reference element. The cgFEM has been successfully applied to the solution of both image- and geometry-based structural analysis in a range of problems such as linear elasticity, 29 plasticity, 30 contact, [31][32][33] optimization, 28,34,35 image-based analysis, 36,37 and bone regeneration controlled by mechanoregulatory models. 38 In this work, the use of the DG formulation with high-order approximations in the context of cgFEM for the solution of Maxwell's equations in the time domain is proposed for the first time.
The resulting method brings together the advantages of both approaches, allowing the numerical simulation of the propagation of electromagnetic waves in complex domains with a high level of accuracy. The high fidelity of the high-order method is ensured by using integration strategies based on NEFEM that allow the exact representation of the geometry. 21,39 For elements with a small fraction of their area inside the computational domain, a novel stabilization strategy is proposed to ensure that the maximum allowed time step leads to a stable time-marching scheme. The strategy consists of extending the solution of an element over the domain assigned to an adjacent one while reducing to zero the support of the shape functions defined on this second element. Therefore, the nodal values linked to these interpolation functions can be removed from the system of equations. Following this approach, we get rid of the unstable degrees of freedom. Independently, other authors have proposed similar strategies to improve the conditioning in elliptic problems 40 and to solve the Navier-Stokes equations with nonconforming meshes. [41][42][43] The strategy described in this contribution is more sophisticated than previous ones, which translates into greater robustness and versatility. Moreover, we performed a set of numerical tests that provide a better understanding of the effects of this kind of stabilization technique on the overall performance of the algorithm.
The remainder of the article is organized as follows. In Section 2 the DG formulation applied to the electromagnetic problem is briefly presented. Next, Section 3 summarizes the key aspects of cgFEM and points out some important features regarding its implementation. The combination of the two techniques that leads to the cgDG method is covered in Section 4. The maximum efficiency of the algorithm is sought by identifying recurring cases and deriving calculation procedures that minimize the computational cost and the memory requirements related to the evaluation of the discrete operators. Section 5 presents the stabilization strategy. Once the core idea of this approach is introduced, a detailed description of the domain matching algorithm is given. The numerical tests and the results obtained for problems of interest in the context of computational electromagnetics are collected in Section 6. The performance of the proposed method is studied and applied to a range of problems with increasing difficulty, either as a consequence of an increase in frequency, an increase in the complexity of the geometry, or both. Some conclusions and a series of future lines of research are presented in Section 7 to close this contribution.

DISCONTINUOUS GALERKIN FORMULATION
In this section, we give a brief description of the DG formulation. The reader is referred to References 6,44-46 for further information.
Let us consider the transient Maxwell's equations written in dimensionless conservation form and using Einstein's notation:

∂U/∂t + ∂F_k(U)/∂x_k = S(U),   (1)

where U is the vector of conservative variables, F_k(U) is the hyperbolic flux in the kth direction, and S(U) is the vector gathering all source terms. In the context of 2D scattering problems, those vectors are given in (2) for the transverse electric mode (TE_z) and in (3) for the transverse magnetic mode (TM_z). In (2) and (3), E_j represents the j component of the scattered electric field vector, H_j represents the j component of the scattered magnetic field vector, and ε and μ are the relative permittivity and permeability of the medium (we consider a linear isotropic material without current sources), respectively. The superscript i identifies the incident field. Note that we adopt the scattered field formulation to avoid the propagation of the incident field from the outer boundary of the model to the scatterer, which reduces the computational cost of the simulation and has a positive impact on the accuracy of the solution. See Reference 2 for further details. Let us consider a bounded domain Ω and its decomposition into a set of nonoverlapping subdomains Ω_e such that Ω = ⋃_e Ω_e. We locally approximate the solution via a polynomial interpolation defined by a set of nodal values on Ω_e. The functions used to interpolate constitute a basis of the space of polynomials up to degree p in Ω_e, denoted by P^p(Ω_e). The test functions considered to derive the weak form are polynomials also contained in the space P^p(Ω_e), and they are gathered in a vector of the same size as U, namely, W.
Multiplying (1) by W, integrating over Ω_e, and integrating by parts, we obtain the weak form (4) in an element, where U_e represents the restriction of the vector of conservative variables U to the element domain and F_n represents the flux in the normal direction n, with n the outward unit normal vector to the boundary ∂Ω_e. To account for the discontinuous nature of the approximation, the normal flux is replaced by a numerical normal flux F̂_n(U_e, U_out) that depends not only on the solution within the element, U_e, but also on the solution on the neighboring element, U_out(x). Choosing the numerical flux is a crucial step, as it dictates the stability and the accuracy of the numerical scheme. Some of the most widespread approaches consist of using a centered flux, an upwind flux, or a linear combination of the two previous schemes. For linear hyperbolic problems, such as the one considered here, a typical choice consists of using a flux splitting approach. This formulation is based on dividing the flux across the boundary of the element into outgoing (superscript +) and incoming (superscript −) fluxes according to the positive and negative eigenvalues of the Jacobian matrix A_n = ∂F_n/∂U, respectively. The upwind numerical flux is computed by considering that the incoming component of the flux is due to the distribution of the electromagnetic field on the outside, while the outgoing flux depends on the distribution inside the element:

F̂_n(U_e, U_out) = F_n^+(U_e) + F_n^−(U_out).   (8)

Substituting (8) into (4) and integrating by parts again, we obtain the weak form (9) in an element. Considering the jump operator ⟦G(U_e)⟧ = G(U_out) − G(U_e), in which G represents any operator, the difference between fluxes can be simplified, leading to (10). Note that the integral of the flux over the boundary penalizes the difference between the incoming flux due to the internal and external fields, namely, the incoming flux jump.
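The flux-splitting construction can be illustrated on a small linear system. The sketch below uses a generic 2×2 model Jacobian (an illustrative assumption, not the paper's TE_z/TM_z operators): it eigendecomposes A_n into its outgoing and incoming parts and assembles the upwind flux from the interior and exterior states.

```python
import numpy as np

def split_flux_matrices(A):
    """Split a Jacobian A_n into outgoing (A+) and incoming (A-) parts
    via its eigendecomposition A = R diag(lam) R^-1."""
    lam, R = np.linalg.eig(A)
    Rinv = np.linalg.inv(R)
    Aplus = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv   # positive eigenvalues
    Aminus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv  # negative eigenvalues
    return Aplus.real, Aminus.real

def upwind_flux(A, U_in, U_out):
    """Upwind numerical flux: outgoing part from the interior state,
    incoming part from the exterior (neighbor) state."""
    Ap, Am = split_flux_matrices(A)
    return Ap @ U_in + Am @ U_out

# Model problem: two counter-propagating characteristics (wave system)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenvalues +1 and -1
U_in = np.array([1.0, 0.0])
U_out = np.array([0.0, 0.0])
F_hat = upwind_flux(A, U_in, U_out)
```

By construction A⁺ + A⁻ recovers the full Jacobian, so the scheme is consistent with the original flux.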
For an internal edge, both fields are elemental solutions and the weak continuity of the global solution is imposed as a result. For an external edge, that is, an edge over ∂Ω, the outer state is not defined. As usual in the framework of hyperbolic equations, the Rankine-Hugoniot jump conditions are employed to compute the incoming flux jump, which, when integrated over the boundary, translates into a weak enforcement of the boundary conditions.
The examples considered in this article focus on electromagnetic scattering by perfect electric conductors (PEC). The tangential component of the total electric field vanishes on the boundary of a PEC, which yields the expressions of the boundary term for the TE_z and TM_z modes. The physical domain Ω_ph may be either a closed domain or an infinite domain. To reproduce the infinite-domain behavior on the boundary of Ω, an appropriate boundary condition that approximates the Silver-Muller radiation conditions is imposed. We consider a perfectly matched layer (PML) in combination with an absorbing boundary condition (ABC) to absorb outgoing waves. We use a first-order ABC that imposes a null incoming flux at the outer boundary. We recommend the reader to consult References 53-57 for further details on artificial boundary conditions for truncated infinite domains. After the spatial discretization is performed, the resulting system of ordinary differential equations can be written as:

M dU_h/dt = R(U_h),   (15)

where M is the mass matrix, U_h is the vector containing the nodal values of the conservative variables, and R(U_h) represents the vector of residuals.
Equation (15) is integrated in time using an explicit Runge-Kutta-type method. The use of high-order explicit integrators of this type decouples the system of Equation (15). The splitting of the global problem into a set of smaller local ones has important advantages from the computational point of view, such as a great reduction in the size of the systems of equations to be solved, leading to faster calculations and memory savings, and the possibility of applying parallel processing. However, for the algorithm to behave in a stable way, it is necessary to set a time step for the integration that complies with the CFL stability condition which, in the particular case of the classic Runge-Kutta scheme, is formulated in terms of a positive constant, a length h that quantifies the minimum size of the spatial discretization, the speed of light c in the medium, and the degree p of the polynomial approximation.
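As an illustration, a minimal sketch of the resulting step-size restriction, assuming a CFL bound of the common form Δt ≤ C·h/(c·p²); the paper's exact constant and degree scaling may differ.

```python
def max_stable_dt(h_min, p, c=1.0, C=1.0):
    """Largest time step allowed by a CFL condition of the assumed form
    dt <= C * h_min / (c * p**2), where h_min is the smallest element
    size, p the polynomial degree, and c the wave speed in the medium."""
    return C * h_min / (c * p ** 2)

# Halving the smallest element halves dt; doubling p quarters it.
dt_coarse = max_stable_dt(h_min=0.1, p=2)
dt_fine = max_stable_dt(h_min=0.05, p=2)
dt_high_order = max_stable_dt(h_min=0.1, p=4)
```

The sketch makes explicit why a single badly cut element can throttle the whole simulation: the global Δt is dictated by the minimum h over the mesh.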

CARTESIAN GRID FINITE ELEMENT METHOD
As mentioned in Section 1, cgFEM is an FDM characterized by the use of meshes exclusively made of regular quadrilateral elements in 2D or regular hexahedral elements in 3D. In the following, we present the key ideas of this discretization strategy. Even though they are also valid for 3D problems, in this exposition we focus on the 2D case for the sake of simplicity. The process to generate the analysis mesh starts by defining the fictitious domain. In cgFEM, the fictitious domain is a square domain that contains the physical domain, namely, the bounding box. Next, a set of grids containing all possible elements that may appear in the analysis mesh is computed. The grid pile is constructed following an iterative procedure. The starting point is a grid that contains just one element of the same size as the bounding box, which we refer to as the reference element. The procedure lasts for a number of iterations specified by the user, which defines the minimum element size. Note that, if d is the number of spatial dimensions of the problem, the grid obtained after iteration l, namely, the level-l grid, contains 2^{dl} identical elements which are similar to the reference element. Within this stage, each element may be identified by the virtual coordinates of its bottom-right vertex. The virtual coordinate system is an R^{d+1} coordinate system with its origin on the upper-left corner of the bounding box and whose basis is defined, in relation to the Cartesian basis {e_1, …, e_d}, in terms of the element size h(l) for a given level l. In Figure 1, we present the level-0 and level-1 grids and the virtual coordinates associated with their elements. The next step is to find the intersections of the geometry with the grid pile. Depending on the location of the elements with respect to the boundary of the physical domain, these are classified into external, internal, and boundary elements.
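The hierarchical structure of the grid pile can be sketched as follows; the (level, i, j, …) tuple used here is a hypothetical encoding of the virtual coordinates, not necessarily the exact convention used in cgFEM.

```python
def grid_level(level, bbox_size, dim=2):
    """Size and count of the elements in the level-l grid of the pile:
    h(l) = H / 2**l, and 2**(d*l) identical elements per grid."""
    h = bbox_size / 2 ** level
    n_elems = (2 ** dim) ** level
    return h, n_elems

def virtual_coords(level, index, dim=2):
    """Hypothetical (level, i, j, ...) virtual coordinates of an element,
    unraveling a linear element index into per-axis grid positions."""
    coords = []
    n = 2 ** level          # elements per axis at this level
    for _ in range(dim):
        coords.append(index % n)
        index //= n
    return (level, *coords)

h1, n1 = grid_level(1, bbox_size=1.0)   # level-1: 4 elements of size 0.5
```

Because every level is a uniform refinement by 2 along each axis, parent/child and neighbor relations reduce to integer arithmetic on these tuples.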
Considering the computational domain Ω, the element e with domain K_e is classified as internal if K_e ⊂ Ω, external if K_e ∩ Ω = ∅, and boundary otherwise. In Figure 2A,B, we show the grid pile up to level 2 and the classification of the elements in the top-level grid computed for an arbitrary domain.
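A simplified version of this classification can be sketched with a point-sampling inside test; the actual cgFEM implementation relies on exact intersections with the CAD boundary, so this is only an approximation for illustration.

```python
def classify_element(x0, y0, h, inside, n_samples=8):
    """Classify the square element [x0, x0+h] x [y0, y0+h] as 'internal',
    'external' or 'boundary' by sampling an inside-domain predicate.
    A point-sampling stand-in for the exact geometric test."""
    hits = 0
    total = 0
    for i in range(n_samples):
        for j in range(n_samples):
            x = x0 + (i + 0.5) * h / n_samples
            y = y0 + (j + 0.5) * h / n_samples
            total += 1
            hits += inside(x, y)
    if hits == total:
        return "internal"
    if hits == 0:
        return "external"
    return "boundary"

# Example physical domain: the unit disk centered at the origin
disk = lambda x, y: x * x + y * y <= 1.0
```

Only the "boundary" category triggers the special integration treatment described below; internal elements keep the standard reference quadrature.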
The analysis grid is built up by selecting internal and boundary elements from the grid pile in such a way that there is no overlapping among elements and the physical domain is completely covered. This can be easily verified by using the virtual coordinates to explore the relations among elements in the grid pile.
To obtain the high-order nodal distribution of an element, the nodal distribution of the reference element is mapped to the physical elements. Notice that, by construction, this mapping is affine. Therefore, the Jacobian matrix of the transformation is constant and there is no loss of consistency in the interpolation, that is, a polynomial interpolation of degree p in local coordinates of the reference element preserves its degree when transferred to Cartesian coordinates. As shown later, these features provide great advantages for numerical integration.
Taking the analysis mesh as a starting point, we need to define a set of subdomains that fits the geometry, see Figure 2C. These subdomains are employed to design the numerical quadratures used to evaluate the terms of the weak form of the problem. This way, the actual computational domain Ω is considered, instead of the bounding box. The procedure used to obtain the integration subdomains elementwise depends on the element type. Internal elements are treated as standard elements and, therefore, the integration domain matches the element domain. In the case of boundary elements, the portion of the physical domain within the element is split into simpler subdomains (triangles or quadrilaterals) and special integration quadratures are defined over each of them. The integration over boundary elements must provide a sufficient level of accuracy in order to keep the integration error below the discretization error. In the context of cgFEM, the two different strategies illustrated in Figure 3 are considered:
1. Polynomial approach: A polynomial curve is fitted to a set of sampled points (red squares in Figure 3) over each portion of the boundary delimited by intersection points with the element edges or vertices of the geometry (green squares in Figure 3). The integration subdomains are then computed considering this polynomial approximation of the exact geometry.
2. NEFEM approach: The parametric definition of the boundary provided by the CAD model is taken into account. This approach allows the removal of any geometric error induced by a polynomial approximation of the boundary.
In this contribution we only consider the NEFEM approach to ensure that nonphysical artifacts due to a poor representation of the domain do not appear. We follow this strategy to evaluate integrals both over the computational domain and over its boundary.
Concerning the imposition of boundary conditions in cgFEM, the fact that the boundary does not conform to the mesh introduces some differences with respect to the standard finite element method. Since the mesh nodes are generally not placed on the boundary, the essential boundary conditions cannot be strongly imposed; alternatively, they are imposed in weak form. In the case of Neumann boundary conditions, the integral along internal boundaries gives rise to terms in the RHS of the system of equations at all degrees of freedom (DOFs) of the element, unlike in the standard finite element method, where these terms only appear at the DOFs tied to nodes located on the boundary.
Processing Dirichlet boundary conditions is significantly more complex. There is a range of techniques that allow the imposition of Dirichlet conditions in a weak form, among which we can mention penalty methods, the vital vertex method, the mortar method, and Nitsche's method. In cgFEM we use a strategy developed in our research group, which has been validated for the elastic problem. The condition on the primal variable is imposed through a Lagrange multiplier field evaluated at the integration points of the curve defining the Dirichlet boundary of the physical domain. This field leads to an overconstrained system of equations, making it necessary to introduce a stabilization term in the formulation that ensures compliance with the inf-sup condition. To restrict the range of values that these multipliers can take, and thus ensure optimal convergence, a stabilization term is added to the functional that penalizes the difference between the Lagrange multiplier field and a smoothed field on the dual variable obtained by postprocessing of the solution of the problem. This stabilization strategy results in an iterative resolution process that ensures optimal convergence to the exact solution of the problem through refinement. The interested reader is addressed to Reference 29.
F I G U R E 3 Example of different resulting integration subdomains depending on the boundary approximation: polynomial (top) and NEFEM (bottom)

DISCONTINUOUS GALERKIN FORMULATION IN THE FRAMEWORK OF CGFEM
In the framework of the FDM, the calculated solution only makes physical sense when evaluated on the problem domain. Therefore, the approximate field U_h(x, t) is mathematically represented by a piecewise-defined function that, on each element e, reads:

U_h^e(x, t) = Σ_i N_i(x) U_i^h(t),   (17)

where U_i^h(t) is the vector that contains the values of the conservative variables evaluated at the node i of the element e at the instant t and N_i(x) represents the shape function of degree p associated with the node i evaluated at the global coordinate x. This approximate solution is contained in a polynomial space up to degree p with support in Ω_e, P^p(Ω_e), as stated in Section 2. A Fekete nodal distribution 58,59 is considered to maximize the accuracy of the resulting approximation.
Regarding the construction of the weighting functions, a similar interpolation (18) is used. In the following, we consider that the flux operators F_k and F_n^− and the source operator S are linear, as usual for the electromagnetic problem. Likewise, the analytical expression of the source term will be substituted by a polynomial approximation obtained by interpolating the values of the source evaluated at the nodes. Plugging (17) and (18) into (10) and taking into account the aforementioned assumptions, we obtain the semidiscrete system (19), where M is the mass matrix, C_k is the convection matrix in the kth direction, and a boundary term represents the electromagnetic flux through the boundary of the elements. In general, coordinate transformations are used to map the domain of each integral into a space in which that domain corresponds to a reference one for which an integration quadrature is available. We will be dealing with three different spaces, namely: the physical space Ω_x, the local space Ω_ξ, and the integration space Ω_χ. The coordinates of a point in each one of these will be denoted by x, ξ, and χ, respectively. The mappings between those spaces are ψ: Ω_ξ → Ω_x and φ: Ω_χ → Ω_x. Note that ψ(ξ) = N_i(ξ) x_i which, because of the Cartesian structure of the mesh, is an affine transformation of the form

x = ψ(ξ) = x_e + (h_e/2) ξ,

where h_e and x_e represent the size and the center of the element e, respectively. This property lets us obtain the projection of a point of the integration space onto the local space as ξ = ψ̂(χ) = ψ^{−1}(φ(χ)). A graphical representation of these spaces and their relations is shown in Figure 4.
Remark 1. For the sake of simplicity, we expressed N_i(x) in terms of the global coordinate x. However, these functions are defined using local coordinates to improve accuracy, namely, N_i(ξ). Thus, N_i(x) is computed in two stages: mapping from global to local coordinates using ψ^{−1}(x) and evaluating the function N_i(ξ). This abuse of notation does not affect the validity of the exposition.
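The chain of affine mappings can be sketched as below, assuming a [-1,1]^d reference element; the function names and the parametrization of the integration subdomain are illustrative assumptions.

```python
import numpy as np

def local_to_physical(xi, x_center, h_e):
    """Affine map psi: local coords xi in [-1,1]^d -> physical coords,
    x = x_e + (h_e/2) * xi (assuming a [-1,1]^d reference element)."""
    return np.asarray(x_center) + 0.5 * h_e * np.asarray(xi)

def integration_to_local(chi, xi_center, s):
    """Affine map from the integration space to local coordinates, for an
    integration subdomain of size s centered at xi_center (hypothetical
    parametrization of the subdomain)."""
    return np.asarray(xi_center) + 0.5 * s * np.asarray(chi)

# Composition: quadrature point chi -> local xi -> physical x
chi = np.array([0.0, 0.0])
xi = integration_to_local(chi, xi_center=[0.5, 0.5], s=1.0)
x = local_to_physical(xi, x_center=[2.0, 3.0], h_e=0.5)
```

Since both maps are affine, their composition is affine as well, which is what makes the element-independent reference matrices of the next paragraphs possible.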
Let us begin by studying the integrals over the interior of the domain. We focus on the case of the mass matrix. Applying the coordinate transformations defined above, the integral over Ω_e is expressed as a sum over integration subdomains, where n_is is the number of integration subdomains within the element and J^n represents the Jacobian of the transformation φ^n associated with the integration subdomain n.
Internal elements are treated as standard finite elements and, therefore, they contain a single integration subdomain that matches the element domain. In the following, the superscript n is removed for simplicity. Besides ψ, the mapping φ for internal elements is also affine, since both the element domain and the reference domain for integration are hypercubes. As a consequence, the Jacobian determinant of the transformation is constant and can be taken out of the integrand. In the same way, given the similarity between the elements of the mesh, it is possible to relate their Jacobians taking into account the mesh level to which they belong according to:

|J_l| = 2^{−dl} |J_0|.

For convenience, we use the 0-level element to compute the reference Jacobian. If we study the integrand of the expression, we observe that it only depends on the interpolation functions and the mapping from the integration space to the local space. On the one hand, the interpolation functions are completely defined from a nodal distribution which is specific to the degree of approximation p used. On the other hand, as the mapping ψ̂ = ψ^{−1}∘φ is a combination of two affine transformations affected by the Jacobian in the same way, ψ̂ is an affine transformation common to all the internal elements. Therefore, we can rewrite the expression as:

M_e = 2^{−dl} M^{0,p},

where M^{0,p} represents the mass matrix of the 0-level element of degree p and d is the number of spatial dimensions. Proceeding in a similar way with the convection matrix, we arrive at the expression:

C_{k,e} = 2^{−(d−1)l} C_k^{0,p},

where C_k^{0,p} represents the convection matrix of the 0-level element of degree p, computed using derivatives with respect to local coordinates.
With this formulation we compute the mass and convection matrices of any of the n_ie internal elements of the mesh by scaling the reference matrices. This allows us to save a large amount of memory, since we move from having to store 2n_ie matrices to 2n_ip matrices, where n_ip stands for the number of different degrees of approximation used for the spatial discretization.
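The level-based rescaling of the reference matrices can be sketched as follows; the 2^{-dl} and 2^{-(d-1)l} factors follow from the affine Jacobian, while the matrices themselves are placeholders.

```python
import numpy as np

def scaled_mass_matrix(M_ref, level, dim=2):
    """Mass matrix of a level-l internal element from the level-0
    reference matrix: the volume scales as h^d, i.e. by 2**(-d*l)."""
    return M_ref / 2 ** (dim * level)

def scaled_convection_matrix(C_ref, level, dim=2):
    """Convection matrix: one spatial derivative cancels one factor of h,
    so it scales by 2**(-(d-1)*l)."""
    return C_ref / 2 ** ((dim - 1) * level)

M0 = np.eye(4)   # placeholder level-0 reference mass matrix
M2 = scaled_mass_matrix(M0, level=2)   # scaled by 1/16 in 2D
```

Only one pair of reference matrices per polynomial degree needs to be stored; every internal element's operators are recovered on the fly by a single scalar multiplication.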
In the case of boundary elements, the domain Ω_e does not match the element domain K_e and the integral is computed as a sum of integrals over simpler subdomains obtained by splitting Ω_e. Subsequently, one of the techniques mentioned in Section 3 is applied to obtain the transformation functions φ and ψ̂. No simplification is possible, since the number of terms of the summation in (26) depends on the geometry and approximation of Ω_e, and the integrand of each of these terms depends on the transformations φ and ψ̂, whose definitions come from the geometry and position within the element of each integration subdomain. Therefore, the domain integrals of each of the intersected elements are treated independently according to the general formulation. It is worth remarking that, even though the computational burden associated with these integrals is higher than that associated with internal elements, the number of intersected elements is small with respect to the total number of elements in the mesh.
Regarding the flux term, we differentiate two groups depending on the characteristics of the integrals:
• Integrals over interfaces between elements: the external and internal fields are defined by the solution of the finite element analysis in the neighboring element and in the current element, respectively. Likewise, the values are computed by interpolating the solutions of those nodes located on the boundary under study, since the influence of the rest of the nodes is null. Due to the Cartesian structure of the mesh and its independence from the geometry, all element faces possess a constant normal, which is aligned with one of the Cartesian directions. As a consequence, the incoming flux operator is identical for all points on the face and, since it is linear with respect to the conservative field, it can be taken out of the integrand in (22). Furthermore, this operator can only take 2d different expressions, which correspond to the Cartesian directions (four in the 2D case). With this in mind, we can express (22) in terms of a mass matrix m_ij of dimension d − 1, where the subindices j and k identify the rows in U_h that correspond to the nodes of the element and its neighbor, respectively. In turn, this set of integrals can be subdivided into:
-Integrals over complete edges: m_ij corresponds to the mass matrix of a Cartesian element of one dimension lower than that of the problem. Following a development similar to that presented for the volume integrals, for a given element of degree p, we can express m_ij as a rescaling of a (d − 1)-dimensional mass matrix calculated for the 0-level element, m_ij = 2^{−(d−1)l} m_ij^{0,pn}, considering interpolation functions of degree p and of degree n, where n is the degree of the polynomial field being evaluated (internal or external). This way of obtaining boundary integrals on complete edges reduces the amount of information that needs to be stored in memory.
F I G U R E 5 Topology of two different internal faces and code for processing
However, it must be considered that, for nonuniform h-refined meshes, an element may be connected through its boundary with more than one neighboring element, or the integral on the face of the neighboring element may not cover its entire length but only part of it. This situation can be seen in Figure 5, where two different examples of interfaces between elements and their coding are represented. The encoding we use is composed of four numbers grouped in two pairs. The first number of each pair represents the ratio between the element side dimension and the size of the interface being integrated, while the second number represents the position of the interface on the element edge. It is possible to identify all the types of integrals over nonintersected element faces and calculate their respective reference matrices for the 0-level element. Something similar happens when p-refinement strategies are used, where a mass matrix m_ij^{0,pn} is obtained for each combination of degrees in the mesh.
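A sketch of the four-number interface code described above; the exact ordering convention of the pairs and of the position index is an assumption, since Figure 5 fixes it in the actual implementation.

```python
def face_code(h_elem_a, h_elem_b, h_face, pos_a, pos_b):
    """Four-number interface code (two pairs, one per element): for each
    element, the ratio of its side length to the interface length,
    followed by the position index of the interface on that element's
    edge.  Pair ordering is an assumed convention."""
    return (int(h_elem_a / h_face), pos_a,
            int(h_elem_b / h_face), pos_b)

# Uniform interface: both elements the same size, full-edge integral
code_full = face_code(1.0, 1.0, 1.0, 0, 0)   # -> (1, 0, 1, 0)
# One level jump: the big element sees the small one on half of its edge
code_jump = face_code(1.0, 0.5, 0.5, 1, 0)   # -> (2, 1, 1, 0)
```

Every distinct code corresponds to one precomputed reference face matrix, so the number of face integrals actually evaluated is bounded by the number of codes present in the mesh, not by the number of faces.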
- Integrals over intersected edges: when the element edge intersects the boundary of the physical domain, the integrand vanishes over the outer section and the integral is restricted to K_e ∩ Ω. This domain is different for each of the flux integrals over intersected faces and, therefore, their mass matrices are calculated individually according to:

• Integrals over the boundary: the calculation of the incoming numerical flux is subject to the boundary conditions imposed on the contour being integrated, while the distribution of the conservative variable due to the numerical solution in the element is a function of the values calculated at all its nodes, not only those on the element's boundary. This last characteristic is typical of fictitious domain methods (FDM) and is a consequence of the physical domain boundary being internal to the mesh. Integrals over boundaries are calculated following the general expression (22).
As a step prior to defining the integrals, we need to identify which elements meet at each of the interfaces, in a process known as neighbor search. In the context of cgFEM, we can use the virtual numbering presented in Section 3 to find out the connectivity between elements; using the virtual coordinates, this process becomes trivial. Furthermore, we can also identify the set of different combinations of degrees p, level jumps Δl, and relative positions of neighboring elements in the mesh, avoiding unnecessary computations.
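The neighbor search through virtual coordinates can be sketched as follows. The set-based storage of active elements and the function name are illustrative assumptions, not the cgFEM implementation; finding finer neighbors would additionally require enumerating children, which is omitted here.

```python
def find_neighbor(active, level, i, j, d):
    """Look up the element adjacent in direction d = (di, dj) in a 2D
    quadtree-like Cartesian mesh whose active elements are identified by
    their virtual coordinates (level, i, j).  Same-level neighbors are
    tried first, then progressively coarser ancestor cells."""
    di, dj = d
    l, ni, nj = level, i + di, j + dj
    while l >= 0:
        if (l, ni, nj) in active:
            return (l, ni, nj)
        l, ni, nj = l - 1, ni // 2, nj // 2   # climb to the parent cell
    return None
```

Because the candidate coordinates follow directly from integer arithmetic, no geometric search is needed, which is what makes the process trivial in practice.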
Intersected elements may contain disjoint regions where the solution is physically independent. If we were to use the standard discretization of cgFEM, the polynomial approximation would be shared by all the regions contained within the element. Trying to represent a set of independent fields with a single smooth function leads to a significant increase in the local error associated with these elements and to the pollution of the global solution. This problem becomes more relevant as the size of the elements increases and the details of the geometry become smaller than the characteristic length of the mesh, a situation that is quite usual in high-order methods.

FIGURE 6 Algorithm employed to detect isolated regions
In the following, we describe the algorithm that we employ to detect isolated regions; an example of its application can be found in Figure 6. If we focus on a single element on the boundary, the output of the intersection routine is the set of curves internal to the element and the segments of those curves limited by intersection points and curve endpoints, such that any point on the segment is internal to the element. Next, we split the contour of the element into a set of segments taking into account the intersection points and the corners (see Figure 6A). Then we obtain the connectivity of these segments and curves (hereafter the edges) through the intersection points and the corners (hereafter the vertices). The edges on the geometry are directed paths according to the definition of the geometry (see Figure 6B). To find an independent region in the element, we identify the tail of a directed edge, move to its successor, and traverse the graph, always taking the first edge from left to right at the vertices, until we arrive at the starting vertex. This procedure is executed from different starting points until all directed edges have been traversed once. The loops computed in this way are classified as Type I. The procedure is then repeated changing the turning criterion to always take the first path from right to left; the new loops are classified as Type II (see Figure 6C). The regions at both sides of the geometry are completely defined and, therefore, the turning criterion at the vertices characterizes the physical properties of the domain inside each loop. By the end of this algorithm, we have obtained all independent regions within the element and their physical properties. Once the regions outside the physical domain are removed, the remaining regions are necessarily isolated regions (see Figure 6D).
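A minimal sketch of the loop-extraction walk, assuming the element contour and geometry segments have already been assembled into a planar graph with vertex positions. The turning criterion is realized here through the angular ordering of the edges around each vertex; the data layout is an illustrative assumption.

```python
import math

def faces(edges, pos):
    """Extract the closed loops of a planar graph, mimicking the walk
    described above.  edges: set of undirected pairs (u, v);
    pos: vertex id -> (x, y) coordinates."""
    # counter-clockwise order of neighbors around each vertex
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    for u in nbrs:
        nbrs[u].sort(key=lambda v: math.atan2(pos[v][1] - pos[u][1],
                                              pos[v][0] - pos[u][0]))
    loops, seen = [], set()
    directed = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for u0, v0 in directed:
        if (u0, v0) in seen:
            continue
        loop, u, v = [], u0, v0
        while (u, v) not in seen:          # traverse each directed edge once
            seen.add((u, v))
            loop.append(u)
            ring = nbrs[v]
            # next edge: the one just after (v -> u) in the cyclic order,
            # which realizes the fixed turning criterion at the vertex
            w = ring[(ring.index(u) + 1) % len(ring)]
            u, v = v, w
        loops.append(loop)
    return loops
```

Reversing the cyclic order at the vertices swaps the turning criterion and yields the complementary (Type II) family of loops.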
In this work, the spatial discretization is modified in those elements that contain two or more regions of the physical domain disjoint from each other (from now on referred to as isolated regions). Let us consider an element e located on the boundary of the physical domain Ω in such a way that their intersection leads to a set of n_ir isolated regions in it, namely, Ω_e = ∪_{r=1}^{n_ir} Ω_e^r (see Figure 7B). The set of nodes associated with the element e is split into n_ir subsets, one per isolated region, and the shape functions N_i used to interpolate the nodal values are redefined considering each subset separately. The approximate field U_h(x, t) and the weighting functions W_h(x) in a boundary element with multiple isolated regions, using standard summation notation, are formulated as: respectively. The final discretization is completely independent from one region to another, so this strategy allows the use of different polynomial degrees for each of them. Indeed, we employ an independent Fekete nodal distribution mapped onto K_e for each subset. An example of how the dissociation strategy works is shown in Figure 7. In Reference 60, a similar dissociation approach was introduced to handle discontinuities internal to the elements in the context of 2D elasticity problems; that same approach was later applied to other problems, see Reference 61 and the references therein. Our implementation goes beyond existing approaches in the literature in the sense that the number of isolated regions within an element is not restricted, the regions are bounded by the exact representation of the geometry provided by the CAD software, the FE approximation is defined at region level, and high-order polynomial bases are considered. Moreover, the DG solver adopted in this contribution lets us weakly impose the continuity of the solution across those interfaces that lie inside the computational domain without adding any additional term to the original formulation.

STABILIZED SCHEME
As previously mentioned in Section 1, the computational advantage exhibited by the DG formulation is closely related to the use of explicit integrators to advance in time. The stabilization strategy proposed in this section seeks to minimize the negative effect that the presence of cut elements in the mesh has on the maximum stable time step of the explicit time integrator. We want to identify and address the stabilization of unstable DOFs in the very first stages of the workflow in order to avoid redundant calculations. To do this, we need a local parameter that lets us predict which intersected elements will induce a time step substantially smaller than the one associated with noncut elements. The global stability of an explicit integrator is quantified by its CFL condition, which depends on the polynomial degree and support of the most unfavorable element. Since the degree of approximation is not affected by the mesh-geometry interaction, we seek a parameter related to the size of the support domain. In particular, we use as indicator the relative area of the region, that is, the ratio |Ω_e^r|/|K_e| of the area of the region r to that of the whole element e, where | ⋅ | is the operator that provides the area of a given domain. In Section 6.1, we show the convenience of this coefficient for predicting unstable behaviors: the study of how its reduction affects the maximum step size shows a monotonic relationship between these magnitudes, which justifies its use. Furthermore, it is a purely geometric parameter that is easy to evaluate in the initial stages of the algorithm. Finally, a set of DOFs is considered unstable if the relative area of its support region is less than a certain user-specified threshold value th.
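The relative-area indicator is straightforward to evaluate once the cut region is available as a polygon. A minimal sketch, assuming square elements of side h; the name `alpha_th` for the user threshold is our naming for illustration, not the paper's symbol.

```python
def polygon_area(pts):
    """Shoelace area of a simple polygon given as a list of (x, y)."""
    s = 0.0
    n = len(pts)
    for k in range(n):
        x0, y0 = pts[k]
        x1, y1 = pts[(k + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def relative_area(region_pts, h):
    """Area ratio of an isolated region with respect to its (square)
    Cartesian element of side h."""
    return polygon_area(region_pts) / (h * h)

def is_unstable(region_pts, h, alpha_th):
    """Flag the DOFs of a region whose relative area falls below the
    user-specified threshold."""
    return relative_area(region_pts, h) < alpha_th
```

Being purely geometric, the indicator can be evaluated right after the intersection stage, before any operator is assembled.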
Remark 2. Instability evaluation is performed at the level of isolated regions. For the sake of facilitating the comprehension of the algorithm, in the following we assume that each element contains at most one isolated region, n_ir = 1. Therefore, the DOFs linked to a region always correspond to those associated with the element that encompasses it. With that in mind, we simplify the notation, using Ω_e instead of Ω_e^1.

The stabilization strategy we propose is based on modifying the support domains of the interpolation functions. Once an unstable region has been identified, its domain is reassigned to the polynomial approximation defined on a neighboring element considered as stable. The resulting element is called an extended element since, when evaluating the operators in weak form, the integrals extend beyond the boundary of the element. The shape, nodal distribution, and shape functions of the extended elements remain the same before and after the reassignment procedure. Therefore, as far as implementation is concerned, they can be treated as standard intersected elements and do not entail any additional computational burden. Since the unstable regions are small with respect to the size of the extended element, problems related to polynomial instability due to extrapolation have not been detected. Once the reassignment has been completed, the DOFs initially associated with the unstable region have zero support and can be eliminated from the analysis mesh.
Remark 3. An alternative strategy to define the approximate solution on extended elements was also investigated. In this alternative approach, the reference element was mapped to a rectangle that contained all the regions associated with each element at hand. As the support of the shape functions is completely within the boundary of the nodal distribution, no extrapolation is applied. In comparison with the approach adopted in this contribution, this alternative did not provide any major advantage in terms of the overall stability of the method while the departure from the cgFEM data structure of the mesh reduced the efficiency of the resulting scheme.
The value of the maximum stable time step that may be expected from the resulting discretization is greater than or equal to that associated with an element with a relative area of th. In this way, we bound the maximum stable time step from below and eliminate its dependence on the cut elements.
Before going through the steps of the reassignment algorithm, we need to define some further notation. Let us consider two adjacent elements i and j that intersect the physical domain Ω, leading to Ω_i = K_i ∩ Ω and Ω_j = K_j ∩ Ω, where K_i and K_j are the domains of the whole elements. We represent as ij the relative area of the combined domain Ω_i ∪ Ω_j with respect to K_j; this corresponds to the extension of element j over Ω_i. Note that the subscripts identify the physical subdomains being combined and that the element we compare against is that of the last index. Besides the combined relative area, we need to compute the centroids of the domains; for Ω_i and Ω_j, these positions are represented as x_i and x_j, respectively. Finally, we identify as d_ij the distance between the centroids of the domains Ω_i and Ω_j, which formulates as d_ij = ‖x_i − x_j‖.

The following briefly describes the algorithm used to reassign unstable domains. First, we identify the regions that limit the stability of the marching algorithm by computing the relative areas of all isolated regions within the boundary elements of the mesh and comparing them with th. Next, the mesh is checked to verify that all selected regions belong to elements that are no larger than their neighbors; the elements that do not meet this condition are refined.

Remark 4. The refinement criterion presented in this section prevents extensions over regions even greater than the size of the element, which could lead to polynomial instabilities associated with extrapolation. Alternatively, mesh refinement may be understood as a way of splitting the original unstable regions into smaller ones. With this in mind, the strategy lets us perform a better reassignment of the unstable region, linking each subregion to the most suitable element independently. We may even find child elements that are inherently stable, without modifying the time integration scheme.
In Figure 8, we consider an initial unstable discretization (Figure 8A) and compare the outputs of the reassignment algorithm when it is directly applied (Figure 8B) and when the mesh is refined in advance (Figure 8C). The figures show how this last approach, the one adopted in this contribution, leads to a more effective rearrangement of the integration subdomains and to smaller extrapolations. It is worth noting that the refinement criterion does not affect the stability of the base mesh, since the stable time step is governed by the finest discretization.
Once the final mesh is obtained, we loop through the unstable regions in increasing order of relative area. Let us consider the unstable region Ω_i. The first step is to identify the neighboring elements, which are those that share an edge with element i; this operation is trivial given the Cartesian structure of the mesh in cgFEM. Next, we explore the regions within the neighboring elements and discard those that are not adjacent to Ω_i, obtaining the preliminary set of candidate elements that may be extended. In the following stage, the combined relative area ij resulting from each of the possible reallocations is computed. The final set of candidates is obtained by removing from the preliminary set those elements whose combined relative area does not meet the condition ij ≥ th. If this set is empty, the iteration ends and we move on to the next subdomain without performing any change to the discretization. If some elements remain, the centroids of the candidate regions are evaluated and their distances d_ij with respect to the centroid of the unstable region are computed. Finally, the unstable region is reassigned to the element that presents the minimum distance. This procedure is depicted in Figure 9. If the final combination involves two initially unstable domains, both are removed from the set of unstable domains; if the element chosen for extension was a stable element, the region is marked as resolved. The process continues until there are no domains classified as unstable.
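One iteration of the candidate-selection step above can be sketched as follows. The dictionary fields are hypothetical stand-ins for the quantities defined in the text (combined relative area and centroids); the real implementation works on the cgFEM mesh structures.

```python
import math

def reassign(region, candidates, alpha_th):
    """Select the element to extend over an unstable region: keep only
    candidates whose combined relative area meets the threshold, then
    pick the one whose centroid is closest to the region's centroid.
    Returns None when no feasible candidate exists (region left as is)."""
    feasible = [c for c in candidates if c["combined_area"] >= alpha_th]
    if not feasible:
        return None
    xi, yi = region["centroid"]
    return min(feasible,
               key=lambda c: math.hypot(c["centroid"][0] - xi,
                                        c["centroid"][1] - yi))
```

The threshold filter runs before the distance comparison, so a nearby but still-unstable combination never wins over a farther feasible one.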
Note that unstable neighboring elements are not removed from the first selection but may remain as candidates. If, when evaluating stability, the support resulting from merging the domains makes the initial discretization stable, the element becomes likely to be extended. In this way, a smaller number of DOFs is eliminated and the size of the extensions is contained. By traversing the target domains in ascending order of relative area, we favor this type of assignment.
When selecting the element to be extended, we may be interested in considering other characteristics beyond the proximity between domains. For example, the level of stability, the orientation, or the complexity of the assembled domain may be important features we must control. These aspects can be implicitly taken into account in the selection as a series of factors f_c that multiply the distance d_ij, obtaining:

d̃_ij = d_ij ∏_{c=1}^{n_f} f_c,

FIGURE 9 Procedure to reassign unstable regions
where n_f refers to the total number of modifiers taken into account.
In the current implementation we consider the following factors:

• f_1: penalizes those elements with a relative area close to the threshold value, giving an advantage to those elements with better characteristics from the stability point of view. This factor fits the expression: where A_1 is the parameter that controls the range of values that are penalized. We recommend setting A_1 = 10.
• f_2: penalizes those elements with an unfavorable distribution of the physical domain, reducing the appearance of very slender integration domains. It is formulated as: where I_ij^k represents the moment of inertia of the merged region with respect to the k-axis passing through its centroid and A_2 is a number that limits the range of values that are penalized. We recommend setting A_2 = 10.
• f_3: penalizes those elements that have already been extended, indirectly reducing the possibility of integration domains with internal corners. It is computed as: where n_t represents the number of times an element has been previously selected for extension.
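In the selection step, the factors simply scale the centroid distance before the minimum is taken. A trivial sketch; the concrete forms of f_1, f_2, and f_3 are those given above and enter here as plain numbers.

```python
def effective_distance(d, factors):
    """Modified selection metric: the centroid distance d multiplied by
    the n_f penalty factors f_c evaluated for a candidate element."""
    for f in factors:
        d *= f
    return d
```

Since every factor is a multiplier greater than or equal to one for penalized configurations, a candidate can only lose, never gain, ranking through the modifiers.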
Configurations may appear for which not all unstable regions are resolved by the end of the loop. This is the case shown in Figure 10A,B. When trying to reassign the domain encompassed by element K_1, the extension of elements K_2 and K_3 is checked but, in both cases, the resulting discretization is identified as unstable since the combined relative areas 1j with j = 2, 3 fall below th. If we examine elements K_2 and K_3 at the end of the loop, both will have been removed from the mesh and their domains rearranged so that they now belong to element K_4. If we restart the loop taking this reallocation into account, a new study of the physical domain in K_1 leads to a valid reassignment: the domain can be reallocated to element K_4, since it fulfills both the continuity conditions and the stability condition 1234 ≥ th. Therefore, the domain reassignment process is performed iteratively until all target regions are resolved.
The removal of nodes associated with unstable elements implies a reduction in the nodal density and, therefore, the richness of the spatial discretization decreases. To minimize the effect of stabilization on the discretization error, we propose to increase the degree of the polynomial approximation in the extension direction. We can easily do this thanks to the special features of cgDG. Inherited from cgFEM, the mesh is made up of quadrilaterals, and we can easily modify the nodal distribution since it is calculated as the tensor product of independent high-degree 1D nodal distributions. In addition, the element shape and size are independent of the interpolation, so changing the nodal distribution does not have any effect on the structure of the Cartesian mesh. Moreover, the DG formulation lets us locally enrich the spatial discretization regardless of the discretization used in the surrounding elements. The drawback of this technique is that increasing the degree reduces the stability of the discretization; therefore, a balance must be struck between the quality of the spatial discretization and the size of the time step.
As previously mentioned, the stabilization strategy only affects the spatial discretization of the problem and is carried out in the preprocessing stage. Changes in the polynomial approximation have no effect on the DG formulation of the problem, so no additional terms need to be evaluated and the burden of computing the different operators is the same as for the nonstabilized scheme. Furthermore, this strategy is transparent to the time integrator, so the methodology we propose can be used to solve any problem using a DG formulation and any explicit time integrator.

6 NUMERICAL RESULTS

6.1 Verification
In this section, we present a series of numerical tests that verify the proposed method and strategies. First, a convergence analysis is performed. The optimal convergence rate of the original DG method for purely hyperbolic problems was proved by Peterson 62 to be equal to h^{p+1/2}, where h represents the characteristic length of the discretization and p is the degree of the polynomial approximation. Under some assumptions, the optimal convergence rate rises to h^{p+1}, as confirmed by LeSaint and Raviart 63 for Cartesian grids and by Richter 64 for some structured non-Cartesian grids. Considering the Cartesian structure of the mesh in cgDG, an h^{p+1} convergence rate is expected.
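The observed convergence rate reported in this kind of study can be estimated from the errors on two consecutive meshes. A short sketch, assuming the asymptotic error model e ≈ C h^q:

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed convergence rate from two meshes: under e ~ C h^q,
    q = log(e_c / e_f) / log(h_c / h_f).  For cgDG on Cartesian grids
    one expects q close to p + 1 in the asymptotic region."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
```

With mesh sizes halved between levels, the denominator is log 2 and the estimate reduces to log2 of the error ratio.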
The setup of the numerical test corresponds to a square domain [−, +] × [−, +] with its sides oriented along the Cartesian axes. We impose the boundary conditions, initial field, and source term in such a way that the analytical solution of the problem is given by: The set of increasingly refined meshes is defined so that the intersection points of the boundary elements with the geometry lie at the midpoint of the edge. Starting with the coarsest mesh, each finer mesh possesses twice as many elements per direction as the immediately coarser one. Regarding the temporal integration, we use a time step small enough that the spatial discretization error dominates over any other. Figure 11A shows the errors of the cgDG solution field in the L2-norm computed at time t = T, ‖e(T)‖_2, with respect to the element size h. The slope of the curves in the asymptotic region equals p + 1, which corresponds to the optimal convergence rate of DG on Cartesian grids. This result puts the proposed method at an advantage with respect to standard FE methods, which exhibit lower convergence rates. Figure 11B shows the same results versus the number of DOFs, n_DOFs. It is worth noting that, to reach a given level of error, higher-degree interpolations require a smaller number of DOFs and, for this problem, a lower computational cost, due to the smoothness of the analytical solution. Furthermore, even for the same number of DOFs, using high-degree polynomials is preferable as it leads to a reduction in the number of elements and, therefore, in the number of numerical fluxes that need to be evaluated. These results support the use of high-degree polynomial approximations to analyze problems involving wave propagation.
The next analysis pursues two goals: to quantify the effect of the intersected elements on the stability of the method and to evaluate the effectiveness of the domain extension strategy as a stabilization procedure. For this test, we consider a TE_z electromagnetic plane wave that propagates in the positive direction of the x-axis through an infinite domain. The electric components of the wave are computed as: with: The infinite domain is modeled as a rectangular domain with periodic boundary conditions on the boundary. Given an initial distribution that satisfies (46), the system is advanced in time. With this configuration, we want to evaluate the stability of the method regardless of the influence of the boundary conditions; in the test we propose, the fluxes through the boundary only depend on the numerical solution computed in the previous step. We run the same problem iteratively following a bisection method until we obtain the maximum time step for which the L2-norm of the solution does not grow in time, hereinafter Δt_st. This result is an approximation of the time step that makes the spectral radius of the method equal to one. For each discretization, we consider two different scenarios. First, all intersected and internal elements are active in the analysis mesh. Second, we apply the stabilization strategy before starting the integration loop; this scenario uses a value of th large enough that all intersected elements are classified as unstable and the stabilization procedure is applied to them. In the final discretization, all intersected elements of the initial mesh disappear, since their domains are reassigned to the internal neighbors and their DOFs are removed from the system of equations. We consider the interpolation degrees p = {2, 4, 6, 8}.
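The bisection search for the maximum stable time step can be sketched as follows, with the full time-domain run abstracted into a stability predicate; this abstraction is an assumption for illustration, since in practice each evaluation is a complete simulation checking the growth of the L2-norm.

```python
def max_stable_dt(is_stable, dt_lo, dt_hi, tol=1e-6):
    """Bisection for the largest time step whose simulation stays stable.
    is_stable(dt) -> bool stands for running the time loop and checking
    that the L2-norm of the solution does not grow.  dt_lo must be a
    stable step and dt_hi an unstable one."""
    while dt_hi - dt_lo > tol:
        dt = 0.5 * (dt_lo + dt_hi)
        if is_stable(dt):
            dt_lo = dt      # stable: move the lower bracket up
        else:
            dt_hi = dt      # unstable: move the upper bracket down
    return dt_lo
```

The returned value approximates the step at which the spectral radius of the scheme reaches one, to within the bracket tolerance.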
In order to facilitate the understanding of the results, Δt_st is normalized with respect to the maximum stable time step calculated for an equivalent discretization without cut elements. Figure 13A shows the evolution of Δt_st with respect to the relative area. For the nonstabilized scheme, we observe that Δt_st decreases with the relative area, tending to 0 as the latter approaches zero, and the slope of the curves becomes more pronounced for smaller values. If we focus on the stabilized discretization, the stable time step remains almost constant, close to one. The results show that the stability of the stabilized method is governed by the base mesh and is independent of the cut patterns that appear in the intersected elements.
The results also reveal a common problem of fictitious domain methods: when the analysis mesh contains an element that possesses very little support, the system becomes ill-conditioned. As the degree of interpolation increases, the conditioning worsens and the support required to ensure a certain precision of the solution increases. In Figure 13A, the results obtained for discretizations that exhibit this behavior are represented in red. In these cases, the vector of unknowns has been calculated using the Moore-Penrose pseudoinverse. 65 It is important to point out that these situations only appear when nonstabilized discretizations are used. The stabilization strategy eliminates the polynomial approximations with little support whose DOFs suffer from dependency problems. Therefore, the proposed stabilization strategy not only improves the behavior of the method with regard to time integration, but also lets us compute FE models that, even using very small time steps, could not be calculated accurately due to ill-conditioning.

FIGURE 13 Dependence of the stable step size of the marching algorithm on the relative area for both the standard and the stabilized discretizations
In Figure 13B, the inverse of Figure 13A is drawn. On this occasion, the y-axis represents the factor by which the number of iterations of the time loop must be multiplied in order to reach the end of the simulation. Given that, in most practical examples, the time loop accounts for almost the entire analysis time, and that the computational cost of each iteration is independent of the step, this factor also applies to the overall analysis length. We see that the negative effects from the point of view of time integration are significant for relative areas ≤ 0.4. As an example, for a polynomial approximation of order p = 3, if the mesh contains an intersected element with a relative area of 0.2, the simulation takes 2.26 times longer. On the contrary, the stabilized scheme is not affected by this parameter and, therefore, the simulation times are the same regardless of how the geometry intersects the mesh. In this latter case, the simulation time only depends on the target level of accuracy, which defines the size of the elements and the approximation degree.
The same test has been carried out for a range of different element sizes. In all cases, the method behaves as in the case described above.
Extending the support of the interpolation polynomials implies a reduction in the richness of the solution, since the number of DOFs per wavelength decreases. To counteract the expected loss of accuracy, in Section 5 we proposed to increase the degree of the polynomial approximation used on the extended elements. The p-refinement must be done in such a way that the stabilized scheme preserves the magnitude of the discretization error. To estimate the expected errors, a characteristic length of the interpolation support domain is considered. We define the approximation degree of element i based on the characteristic length l_i^d, which is the size of the projection on the Cartesian axes of the integration domain associated with element i. The degree is assigned according to the expression: where p_r and h_r are two parameters defined by the user that correspond to the degree and element size of a reference discretization, respectively. The previous expression is evaluated independently in each direction. Once the appropriate polynomial degrees for each direction are computed, we may use all this information to define the final discretization, leading to a directional p-refinement; or we may consider only the worst case and use the same degree of approximation in both directions.
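Since Equation (47) is not reproduced here, the following sketch only illustrates one plausible reading of the rule: pick the smallest degree whose expected error l_i^{p_i+1} does not exceed that of the reference pair (h_r, p_r). The actual expression in the paper may differ; treat this as a hypothetical stand-in.

```python
def assigned_degree(l_i, p_r, h_r):
    """Hypothetical degree-assignment rule: starting from the reference
    degree p_r, raise the degree until the expected error of the
    extended support l_i matches that of the reference discretization
    (error model h**(p+1); NOT the paper's exact Equation (47))."""
    p = p_r
    while l_i ** (p + 1) > h_r ** (p_r + 1):
        p += 1
    return p
```

Applied per Cartesian direction, this produces the directional p-refinement; taking the maximum over the directions gives the isotropic variant mentioned in the text.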
The following test seeks to quantify the effect of the stabilization strategy on the global error and the performance of the proposed p-refinement strategies. We use a numerical test that consists of a square computational domain through which a wave propagates; the distribution of the wave corresponds to Equation (45). The dimensions of the domain are modified to sample the parameter space. Two different meshes that differ in orientation are studied. Figure 14 shows mesh number 1 and mesh number 2 for a relative area of 0.25. In Figure 15, the red line represents the error in L2-norm of the solution at time t = T for the stabilized scheme, while the blue line shows the evolution of the same magnitude for the nonstabilized one. First of all, it should be noted that the higher error obtained for mesh number 2 is due to its elements being larger. For both examples, we observe that the error for the stabilized scenario is always greater than that of the nonstabilized one. The difference increases as the relative area grows, and the gap reaches one order of magnitude. Furthermore, the slope of the curve for the stabilized results is steeper for mesh 2 than for mesh 1. The explanation of this behavior may be related to the different size of the extension in terms of linear distance: although the area of the unstable domain is the same, if we focus on a single direction of space, the extensions for mesh number 2 are greater than for mesh number 1. This yields a reduction of the nodal density in that direction, which leads to an increase in the global error.
In contrast, we find that using either p-refinement strategy driven by (47) we can keep the global error in the same order of magnitude as for the nonstabilized case, which shows that the refinement criterion is appropriate. As the extent of the extrapolation increases, the refinement criterion detects that the error departs from the target value and locally enriches the polynomial approximation to compensate for this deviation. In Figure 15, each degree increment translates into a drop in the curves going from left to right. Looking closely, we see that mesh number 1 experienced two increases in the degree of interpolation during the test, while this number goes up to five for mesh number 2. The highest interpolation degrees used in the computations are 6 and 9, respectively.

FIGURE 16 Mesh and element locations
Comparing the results for both p-refinement strategies, the drops in the error are more pronounced when the p-refined discretization is computed regardless of the extension direction; thus, this strategy leads to a lower global error. However, we must not forget that the error target we pursue is that provided by the nonstabilized scheme, and from this perspective the directional p-refinement strategy reaches the desired level of accuracy. In addition, this second alternative brings two main advantages. First, it saves some computational effort, since the number of DOFs of the final discretization is smaller. Second, the stability of the model is hardly affected, since the polynomial order is only raised in the direction that can be balanced with a larger support. For these reasons, we adopt this last strategy for the remainder of this contribution.
As previously mentioned, the stability depends on the degree of the polynomial approximation as well as on the extent of its support. Increasing the degree of the extended elements to preserve a certain level of accuracy negatively affects the maximum stable time step, so a balance has to be found between the target error and the number of steps of the time integrator. In practice, this dependence does not entail an important limitation, since the worst cases from the point of view of time integration have little influence on the error, and vice versa. Values of th around 0.3 have proven to be a good compromise for the set of problems studied so far.

6.2 Scattering by a PEC circular cylinder
The second problem to be studied is an infinite cylinder with the properties of a PEC that reflects a plane wave propagating perpendicularly to the axis of the cylinder and whose H or E vector field is aligned with the axis at every point. This is a reference problem in the field of scattering, as an analytical expression is available for the exact radar cross section (RCS) or, more precisely, for the scattering width (SW), the 2D counterpart of the RCS; see Reference 66 for further details. The 2D model considers a slice of the problem perpendicular to the axis of the cylinder.
In the first analysis, a cylinder with a diameter equal to 10λ is considered and solved with both the TE_z and the TM_z formulations, for an incident wave that propagates in the positive direction of the x-axis. The infinite domain is truncated to an extension of 2λ, and the radiation condition is imposed by using a Bérenger PML with an ABC on its outer boundary.55 It is worth mentioning that PML strategies fit perfectly within the cgFEM framework: the Cartesian structure of the mesh offers an ideal setting to define the subdomains of the PML. Furthermore, by matching the boundary of these subdomains with the element edges, the increase in computational cost associated with a greater number of intersected elements is avoided.
The discretization consists of a uniform mesh of elements of fourth degree with a nodal density of 13 nodes per wavelength. Figure 16 depicts the analysis mesh once the stabilization strategy has been applied. Figure 17 gathers the results obtained for the bistatic scattered width computed in both the TE_z and the TM_z modes; in these graphs, the abscissa represents the bistatic angle. In both cases, we consider that the time-harmonic steady state is reached when the L2-norm of the SW varies by less than 10^-4 per unit in two consecutive cycles. Together with the numerical solution, the analytical solution is plotted. We see that the curves overlap for any bistatic angle. Numerically, the relative error in the L2-norm of the SW is equal to 7.7 × 10^-3 for the TE_z case and to 4.53 × 10^-3 for the TM_z case. These results confirm the accuracy of the proposed method.
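The analytical SW used as reference is the classical eigenfunction series for a PEC circular cylinder (see, e.g., Reference 66). A sketch of its evaluation is given below, assuming the usual e^{jωt} convention and SciPy's Bessel/Hankel routines; the truncation heuristic is an assumption.

```python
import numpy as np
from scipy.special import jv, jvp, hankel2, h2vp

def scattered_width(phi, ka, mode="TM", nterms=None):
    """Bistatic scattered width SW/lambda of a PEC circular cylinder.
    phi: bistatic angle in rad (0 = forward, pi = backscatter),
    ka: k*a, i.e., circumference-to-wavelength ratio,
    mode: 'TM' (E parallel to axis) or 'TE' (H parallel to axis)."""
    if nterms is None:
        nterms = int(ka + 15)  # heuristic truncation of the series
    s = 0.0j
    for n in range(nterms + 1):
        eps = 1.0 if n == 0 else 2.0  # Neumann factor
        if mode == "TM":
            cn = jv(n, ka) / hankel2(n, ka)
        else:  # TE: derivatives of the cylinder functions
            cn = jvp(n, ka) / h2vp(n, ka)
        s += eps * cn * np.cos(n * phi)
    return (2.0 / np.pi) * abs(s) ** 2  # SW normalized by the wavelength

# 10-wavelength-diameter cylinder as in this test: ka = 2*pi*(5) = 10*pi
ka = 10 * np.pi
back = scattered_width(np.pi, ka, "TM")  # backscatter
fwd = scattered_width(0.0, ka, "TM")     # forward lobe dominates at high ka
print(back, fwd)
```

Comparing the numerical SW against this series in the L2-norm over the bistatic angle yields the relative errors quoted above.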
The following test studies the influence of the propagation direction of the incident wave on the overall solution. The objective is to evaluate how the apparent length of the element with respect to the wavelength affects the accuracy of the simulation.
Changing the incidence direction leads to an equivalent change in the scattered field. We use this to evaluate whether there is any propagation direction in the Cartesian mesh that leads to higher or lower accuracy. We start from the configuration of the previous case and carry out five analyses in which the angle of incidence is increased in increments of 9°. Table 1 gathers the relative errors in the L2-norm of the SW calculated for both the TE_z and the TM_z modes for each of the angles. The values show a certain dependence on the orientation of the incident wave. However, the error of the SW always remains in the same order of magnitude, so we deduce that the method is not sensitive to the direction of propagation. In other words, the information propagates equally in all directions independently of the Cartesian structure of the mesh. The small variations that we observe are probably caused by the influence of the terms of higher degree present in the nodal interpolation.
The results shown so far in this subsection rely on a uniform discretization, both in terms of element size and approximation degree. The final test of this set focuses on the effects of the stabilization strategy in conjunction with the p-refinement strategy. Unlike the square geometry studied in the previous subsection, the circular boundary generates a great variety of cut patterns with different shapes and relative areas. In this example we reduce the domain of propagation to 1.5λ and simulate considering uniform-p, uniform p-refined, and directional p-refined meshes, together with the stability thresholds th = {0.2, 0.4, 0.6, 0.8}. In Table 2 we put together six figures that show how the elemental interpolation degrees in each direction vary depending on the p-refinement strategy. These distributions arise for th = 0.8 and preserve symmetry with respect to the x and y axes. The relative errors of the SW in the L2-norm are gathered in Table 3. The results confirm that increasing the stability threshold has a negative effect on the accuracy as long as no compensation is considered. Regarding the performance of the p-refinement strategies, we do not find any difference in terms of error reduction, while the directional p-refinement strategy saves some computational cost. Therefore, we recommend the use of this approach.

Scattering by a PEC NACA0012 aerofoil
In this section, the scattering of a plane electromagnetic wave that impinges on a NACA0012 profile modeled as a PEC is studied. The NACA0012 profile is a symmetrical shape that presents a singular point at the trailing edge. Special attention needs to be paid to the spatial discretization around this kind of point in order to accurately capture the distribution of the electromagnetic field. In the first place, the effect of the discretization around the singular point on the calculation of the SW is studied. We compute the bistatic SW produced by a profile with a chord length equal to 10λ. The incident field corresponds to a plane TE_z wave whose trajectory forms an angle of 10° with respect to the chord line. The outer boundary of the computational domain is a square whose side length is equal to 13λ, with the bounding box of the profile centered inside it. We run a collection of seven simulations that differ from each other in the orientation of the geometry: the angle of the chord line with respect to the x-axis varies from 0° to −45°, in increments of −9°. For each configuration, two analyses are performed: the first one employs a uniform mesh, while the second one makes use of an hp-refined mesh. This last discretization uses top-level elements (smaller h) in the surroundings of the leading and trailing edges. In all cases, the interpolation degree is assigned to each element according to Equation (47); such an approach means that refined elements are of lower order. Figure 18 shows examples of both meshes for the last orientation (−45°), as well as a detail of the elements around the singularity. This last representation makes it clear that multilevel jumps between adjacent elements are allowed. The setup is completed by deactivating the p-refinement strategy designed to counteract the effect of the extension and by setting the stability limit to th = 0.4. Figure 19 shows the SW obtained with the uniform mesh (A) and with the hp-refined mesh (B).
Both cases are compared with a reference solution computed with a verified FE software using a very fine high-order spatial discretization. We observe that the SW computed with the uniform mesh is much more sensitive to the position of the geometry than the one derived from the hp-adapted mesh, for which the curves overlap independently of the bistatic angle; we only observe small discrepancies at those angles for which the reflected intensity is lower. Moreover, a very good agreement with the reference values is observed, which highlights the higher accuracy of the hp-adapted scheme.

FIGURE 19 Influence of the geometry-based hp-refinement procedure on the scattered width

The results show that the method is sensitive to the complexity of the geometry and its position with respect to the mesh, which translates into a higher complexity of the electromagnetic field within the element. This effect can be counteracted by using geometry-based h-refinement strategies and subsequently applying an appropriate degree assignment strategy, resulting in an hp-adapted discretization. In addition, these last results confirm the conclusion reached in the previous example: the accuracy of the method is independent of how the information propagates.
The following example analyses the same geometry but subjected to a high-frequency incident wave. This simulation poses a great challenge, as most numerical approaches suffer from numerical dispersion and/or dissipation, which limits their scope of application. The wavelength is shortened so that the chord line becomes 100λ long. The direction of incidence is also modified and set to coincide with the direction of the chord. The geometry is embedded in a domain that extends 1λ in each direction beyond the geometry bounding box and is surrounded by a 1λ-thick PML. Regarding the discretization, we consider a nodal density of 17 nodes per wavelength and an interpolation degree equal to 7 for the elements of the uniform mesh. We apply an h-refinement strategy that splits the elements of the uniform mesh up to two times. According to criterion (47), the resulting mesh contains elements of degree p = {7, 6, 5}. This brings the total to 363,602 nodes and 5687 elements, quite contained numbers for a problem of these characteristics. Figure 20 shows the spatial distribution of the nonzero components of the scattered electromagnetic field once it is fully developed. Figure 21 compares the SW computed with the cgDG implementation and a reference solution. The close match between these curves demonstrates the excellent performance of the proposed method when applied to high-frequency simulations.
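Equation (47) itself is not reproduced in this excerpt, but the behavior it encodes here (elements split once or twice receive a lower degree, yielding p = {7, 6, 5}) can be sketched with a hypothetical linear law:

```python
# Illustrative degree-assignment law for the hp-adapted mesh: an element
# split `level` times (smaller h) receives a lower interpolation degree.
# The linear decrease p_base - level is an assumption standing in for
# Equation (47), chosen so that two splittings yield p = {7, 6, 5}.

def assign_degree(level, p_base=7, p_min=1):
    """Interpolation degree of an element at a given h-refinement level
    (level 0 = element of the uniform base mesh)."""
    return max(p_base - level, p_min)

degrees = [assign_degree(lvl) for lvl in range(3)]
print(degrees)  # [7, 6, 5], matching the high-frequency NACA0012 mesh
```

This pairing of smaller h with lower p near singular points is what keeps the total node count contained while still resolving the local field complexity.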

Scattering by complex geometries
The performance of the method when applied to two complex geometries is studied next. The first problem concerns the scattering caused by the middle section of a PEC satellite. The maximum dimension of the satellite bounding box sets the size of the physical region, and the computational domain is completed by a 1.5λ-thick PML that surrounds that region. In this case, the problem is formulated in the TM_z mode, and the plane incident wave travels in the direction of the x-axis. The spatial discretization used for this analysis corresponds to a uniform mesh formed by 1582 elements of degree 7, which makes a total of 303,744 DOFs. It should be noted that, despite the fact that the satellite geometry presents details that are small compared with the element size, the cgFEM technology allows this boundary to be taken into account without the need to refine the elements over the boundary until their sizes match those of the geometric features. The spatial discretization is governed by the complexity of the solution but, unlike in most fitted methods, it is independent of how the geometry is defined. In Figure 22, the third component of the electric field is presented at the moment the steady state is reached. The figure shows how the method is able to capture the complexity of the field in the antenna cavity. In Figure 23, the SW obtained with cgDG is compared with that computed by a validated FE software using an overrefined discretization. We see that, although we are using quite a coarse spatial discretization, the results fit the reference solution25 quite well. Furthermore, it is worth mentioning that the SW obtained with the proposed method is perfectly symmetric. This results from the fact that the spatial discretization presents symmetry about the x-axis, just like the physical problem (domain and boundary conditions).
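The DOF count quoted above is consistent with a tensor-product nodal basis of (p + 1)^2 nodes per 2D element and three unknowns per node for the TM_z mode (Ez, Hx, Hy); a quick check under that assumed layout:

```python
# Sanity check of the quoted DOF count, assuming a tensor-product nodal
# DG basis with (p + 1)**2 nodes per 2D element and three unknowns per
# node for the TMz mode (Ez, Hx, Hy). The nodal layout is an assumption.

def dg_dof_count(n_elems, p, n_fields=3):
    nodes_per_elem = (p + 1) ** 2  # 2D tensor-product Lagrange nodes
    return n_elems * nodes_per_elem * n_fields

print(dg_dof_count(1582, 7))  # 303744 DOFs, matching the text
```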
Obtaining this sort of symmetry for the spatial discretization in cgDG is straightforward, since the mesh only depends on the definition of the bounding box and the splitting process, both of which present symmetry about the Cartesian axes. In contrast, most mesh generation techniques used in fitted methods evolve toward nonsymmetric discretizations regardless of the starting point.

FIGURE 22 H3 scattered field computed for the satellite profile

FIGURE 23 Bistatic scattered width obtained for the satellite profile

Next, the scattering problem for the middle section of an airplane is considered. This boundary is particularly challenging from the point of view of the geometry. Features that are small compared with the global size of the profile, such as the front antenna, give rise to resonance regions that have a great influence on the overall distribution of the electromagnetic field. In order to capture these local complexities of the solution, we use an hp-adapted mesh with smaller elements in the environment of this geometric feature. Additionally, as these structures are slender, they give rise to elements with several isolated regions within the physical domain. The dissociation strategy is used on these elements.
In the following, the setup of the airplane problem is described. The body is modeled as a PEC whose length is 50λ. The physical domain extends 2λ beyond the geometry bounding box in both directions, with a 2λ-thick PML surrounding it. We consider the TM_z version of the problem and study the scattering of a plane wave that propagates from left to right. As mentioned above, this analysis uses an hp-adapted mesh with smaller elements around the most relevant geometric details and a base discretization of 20 nodes per wavelength. Figure 24 shows the complete mesh as well as a zoom of the antenna surroundings. Note the presence of elements with more than one disconnected region. We studied how these elements evolve throughout the flow of the program. For the element K_i, two independent regions K_i^1 and K_i^2 are detected at the beginning of the program, and each of them is assigned a different set of nodes. When the stability is evaluated, the discretization of the region K_i^2 is found to be unstable, and its support domain is reassigned to the neighboring element K_{i+1}^1, giving rise to K_{i+1}^2. Finally, the DOFs linked to K_i^2 are removed from the set of DOFs, and the discretization of K_{i+1}^1 is enriched through p-refinement to obtain K_{i+1}^2. The third component of the electric field is shown in Figure 25. Figure 26 shows some details of the solution in the regions where it exhibits a higher complexity. At the overall level, we see how the method is able to capture the influence of the geometric details. We observe an increase in the amplitude of the scattered field, especially in the lower part of the refueling needle, in the front part of the antenna placed in front of the cockpit, and around the electronic countermeasure (ECM) placed on the vertical stabilizer. However, when we go into detail, the polynomial approximation used is not able to correctly represent the very local complexity of the solution caused by the complexity of the boundary.
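The region bookkeeping described above can be caricatured with the following sketch; the dictionaries, names, and the p -> p + 1 enrichment step are illustrative assumptions, not the paper's actual data structures.

```python
# Caricature of the dissociation bookkeeping: an intersected element may
# own several disconnected physical regions, each carrying its own DOF
# set. An unstable region's DOFs are dropped and its support is handed to
# a neighboring element, which is then p-enriched. All structures and the
# p -> p + 1 enrichment rule are illustrative assumptions.

def reassign_unstable_region(region_dofs, region_degree, unstable, host):
    """region_dofs: {region_id: set of DOF ids};
    region_degree: {region_id: interpolation degree p}.
    Removes the unstable region and enriches the host discretization."""
    del region_dofs[unstable]    # DOFs of the unstable region removed
    del region_degree[unstable]
    region_degree[host] += 1     # a posteriori p-enrichment of the host
    return region_dofs, region_degree

# element K_i holds two disconnected regions; the second one is unstable
dofs = {"K_i^1": {0, 1, 2}, "K_i^2": {3, 4}, "K_{i+1}^1": {5, 6, 7}}
degs = {"K_i^1": 4, "K_i^2": 4, "K_{i+1}^1": 4}
dofs, degs = reassign_unstable_region(dofs, degs, "K_i^2", "K_{i+1}^1")
print(sorted(dofs), degs["K_{i+1}^1"])  # ['K_i^1', 'K_{i+1}^1'] 5
```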
This is clearly seen in the elements that contain the end of thin details of the geometry. This limitation does not affect the general performance of the method, as these errors act locally and, thanks to the mesh refinement, do not pollute the global solution. Figure 27 confirms this assertion. The SW obtained with the proposed method fits the reference solution, computed with a much finer fitted NEFEM mesh25 and an in-house high-order DG solver, remarkably well, taking into account the total of 523,828 nodes considered in this simulation.

FIGURE 27 Bistatic scattered width obtained for the aircraft profile

CONCLUSIONS
In this article, the cgDG method has been presented. This high-order fictitious domain method possesses very convenient properties for the simulation of wave propagation problems through intricate domains. Beyond developing the mathematical background of the combination, a collection of complementary strategies that ensure the highest performance of the resulting scheme has been described. The stabilization strategy we implemented to deal with the instabilities caused by intersected elements shows a very robust behavior, and its use can be easily extrapolated to other numerical schemes or physical problems, as it only affects the discretization of the space of interest. We have shown that, to maintain the accuracy of the approximation, some countermeasure must be considered, and we propose the use of an a priori directional p-refinement that cancels out the error increase due to the reduction in the number of degrees of freedom. The dissociation strategy we adopted to manage independent solution fields within the same element also has a great influence on the overall performance of the method: it allows the use of higher-order elements, as the presence of this kind of element no longer poses a problem. The next steps in the development of this method are its extension to 3D problems and its application to new physical phenomena such as fluid dynamics. The implementation of an error-based refinement strategy within the time loop would also be of great interest, as it would let us more efficiently capture the solution in those regions where the complexity of the field increases, such as cavities in electromagnetic simulations or turbulent zones in fluid simulations. Regarding the stabilization strategy, further research may be devoted to exploring alternative stability criteria as well as new factors to tune the weighted distance we used to select the extended elements.
Even though this approach performs well for the collection of problems studied so far, new schemes may require slight variations. The dissociation strategy has proven to be a very powerful tool, and further applications should be explored.