Are you using the right tools in computational electromagnetics?

Have you ever wondered why we use certain computational electromagnetics methods and how we decide on the choice of tools for modeling problems in electromagnetics? Though this is true for any field, in electrodynamics in particular there is a gap between practitioners' working knowledge of the different tools and the state-of-the-art developments. Rapid advancements in material science and (nano)fabrication techniques demand new tools with multiscale and multiphysics capabilities. In this article, we cover recent developments and highlight the capabilities and limitations of different electromagnetic modeling tools. One has to answer a series of questions about modeling constraints and objectives before picking the appropriate tool. We aim to clarify some of the misconceptions, discuss the limits and capabilities of some of the popular electromagnetic modeling tools, and provide a few practical tips, so that applied physicists and engineers can make informed decisions while choosing appropriate tools for their applications.

first two are acceptable reasons to choose one tool over the other. However, the last reasoning is unjustified. It reflects poor judgment on the part of practitioners due to a lack of knowledge and training (knowhow) about recent developments. To a certain extent, this is also due to the lack of openly available data for comparing different tools and their capabilities. [5][6][7] Of course, there is no single method that qualifies as the perfect or ideal tool for modeling all types of problems. It goes along with the adage that even a blade of grass becomes a handy weapon if one knows how and when to use it properly. Hence, the expertise of physicists and engineers plays a crucial role in choosing the most appropriate tool to model a given problem.
Electromagnetic problems range from electro- or magnetostatics to high-frequency (microwave) and extremely high-frequency (visible-light, x-ray, γ-ray) applications. Radio frequency (RF) techniques, 8,9 antenna engineering, [10][11][12] spaceborne radar imaging, [13][14][15][16][17] terahertz, and optical applications [18][19][20][21] are some examples of major electromagnetics applications. In recent years, RF photonics, which combines technologies from the radio and optical frequency regimes, has been gaining attention in defense and security applications. 22,23 In all these application developments, CEM tools play a crucial bridging role between theory and experiment. They help us improve our understanding, so that we are able to design more compact, robust, efficient, and cost-effective products and applications. In the following, we briefly review some of the important methods and recent developments in CEM. Our aim is to highlight some of the main features, limitations, and advantages of popular CEM methods. One cannot cover the complete scope of tools available in computational electromagnetics within this paper. Hence, we do not attempt to present comprehensive solutions to all CEM modeling problems. Rather, we aim to clarify some of the misconceptions, discuss the limits and capabilities of some of the popular electromagnetic modeling tools, and provide a few practical tips, so that applied physicists and engineers can make informed decisions while choosing appropriate tools for their applications.

CLASSIFICATION OF CEM TOOLS
In CEM, we are essentially modeling the underlying physics of electric and magnetic fields and their interactions with matter. The quantities being modeled are functions of space and time. While time is represented as a one-dimensional (1D) axis, the space could be of one, two, or three dimensions, depending on the model. In a discrete setting, we sample space and time into small space-time cells. The electromagnetic quantities (E, B, D, H, etc) and material parameters (ε, μ, σ, etc) are computed inside these small cells using various algorithms. We can broadly classify CEM tools based on how the spatial and temporal dimensions are treated in the model and what kind of formulations we use to model the governing equations. Based on these three broad categories, we have the several options given below.

STRUCTURED VERSUS UNSTRUCTURED MESH METHODS
Structured mesh consists of lines, squares, or cubes of equal size covering the one-, two-, or three-dimensional space, respectively, as shown on the left-hand side of Figure 1. The main advantage of the structured grid is that the numbering and ordering of the cells are continuous. This greatly simplifies referencing of the nodes in the numerical algorithm. When developing one's own computational models, this is of great help in debugging errors in the algorithm. Another advantage of structured mesh is that we do not need any special meshing tool, because we can set the cell size in our algorithm and automatically compute the coordinates of all the nodes in the model. As all nodes, lines, surfaces, and volumes follow a regular pattern, their relationships are determined automatically by straightforward logic. This is a great simplification. Meshing is a complex process and, if done improperly, can lead to many complications in our modeling. Now, let us consider the same domain meshed using unstructured one-, two-, and three-dimensional cells. In one dimension, there is only one obvious way to build an unstructured grid. However, in two and three dimensions, we have many options. In general, we use triangular and tetrahedral cells in two and three dimensions, respectively, as shown on the right-hand side of Figure 1. It is worth mentioning that, for rectilinear geometries, like the cube in our example, a structured grid is more than sufficient and employing an unstructured mesh is not necessary.

FIGURE 1 Structured (left) and unstructured (right) spatial discretization in one, two, and three dimensions employing lines, triangles, and tetrahedra, respectively

FIGURE 2 Multiscale modeling of a horn antenna with cell size growing from λmin/100 to λmin/10. Here, λmin corresponds to the smallest wavelength involved in the simulation
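The continuous-numbering argument can be sketched in a few lines: on a structured grid, node coordinates and neighbor indices follow directly from the (i, j, k) indices, with no meshing tool or stored connectivity table. This is a minimal illustrative sketch; the function names are ours, not from any particular package.

```python
# Sketch: on a structured grid, node coordinates and neighbor indices
# follow directly from (i, j, k) -- no meshing tool or connectivity
# table is needed. All names here are illustrative.

def node_index(i, j, k, nx, ny):
    """Linear index of node (i, j, k) on a structured grid with nx x ny nodes per z-slice."""
    return i + j * nx + k * nx * ny

def node_coords(i, j, k, dx):
    """Physical coordinates for a uniform cell size dx."""
    return (i * dx, j * dx, k * dx)

# Neighbors along x differ by 1, along y by nx, along z by nx*ny:
nx, ny = 10, 10
assert node_index(3, 4, 5, nx, ny) + 1 == node_index(4, 4, 5, nx, ny)
assert node_index(3, 4, 5, nx, ny) + nx == node_index(3, 5, 5, nx, ny)
assert node_index(3, 4, 5, nx, ny) + nx * ny == node_index(3, 4, 6, nx, ny)
```

This regular index arithmetic is exactly what makes debugging a self-written structured-grid code so much easier than one built on an unstructured mesh.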
Though structured grids have the aforementioned advantages, they are not always preferred for certain kinds of problems. One such example from antenna engineering is shown in Figure 2. As one can notice in the antenna model, the cells near the feed or balun region are of smaller dimensions compared to the cells on the antenna side flares. 24,25 If one has to employ a structured mesh for such problems, the smallest part in the design will constrain the cell size used to mesh the entire model. This will unnecessarily increase the number of cells in the domain. Hence, the overall computational cost (time and memory) required to simulate the problem becomes prohibitive. However, if we resort to an unstructured mesh for such problems, we can elegantly scale from tiny to large cells and vice versa, as shown in Figure 2. This flexibility is important to massively reduce the number of cells needed to mesh the model. This, in turn, reduces the computational cost. Moreover, we can also reduce the staircasing error, which often occurs when using a structured grid for curved geometries. As illustrated in Figure 3, we can accurately and easily model even complex curved or slanted geometries by using an unstructured mesh. Unstructured mesh also has a clear advantage over structured mesh when we model materials with very high dielectric contrast. As shown in Figure 3, we can have smaller cells in materials where the refractive index is high and bigger cells in materials where the refractive index is low. This is because the electromagnetic wavelength is smaller inside a high-refractive-index material than inside a low-refractive-index material.

FIGURE 3 Complex curved geometry composed of high dielectric-contrast materials
Therefore, in order to properly resolve the smaller wavelength inside the high-refractive-index material, we need smaller cells there than in the low-refractive-index material. This flexibility to adapt the cell size in the regions where it is actually needed is not readily available in standard structured mesh methods. In other words, if we are using a structured mesh, we have to unnecessarily mesh the entire domain using the smallest cell dimension, corresponding to the highest-refractive-index region. There are special algorithms developed to expand the capabilities of structured mesh methods by locally refining the cell size using subgrids. [26][27][28] Subgridding techniques are of great use in structured mesh methods; however, they come at the cost of increased algorithmic complexity.
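The wavelength argument can be quantified with a small sketch: assuming a fixed resolution of N points per local wavelength (a common rule of thumb; the function names and the 10-point default are illustrative choices of ours), the required cell size shrinks in proportion to the local refractive index.

```python
# Sketch: required cell size scales inversely with local refractive index.
# Assumes a resolution of N points per (local) wavelength; names illustrative.

def cell_size(wavelength_vacuum, refractive_index, points_per_wavelength=10):
    """Local cell size needed to resolve the wavelength inside a material."""
    local_wavelength = wavelength_vacuum / refractive_index
    return local_wavelength / points_per_wavelength

# A high-index region (n = 4) needs cells 4x smaller than vacuum (n = 1):
dx_vacuum = cell_size(600e-9, 1.0)
dx_high = cell_size(600e-9, 4.0)
assert abs(dx_vacuum / dx_high - 4.0) < 1e-12
```

On a structured grid without subgridding, the smaller `dx_high` would have to be used everywhere, which is precisely the waste the unstructured mesh avoids.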

FREQUENCY- VERSUS TIME-DOMAIN METHODS
Electromagnetic waves have two fundamental parameters: characteristic frequency and wavelength. If we are interested in the response of the model at a particular frequency, then we can represent all the parameters in the Maxwell-Heaviside equations as a function of frequency and space. This is known as frequency-domain modeling. Here, we are interested in the overall steady-state response of a system at a particular frequency. However, if we are interested in the transient response of the system, we go for the time-domain modeling. Here, we represent all electromagnetic variables as functions of space and time.
Consider an electromagnetic pulse traveling through a 1D space represented by the x-axis. We can represent this scenario in two ways, as illustrated in Figure 4. Let us choose three points along the x-axis and assume a pulse with different frequency components is passing through them. Then, the three planes shown in Figure 4 contain time-domain information about the pulse passing through these points, seen through a space-time window. For simplicity, we have shown only the information captured at one point over a period of time. One can simulate the same problem in the frequency domain. Information captured at the same point in the frequency domain can be seen through the space-frequency window, as shown in Figure 4.
Both time- and frequency-domain methods can employ structured or unstructured mesh for the space discretization, as described in the earlier section. When modeling a problem in the frequency domain, we will normally start by simulating the problem at a single frequency. Then, we repeat the same procedure over a range of frequencies to cover the complete operating bandwidth of the application.
Most CEM tools are frequency-domain solvers for various reasons. One of the main reasons is that most material properties strongly depend on the operating frequency. For example, the material permittivity or refractive index depends on the frequency of the propagating wave and on the material's plasma and resonance frequencies. This frequency dependence is expressed using Drude-Lorentz or Drude-Sommerfeld models. 4,29,30 When we try to capture this frequency dependence in the time domain, we have to perform a memory-intensive (convolution) integration at each time-step of the simulation.
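As an illustration of how easily such dispersion is handled in the frequency domain, here is a minimal sketch of the Drude model, ε(ω) = ε∞ − ωp²/(ω² + iγω), evaluated directly at a given frequency; the parameter values below are illustrative, not fitted to any specific metal.

```python
# Sketch: frequency-domain dispersion is easy to evaluate directly.
# Drude model: eps(w) = eps_inf - wp^2 / (w^2 + 1j*gamma*w).
# Parameter values below are illustrative, not for a specific metal.

def drude_permittivity(omega, eps_inf, omega_p, gamma):
    """Relative permittivity of a Drude metal at angular frequency omega (rad/s)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

eps = drude_permittivity(omega=1.0e15, eps_inf=1.0, omega_p=1.37e16, gamma=1.0e14)
assert eps.real < 0   # below the plasma frequency the real part is negative
assert eps.imag > 0   # losses appear as a positive imaginary part
```

In a frequency-domain solver this is a one-line material lookup per frequency; capturing the same behavior in a time-domain solver requires auxiliary differential equations or a recursive convolution at every time-step.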
Another obvious advantage of frequency-domain solvers is that there is no need for time discretization. Here, we focus only on the spatial discretization and, hence, completely avoid the instability-related issues common in one class of time-domain methods discussed in the next section. Frequency-domain solvers are ideal for single-frequency or narrow-band simulations. The method, however, becomes more time consuming when we are interested in broadband simulations. By broadband, we typically mean simulations where the ratio between the highest and lowest frequencies involved is on the order of 100 or 1000. For modeling such problems, one has to go for a time-domain method.
The finite-difference time-domain (FDTD) method [31][32][33] and the finite-element method (FEM) [34][35][36][37] are extensively used for practical problem solving. The FDTD method is based on a simple approximation of continuous derivatives on a square lattice. Some of the earliest known numerical methods have used this approach to find approximate solutions. 38,39 The FDTD method is popular primarily due to the simplicity of its algorithm and ease of learning the tool.
An important additional strength of FDTD and related explicit time-domain grid-based methods is the ability to naturally and efficiently model (in a single simulation run) materials characterized by spatially varying, multipole, linear frequency-dependent dielectric dispersions concurrently with spatially varying, multipole, frequency-dependent nonlinearities. 40 Furthermore, time-domain grid-based methods permit the straightforward modeling of nonlinear frequency-dependent amplification (gain) phenomena arising from multilevel quantum processes involving electron populations (such as those in lasing media [41][42][43][44][45] ). Frequency-domain tools are incapable of modeling such problems in a single simulation run and, more generally, are likely utterly incapable of treating materials having complex frequency-dependent nonlinearities.
In its classical formulation, the FDTD method is limited to structured mesh. The structured mesh, as we earlier noted, inherently suffers from stair-casing errors while modeling complex, curved, or irregular geometries. For this reason, the FDTD method may not be an ideal choice to model a microwave antenna with very small feed structure.
In such cases, it would be efficient to use a multiscale modeling capability, which is not readily available in the standard FDTD method. 25 The basic requirement for multiscale modeling is an unstructured mesh. This is where FEM becomes really powerful. It enables us to refine the mesh only in regions where it is really needed. This way, we avoid the globally refined mesh required in the case of the standard FDTD method. As mentioned in the previous section, methods employing unstructured mesh adapt naturally to curved or slanted boundaries. Therefore, there is no need for special treatment of such boundaries. Though FEM provides these nice features, standard FEM implementations have some inherent limitations. These are briefly addressed in the next section. Though we have discussed the finite-difference method here only in the time domain (FDTD), for some specific applications finite-difference techniques are also employed in the frequency domain (FDFD). 46,47 Likewise, one can also employ finite-element techniques in the time domain, as discussed in the next section. Apart from these well-known methods, there are other advanced methods in the time and frequency domains. We will briefly address those advanced methods in later sections.
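To illustrate the simplicity that makes FDTD so popular, here is a minimal 1D sketch of the Yee leapfrog update in vacuum, using normalized units and a soft Gaussian source; all names and parameter values are illustrative, and the untouched end nodes simply act as PEC walls.

```python
import numpy as np

# Minimal 1D FDTD sketch (Yee leapfrog, vacuum, normalized units),
# illustrating the simple explicit update that makes FDTD popular.
# The untouched end nodes ez[0] and ez[-1] act as PEC walls.
# All names and parameter values are illustrative.

nx, nt = 200, 150
ez = np.zeros(nx)          # electric field at integer grid points
hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
courant = 1.0              # 1D "magic" time step: S = c*dt/dx = 1

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])    # update E from the curl of H
    ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source

assert np.all(np.isfinite(ez))  # stable at the Courant limit
```

The entire solver is two array updates per step with no matrix to assemble or invert, which is the essence of the method's ease of learning; the price, as discussed in the text, is the structured grid and its staircasing error.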

IMPLICIT VERSUS EXPLICIT TIME-DOMAIN METHODS
The classical FEM is predominantly used in the frequency domain, not in the time domain, due to algorithmic complexities. 48,49 The FEM, when formulated in the time domain (FETD), results in a computationally heavy updating algorithm. This is because the field quantities at a given time-step (n + 1) depend on the values at time-step (n + 1) itself as well as at earlier time-steps, as shown in Figure 5. This means that, to calculate the field values at (n + 1), we need to perform a global matrix inversion at each time-step. This procedure leads to different implicit time-stepping schemes. They are computationally heavier than explicit time-stepping schemes because, in the latter case, there is no need for any matrix inversion. In other words, explicit methods use a simple updating procedure where the field quantities at any time-step (n + 1) depend only on earlier time-steps, as illustrated in Figure 5. As long as the total number of cells in the model is small, we can easily employ implicit methods. One main advantage of the implicit methods over explicit methods is that their formulations are inherently stable. Unlike explicit methods, they are not limited by the maximum time-step constraint given by the famous Courant-Friedrichs-Lewy condition. 50 However, when the total number of cells in the model becomes large, the increase in memory requirement is not proportional to the increase in the number of cells in the computational domain. For example, if we double the number of cells in the computational domain by refining the mesh, the corresponding increase in the memory required to run the simulation is more than a factor of two.

FIGURE 5 Basic update algorithm comparison between implicit and explicit time-stepping schemes
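The Courant-Friedrichs-Lewy constraint on explicit schemes can be written down directly; for a uniform 3D grid it reads dt ≤ 1/(c·sqrt(1/dx² + 1/dy² + 1/dz²)). A small sketch (the function name is ours):

```python
import math

# Sketch: the Courant-Friedrichs-Lewy (CFL) limit on the explicit time step
# for a uniform 3D grid: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)).

def cfl_max_timestep(dx, dy, dz, c=299792458.0):
    """Largest stable explicit time step for a uniform 3D grid (SI units)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dt = cfl_max_timestep(1e-3, 1e-3, 1e-3)
# Halving every cell dimension halves the allowed time step:
assert abs(cfl_max_timestep(5e-4, 5e-4, 5e-4) / dt - 0.5) < 1e-12
```

This is the cost of mesh refinement in explicit methods: smaller cells force proportionally smaller time steps, so the total work grows with both the cell count and the step count. Implicit schemes escape this constraint, but pay for it with the per-step matrix solve discussed above.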
Nowadays, computers can perform large number-crunching tasks using efficient linear-algebra routines. The speed of the matrix inversion operation depends highly on the sparseness of the matrix. The related computational effort, however, grows nonlinearly and limits the maximum size of problems that can be solved with given computational resources (time and memory). For the aforementioned reasons, implicit schemes become prohibitively expensive for the majority of practical problems with a large number of cells. Because of these constraints, most of the available FEM tools are mainly used in the frequency domain. Getting all features within one single time-domain method is challenging.
It is important to note that all frequency-domain methods require solutions of systems of complex-coefficient simultaneous equations, which can exhibit ill-conditioning and slow convergence when solved using iterative techniques. Even using modern fast-multipole methods, the current state of the art limits the number of solvable electromagnetic field unknowns to the order of 10⁷. On the other hand, explicit time-domain grid-based methods such as FDTD can solve for more than 10¹² field unknowns on available petaflops supercomputers sponsored by the United States National Science Foundation and Department of Energy, such as Blue Waters 51,52 and Mira. 53 In principle, there is no upper limit on the number of field unknowns solvable using explicit time-domain grid-based methods.

INTEGRAL VERSUS DIFFERENTIAL EQUATION METHODS
Equations modeled in electrodynamics are mostly linear in nature. This linearity is an important requirement for the development of various integral equation solvers. One can say that, in the case of differential equation solvers, we solve for the unknown fields at different space-time stamps. In integral equation methods, however, our emphasis is on solving for the sources that satisfy certain (radiation) boundary condition. 54,55 Integral equation methods are ideal for modeling and simulating open-region scattering and radiation problems. 56 The identification of Green's function is at the core of all integral equation methods. 57 Green's function is the impulse response of a linear differential equation with certain initial or boundary conditions. Once the Green's function is known, using the principle of superposition, we can derive the response of the system to any arbitrary source.
Oftentimes, the unknown arbitrary source that we are modeling resides on a surface, for example, a patch antenna. In these instances, the integral equation method leads to a surface integral formulation. This reduces a 3D modeling problem into a 2D problem. It is, in fact, one of the key advantages of using integral equation solvers over differential equation solvers for modeling surface-dominated source problems. In the latter case, we inevitably have to mesh the entire 3D space between the surface source (or scatterer) and the point of observation. For small problems, this might not be an issue. However, when the size of the problem increases, the integral equation methods are more efficient for modeling surface-dominated problems than their differential equation counterparts. Please note that the Green's function approach gives an exact procedure to compute the value of the fields at an observation point due to some source located at a distance. We get this exact solution directly by solving the integral equation(s), without the need for computing unknown field vectors at different spatial and temporal points as in the case of differential equation methods. By avoiding space-time grids, integral equation methods also get rid of grid-induced dispersion errors in the computed solution. 58 These grid-induced errors are more common in differential equation methods; we will briefly revisit this topic toward the end.
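The superposition argument can be sketched with the scalar free-space Green's function, G(r) = e^(−jkr)/(4πr): once G is known, the field at any observation point is a direct sum over the sources, with no space-time grid in between. The source positions and amplitudes below are illustrative.

```python
import numpy as np

# Sketch: once the Green's function is known, superposition gives the field
# at any observation point directly, with no space-time grid in between.
# Scalar free-space Green's function: G(r) = exp(-1j*k*r) / (4*pi*r).
# Source positions and amplitudes below are illustrative.

def greens_function(r, k):
    """Scalar free-space Green's function at distance r for wavenumber k."""
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

def field_at(obs, sources, amplitudes, k):
    """Superpose the responses of point sources at an observation point."""
    total = 0.0 + 0.0j
    for pos, amp in zip(sources, amplitudes):
        r = np.linalg.norm(np.asarray(obs) - np.asarray(pos))
        total += amp * greens_function(r, k)
    return total

k = 2.0 * np.pi            # unit wavelength
sources = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
field = field_at((10.0, 0.0, 0.0), sources, [1.0, 1.0], k)

# Linearity: the two-source field equals the sum of the single-source fields
single = sum(field_at((10.0, 0.0, 0.0), [s], [1.0], k) for s in sources)
assert abs(field - single) < 1e-12
```

Note how the field at the distant observation point is evaluated directly, without solving for field values anywhere in the intervening space; this is exactly the property the text contrasts with differential equation solvers.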
Integral equation methods employ a smaller number of unknowns than differential equation methods. This feature is important when we are modeling large problems. The order of increase in computational requirements is lower for integral equation solvers than for differential equation solvers. It is worth mentioning a few words about recent developments in integral equation solvers. These advanced solvers use state-of-the-art fast multipole methods for dealing with the large matrices resulting from integral equation formulations. [59][60][61][62] Though integral equation solvers come with the aforementioned advantages, they too have some limitations. For instance, they are not ideal for modeling inhomogeneous, complex (anisotropic) materials. Such complex material compositions, seen in many practical applications, require a volume integral formulation. Performing volume integration is a complex and computationally heavy procedure. In these circumstances, one has to resort to differential equation solvers, which provide a more straightforward modeling approach. Moreover, it is important to note that differential equation solvers can handle nonlinear equations. This is an important positive feature that is not possible with integral equation solvers. Though not a strict limitation in the case of electrodynamics, linearity of the media has to be assumed when employing integral equation methods. Generally speaking, integral equation methods are more complex to implement than differential equation methods. This is due to the need for the Green's function to exist, which normally involves the evaluation of complex integrals. In addition, because of the resulting dense-matrix system, integral equation solvers need acceleration algorithms to be computationally competitive with differential equation solvers.
As the complexity of the problem (volume effects, nonlinear materials, etc) increases, the differential equation solvers provide a straightforward and computationally efficient procedure. For example, when there are N unknowns in the problem, a differential equation method leads to an equivalent sparse-matrix system of order O(N). For the same problem, the integral equation solver normally leads to a dense-matrix system of order O(N²). The time required to solve this problem using the integral equation solver is more than O(N²). This is a serious drawback of integral equation methods compared to differential equation methods. However, fast and efficient algorithms and advanced parallel processors can partially remedy this issue. 54 The grid-induced dispersion error along with the unfavorable scaling issues make differential solvers less favorable for large surface scattering problems compared to integral equation solvers. However, for transient nonlinear or volume-dominated phenomena, one has to resort to differential equation methods for an efficient and accurate procedure.
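The O(N) versus O(N²) storage argument can be made concrete with a toy count; the seven nonzeros per row below is an illustrative stencil size, not a universal constant.

```python
# Sketch: memory scaling of sparse (differential) vs dense (integral)
# system matrices for N unknowns. The 7 nonzeros per row is an
# illustrative stencil size, not a universal constant.

def sparse_matrix_entries(n, nonzeros_per_row=7):
    """A differential-equation discretization couples only neighbors: O(N)."""
    return n * nonzeros_per_row

def dense_matrix_entries(n):
    """An integral-equation discretization couples every pair of unknowns: O(N^2)."""
    return n * n

# Doubling N doubles the sparse storage but quadruples the dense storage:
assert sparse_matrix_entries(2000) == 2 * sparse_matrix_entries(1000)
assert dense_matrix_entries(2000) == 4 * dense_matrix_entries(1000)
```

The crossover explains the trade-off in the text: the integral formulation has far fewer unknowns (a surface instead of a volume), but each unknown couples to every other, which is why fast multipole acceleration is essential at large N.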

SCALE OF OBJECTS, WAVELENGTH, SPACE, AND TIME DERIVATIVES
The behavior of electromagnetic fields and their interaction with matter can be understood by comparing the size of the interacting object with the wavelength of the interacting wave. For example, consider two incoming waves, one with a relatively very short wavelength λs and the other with a very large wavelength λl, where λl ≫ λs. Now, consider a square patch antenna with a dimension L ∼ λs, as illustrated in Figure 6. Please note that the long wave is substantially larger than the short wave; this is noticeable as we have shown only part of the long wave in the illustration. The Maxwell-Heaviside equations describing the behavior of electromagnetic fields have two types of derivatives: spatial (∂/∂x, ∂/∂y, and ∂/∂z) and temporal (∂/∂t). The dimensions of the spatial and temporal derivatives are the inverse of length and time, respectively. By comparing the relative magnitudes of the spatial and temporal derivatives with the size of the modeled objects, we can easily determine the nature of the electromagnetic interaction. When the contribution of the time derivative becomes comparable to, or larger than, that of the space derivatives, the underlying physics becomes a coupled dynamic variation of electric and magnetic fields. This happens for wavelengths from λ ≪ L to λ ≈ L. However, when λ ≫ L (long wavelengths), the underlying physics is mostly dominated by static fields. For low-frequency (electro- or magnetostatic) interactions, the spatial derivatives play a crucial role. In the midfrequency (electrodynamics) range, contributions from both the space and time derivatives become important. However, when we move toward extremely high frequencies, the underlying physics follows ray or geometric optics. At these frequencies, the contribution of the time derivative dominates over that of the space derivatives.

FIGURE 6 Square patch antenna of dimension L and two incoming electromagnetic waves of different wavelengths (λl ≫ λs). Interaction happens only when the wavelength matches the size of the interacting object (L ∼ λs)
Assume d to be the distance to the point of observation where we compute field values, and let d be comparable to the size L of the modeled objects. Then, we have three regions of interest, namely, the near-field (d ≪ λ), mid-field (d ≈ λ), and far-field (d ≫ λ) zones. 58 The near-field region is dominated by static physics. In the mid-field region, we use wave physics to model the behavior. The far-field region is best described using ray or geometric optics.
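As a rough illustration of this classification (the factor-of-10 thresholds below are our own illustrative choice, not a standard definition), one could sketch the zone selection as:

```python
# Sketch: classifying the observation zone by comparing the observation
# distance d with the wavelength. The factor-of-10 thresholds are
# illustrative, not a standard definition.

def field_zone(d, wavelength, ratio=10.0):
    """Rough near/mid/far-field classification used to pick a modeling style."""
    if d < wavelength / ratio:
        return "near-field (static-like physics)"
    if d > wavelength * ratio:
        return "far-field (ray/geometric optics)"
    return "mid-field (full-wave physics)"

assert field_zone(0.001, 1.0).startswith("near")
assert field_zone(1.0, 1.0).startswith("mid")
assert field_zone(100.0, 1.0).startswith("far")
```

In practice, such a quick classification is often the first question a practitioner answers when choosing between a static solver, a full-wave solver, and a ray-optics tool.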
While modeling electro- or magnetostatics, we are essentially dealing with boundary-value problems. Here, the contribution of the spatial derivatives dominates and defines the boundary conditions. In these scenarios, we model only the divergence equations, which are elliptic in nature. The fields are very smooth in the static regime. Hence, one can use many advanced tools based on multigrid [63][64][65] or wavelet techniques. 66,67 For midfrequency applications, the Maxwell-Heaviside curl equations are modeled. These are hyperbolic in nature. 25 In this case, the solution might not be as smooth as in the static (elliptic) case. Therefore, we cannot use the wavelet or multigrid methods that are used in the static case. For high-frequency interactions, we can go for either differential or integral equation solvers, depending on the application. Moving even higher in frequency, the wavelengths become vanishingly small and the wave behaves more like a particle. At these extremely high frequencies, the Maxwell-Heaviside equations become inaccurate for describing the underlying physics. We have to use the Schrödinger wave equation to model the quantum nature of the underlying physics. 68

ABSORBING BOUNDARY CONDITIONS
For solving many practical problems, CEM tools generally require different boundary conditions. These boundary conditions are special formulations for calculating the field components at various domain boundaries. For example, the perfect electric conductor (PEC) and perfect magnetic conductor are simple examples of boundary treatments. They are useful, for example, to model the walls of a waveguide or the metallic surfaces of an antenna. 69 There are also some special impedance boundary conditions, which are extensively used to model practical materials. 70,71 Two types of boundary conditions are particularly important for accurately truncating the computational domain, namely, absorbing boundary conditions (ABCs) and perfectly matched layers (PMLs). An ABC or PML is needed only to terminate the computational spaces of grid-based methods, in either the time or frequency domain. These are used when we model open-region radiation and scattering problems. An elaborate discussion of these two features is beyond the scope of this paper; we will highlight only the main points. An example domain containing a horn antenna is truncated as shown in Figure 7. For all practical problems, we assume that the computational resources in terms of memory and time are limited. Therefore, we need to confine the domain using some truncating boundary condition that can mimic an infinite extension of the computational domain. This is where ABCs come into play. This works only when there is hardly any numerical reflection coming back into the domain from the ABC. Unlike the PEC boundary condition, where the reflection is total and physical, the reflections from the ABC are only a numerical artifact.

FIGURE 7 An antenna is being modeled in a domain, which is truncated using an absorbing boundary condition (ABC). Ideally, the ABC should transmit all incident radiation falling on it from all angles of incidence without reflecting anything back into the domain
In other words, there should not be any reflections from the ABC in reality; however, we do see them when we implement ABCs in our model. These numerical reflections emerge from the truncation error (order of accuracy) of the ABC. There is always a trade-off between the required accuracy of the computed results and the availability of computational resources (memory and simulation time).
The ABCs are mathematical approximations enforced on the boundary cells to serve the aforementioned purpose. As with all approximations, ABCs have a certain order of truncation error in their formulation, which depends on how the field values are approximated on the boundaries. As in any finite-difference or finite-element approximation, we can increase or decrease the order of truncation error of an ABC. This is done by increasing the number of data points used in calculating the field quantities at the boundaries. Increasing the accuracy of the ABC comes at the increased cost of field computation at the boundary. The order of truncation error of the ABC implicitly sets the best possible performance limit (in other words, the lowest possible reflection coefficient) achievable using an ABC. In addition to the order of truncation error in the ABC formulation, the performance of the ABC strongly depends on the angle of wave incidence on the ABC.
The ABCs perform at their best when the incoming electromagnetic waves reach them at normal incidence. By best, we mean that the magnitude of the reflections coming back into the domain from the ABC will be the least. This generally happens when the incident wave meets the ABC perpendicularly, corresponding to an incidence angle of 0 degrees. Even at normal incidence, there will still be some reflection back into the domain because of the ABC's order of truncation error. The performance of the ABC deteriorates as we move toward grazing incidence (90 degrees). For this reason, we normally place the ABCs as far from the scatterers or sources as computationally feasible, so that most of the incoming waves meet the ABC close to normal incidence. Two popular first-order ABC implementations are the Silver-Müller ABC 25,72 and the Engquist-Majda ABC. 73 Higher-order ABCs demand information from a larger number of points in the domain to calculate the field values. 74 There is always a trade-off between the accuracy of the ABCs and the computational resources required to implement them.
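A first-order ABC of the Engquist-Majda/Mur type is particularly simple in 1D: at the magic time step (S = 1) it reduces to copying the neighboring field value from the previous step, and an outgoing pulse leaves the grid with essentially no reflection. A minimal sketch under these assumptions (all names and parameters are illustrative):

```python
import numpy as np

# Sketch: a first-order ABC (Engquist-Majda/Mur type) in 1D. At the magic
# time step (S = 1) it reduces to copying the neighbor's previous value,
# and the outgoing pulse leaves the grid with essentially no reflection.
# All names and parameter values are illustrative.

nx, nt = 200, 400
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    hy += ez[1:] - ez[:-1]
    ez_left_old = ez[1]                 # stored values for the boundary ABCs
    ez_right_old = ez[-2]
    ez[1:-1] += hy[1:] - hy[:-1]
    ez[0] = ez_left_old                 # first-order ABC, exact at S = 1
    ez[-1] = ez_right_old
    ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source

# After the pulse has left the grid, almost nothing is reflected back:
assert float(np.max(np.abs(ez))) < 1e-3
```

In 2D and 3D this simplicity disappears: the same boundary operator is exact only for normal incidence, which is exactly the angle dependence described above.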

PERFECTLY MATCHED LAYER
We can substantially improve the accuracy of domain truncation if we replace the absorbing boundary by several layers of cells surrounding the main domain. Therefore, instead of having a simple surrounding boundary, we have layers of cells enclosing the main domain. These layers satisfy two main conditions. The first condition is that the wave impedance inside the layers should be exactly the same as that of the main domain being truncated. This makes the layer perfectly matched, hence the name PML. The second condition is that the layers should have losses, so that the incoming electromagnetic wave is progressively damped while traveling inside the PML. This idea was first introduced in CEM using the FDTD algorithm by Bérenger. 75,76 The standard Bérenger PML consists of different PML regions, namely, left and right x-oriented PMLs, top and bottom y-oriented PMLs, and four corner xy-oriented PMLs, as illustrated in Figure 8. The PML orientations are defined by the loss-parameter variation inside the PML. In the case of an x-oriented PML, the loss parameter depends only on the x-coordinate value inside each PML cell. Similarly, the y-oriented PML has only a y-dependence for the loss parameter. Inside the xy-oriented PML, however, there is a dependence on both the x and y coordinates. For example, consider the right-hand-side x-oriented PML of thickness dPML. The losses inside this PML grow from 0 at the main-domain-PML interface to a maximum value σmax at the PEC truncation of the PML, as illustrated in Figure 8. Because of these losses, the amplitude of the incoming electric field decays from its incident value Ei at the main-domain-PML interface toward 0 at the PEC truncation of the PML, as shown in the illustration. Even if the wave amplitude inside the PML is not completely damped at the PEC boundary, this does not pose a serious issue.
This is because the waves, once reflected from the truncating PEC, will be completely damped on their return journey toward the main domain. This guarantees that nothing enters back into the main domain from the PML region. Generally, the PML thickness is calculated based on this two-way damping process.
Another clever option is to truncate the PML with an ABC instead of a PEC. [77][78][79] With this option, we can substantially reduce the thickness of the PML compared to what is normally prescribed for the standard PML-PEC truncation. This is mainly because any undamped waves reaching the ABC at the end of the PML are mostly transmitted through the ABC. This way, most of the undamped waves are removed from the PML already during the first forward journey. Therefore, there is hardly any wave traveling back from the ABC in the reverse direction inside the PML. Even if there is, as before, it will be absorbed quickly within two or three cell layers. Since we need only forward-direction damping, we can significantly reduce the thickness of the PML compared to the standard PML-PEC truncation case.
That being said, combining PML and ABC should be done carefully, because some PML implementations are known to exhibit instability issues. 80,81 One can rigorously compare analytical solutions with the behavior of fields inside the PML. A few alternative PML implementations with certain advantages that avoid these instability issues have been suggested. 82 It is worth explaining why we generally use a parabolic- or cubic-type profile for the loss term inside the PML. Though the wave impedances of the main domain and the PML are matched, there will still be some numerical, not physical, reflections. These arise from the sudden change in the loss term (conductivity) at the main-domain-PML interface. Note that, in the main domain, the loss term is zero and, in the PML, it is nonzero. If the loss term is a step function at the main-domain-PML interface, then the incoming wave suddenly sees a big conductivity increase. If there were no (spatial and temporal) discretization errors, this would not be an issue. However, discretization errors are inherent in all numerical schemes. Therefore, waves entering the PML from the main domain are numerically reflected. These unphysical reflections are purely caused by discretization errors of the underlying scheme. We can reduce this numerical reflection by reducing the size of the spatial discretization inside the PML. In practice, we have to at least keep the first few layers inside the PML as small cells and gradually increase the cell size as we go deeper into the PML. On a structured regular FDTD mesh, one typically employs the classic Bérenger PML (or the equivalent uniaxial PML) with a thickness of 10 grid cells. This is generally sufficient to absorb impinging numerical EM wave analogs with very low reflection coefficients, below −80 dB, when the PML is spaced away from the source of the outgoing waves by more than 10 grid cells.
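The graded loss profile discussed above is commonly designed with the textbook FDTD relation R(0) = exp(−2 σ_max d / ((m + 1) ε₀ c)) for a polynomial profile σ(x) = σ_max (x/d)^m, which accounts for the two-way damping through the layer. The sketch below, with illustrative parameter values (grading order, target reflection, and cell size are our own choices), inverts this relation to pick σ_max:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
C0 = 299792458.0          # speed of light in vacuum [m/s]

def sigma_max(d_pml, m=3, r_target=1e-8):
    """Peak conductivity of a polynomially graded PML, chosen so the
    round-trip normal-incidence reflection equals r_target."""
    return -(m + 1) * EPS0 * C0 * math.log(r_target) / (2.0 * d_pml)

def sigma_profile(x, d_pml, m=3, r_target=1e-8):
    """Graded loss term: zero at the interface (x = 0), rising smoothly
    to sigma_max at the PEC backing (x = d_pml)."""
    return sigma_max(d_pml, m, r_target) * (x / d_pml) ** m

d = 10 * 0.001            # a 10-cell PML with 1 mm cells (illustrative)
print("sigma_max =", sigma_max(d), "S/m")
print("sigma at interface:", sigma_profile(0.0, d))
```

The smooth ramp from zero is exactly what suppresses the step-discontinuity numerical reflections described above.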
It is worth mentioning that, on an unstructured mesh, we typically use a parabolic or cubic profile for the loss term inside the PML. A performance comparison between the standard ABC and PML with different loss profiles (constant, linear, parabolic, cubic, and biquadratic) was studied using a general unstructured grid. 83 As we discussed earlier, selectively adapting the mesh size in certain regions becomes complicated, if not impossible, when using structured mesh methods (like the standard FDTD method). Such structured mesh methods demand that we globally refine the mesh to keep the PML numerical reflections low. This would, however, become prohibitively expensive in terms of computational load and may go against the main reason for choosing PML over ABC for many practical problems. We can place the PML very close to the objects we are modeling and, hence, drastically reduce the overall computational domain needed to model a typical problem. However, in structured mesh methods, this comes at the cost of globally refining the mesh. Therefore, taking into account all these aspects, it is always a better option to increase the loss term slowly inside the first few PML cells close to the main-domain-PML interface and ramp up the loss term gradually in the consecutive cells following a parabolic profile. This way, we can typically have 10 to 15 layers of PML cells. 25 Though the Bérenger PML is popular, there are other implementations of PML as well. [84][85][86] One important PML implementation worth mentioning is the complex frequency-shifted (CFS) PML. 87,88 The theory of CFS-PML is based on CFS PML parameters that mimic geometrical stretching of space for different propagating wave frequencies. This approach is of particular interest for damping evanescent waves, since all the conventional PML models exhibit a performance degradation in the evanescent regime. 89 Because of this advantage, we can place the CFS-PML much closer to modeled structures that are known to generate evanescent fields immediately adjacent to their surfaces. The complex stretching factor can be implemented in two ways. The first approach uses an auxiliary differential equation method 90 and the second involves implementing a time-domain convolution. 91

FIGURE 9
Performance comparison between first-order absorbing boundary condition (ABC) and perfectly matched layer (PML) using finite-volume time-domain (FVTD) method. Numerical reflection coefficient computed for various angles of plane wave incidence

NEED FOR CONFORMAL EXPLICIT TIME-DOMAIN METHODS
Several efforts are ongoing to develop methods that combine the simplicity of explicit time-stepping, as in the FDTD method, with the flexibility of an unstructured mesh, as in FEM. State-of-the-art time-domain solvers like the transmission line matrix method [92][93][94][95] and the finite integration technique, 96,97 among others, provide special features to handle several practical problems. Readers may, however, be unaware of other nonmainstream tools available to them.
One such nonmainstream tool is the finite-volume method (FVM). It was originally conceived and developed for computational fluid dynamics applications. Later, FVM was adapted to simulate electromagnetic problems. 98,99 The finite-volume time-domain (FVTD) method provides the best of both the FEM and the FDTD method. As in FEM, we can employ an unstructured mesh, and, as in the FDTD method, we can employ an explicit time-stepping scheme. 12 This explicit time-stepping feature helps in extracting the broadband frequency response from a single simulation run. We obtain this broadband response through postprocessing of the time-domain results using the well-known fast Fourier transform technique. In addition to the aforementioned features, the FVTD method naturally facilitates multiscale modeling of structures with fine details, high dielectric contrasts, and complex curved geometries. 25 Furthermore, the FVTD method comes with several accurate ABC and PML implementations. A typical performance comparison of the first-order ABC and the Bérenger PML using the FVTD method is shown in Figure 9. The author and various collaborators have extensively studied PML for conformal time-domain methods. 79,[100][101][102] In addition to the standard Bérenger PML formulation, which involves an unphysical field-splitting technique, there are a couple of other important PML formulations. One way to achieve PML is through the convolution integral procedure. 91 Another popular PML technique involves the complex-space stretching approach. 86 These two PML techniques do not require any field splitting, and all quantities computed inside the PML follow the standard Maxwell-Heaviside equations. Because of this, we do not have any special orientation for the PML, and we need only one loss term as compared to two (magnetic and electric) loss terms in the original Bérenger PML formulation. 25 Corner reflections can be fully avoided using the radial PML implementation with a single general PML formulation valid in all directions inside the PML. 103,104 Various instability-related issues in certain PML implementations were rigorously studied 82 and their performances were compared for conformal time-domain applications. 89,105 Though the FVTD method offers these highly desirable features, there is one serious limitation: the FVTD method suffers from high numerical dissipation and, hence, cannot be prescribed for long-distance wave propagation problems. This is a huge setback for the method. New conformal time-domain methods with significantly lower numerical dissipation have been successfully tested, as discussed in the next section.
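The numerical dissipation issue can be demonstrated with a toy 1D analog: the first-order upwind finite-volume scheme, which uses the same low-order flux treatment that underlies basic FVTD, visibly damps a wave that should propagate unchanged. This is a deliberately simplified scalar-advection sketch, not an actual FVTD Maxwell solver, and all parameters are illustrative:

```python
import math

def upwind_advect(n_cells=200, cfl=0.5, n_periods=5):
    """Advect a sine wave around a periodic 1D domain (unit length,
    unit speed) with the first-order upwind finite-volume scheme.
    The exact solution returns unchanged after each full period, so
    any amplitude loss is purely numerical dissipation."""
    dx = 1.0 / n_cells
    dt = cfl * dx
    u = [math.sin(2 * math.pi * i * dx) for i in range(n_cells)]
    steps = round(n_periods / dt)
    for _ in range(steps):
        # u[i-1] wraps to the last cell for i = 0 (periodic boundary)
        u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n_cells)]
    return max(abs(v) for v in u)

amp = upwind_advect()
print(f"amplitude after 5 periods: {amp:.3f} (exact: 1.000)")
```

Even over this short run, a substantial fraction of the wave amplitude is lost, which is why long-distance propagation with low-order FVTD is problematic.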

RECENT DEVELOPMENTS IN CEM
Recently, a couple of interesting explicit FETD implementations were demonstrated using clever matrix manipulations. 106,107 These explicit FETD tools are still not widely used because of the complicated matrix manipulation and conditioning required to run them. These tools demand more from users in terms of understanding the underlying mathematical manipulations. However, not every user is ready to go through this drill, and most want a generalized tool to model different problems without worrying about matrix manipulation and conditioning.
When we relax the tangential-continuity condition enforced on the fields across cell boundaries, something interesting happens. In fact, this transforms the standard FEM into a new class of method called the discontinuous Galerkin method (DGM). 108 Instead of forcing tangential-field continuity on the cell interfaces, we impose continuity constraints on the computed flux components in DGM. Readers familiar with the FVTD formulation will notice some similarities between DGM and the FVTD method. Both are explicit time-domain formulations that can be employed on an unstructured mesh. There are a few advantages of DGM over the FVTD and FETD methods. In comparison to the implicit FETD formulation, we get an explicit time-domain formulation in DGM, where one needs to solve only a block-diagonal linear system of equations.
Let K and N denote the number of elements and the number of basis functions per element, respectively. In comparison to the implicit FETD method, in the DGM formulation, we can greatly reduce the computational load by inverting K square matrices of dimension N × N. This is done only once at the preprocessing stage before starting the iteration. There is a small penalty due to storing twice the number of unknowns at the cell interfaces as compared to standard FEM. This penalty is within acceptable limits, as we gain more in terms of computational efficiency with the resulting explicit time-stepping scheme capable of handling an unstructured mesh. 5,109,110 In general, the numerical error of DGM is of the order h^(2p+1), where h and p denote the size of the spatial element (cell) and the order of the basis function inside each element, respectively. If we set p = 0 inside each cell, the basis function remains constant inside each cell and the order of the numerical error is h. At this lowest limit, the order of error of the DGM formulation is the same as that of the standard FVTD method. In sum, by increasing the value of p within each cell, one can naturally improve the accuracy beyond what is possible in the standard FVTD method. The DG time-domain (DGTD) method is increasingly used in recent times. It is certainly one of the state-of-the-art CEM tools one should learn and master. Studies show that the DGTD method exhibits low numerical error compared to the FVTD method for the same spatial and temporal discretizations. One can, of course, increase the order of accuracy of the FVTD method by going to higher-order schemes in space and time. In this case, the DGTD algorithm is comparatively simpler than that of the higher-order FVTD method. That being said, the computational time required to simulate a problem using the DGTD method could be much longer than with higher-order FVTD methods.
The reason for this is the use of higher-order polynomial expansions inside each cell in DGM. For real-time applications, one has to find the right trade-off between accuracy and computational resource requirements. 111,112 Though, in this article, we have considered only CEM tools that employ (single) simple methods for solving CEM problems, we want to briefly highlight more complex approaches as well. These advanced approaches are often known as mixed methods 113-117 and employ two or more simple methods. For example, one can discretize the computational domain into two regions, ie, one region using a structured mesh and the other with an unstructured mesh. Inside the structured mesh region, one can use the standard Yee FDTD algorithm. The unstructured region can employ the FETD method. This way, one can avoid the need for subgridding and selectively use the FETD method only in those regions where we want to accurately model curved complex structures. However, the difficulty in these mixed methods lies at the interface zones, where one has to carefully implement the exchange of information between the different regions. Depending on the type of problem, such mixed methods are very useful for efficiently solving multiscale and multiphysics problems.
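As a brief aside on the DGM error estimate mentioned above: the order h^(2p+1) can be verified numerically by measuring the observed convergence order between successively refined meshes. The sketch below does this with synthetic error data that obey the estimate; the function and data are our own illustration, not results from any specific solver:

```python
import math

def observed_order(err_h, err_h2):
    """Observed convergence order from errors on a mesh of size h
    and the same mesh refined by a factor of 2 (size h/2)."""
    return math.log2(err_h / err_h2)

# Synthetic errors following err ~ C * h^(2p+1):
# p = 0 reduces to first order (FVTD-like), p = 1 gives third order, etc.
for p in (0, 1, 2):
    order = 2 * p + 1
    e_h, e_h2 = 0.1 ** order, 0.05 ** order   # errors at h and h/2
    print(f"p = {p}: observed order = {observed_order(e_h, e_h2):.1f}")
```

The same two-mesh procedure is routinely used to check that a real DGTD implementation actually delivers its theoretical order of accuracy.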
Another unconventional nonmainstream approach, using the tools of algebraic topology in CEM, was developed by the author and others. The domain of algebraic topology is still not widely known to the majority of the applied physics and engineering communities. The algebraic topological method (ATM) emerges from a radically different way of thinking about modeling electromagnetic problems. In ATM, we use neither field vectors nor differential equations. It might be considered a heresy to say that we need neither of these tools to model problems in electromagnetics. Interested readers are referred to several interesting references, which document the historical developments of this method and the mathematics involved. [118][119][120][121][122][123][124][125][126][127][128][129][130][131][132][133] Among them, the most remarkable contribution was the work of Tonti. 134 He demonstrated how to compute stable numerical solutions for a system of PDEs by capturing the correct topological structure of the underlying physical quantities at the discrete level.
ATM provides a more intuitive way to model electromagnetic problems. [135][136][137] ATM has close connections to advanced FEM methods employing differential forms. 35,[138][139][140][141] Finite-element exterior calculus provides excellent mathematical tools to study ATM, where one can deduce conditions for numerical stability and problem well-posedness using concepts from de Rham cohomology and Hodge theory. 142 For electromagnetic modeling using ATM, we start by describing the physical quantities only using physically measurable scalar variables and, hence, avoid vector calculus completely. This is counterintuitive at first; in retrospect, however, we can see that all the quantities in electromagnetics that can be physically measured, namely, voltage, current, fluxes, charge, etc, are only scalar quantities. We have demonstrated that there is absolutely no need for vectors like the electric and magnetic fields and the respective flux densities to model an electromagnetic problem. In addition, we can represent all the relationships between these scalar quantities using only discrete algebraic summation. Hence, there is also no need for differential equations. It is important to note that the algebraic formulation of the underlying physical problem gives an exact discrete representation. This can be derived directly from the experimental principles of electrodynamics without going through the Maxwell-Heaviside (differential) equations. 143 As a side note, it is interesting to know that one can easily compare the finite-volume, finite-element, and finite-difference methods using the framework of algebraic topology. 144 Such a comparison enables us to understand the power and elegance of algebraic topology as a generalized framework for modeling many problems in physics and engineering.
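A tiny example conveys the flavor of this algebraic viewpoint: on a single square cell, the node-to-edge and edge-to-face incidence matrices (integer tables recording only orientation) compose to zero, which is the discrete, purely algebraic counterpart of curl(grad f) = 0. The matrices below are our own minimal illustration, not taken from any specific ATM code:

```python
# One square face with 4 nodes and 4 directed edges (0->1->2->3->0).
# G maps node values to edge values (incidence, a discrete gradient),
# C maps edge values to the face circulation (a discrete curl).
G = [[-1,  1,  0,  0],
     [ 0, -1,  1,  0],
     [ 0,  0, -1,  1],
     [ 1,  0,  0, -1]]
C = [[1, 1, 1, 1]]

def matmul(a, b):
    """Plain integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# The composition C*G vanishes identically: a topological identity
# encoded purely by integer incidence data -- no field vectors,
# no differential operators, no approximation.
print(matmul(C, G))  # -> [[0, 0, 0, 0]]
```

Identities like this hold exactly at the discrete level, which is precisely what Tonti-style formulations exploit for stability.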
Advanced users who want to use symbolic computation to model ATM problems can benefit from the user-friendly front-ends for building topological data structures. 145 Such symbolic computations are not needed for using ATM while modeling simple CEM problems. However, these symbolic computation tools are highly valuable for doing advanced mathematical analysis using ATM.
We have seen so far that there is a major push toward developing flexible multiscale CEM tools that provide an explicit time-stepping formulation. We aim to capture complex geometries and phenomena without heavy computational requirements. Recent breakthroughs in material science and engineering, (nano)fabrication techniques, and 3D printing are contributing to advanced applications. Terahertz devices, topological insulators, graphene, and meta- and nanomaterial-based devices are some of the new applications where CEM tools are critical for design development. These applications demand CEM tools with multiscale and multiphysics features in addition to an explicit time-domain formulation.
When we model these advanced applications, in addition to electrodynamics, we also need modules that can incorporate parameters from thermodynamic, photoelectric, and electrochemical phenomena at the nano-, micro-, and macro-levels. Major efforts are underway to bring these multiphysics capabilities within the standard features of CEM tools. To achieve this, we first need a method that can naturally interface between the different underlying physics using a single framework. This is where ATM becomes powerful. 20,146,147 Several efforts are ongoing to develop advanced CEM tools with multiscale and multiphysics capabilities in the Radical Innovations Group (RIG). These features will become critical and essential components for modeling and rapid virtual prototyping. Through the National Programme on Technology Enhanced Learning (NPTEL) platform, we are also freely educating and enabling researchers, teachers, and students from different corners of the world in this fascinating domain of CEM. Interested readers are encouraged to refer to the NPTEL website for further information about the programme on Computational Electromagnetics & Applications. 148

REMARKS
There are two important metrics used to compare CEM tools: numerical dissipation and dispersion errors. The former is the amplitude error and the latter is the phase error in the computed solution as compared to the analytical (exact) solution. Generally, it is differential equation solvers that suffer from both these errors. Integral equation solvers do not have any grid-induced dispersion errors because there are no grids involved in propagating the solution. All the tools that we discussed in this paper have specific pros and cons when measured using these error metrics. For example, the FDTD method does not suffer from numerical dissipation but has noticeable dispersion error. One has to refine the mesh appropriately to keep the FDTD dispersion error within the required tolerance limits of the problem. The FVTD method, as we discussed earlier, suffers from high numerical dissipation. It is not always true that higher-order methods lead to better accuracy. For example, the standard FDTD method that we discussed is only second-order accurate in space and time. Its numerical dissipation error, however, is much lower than that of a second- or third-order accurate FVTD method. Generally speaking, the order of truncation error in a method gives us an overall assessment of the method's accuracy. The higher-order DGTD method has significantly lower numerical dispersion and dissipation errors. Technically, the numerical dissipation and dispersion errors of ATM tools should be comparable to or better than those of the standard FDTD method. Further numerical experiments are needed to validate the accuracy of this tool as compared to the DGTD method.
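The FDTD dispersion behavior mentioned above can be made concrete with the standard 1D FDTD dispersion relation sin(ωΔt/2) = S sin(kΔx/2), where S = cΔt/Δx is the Courant number. The sketch below (function name ours) computes the resulting numerical-to-exact phase-velocity ratio; note that in 1D, at the "magic time step" S = 1, the scheme is dispersion-free:

```python
import math

def phase_velocity_ratio(n_ppw, courant):
    """Numerical-to-exact phase-velocity ratio for the 1D FDTD scheme,
    obtained from its dispersion relation sin(w*dt/2) = S*sin(k*dx/2).
    n_ppw: grid points per wavelength; courant: S = c*dt/dx (<= 1)."""
    k_dx = 2 * math.pi / n_ppw
    w_dt = 2 * math.asin(courant * math.sin(k_dx / 2))
    return w_dt / (courant * k_dx)

# S = 1: dispersion-free; S < 1: the numerical wave lags the exact one,
# and refining the mesh (more points per wavelength) shrinks the error.
print(phase_velocity_ratio(10, 1.0))
print(phase_velocity_ratio(10, 0.5))
print(phase_velocity_ratio(40, 0.5))
```

The amplitude of the FDTD update, by contrast, is exactly preserved, which is the precise sense in which the method is dispersive but nondissipative.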

VALIDATION
There are several ongoing efforts to validate different CEM tools and to set benchmarks for comparing them. The IEEE initiative to validate and standardize CEM tools is worth mentioning in this regard. 149,150 The IEEE Standard 1597.2-2010 stipulates various criteria to systematically validate CEM simulation codes. Many engineering applications need such a standard approach to meaningfully compare methods and results against their performance (computational efficiency and accuracy) claims. 151 By comparing various data sets obtained through experiments, simulations, analytical derivations, etc, one can validate the performance of a method. Such a transparent comparison and validation of results will guide CEM tool users to make informed decisions and choose the most appropriate tool for their modeling problem. Some authors have suggested procedures to quantitatively, rather than qualitatively, compare simulation data, with applications in electromagnetic compatibility simulations. 152,153 Through these procedures, one can track differences between CEM model iterations and capture the variability and range of opinions among the different members of a team working on a modeling problem.

SUMMARY
We have shown in this paper how one can make informed decisions on the choice of tools for modeling problems in electromagnetics. We have given various examples and practical tips to update the current knowledge and knowhow of physicists and engineers in this domain. We have clarified several misconceptions, discussed recent developments, and highlighted the capabilities and limitations of different electromagnetic modeling tools.