### Abstract

- Top of page
- Abstract
- 1. Introduction
- 2. Deterministic Setup
- 3. Stochastic Parameterization of the Problem
- 4. Computation of the Moments
- 5. Results
- 6. Conclusion
- Acknowledgments
- References
- Supporting Information

[1] The usefulness of stochastic methods to efficiently quantify uncertainties in computational models of electromagnetic interactions is illustrated. A refined study of the second-order moments of a complex-valued Thévenin model, which represents the coupling between a wire structure and a time-harmonic electromagnetic field, is presented. The configuration of a stochastically undulating thin wire illuminated by a stochastic incident plane wave is investigated in detail. Three computational methods are used to evaluate the mean values and covariance coefficients of the observable: a straightforward Cartesian-product quadrature method, a Monte-Carlo method, and a space-filling-curve method. The underlying patterns of the randomness are revealed by analyzing the principal components of the covariance matrix. The study of this interaction configuration reveals general characteristics that are expected to appear in any stochastic electromagnetic interaction problem. In particular, the results indicate that fluctuations in self-interaction coefficients (impedances) have distinct features and are quite different from the coefficients describing the interaction with externally generated fields (voltages).

### 1. Introduction

[2] Modeling the interaction between electromagnetic waves and material objects is a topic of interest in fields as diverse as radio astronomy, biomedical engineering, and electromagnetic compatibility (EMC). In EMC, the study of such interactions is a crucial part of the design of electronic devices, to investigate their immunity to intentional or parasitic electromagnetic fields emanating from internal components of highly integrated circuits, or from external electromagnetic sources [*Sun et al.*, 2007; *Perez*, 2008].

[3] Generally, the models of these interactions are parameterized by a set of inputs describing the interaction configuration. The outputs, also known as the observables, can be as diverse as scattered-field amplitudes, impedances, or the induced voltage at the port of an electronic interconnect system.

[4] In practice, the input parameters can exhibit variability due to, e.g., noise in the measurement of the inputs, general model assumptions, or manufacturing defects. Modeling the corresponding variation of the observables by repeatedly executing the model for each possible configuration can be tedious, first because of the numerical effort required by these simulations, and second because of the need to postprocess the collected data.

[5] A stochastic analysis, where the variations of the unknown inputs are assumed to be random, is an appealing alternative. It implies that the observables become random variables, and probability theory can be used, in principle, to compute the distribution of their values. In practice, the mathematics of the explicit dependence of the observables on the configuration parameters is often too complicated to carry out, particularly in the presence of numerical models. Restricting the stochastic configuration to small variations of the inputs around a nominal situation gives more opportunities to construct the probability distributions of the observables [*Ajayi et al.*, 2008; *de Menezes et al.*, 2008]. However, the “small-variation” hypothesis limits the scope of the conclusions drawn from such an analysis.

[6] Instead of aiming for the total probability distribution of the observable, its statistical moments can be computed by quadrature rules over the probability space parameterizing the interaction configuration. Hence, rather than precomputing and subsequently postprocessing a large amount of data, the objective of such a stochastic approach is to efficiently compute the statistical moments via the study of a limited number of sample configurations. Through such a rationale, stochastic fluctuations can be handled without restrictions on their amplitude. Once these moments are available, they provide valuable information on the observable that is generally valid, i.e., the same information would be obtained if a large number of configurations had been studied in detail.

[7] Stochastic approaches have previously been proposed in rough-surface scattering problems [*Brown*, 1985], and in mode-stirred-chamber theory [*Hill*, 1998]. Interactions between electromagnetic waves and configurations of wires, which form a topic of prime interest in EMC, have also been studied from a stochastic point of view [*Bellan and Pignari*, 2001; *Michielsen*, 2005; *Pignari*, 2006]. These configurations are for instance present in wiring systems of integrated circuits or in harnesses of vehicles.

[8] However, all these works aim either at characterizing real-valued observables, such as the amplitude of the voltage or the current induced at a given port, or at studying complex-valued observables only via their mean and their variance. In our case, where the interactions are formulated in the frequency domain, the observables are exclusively complex-valued. Although the average and the variance provide valuable information on the distribution of the observable values, the variance does not convey the finer details of the spread of the values in the complex plane. In the most general case, where real and imaginary parts are not related, such an approach leads to a loss of statistical information on the original complex variables. This loss can prove penalizing, for instance, in impedance-matching studies, which require a distinction between the resistive and reactive parts of the impedance to optimize power transfers.

[9] In the present paper, the aim is to refine the statistical analysis based on the computation of first- and second-order statistical moments of a complex random observable. To this end, the observable is handled as a random ℝ^{2} vector, with its real and imaginary parts as the vector components. The average vector and the covariance matrix of this vector are computed efficiently by a space-filling-curve quadrature rule [*Cukier et al.*, 1973], which is tailor-made for higher-dimensional parameter spaces. This rule performs advantageously, both in terms of complexity and in terms of convergence properties, when compared to a deterministic Cartesian-product rule and a Monte-Carlo approach. With the covariance matrix at hand, its underlying patterns can be revealed by determining and analyzing its principal components.
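
As a minimal numerical sketch of this ℝ^{2} treatment (the synthetic samples below merely stand in for model outputs and are not data from the paper), the average vector, the 2 × 2 covariance matrix, and its principal components might be obtained as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for samples of a complex observable such as V_e,
# with deliberately correlated real and imaginary parts.
x = rng.normal(size=1000)
z = x + 1j * (0.6 * x + 0.3 * rng.normal(size=1000))

v = np.column_stack([z.real, z.imag])   # handle z as a random R^2 vector
mean = v.mean(axis=0)                   # average vector
C = np.cov(v, rowvar=False)             # 2x2 covariance matrix

# Principal components: the eigenvectors of C give the decorrelated axes,
# the eigenvalues the variances along those axes.
variances, axes = np.linalg.eigh(C)
```

The eigendecomposition decouples the spread of the observable into two independent directions in the complex plane, which is the representation analyzed in section 3.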

[10] The outline of this paper is as follows. The configuration of a deterministic electromagnetic interaction between a thin-wire structure and an incident field is first described in section 2. The observables are chosen to be the coefficients of the equivalent Thévenin network model, i.e., a voltage source *V*_{e} and an impedance *Z*_{e}. In section 3, the parameters of the interaction are randomized by regarding the wire geometry, the incident field, as well as the observables *V*_{e} and *Z*_{e} as stochastic objects.

[11] This stochastic parameterization allows for the definition of the average and the covariance of the observables. Next, the covariance matrices are spectrally analyzed to derive an optimal decomposition of *V*_{e} and *Z*_{e}, which eases the expression and the interpretation of their statistical properties. Section 4 presents the quadrature methods employed to efficiently compute the statistical moments of the observable. The quadrature rules range from a deterministic Cartesian-product rule and a Monte-Carlo rule to a space-filling-curve rule. The results provided in section 5 refer to a fully stochastic interaction that involves a randomly undulating thin wire under a random incident field. Conclusions are then drawn in section 6.

### 4. Computation of the Moments

[35] The statistical moments 𝔼[*V*_{e}], *σ*[*V*_{e}], and *C* are computed numerically by a quadrature rule 𝒬_{L} of level *L* that approximates the integral in equation (10) by a discrete sum

$$\mathbb{E}[h(V_e)] \approx \mathcal{Q}_L[h(V_e)\,f_\gamma] = \sum_{n=1}^{N_L} w_n\, h(V_e(\gamma_n))\, f_\gamma(\gamma_n),$$

where *L* ∈ ℕ and the number of samples *N*_{L} is an increasing function of *L*. The quadrature rule is fully defined by the abscissae 𝒜_{L} = {*γ*_{n} ∣ *n* = 1,…, *N*_{L}} ⊂ Γ and the positive weights 𝒲_{L} = {*w*_{n} ∣ *n* = 1,…, *N*_{L}}. For stable quadrature rules, increasing *L* ensures a higher accuracy in the approximation of 𝔼[*h*(*V*_{e})] by 𝒬_{L}. At the same time, *N*_{L} represents the complexity of the quadrature formula, as it corresponds to the number of evaluations of *V*_{e}, which, as previously mentioned, bears a certain numerical cost.

[36] A first step toward limiting *N*_{L} consists in taking advantage of the definition of all statistical moments as integrals over the same support Γ. The same samples *V*_{e}(𝒜_{L}) = {*V*_{e}(*γ*_{n}) ∣ *n* = 1,…, *N*_{L}} can thus be reused to compute the various integrals, provided that the same quadrature rule is used for each of them. This procedure requires a simultaneous convergence of all the integrals being computed.

[37] Secondly, the abscissae are chosen in a nested manner by setting *N*_{L=0} = 1 and *N*_{L} = 2^{L} + 1, for *L* ∈ ℕ*. Such a nesting reduces the effort necessary to increase the level of the quadrature rule: only the *N*_{L+1} − *N*_{L} = 2^{L} new abscissae require function evaluations to obtain 𝒬_{L+1} from 𝒬_{L}, instead of all *N*_{L+1} evaluations in the nonnested case.
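
The nesting can be illustrated on a generic level scheme (a sketch on the unit interval with assumed equally spaced abscissae; the paper's actual support and abscissae differ):

```python
import numpy as np

def nested_abscissae(L):
    """Abscissae on [0, 1] with N_L = 2**L + 1 equally spaced points
    (N_0 = 1); each level's grid contains all points of the previous one."""
    if L == 0:
        return np.array([0.5])
    return np.linspace(0.0, 1.0, 2**L + 1)

# Raising the level only costs model evaluations at the NEW abscissae:
old = set(nested_abscissae(3))
new = set(nested_abscissae(4)) - old    # 2**3 = 8 fresh evaluations
```

Since each costly evaluation of *V*_{e} is reused at every subsequent level, convergence can be monitored level by level at a fraction of the nonnested cost.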

[38] Ideally, the accuracy of the approximation in equation (22) would be evaluated via the absolute error

$$\epsilon(L) = \left| \mathbb{E}[h(V_e)] - \mathcal{Q}_L[h(V_e)\,f_\gamma] \right|.$$

However, due to the unavailability of 𝔼[*h*(*V*_{e})], alternative error indicators are employed. The relative error *E*(*L*) of 𝒬_{L} is used instead, to track the variations of 𝒬_{L} as a function of the level *L*, with

$$E(L) = \frac{\left| \mathcal{Q}_L[h(V_e)\,f_\gamma] - \mathcal{Q}_{L-1}[h(V_e)\,f_\gamma] \right|}{\left| \mathcal{Q}_L[h(V_e)\,f_\gamma] \right|},$$

when 𝒬_{L}[*h*(*V*_{e}) *f*_{γ}] ≠ 0. As 𝒬_{L}[*h*(*V*_{e}) *f*_{γ}] converges to 𝔼[*h*(*V*_{e})], *E*(*L*) gradually decreases to zero.
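
A minimal sketch of such a level-to-level error indicator (the sequence of quadrature values below is invented purely for illustration):

```python
def relative_error(q_curr, q_prev):
    """E(L): relative change of the quadrature value between two
    successive levels, usable as a convergence indicator when q_curr != 0."""
    return abs(q_curr - q_prev) / abs(q_curr)

# Toy sequence of quadrature values Q_L converging toward 1.0
q = [1.5, 1.1, 1.01, 1.001]
errors = [relative_error(q[L], q[L - 1]) for L in range(1, len(q))]
```

A monotonically decreasing sequence of `errors` signals convergence of the rule even though the exact value of the integral is unknown.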

#### 4.1. Monte-Carlo Quadrature Rule

[39] One of the most popular multidimensional quadrature approaches is the Monte-Carlo (MC) rule, which is defined as

$$\mathcal{Q}_L^{\mathrm{MC}}[h(V_e)] = \frac{1}{N_L} \sum_{n=1}^{N_L} h(V_e(\gamma_n)).$$

The abscissae *γ*_{n} are random, statistically independent, and drawn from Γ by a random-number generator that uses the pdf *f*_{γ}. The convergence rate of this rule can be obtained through the central limit theorem [*Krommer and Ueberhuber*, 1998, p. 254], and it evolves as 1/√*N*_{L}, which is very slow: 100 times more samples are required to increase the accuracy by a single digit. This convergence rate depends only on the size of the set *V*_{e}(𝒜_{L}) and not on the dimension of *γ*, which makes the rule very robust. Moreover, the smoothness of the integrand *h*(*V*_{e}(*γ*)) as a function of *γ* has little influence on the convergence rate of the MC rule, which is an advantage for the integration of roughly behaved integrands. Owing to these properties, the MC rule is taken as a robust reference, but alternative quadrature rules are employed as well.
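
A minimal sketch of this rule (assuming a uniform pdf on [0, 1] and a toy integrand; `mc_mean` and the sample sizes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_mean(h, n):
    """MC estimate of E[h(gamma)] for gamma ~ U(0, 1): the abscissae are
    drawn from the pdf itself, so every sample carries equal weight 1/n."""
    return h(rng.random(n)).mean()

h = lambda g: np.cos(2 * np.pi * g)     # smooth toy integrand, exact mean 0
coarse = abs(mc_mean(h, 100))
fine = abs(mc_mean(h, 1_000_000))       # ~100x more samples per extra digit
```

Note that the smoothness of `h` buys nothing here; the estimator's error scales as 1/√n regardless, which is exactly why the deterministic alternatives below can outperform MC on smooth integrands.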

#### 4.2. Deterministic Cartesian-Product Quadrature Rule

[40] As a first alternative, a deterministic Cartesian-product (DCP) rule is considered. If the input vector *γ* = (*γ*_{1},…, *γ*_{d}) is *d*-dimensional, and Γ is the Cartesian product of one-dimensional domains Γ_{1},…, Γ_{d}, equation (10) becomes

$$\mathbb{E}[h(V_e)] = \int_{\Gamma_d} \cdots \int_{\Gamma_1} h(V_e(\gamma))\, f_\gamma(\gamma)\, d\gamma_1 \cdots d\gamma_d.$$

A 1-D rule 𝒬^{1} of level *L*_{1} is then applied to approximate the integral over Γ_{1}

$$\int_{\Gamma_1} h(V_e(\gamma))\, f_\gamma(\gamma)\, d\gamma_1 \approx \sum_{n_1=1}^{N_{L_1}} w_{n_1} \big[ h(V_e(\gamma))\, f_\gamma(\gamma) \big]_{\gamma_1 = \gamma_{n_1}},$$

where the *N*_{L_1} abscissae {*γ*_{n_1} ∈ Γ_{1} ∣ *n*_{1} = 1,…, *N*_{L_1}} are all deterministic.

[42] In the *d*-dimensional case, 𝒬^{1} is repeatedly applied to each of the other integrals, which produces the following approximation

$$\mathbb{E}[h(V_e)] \approx \mathcal{Q}_L^d[h(V_e)\,f_\gamma] = \sum_{n_d=1}^{N_{L_1}} \cdots \sum_{n_1=1}^{N_{L_1}} w_{n_1} \cdots w_{n_d} \big[ h(V_e(\gamma))\, f_\gamma(\gamma) \big]_{\gamma = (\gamma_{n_1},\ldots,\gamma_{n_d})}.$$

As such, the *d*-dimensional rule 𝒬_{L}^{d} benefits from the advantageous property of favoring smooth integrands over rough ones. This rule, however, results in a grid that requires *N*_{L} = (*N*_{L_1})^{d} evaluations of *V*_{e}(*γ*). Such an exponentially increasing complexity in terms of the dimension *d* of Γ, also known as the “curse of dimensionality,” is extremely penalizing for high dimensions, i.e., *d* ≥ 3.
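
As a sketch of this tensorization (a minimal illustration, not the paper's implementation; the 5-point Gauss-Legendre rule and *d* = 3 are arbitrary choices):

```python
import numpy as np
from itertools import product

def dcp_rule(x1, w1, d):
    """Tensorize a 1-D rule (abscissae x1, weights w1) into a d-dimensional
    Cartesian-product rule; the number of points grows as len(x1)**d."""
    pts = np.array(list(product(x1, repeat=d)))
    wts = np.array([np.prod(w) for w in product(w1, repeat=d)])
    return pts, wts

x1, w1 = np.polynomial.legendre.leggauss(5)   # 5-point Gauss rule on [-1, 1]
pts, wts = dcp_rule(x1, w1, 3)                # d = 3 -> 5**3 = 125 points
```

Already at *d* = 3 the grid needs 125 model evaluations, and each additional dimension multiplies the cost by another factor of 5, which is the curse of dimensionality in miniature.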

#### 4.3. Space-Filling-Curve Quadrature Rule

[43] For higher dimensions (*d* ≥ 3), a space-filling-curve (SFC) quadrature rule has been implemented. It is based on the transformation of the multidimensional integral over Γ into a one-dimensional curvilinear integral. This is achieved by constructing a Peano curve denoted *χ*_{γ}. If all components of *γ* = (*γ*_{1},…, *γ*_{d}) are statistically independent, they can be expressed in terms of a single scalar *s* ∈ [−*π*, +*π*] as follows [see *Saltelli et al.*, 1999]

$$\gamma_i(s) = G_i(\sin(\omega_i s)), \qquad i = 1, \ldots, d.$$

The functions *G*_{i} are the solutions to a differential equation involving *f*_{γ} and ensure that, for any region ℛ_{0} ⊂ Γ, the length of *χ*_{γ} contained in ℛ_{0} equals the probability of having samples *γ* belonging to ℛ_{0}. As *s* varies in [−*π*, +*π*], the points *γ*(*s*) describe a curve *χ*_{γ}, which can be made to pass arbitrarily close to any point of Γ by properly selecting the frequencies *ω*_{i}. For practical reasons, the frequencies *ω*_{i} are chosen as integers that form an incommensurate set of order *M* [*Schaibly and Shuler*, 1973], i.e.,

$$\sum_{i=1}^{d} a_i\,\omega_i \neq 0 \quad \text{for all integers } a_i \text{ such that } \sum_{i=1}^{d} |a_i| \leq M + 1.$$

The frequencies *ω*_{i} thus constitute a linearly independent set of order *M*. By applying Weyl's ergodicity theorem [*Weyl*, 1938], equation (10) is cast into a 1-D integral

$$\mathbb{E}[h(V_e)] = \frac{1}{2\pi} \int_{-\pi}^{+\pi} h(V_e(\gamma(s)))\, ds.$$

The one-dimensional integral in equation (31) is evaluated numerically by a 1-D quadrature rule

$$\mathcal{Q}_L^{\mathrm{SFC}}[h(V_e)] = \frac{1}{N_L} \sum_{n=1}^{N_L} h(V_e(\gamma(s_n))).$$

The abscissae *s*_{n} are equally spaced in [−*π*, +*π*] to ensure an exponential convergence rate for analytic functions. Regarding the complexity of this rule, Nyquist's criterion yields a lower bound for *N*_{L} as

$$N_L \geq 2M \max_i(\omega_i) + 1.$$

*Cukier et al.* [1975] have established an empirical formula that links *N*_{L} to the dimension *d* of Γ, i.e.,

$$N_L = \mathcal{O}\!\left(d^{2.5+\varepsilon}\right),$$

where 𝒪 is Landau's symbol and ɛ > 0. In particular, for *M* = 4, *N*_{L} ≈ 2.6 *d*^{2.5}, which is significantly lower than the exponential complexity of the DCP rule. The accuracy of the SFC rule depends on the ability of the search curve to fill the space Γ, which follows from the incommensurability of the frequencies *ω*_{i}. The convergence rate of the SFC rule exploits the smoothness of the integrand through the coefficient *M*: this coefficient determines the order beyond which interference terms in the Fourier spectrum of the integrand are rejected.
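
The search-curve construction can be sketched numerically. The following is a minimal illustration for *d* = 3 inputs assumed uniform on [0, 1], for which the transform *G*_{i} reduces to the well-known form *γ*_{i}(*s*) = 1/2 + arcsin(sin(*ω*_{i} *s*))/*π*; the frequency set and the toy integrand are illustrative choices, not taken from the paper:

```python
import numpy as np

# Assumed integer frequencies, free of low-order interferences (illustrative)
omega = np.array([11, 21, 29])                 # d = 3
M = 4
N = 2 * M * omega.max() + 1                    # Nyquist lower bound, N = 233

s = np.linspace(-np.pi, np.pi, N, endpoint=False)
# Search-curve transform for inputs uniform on [0, 1]:
# gamma_i(s) = 1/2 + arcsin(sin(omega_i * s)) / pi
gamma = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi

h = lambda g: np.prod(g, axis=1)               # toy integrand, exact mean 1/8
estimate = h(gamma).mean()
```

With only *N* = 233 equally spaced curve samples, the three-dimensional mean is recovered accurately, whereas a comparable Cartesian-product grid would need its point count cubed.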

[44] Similarities can be found between the SFC rule and so-called lattice rules [*Sloan and Joe*, 1994], e.g., regarding their constructions based on the Fourier analysis of the integrand. The main difference lies, however, in the presence of the probability density *f*_{γ} in the definition of the search curve *χ*_{γ}.

### 6. Conclusion

[83] A probabilistic approach has been presented to statistically quantify uncertainties in electromagnetic interactions. This method relies on the efficient use of quadrature rules to compute statistical moments of the observables. For higher dimensions, the space-filling-curve rule proves to be very efficient in terms of complexity and convergence rate, when compared to the Monte-Carlo rule, which in turn is far more efficient than a deterministic Cartesian-product rule.

[84] The approach has been applied to a fully stochastic interaction between a random plane wave and a random wire geometry. The complex-valued observables, chosen as the coefficients of a Thévenin model, have been handled as real-valued vectors through their real and imaginary components. Chebyshev's inequality has highlighted the isotropic and strict nature of the variance when the spread of complex observables is measured. A finer statistical characterization of both components of the observable was possible with the aid of the average and the covariance of the observables. The correlation coefficient has highlighted the statistical coupling that generally exists between the components of complex observables. The principal-component representation provided a refined, decoupled, and conformal quantification of the stochastic parameters studied.

[85] Particular differences have been noted between the distribution of the impedance coefficients, which measure the self-interaction of the stochastic geometry, and the distribution of the induced voltage sources, which measure the interaction of the stochastic geometry with external sources. This is probably not specific to the thin-wire example but reveals that, when the geometry plays the double role of emitter and receiver, as is the case when the impedance coefficients are computed, additional correlations are induced between the real and imaginary parts of the observable.

[86] The determination of additional qualitative information on the probability distribution of the Thévenin parameters requires the analysis of higher-order statistical moments such as the skewness and the kurtosis. This is the subject of our future work.