A probabilistic approach to wave propagation and scattering



[1] The probabilistic approach to wave propagation starts, much as ray theory does, from the representation of the wave field as the product of an amplitude and an exponential of the eikonal, which is computed by a canonical technique of analytical mechanics. An important difference, however, is that the amplitude is not approximated but is represented by exact probabilistic formulas that admit efficient numerical evaluation, which makes the approach a direct improvement of many asymptotic solutions. This approach is shown to be an effective tool for the analysis of numerous wave propagation problems, including wave diffraction by a screen occupying a plane angular sector and electromagnetic diffraction by a wedge with anisotropic impedance boundary conditions.

1. Introduction

[2] In the theory of wave propagation, solutions of the Helmholtz equation

\nabla^2 \Phi(\mathbf{x}) + k^2\kappa^2(\mathbf{x})\,\Phi(\mathbf{x}) = 0 \qquad (1)

are customarily sought in the product form \Phi(\mathbf{x}) = u(\mathbf{x})\,e^{ikS(\mathbf{x})}, where the phase S(\mathbf{x}) satisfies the eikonal equation [\nabla S(\mathbf{x})]^2 = \kappa^2(\mathbf{x}), and the amplitude u(\mathbf{x}) satisfies the complete transport equation

\nabla^2 u + 2ik\,\nabla S\cdot\nabla u + ik\,(\nabla^2 S)\,u = 0. \qquad (2)

[3] There are at least three reasons justifying the use of such a representation of the wave field: (1) the eikonal equation admits a constructive solution by the canonical Hamilton-Jacobi method of analytical mechanics; (2) the structure of the eikonal is closely connected with the intuitively clear idea of propagation along rays; and (3) in many cases the amplitude u(\mathbf{x}) can be approximated sufficiently well by the ray theory solution u_0 of the equation obtained from (2) by dropping its first term.

[4] If the first term in (2) is dropped, then the resulting first-order equation has the solution

u_0(\mathbf{x}) = \frac{C}{\sqrt{J(\mathbf{x})}}, \qquad (3)

where C is constant along each ray, J(\mathbf{x}) is a characteristic of the vector field \nabla S widely known in the literature as its “geometrical divergence,” and \mathbf{x}_t is the solution of the ordinary differential equation d\mathbf{x}_t = -\nabla S(\mathbf{x}_t)\,dt, \mathbf{x}_0 = \mathbf{x}, so that the trajectory of \mathbf{x}_t is the ray along which the wave arrives at \mathbf{x}.

[5] It is clear that the approximation u ≈ u_0 is accurate only when k ≫ 1 and when the geometrical divergence J(\mathbf{x}) has no singularities in the vicinity of the ray passing through the observation point, which is not the case in many important situations arising, for example, in problems of propagation of low-frequency waves, problems of diffraction, and problems of wave propagation in inhomogeneous media. Such limitations have naturally generated numerous attempts to improve the elementary approximation u ≈ u_0, either by constructing a series u ≈ u_0 + u_1 + …, by a more sophisticated choice of the initial approximation u_0, or by a combination of both ideas.

[6] Although much progress has been achieved in finding asymptotic or approximate solutions of the complete transport equation (2), it is nevertheless instructive and useful to observe that the exact solution u(\mathbf{x}) can be represented by explicit probabilistic formulas which are exact in the same rigorous sense as f(x) = sin(x) is an exact solution of f″ + f = 0. Correspondingly, these solutions do not fail anywhere, including at caustics, and they do not lose any information that may be used for the analysis of the physical phenomena.

[7] The basic ideas of the probabilistic approach to wave propagation trace back to the 1920s–1930s, when the link between partial differential equations and Brownian motion was first observed [Phillips and Wiener, 1923; Courant et al., 1928; Petrovsky, 1934], but rapid development in this area came only after the publication of the landmark papers of Feynman [1942, 1948] and Kac [1949], which presented similar but at the same time very different results: Feynman [1942, 1948] represented solutions of the Schrödinger equation by heuristically introduced path integrals which did not admit a probabilistic interpretation, while Kac [1949] adapted Feynman's formula to the heat conduction equation, which he solved by means of rigorously justified Wiener integration in a functional space with a clear probabilistic sense.

[8] Since the Schrödinger equation is closely related to the Helmholtz equation, it is not surprising that there have been attempts to employ Feynman's path integral for the analysis of wave propagation. In the first papers exploring this direction [Buslaev, 1967; Keller and McLaughlin, 1975] the ray approximation of the wave field was derived from the path integral solution of the Helmholtz equation. Later, path integrals were used for numerical simulations of acoustic [Schlottmann, 1999] and electromagnetic [Nevels et al., 2000] waves, but as mentioned in the survey by Galdi et al. [2000], the prospects for broader application of path integrals to wave propagation were limited, presumably because of the notorious difficulty of computing Feynman path integrals.

[9] It is well known [Feynman, 1998] that the probabilistic formulas employed in Kac's solution [Kac, 1949] of the heat conduction equation rest on a rigorous mathematical foundation and admit efficient numerical simulation, but this equation is not directly connected to the Helmholtz equation describing wave propagation. Nevertheless, it has recently been found that there is a natural way of solving the Helmholtz equation by a probabilistic “random walk” method which is based on Kac's formula and provides a direct improvement of the simple ray approximation of the theoretically exact solution of the Helmholtz equation.

[10] In the next section we briefly discuss the principles of random walks and their relationship with differential equations. Then we derive solutions of the Helmholtz equation which directly improve the approximation provided by ray theory. Finally, to illustrate an application of the random walk method to problems of diffraction, we derive a probabilistic solution of the two-dimensional problem of diffraction by a wedge with impedance boundary conditions.

2. Probabilistic Solutions of Differential Equations

[11] To expose the relationship between differential equations and random motions, we first need to introduce the notions of Brownian motion, of Brownian motion with a drift, and of reflected Brownian motion.

[12] Suppose a particle moves along the real axis −∞ < x < ∞, starting at the time t = 0 from x = 0 and jumping at the instants t_n = nΔt the distance ε in either of two equally probable directions (see the left diagram of Figure 1). Then the particle's position x_n in the time interval [t_n, t_{n+1}) prior to the (n + 1)th jump is represented by the sum x_n = \sum_{\nu=1}^{n} \Delta x_\nu of independent random variables Δx_ν = ±ε with two equally probable values. The sequences x_n and t_n determine a piecewise constant function \tilde{x}_t = x_{n(t)}, where t_{n(t)} = \lfloor t/\Delta t\rfloor\,\Delta t is the last instant of the series t_n preceding or coinciding with t. It is well known [Dynkin, 1965; Wiener, 1923] that if the time and space meshes decrease together as Δt = ε² → 0, then the jump motion \tilde{x}_t converges in a suitable sense to a continuous random motion w_t, usually referred to as a one-dimensional Brownian motion or, equivalently, a one-dimensional Wiener process. The Brownian motion in ℝ^N is defined as a superposition \mathbf{w}_t = (w_t^1, w_t^2, …, w_t^N) of independent one-dimensional Brownian motions along each of the Cartesian axes (see the right diagram of Figure 1).

Figure 1.

Discrete Brownian motions in ℝ and in ℝ².
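The convergence Δt = ε² → 0 described above is easy to check numerically. The following sketch (ours, not from the paper; all parameter values are illustrative) simulates the discrete jump motion and verifies that the variance of its position at time t approaches t, the variance of the limiting Brownian motion w_t.

```python
import random

# Discrete random walk with spatial step eps and time step dt = eps**2,
# sampled at time t; in the limit eps -> 0 this converges to Brownian
# motion, whose position at time t has mean 0 and variance t.
random.seed(0)

def walk_position(t, eps):
    """Position of the jump process at time t (spatial step eps, dt = eps**2)."""
    n_steps = int(t / eps**2)
    x = 0.0
    for _ in range(n_steps):
        x += eps if random.random() < 0.5 else -eps
    return x

t, eps, n_paths = 1.0, 0.05, 4000
samples = [walk_position(t, eps) for _ in range(n_paths)]
mean = sum(samples) / n_paths
var = sum((s - mean) ** 2 for s in samples) / n_paths
print(mean, var)  # mean near 0, variance near t = 1
```

With ε = 0.05 the walk makes 400 jumps per unit time; the sample variance over 4000 paths matches t = 1 to within a few percent.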

[13] We are now in a position to derive a probabilistic solution of the equation

\frac{1}{2}\,\varphi''(x) + B\,\varphi(x) + F(x) = 0, \qquad (4)

considered, for transparency, on the line −∞ < x < ∞.

[14] It is clear that by using the approximation

\varphi''(x) \approx \frac{\varphi(x+\varepsilon) - 2\varphi(x) + \varphi(x-\varepsilon)}{\varepsilon^2}, \qquad (5)

we can represent the solution of (4) as

\varphi(x) = \frac{\frac{1}{2}\left[\varphi(x+\varepsilon) + \varphi(x-\varepsilon)\right] + \varepsilon^2 F(x)}{1 - \varepsilon^2 B}, \qquad (6)

which can also be written in the form

\varphi(x) = (1 + \varepsilon^2 B)\,\mathbf{E}\,\varphi(x \pm \varepsilon) + \varepsilon^2 F(x), \qquad (7)

where E denotes the average computed over the random choices of equally probable signs in ϕ(x ± ε). Then, the values ϕ(x ± ε) involved in (7) can themselves be computed by formula (7), resulting in the representation

\varphi(x) = (1 + \varepsilon^2 B)^2\,\mathbf{E}\,\varphi(x \pm \varepsilon \pm \varepsilon) + \varepsilon^2\,\mathbf{E}\left[F(x) + (1 + \varepsilon^2 B)\,F(x \pm \varepsilon)\right]. \qquad (8)

Obviously, the described iteration of (7) can be repeated as many times as desired, and after n iterations we arrive at the representation

\varphi(x) = (1 + \varepsilon^2 B)^n\,\mathbf{E}\,\varphi(x_n) + \varepsilon^2\,\mathbf{E}\sum_{m=0}^{n-1}(1 + \varepsilon^2 B)^m F(x_m), \qquad (9)

where

x_m = x + \sum_{\nu=1}^{m}\Delta x_\nu, \qquad \Delta x_\nu = \pm\varepsilon, \qquad (10)

is a position on the n-legged discrete random walk with the spatial step Δx = ε corresponding to the chronological step Δt = ε². Then, passing to the limit ε → 0 we convert (9) to the expression

\varphi(x) = \mathbf{E}\left[e^{Bt}\varphi(\xi_t)\right] + \mathbf{E}\int_0^{t} e^{Bs}F(\xi_s)\,ds, \qquad (11)

where the mathematical expectation is computed over all possible trajectories of the Brownian motion ξt = x + wt launched from the observation point ξ0 = x.

[15] If the solution ϕ(x) of equation (4) is bounded and if B < 0, then in passing to the limit t → ∞ we arrive at the Feynman-Kac formula

\varphi(x) = \mathbf{E}\int_0^{\infty} e^{Bt}F(x + w_t)\,dt, \qquad (12)

where wt is the standard Brownian motion.
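Formula (12) can be tested directly by Monte Carlo simulation. The sketch below is our own illustration (the coefficients and tolerances are chosen for convenience, not taken from the paper): with B = −1 and F(x) = x² in (4), the integral in (12) evaluates in closed form to ϕ(x) = x² + 1, since E(x + w_t)² = x² + t, and the simulation reproduces ϕ(1) = 2 by averaging over Brownian paths.

```python
import math, random

# Monte Carlo evaluation of the Feynman-Kac formula (12):
#   phi(x) = E integral_0^inf F(x + w_t) e^{Bt} dt,
# for equation (4) with B = -1, F(x) = x**2, for which the formula
# gives phi(x) = x**2 + 1 (because E (x + w_t)**2 = x**2 + t).
random.seed(1)

def feynman_kac(x, B=-1.0, T=10.0, dt=0.02, n_paths=4000):
    total = 0.0
    for _ in range(n_paths):
        w, acc, t = 0.0, 0.0, 0.0
        while t < T:                      # truncating at T costs ~e^{-T}
            acc += (x + w) ** 2 * math.exp(B * t) * dt
            w += random.gauss(0.0, math.sqrt(dt))
            t += dt
        total += acc
    return total / n_paths

estimate = feynman_kac(1.0)
print(estimate)  # exact value: 1**2 + 1 = 2
```

The statistical error with 4000 paths is a few percent, which already illustrates the point of the section: the exact amplitude is accessible by direct averaging.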

[16] Equation (4) is not the only one that can be explicitly solved by averaging over trajectories of random motions. In particular, the more general equation with variable coefficients

\frac{\sigma^2}{2}\,\nabla^2\varphi + \mathbf{a}(\mathbf{x})\cdot\nabla\varphi + B(\mathbf{x})\,\varphi + F(\mathbf{x}) = 0, \qquad (13)

can be solved by a formula that is similar to, but more general than, (12):

\varphi(\mathbf{x}) = \mathbf{E}\int_0^{\infty} F(\xi_t)\,\exp\!\left(\int_0^{t} B(\xi_s)\,ds\right) dt, \qquad (14)

in which the averaging is over a Brownian motion with a drift, described below.

[17] Let \mathbf{a}(\mathbf{x}) be a vector field in ℝ^N, and let ξ_t be a random motion (stochastic process) in ℝ^N launched from \mathbf{x} and consisting of the jumps

\Delta\xi_t = \mathbf{a}(\xi_t)\,\Delta t + \sigma\,\mathbf{w}_{\Delta t}, \qquad (15)

where σ\mathbf{w}_{Δt} is the Brownian displacement on the time interval Δt. Then, passing in (15) to the limit Δt → 0, we obtain the stochastic differential equation

d\xi_t = \mathbf{a}(\xi_t)\,dt + \sigma\,d\mathbf{w}_t, \qquad \xi_0 = \mathbf{x}, \qquad (16)

which is usually referred to as Ito's stochastic differential equation [Dynkin, 1965; Ito and McKean, 1965]. It is clear from (15) that the jump Δξ_t can be considered as a superposition of the deterministic move \mathbf{a}(\xi_t)Δt and the Brownian displacement σ\mathbf{w}_{Δt}. Because of this interpretation, illustrated in Figure 2, the random motion described by (16) is referred to as a Brownian motion with a drift.
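The jump rule (15) is itself a numerical method (the Euler scheme) for Ito's equation (16). The sketch below is our own illustration, not from the paper; the linear drift a(x) = −x is chosen only because the exact mean of the resulting motion, x0·exp(−T), is known for comparison.

```python
import math, random

# Euler discretization of Ito's equation (16): each jump adds the
# deterministic drift move a(xi)*dt and a Brownian displacement
# sigma*sqrt(dt)*N(0, 1), exactly as in the jump rule (15).
random.seed(2)

def euler_path(x0, drift, sigma, T, dt):
    x, t = x0, 0.0
    while t < T:
        x += drift(x) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return x

# Linear drift a(x) = -x: the exact mean at time T is x0 * exp(-T).
x0, T, dt, n_paths = 2.0, 3.0, 0.01, 4000
mean = sum(euler_path(x0, lambda x: -x, 1.0, T, dt) for _ in range(n_paths)) / n_paths
print(mean)  # exact mean: 2 * exp(-3) ≈ 0.0996
```

The same loop, with the drift replaced by −∇S and a complex diffusion coefficient, is the computational core of the wave solutions in section 3.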

Figure 2.

Discrete Brownian motion with a drift.

[18] It is remarkable that the probabilistic approach can be extended from partial differential equations like (13) in unbounded domains to boundary value problems

\boldsymbol{\alpha}(\mathbf{x})\cdot\nabla\varphi(\mathbf{x}) + b(\mathbf{x})\,\varphi(\mathbf{x}) + f(\mathbf{x}) = 0, \qquad \mathbf{x}\in\partial G, \qquad (17)

formulated in a domain G ⊂ ℝ^N with the boundary ∂G. The coefficients \mathbf{a}, B and F of (13) are assumed to be defined inside G, while the coefficients \boldsymbol{\alpha}, b and f are defined on ∂G. Additionally, for definiteness, we assume that the vectors \boldsymbol{\alpha} are oriented inward, toward G.

[19] Application of the random walk method to the problem (17) is based on the idea of defining random motions on the closure G ∪ ∂G whose behavior inside G corresponds to the operator L_G = \frac{\sigma^2}{2}\nabla^2 + \mathbf{a}\cdot\nabla, and whose behavior on the boundary ∂G corresponds to the first-order operator L_{\partial G} = \boldsymbol{\alpha}\cdot\nabla. Since both operators L_G and L_{\partial G} are particular cases of the general second-order operator discussed above, it is natural to expect that inside G the random walk should be a Brownian motion with a drift associated with the vector field \mathbf{a}, and on the boundary ∂G it should be a deterministic motion along the vector \boldsymbol{\alpha}. More precisely, the random motion ξ_t can be defined by the equations

\Delta\xi_t = \begin{cases} \mathbf{a}(\xi_t)\,\Delta t + \sigma\,\mathbf{w}_{\Delta t}, & \xi_t \in G, \\ \boldsymbol{\alpha}(\xi_t)\,\Delta t, & \xi_t \in \partial G, \end{cases} \qquad (18)

as illustrated in Figure 3.

Figure 3.

Discrete Brownian motion with reflections.

[20] Stochastic processes equation imagetx defined by (18) are known as reflecting random motions and they can also be introduced as continuous solutions of the stochastic differential equation

d\xi_t = \mathbf{a}(\xi_t)\,dt + \sigma\,d\mathbf{w}_t + \boldsymbol{\alpha}(\xi_t)\,d\lambda_t, \qquad (19)

with an additional unknown λ_t, which is required to be a continuous nondecreasing stochastic process increasing only on the “visiting” set \mathcal{T} = \{t : \xi_t \in \partial G\} of instants when the path ξ_t touches the boundary ∂G.

[21] The process λ_t is called the “local time at ∂G” because it admits interpretation as a measure of the time spent by the path ξ_t on the boundary ∂G. It follows from (18) that the local time can be approximated as \lambda_t = \lim_{\varepsilon\to 0}\varepsilon N_t^{\varepsilon}, where N_t^{\varepsilon} is the number of times the corresponding discrete random motion with spatial step ε visits the boundary before the time t.
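The reflection rule in (18) is easiest to see in the simplest setting, a Brownian motion on the half line reflected at the origin. In the sketch below (our illustration, not tied to any particular boundary value problem) a jump that would leave the domain is folded back inside by taking the absolute value, and the sample mean at t = 1 is compared with the known mean E|w_1| = sqrt(2/π) of reflected Brownian motion.

```python
import math, random

# Discrete reflected Brownian motion on [0, infinity): inside the domain
# the point makes Brownian jumps, and a jump that lands outside is folded
# back in by taking the absolute value. In distribution this reproduces
# |w_t|, so the sample mean at t = 1 should approach E|w_1| = sqrt(2/pi).
random.seed(3)

def reflected_position(T=1.0, dt=0.01):
    x, t = 0.0, 0.0
    while t < T:
        x = abs(x + math.sqrt(dt) * random.gauss(0.0, 1.0))
        t += dt
    return x

n_paths = 20000
mean = sum(reflected_position() for _ in range(n_paths)) / n_paths
print(mean)  # sqrt(2/pi) ≈ 0.798
```

Counting the steps at which the fold-back actually fires, multiplied by the spatial step, gives exactly the discrete approximation of the local time λ_t described above.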

[22] Random motions with reflections are discussed in detail by Skorokhod [1961], Watanabe [1971], Ito and McKean [1965], and Freidlin [1985], and it is shown by Freidlin [1985] that the solution of the problem (13), (17) can be represented by the formula

\varphi(\mathbf{x}) = \mathbf{E}\int_0^{\infty}\exp\!\left(\int_0^{t}B(\xi_s)\,ds + \int_0^{t}b(\xi_s)\,d\lambda_s\right)\left[F(\xi_t)\,dt + f(\xi_t)\,d\lambda_t\right], \qquad (20)

which may be regarded as an extension of the Feynman-Kac formula (14).

[23] It is worth emphasizing that the expression (20) remains valid in a quite general setting: it represents the solution of the problem (13), (17) in arbitrary domains, with arbitrary coefficients in the equation (13) and in the boundary conditions (17). In particular, this solution may be used in the case of the Dirichlet boundary conditions \varphi|_{\partial G} = f, which correspond to the coefficients \boldsymbol{\alpha} = 0 and b = −1 in (17). In this case the random motion ξ_t stops as soon as it hits the boundary ∂G, and the Feynman-Kac formula reduces to the simpler form

\varphi(\mathbf{x}) = \mathbf{E}\left[\int_0^{\tau}F(\xi_t)\,e^{\int_0^{t}B(\xi_s)\,ds}\,dt + f(\xi_\tau)\,e^{\int_0^{\tau}B(\xi_s)\,ds}\right], \qquad (21)

where τ is the “exit time” defined as the first time when equation imaget touches ∂G.
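In the Dirichlet case with B = 0 and F = 0, formula (21) reduces to ϕ(x) = E f(ξ_τ). The following sketch (our illustration) checks this for the one-dimensional Laplace equation on (0, 1) with boundary data f(0) = 0, f(1) = 1: the Monte Carlo estimate of the probability of exiting through x = 1 reproduces the exact solution ϕ(x) = x.

```python
import random

# Monte Carlo solution of phi'' = 0 on (0, 1) with phi(0) = 0, phi(1) = 1,
# via phi(x) = E[f(xi_tau)]: a symmetric discrete walk with spatial step
# eps = 0.05 starts at x = 0.3 (6 steps from the left end, 20 steps total)
# and runs until it exits; f(xi_tau) is 1 at the right end, 0 at the left.
random.seed(4)

def exit_value(start=6, n_total=20):
    pos = start
    while 0 < pos < n_total:
        pos += 1 if random.random() < 0.5 else -1
    return 1.0 if pos == n_total else 0.0

n_paths = 4000
estimate = sum(exit_value() for _ in range(n_paths)) / n_paths
print(estimate)  # exact solution: phi(0.3) = 0.3
```

Note that the answer is obtained at the single observation point x = 0.3 without ever forming a mesh over the whole interval, which is one of the practical attractions of the method emphasized in the conclusion.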

3. Probabilistic Solutions of the Helmholtz Equation

[24] A typical problem of wave propagation can be reduced to computation of the wave field Φ(\mathbf{x}) which solves the Helmholtz equation

\nabla^2\Phi(\mathbf{x}) + k^2\kappa^2(\mathbf{x})\,\Phi(\mathbf{x}) = 0, \qquad (22)

with the wave number subdivided for convenience into two components: a variable coefficient κ(\mathbf{x}) related to the material properties of the medium, and a constant k related to the frequency.

[25] Although equation (22) matches the structure of (13), its solution cannot be straightforwardly computed by the Feynman-Kac formula (14), because the positivity of the coefficient B(\mathbf{x}) = k²κ²(\mathbf{x}) > 0 leads to a divergent integral in (14). This difficulty, however, can be overcome by the standard idea of seeking the solution of (22) in the product form

\Phi(\mathbf{x}) = \varphi(\mathbf{x})\,e^{ikS(\mathbf{x})}, \qquad (23)

which has been known since the early 1800s as a convenient ansatz for exact and approximate solutions of the Helmholtz equation.

[26] Direct substitution of (23) into (22) makes it possible to split the Helmholtz equation into the eikonal equation

[\nabla S(\mathbf{x})]^2 = \kappa^2(\mathbf{x}) \qquad (24)

and the complete transport equation

\nabla^2\varphi + 2ik\,\nabla S\cdot\nabla\varphi + ik\,(\nabla^2 S)\,\varphi = 0, \qquad (25)

with the coefficients determined through S(\mathbf{x}).

[27] Equation (24) is the well-known eikonal equation of ray optics [Keller, 1958; Maslov and Fedoriuk, 1981], and it is a particular case of the Hamilton-Jacobi equation of classical mechanics [Arnold, 1989]. These equations have been exhaustively studied in the literature, so it may be taken for granted that the eikonal S(\mathbf{x}) is already known, either on the domain G or on a multisheeted Lagrangian manifold constructed over G similarly to the Riemann surfaces of the theory of analytic functions. After the eikonal is computed, equation (25) may be considered as a second-order partial differential equation with predefined coefficients.
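A minimal sketch of the canonical technique, written by us for illustration, is ray tracing by characteristics: rays obey dx/dt = p, dp/dt = grad(κ²)/2, and the eikonal accumulates as dS = |p|² dt along a ray. In a homogeneous medium with κ = 1 and a point source at the origin, the computed S must equal the distance |x|.

```python
import math

# Characteristics (ray tracing) for the eikonal equation |grad S|**2 = kappa**2.
# For kappa = 1, grad(kappa**2) = 0, so p is constant and rays are straight
# lines; S grows at the rate |p|**2 = 1 along each ray.

def trace_ray(direction, t_end, dt=1e-3):
    x = [0.0, 0.0]
    p = list(direction)          # |p| = kappa = 1 on the initial wavefront
    S, t = 0.0, 0.0
    while t < t_end:
        x[0] += p[0] * dt
        x[1] += p[1] * dt
        S += (p[0] ** 2 + p[1] ** 2) * dt   # dS = |p|^2 dt
        t += dt
    return x, S

x, S = trace_ray((math.cos(0.7), math.sin(0.7)), t_end=2.0)
dist = math.hypot(x[0], x[1])
print(S, dist)  # both close to 2.0, i.e. S(x) = |x|
```

For a variable κ the same loop applies with a nonzero update of p; the point here is only that S(\mathbf{x}) is computable ray by ray, after which (25) has known coefficients.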

[28] Equations (24) and (25) together are equivalent to the Helmholtz equation (22), and equation (25) has been widely used as the starting point of different approximate approaches to the general wave radiation problem.

[29] If k ≫ 1, one may neglect the first term in (25) and arrive at the “transport” equation 2\nabla S\cdot\nabla\varphi + (\nabla^2 S)\,\varphi = 0, widely used in the geometrical theory of diffraction [Keller, 1958; Maslov and Fedoriuk, 1981] for the derivation of short-wave asymptotic approximations of wave fields.

[30] Another approximate approach to equation (25) arises if instead of neglecting all of the first term in (25) we neglect only part of it. A broad spectrum of “parabolic equation” methods in the theory of high-frequency wave propagation is based on this idea originating from the contributions of Fock and Leontovich [Fock and Leontovich, 1946; Fock, 1965].

[31] It is remarkable that although much attention has been focused on the reduction of the complete transport equation (25) to simpler equations which admit efficient solutions, the probabilistic approach briefly described above makes it possible to solve the complete transport equation (25) directly, without any approximation.

[32] Indeed, observing that equation (25) matches the structure of (13), it is easy to conclude that its solution can be represented by the mathematical expectation

\varphi(\mathbf{x}) = \mathbf{E}\left[\varphi(\xi_t)\,\exp\!\left(-\frac{1}{2}\int_0^{t}\nabla^2 S(\xi_s)\,ds\right)\right], \qquad (26)

computed over the trajectories of the random motion ξ_t governed by the stochastic equation

d\xi_t = -\nabla S(\xi_t)\,dt + \sqrt{\frac{i}{k}}\;d\mathbf{w}_t, \qquad \xi_0 = \mathbf{x}, \qquad (27)

driven by the standard Brownian motion \mathbf{w}_t.

[33] A specific feature of the complete transport equation (25) is that its coefficient B = -\frac{1}{2}\nabla^2 S, which appears in the exponent in (26), is related to the coefficient \mathbf{a} = -\nabla S, which determines the drift of the random motion ξ_t. This relationship makes it possible to convert the solution (26) to an alternative form which establishes a bridge between the exact solution of the problem and its approximations provided by ray theory and the geometrical theory of diffraction.

[34] To transform the expression (26) we first observe that the Laplacian ∇²S can be represented by the Liouville formula [Arnold, 1989; Maslov and Fedoriuk, 1981], which states that

\nabla^2 S(\mathbf{x}) = \nabla S(\mathbf{x})\cdot\nabla\ln J(\mathbf{x}), \qquad (28)

where J(\mathbf{x}) is the “geometrical divergence” (see Figure 4) of the vector field \mathbf{a} = -\nabla S. Then, we recall the chain rule of stochastic calculus, which states that if ξ_t is a random motion controlled by the stochastic equation dξ_t = σd\mathbf{w}_t + \mathbf{a}dt, then the differential of f(ξ_t) is delivered by the formula

df(\xi_t) = \left[\frac{\sigma^2}{2}\nabla^2 f(\xi_t) + \mathbf{a}(\xi_t)\cdot\nabla f(\xi_t)\right]dt + \sigma\,\nabla f(\xi_t)\cdot d\mathbf{w}_t, \qquad (29)

usually referred to as Ito's formula [Dynkin, 1965; Ito and McKean, 1965]. In particular, applying Ito's formula (29) to the function f(ξ_t) = ln[J(ξ_t)], we obtain the identity

d\ln J(\xi_t) = \left[\frac{\sigma^2}{2}\nabla^2\ln J(\xi_t) - \nabla S(\xi_t)\cdot\nabla\ln J(\xi_t)\right]dt + \sigma\,\nabla\ln J(\xi_t)\cdot d\mathbf{w}_t, \qquad (30)

which may be combined with (28), resulting in the representation

\nabla^2 S(\xi_t)\,dt = -\,d\ln J(\xi_t) + \frac{\sigma^2}{2}\nabla^2\ln J(\xi_t)\,dt + \sigma\,\nabla\ln J(\xi_t)\cdot d\mathbf{w}_t. \qquad (31)

Finally, substituting (31) into (26) and taking into account the initial condition ξ_0 = \mathbf{x}, we arrive at the solution of equation (25) in the form

\varphi(\mathbf{x}) = \mathbf{E}\left[\varphi(\xi_t)\,\sqrt{\frac{J(\xi_t)}{J(\mathbf{x})}}\;\exp\!\left(-\frac{\sigma^2}{4}\int_0^{t}\nabla^2\ln J(\xi_s)\,ds - \frac{\sigma}{2}\int_0^{t}\nabla\ln J(\xi_s)\cdot d\mathbf{w}_s\right)\right], \qquad (32)

given in terms of the geometrical divergence J(\mathbf{x}) of the ray field \mathbf{a} = -\nabla S, with σ = \sqrt{i/k} as in (27).

Figure 4.

Geometrical divergence of the vector field.

[35] It should be noticed that the behavior of the solution (32) of equation (25) depends heavily on the wave number k. Consider, for example, the case k ≫ 1. Then, as follows from (27), the random component of the motion ξ_t becomes negligible, and this motion may be approximated by the deterministic motion along the ray defined as the integral line of the ordinary differential equation d\mathbf{x}_t = -\nabla S(\mathbf{x}_t)\,dt. Moreover, in cases when the geometrical divergence J(\mathbf{x}) is bounded, the integrals in the exponent in (32) tend to zero, and (32) can be approximated by the deterministic formula

\varphi(\mathbf{x}) \approx \varphi(\mathbf{x}_t)\,\sqrt{\frac{J(\mathbf{x}_t)}{J(\mathbf{x})}}, \qquad (33)

which is well known from ray theory.

[36] One of the most serious drawbacks of the ray method is that (33) fails to approximate the solution of the complete transport equation (25) near caustics, where J(\mathbf{x}) = 0. Since the function J(\mathbf{x}) also appears in the denominator of the exact solution (32), it is important to notice that (32) does not fail on caustics, because as J(\mathbf{x}) → 0 the exponent in (32) becomes highly oscillatory, which compensates for the vanishing of J(\mathbf{x}) in the denominator. On the other hand, to compute wave fields on caustics there is no need to use (32) at all, because the equivalent expression (26), which has no vanishing denominator, may be used instead.

4. Two-Dimensional Problem of Diffraction by an Impedance Wedge

[37] In the previous section we outlined the probabilistic approach to wave propagation in inhomogeneous media over the entire space. However, this method can also be employed for the analysis of wave radiation and diffraction. As an example illustrating the potential of the probabilistic approach, we apply it here to the two-dimensional problem of diffraction by a wedge with impedance boundary conditions.

[38] Let G be the infinite wedge r > 0, 0 < θ < α given in the polar coordinates (r, θ). Then a classic problem of diffraction of a plane incident wave U_0 = e^{-ikr\cos(\theta-\theta_0)} by the wedge G with impedance boundary conditions can be reduced to the Helmholtz equation

\frac{\partial^2 U}{\partial r^2} + \frac{1}{r}\frac{\partial U}{\partial r} + \frac{1}{r^2}\frac{\partial^2 U}{\partial\theta^2} + k^2 U = 0, \qquad (34)

complemented by the condition at the vertex U(r, θ) = O(1) as r → 0, by the boundary conditions

\frac{A_1}{ikr}\frac{\partial U}{\partial\theta} + ikB_1 U = 0 \ \text{ at } \ \theta = 0, \qquad -\frac{A_2}{ikr}\frac{\partial U}{\partial\theta} + ikB_2 U = 0 \ \text{ at } \ \theta = \alpha, \qquad (35)

and by the condition at infinity, which requires that the solution U(r, θ) contain no arriving component except for the incident wave U_0(r, θ). The ratios ikB_n/A_n, n = 1, 2, are usually referred to as the surface impedances, and it is well known that the inequalities

\mathrm{Re}\,\frac{ikB_n}{A_n} \geq 0, \qquad n = 1, 2, \qquad (36)

guarantee the existence of a unique solution of this problem.

[39] Elementary geometric-optical analysis suggests that the solution of this diffraction problem can be represented as the superposition

U(r, \theta) = U_g(r, \theta) + U_d(r, \theta) \qquad (37)

of the discontinuous wave fields U_g(r, θ) and U_d(r, θ) = u_d(r, θ)e^{ikr}, which are referred to as the geometric wave and the diffracted wave, respectively.

[40] The geometric field U_g(r, θ) consists of the incident wave and a finite number of reflected waves, which can be computed a priori by the laws of geometrical optics. To avoid too much emphasis on material that is not essential for our purposes, we do not present general formulas for all angles α and θ_0, but only mention that if the conditions (see Figure 5)

\theta_0 < \pi, \qquad \pi + \theta_0 < \alpha \qquad (38)

are satisfied, then the geometric field is defined by

U_g(r, \theta) = \chi(\theta_2 - \theta)\,e^{-ikr\cos(\theta-\theta_0)} + \Gamma\,\chi(\theta_1 - \theta)\,e^{-ikr\cos(\theta+\theta_0)}, \qquad (39)

where χ(θ) is the Heaviside step function, θ_n = π + (−1)^n θ_0, and

\Gamma = \frac{A_1\sin\theta_0 - ikB_1}{A_1\sin\theta_0 + ikB_1} \qquad (40)

is the reflection coefficient of the face θ = 0 illuminated by the incident wave.

Figure 5.

Geometry of the problem on the (r, θ) plane.

[41] Unlike the geometric field U_g(r, θ), the amplitude u_d(r, θ) of the diffracted field is an unknown function which has to obey the equation

\frac{\partial^2 u_d}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2 u_d}{\partial\theta^2} + \left(2ik + \frac{1}{r}\right)\frac{\partial u_d}{\partial r} + \frac{ik}{r}\,u_d = 0, \qquad (41)

accompanied by the interface conditions

u_d(r, \theta_1 + 0) - u_d(r, \theta_1 - 0) = \Gamma, \qquad (42)
u_d(r, \theta_2 + 0) - u_d(r, \theta_2 - 0) = 1, \qquad (43)
\frac{\partial u_d}{\partial\theta}(r, \theta_n + 0) = \frac{\partial u_d}{\partial\theta}(r, \theta_n - 0), \qquad n = 1, 2, \qquad (44)

together with the condition at infinity

u_d(r, \theta) \to 0 \ \text{ as } \ r \to \infty, \qquad (45)

and the impedance boundary conditions

\frac{A_1}{ikr}\frac{\partial u_d}{\partial\theta} + ikB_1 u_d = 0 \ \text{ at } \ \theta = 0, \qquad -\frac{A_2}{ikr}\frac{\partial u_d}{\partial\theta} + ikB_2 u_d = 0 \ \text{ at } \ \theta = \alpha. \qquad (46)

[42] Equation (41) in a wedge 0 < θ < α with homogeneous Dirichlet boundary conditions was studied by Budaev and Bogy [2003], where its exact solution is represented as a mathematical expectation computed over the trajectories of a specified random motion that is stopped as soon as it hits one of the faces θ = 0 or θ = α of the wedge. However, as discussed in Section 2, the probabilistic solution of equation (41) with the interface conditions (42)–(44) can be extended to problems with the impedance boundary conditions (46) by a simple modification of the behavior of the random motion at the boundary, which should reflect the motion back into the domain instead of stopping it.

[43] More precisely, using a straightforward combination of the general theory [Freidlin, 1985] outlined at the end of Section 2 with the specifics of equation (41) and the interface conditions (42)–(44), it is easy to arrive at the representation of the amplitude u_d(r, θ) in the form

u_d(r, \theta) = \mathbf{E}\sum_{\nu\geq 1}\delta_\nu\,Q_\nu\,\exp\!\left(\frac{(ik)^2 B_1}{A_1}\int_0^{\tau_\nu}\xi_t\,d\lambda_t^1 + \frac{(ik)^2 B_2}{A_2}\int_0^{\tau_\nu}\xi_t\,d\lambda_t^2\right), \qquad (47)

the exact meaning of which is explained below.

[44] The mathematical expectation E is computed over the trajectories of the independent random motions ξ_t and η_t, referred to hereafter as the radial and the angular motions, respectively. The radial motion is launched at the time t = 0 from the position ξ_0 = r and is controlled by the stochastic differential equation

d\xi_t = \xi_t\,dw_t^1 + \xi_t\left(\frac{1}{2} + ik\,\xi_t\right)dt, \qquad \xi_0 = r, \qquad (48)

where w_t^1 is the standard one-dimensional Brownian motion. As shown by Budaev and Bogy [2003], this motion is confined to the first quadrant Re(ξ_t) > 0, Im(ξ_t) ≥ 0 and has a drift toward the unreachable point ξ = i/2k. The angular motion η_t is launched from the position η_0 = θ and is governed by the equations

d\eta_t = \begin{cases} dw_t^2, & 0 < \eta_t < \alpha, \\ dt, & \eta_t = 0, \\ -dt, & \eta_t = \alpha, \end{cases} \qquad (49)

which show that inside the segment 0 ≤ η ≤ α the motion ηt runs as a standard Brownian motion, but when it reaches the segment's boundaries it is deterministically reflected back.

[45] The angular motion running inside the interval 0 ≤ η ≤ α crosses the interior points η = π ± θ_0 at the times t = τ_ν enumerated by the index ν ≥ 1, which determines the factors δ_ν and Q_ν of (47) by the following rules:

\delta_\nu = \begin{cases} +1, & \text{if at } t = \tau_\nu \text{ the interface is crossed from left to right,} \\ -1, & \text{if it is crossed from right to left,} \\ 0, & \text{if it is touched but not intersected,} \end{cases} \qquad (50)

Q_\nu = \begin{cases} \Gamma, & \text{if } \eta_{\tau_\nu} = \theta_1, \\ 1, & \text{if } \eta_{\tau_\nu} = \theta_2. \end{cases} \qquad (51)

Thus δ_ν records the direction in which the interface η = θ_1 or η = θ_2 is crossed at the time τ_ν, while the value of Q_ν is determined by the particular interface that is touched at that time. Finally, λ_t^1 and λ_t^2 in the integrals in (47) represent the local times of the angular motion η_t on the faces η = 0 and η = α.

[46] A rigorous discussion of stochastic differential equations, stochastic integrals, and local times can be found in the literature on stochastic processes [Dynkin, 1965; Freidlin, 1985], but for our purposes it suffices to view the random motions ξ_t, η_t and the integrals in (47) as the limits, as Δt → 0, of the discrete processes described below.

[47] The radial motion ξ_t controlled by the stochastic equation (48) can be considered as a sequence of random jumps

\Delta\xi_t = \xi_t\left[\Delta w_t + \left(\frac{1}{2} + ik\,\xi_t\right)(\Delta w_t)^2\right], \qquad \Delta w_t = \pm\sqrt{\Delta t}, \qquad (52)

following each other with an infinitesimally small time increment Δt → 0. Similarly, the angular motion η_t may be approximated by discrete jumps determined by the rule

\eta_{t+\Delta t} = \begin{cases} \eta_t \pm \sqrt{\Delta t}, & \text{if } 0 < \eta_t < \alpha, \\ \sqrt{\Delta t}, & \text{if } \eta_t \leq 0, \\ \alpha - \sqrt{\Delta t}, & \text{if } \eta_t \geq \alpha, \end{cases} \qquad (53)

depending on the current position of the moving point. These discrete approximations of the radial and angular random motions are closely related to the possibility of approximating the integrals in (47) by the Riemann sums

\int_0^{\tau}\xi_t\,d\lambda_t^{1,2} \approx \sum_{n}\chi_n^{1,2}\,\xi_{t_n}\sqrt{\Delta t}, \qquad (54)

where the factors

\chi_n^1 = \chi(-\eta_{t_n}), \qquad \chi_n^2 = \chi(\eta_{t_n} - \alpha) \qquad (55)

indicate the times when the angular motion η_t is reflected by the boundaries η = 0 and η = α, respectively.

[48] To illustrate the feasibility of the obtained probabilistic solution for computations in the problem of diffraction by a wedge with different impedances on its faces, we conducted a series of numerical simulations for the wedge of opening α = 266° exposed to the incident plane wave U_0(r, θ) = e^{-ikr\cos(\theta-\theta_0)} arriving along the ray θ_0 = 43°. In this configuration, which was selected for comparability with Osipov [2004], the shadow domain 223° < θ < 266° is illuminated only by the diffracted waves, the sector 137° < θ < 223° is open to the incident and diffracted waves, and the domain 0 < θ < 137° is exposed to the incident, reflected, and diffracted waves.

[49] Figure 6 shows the magnitudes of the total wave fields in the wedge with the coefficients A_1 = A_2 = 1 and B_1 = B_2 = ikB, with the impedance B ranging from B = 1/5 to B = 5. The dashed lines correspond to the impedances B = 1/2 and B = 2, while the solid lines correspond to the impedances B = 1/5 and B = 5, which were considered by Osipov [2004] by two conventional methods, including the Maliuzhinets closed-form solution. Since the computations reported by Osipov [2004] (Figure 1) were made along the arc kr = 4, we set k = 1 and r = 4, which allows us to compare the numerical results obtained by the probabilistic method with those delivered by more traditional techniques. The results provided by the two methods appear to be identical.

Figure 6.

Simulated total wave fields.

[50] All numerical results were obtained by averaging 2000 discrete random walks (52)–(54) with the time increment Δt = 0.01. The computations were carried out on a laptop PC using the simple MATLAB code presented below. We include this code to illustrate its remarkably short length and transparency.

```matlab
function [u, U] = point(r0, f0, a0, M1, M2, alpha, k, ep, N)
ik = i.*k;
Cr = (sin(a0) - M1)./(sin(a0) + M1);            % reflection coefficient (40)
[f, r, Q] = deal(repmat(f0, N, 1), repmat(r0, N, 1), ones(N, 1));
[u, J1, J2] = deal(0, (f > pi - a0), (f > pi + a0));
while ~isempty(r)
    ds = ep./abs(r);
    wr = ds.*sign(rand(length(r), 1) - 0.5);    % radial random increments
    wf = ds.*sign(rand(length(r), 1) - 0.5);    % angular random increments
    f = f + wf;
    [Ja, Jb] = deal(f > pi - a0, f > pi + a0);
    u = u + sum(Q.*(Cr.*(J1 - Ja) + (J2 - Jb)))./N;  % interface crossings (50)-(51)
    [I1, I2] = deal(f < 0.02, f > alpha - 0.02);     % visits to the faces
    ds(ds > ep) = ep;
    f(I1) = ds(I1);                             % reflection from theta = 0
    f(I2) = alpha - ds(I2);                     % reflection from theta = alpha
    Q(I1) = Q(I1).*exp(ik.*ds(I1).*M1.*r(I1));  % impedance factors at the faces
    Q(I2) = Q(I2).*exp(ik.*ds(I2).*M2.*r(I2));
    Q = Q.*exp(0.25.*ik.*r.*wr.^2);             % radial weight (split step)
    r = r.*(1 + wr + (0.5 + ik.*r).*wr.^2);     % radial jump (52)
    Q = Q.*exp(0.25.*ik.*r.*wr.^2);
    I = abs(Q) > 1e-3;                          % drop negligible walkers
    [f, r, Q, J1, J2] = deal(f(I), r(I), Q(I), Ja(I), Jb(I));
end
U = u.*exp(ik.*r0);
if f0 < pi - a0, U = U + Cr.*exp(-ik.*r0.*cos(f0 + a0)); end
if f0 < pi + a0, U = U + exp(-ik.*r0.*cos(f0 - a0)); end
```

[51] The input parameters r0 and f0 are the polar coordinates of the observation point; a0 is the incidence angle θ_0; M1 = ikB_1/A_1 and M2 = ikB_2/A_2 are the impedances on θ = 0 and θ = α; alpha is the wedge angle α; k is the wave number; ep = √Δt is the spatial step; and N is the number of averaged random walks. The output parameter u = u_d(r, θ) is the amplitude of the diffracted field, and U = U(r, θ) is the total field.

[52] Finally, it should be mentioned that we made no attempt to find the most efficient numerical scheme for simulating the solution (47), or even to optimize the scheme employed. Although we recognize the importance of developing efficient algorithms, we believe that this should be the subject of separate research.

5. Conclusion

[53] The results presented here show that the synthesis of the ray method with the probabilistic technique provides a promising approach to problems of wave propagation and diffraction which may be used both for effective numerical evaluation and for asymptotic analysis. The advantages of this combination include, but are not limited to, the physical meaningfulness retained from ray theory; the possibility of computing solutions at individual points instead of on massive meshes; versatility; and numerical implementations that may employ simple and scalable parallel algorithms with minimal requirements on computer memory.

[54] The basic ideas of the probabilistic approach to wave propagation were developed by Budaev and Bogy [2001, 2002b]. Budaev and Bogy [2002a] showed that the random walk method makes it possible to describe such phenomena as backscattering, which is predicted neither by ray theory nor by the more general method of parabolic equations. More recently, the probabilistic method was successfully applied to the formidable three-dimensional problem of diffraction by a plane angular sector: Budaev and Bogy [2004] considered this problem with Dirichlet boundary conditions, and Budaev and Bogy [2005a] extended the analysis to Neumann boundary conditions. Finally, Budaev and Bogy [2005b] extended the random walk method from scalar problems to the three-dimensional vector problem of electromagnetic wave diffraction by a wedge with anisotropic impedance boundary conditions.

[55] All of these together make the random walk method attractive for the analysis of wave propagation, and we anticipate that it will evolve into a practical and broadly used tool.


[56] This research was supported by NSF grants CMS-0098418 and CMS-0408381 and by the William S. Floyd Jr. Distinguished Professorship in Engineering held by D. Bogy.