Hitting time distributions for efficient simulations of drift‐diffusion processes

Numerical solutions to partial differential equations in anisotropic, heterogeneous media, obtained via their probabilistic representations, are useful for a number of purposes, including our own interest in biomedical simulations. These solutions are obtained from a walk, with both random and deterministic components, hitting a boundary. Hitting time distributions are required to simulate such processes efficiently. The distributions of the hitting time T and the hitting place X_T on a surface for a particle undergoing both diffusion and drift are presented here, generalizing previous work. Importantly, the distributions obtained directly are, on the whole, not useful for numerical work, being expressed as poorly and nonuniformly convergent series. Resummed series with rapid convergence over a useful range of hitting times to the boundary are obtained here in several ways. A numerical inverse is constructed to implement the random walk. We have used these hitting time distributions, combined with efficient inverse-function construction, to speed up simulations of fluid and particle transport by permitting steps as large as the desired accuracy allows.


INTRODUCTION
Computer simulation of a Brownian process has, as is well known, a vast number of applications. In our own work, we have used such simulations for applications in neuromedicine, in particular to optimize drug delivery for serious neurological diseases. [1][2][3] For such applications, we need numerical solutions of both parabolic and elliptic partial differential equations that describe fluid flow and particle transport in tissue, allowing for the inhomogeneities and anisotropies that are necessarily prevalent in such a medium. 4 (Of course, such equations have a vastly greater field of applications than our own current interests.) We have used the probabilistic representations that mathematicians have worked out for such general cases (with nonconstant coefficients and anisotropic diffusion), utilizing in particular the results detailed in the monograph of Reference 5. For this purpose we need to simulate Brownian processes with an arbitrary drift vector field v(x) and diffusion tensor field D(x). In our applications, these fields vary spatially but are considered fixed in time; we shall not discuss the case where the fields are inhomogeneous in time as well. It is important in our applications that these fields vary within the medium, so that we cannot directly use solution representations for homogeneous media. We have made use of the general formulas for such spatially dependent equations in initial value, boundary value, and initial/boundary value problems, as indicated in our work quoted. 4 We omit references that are specialized to solution representations for homogeneous media, which may be obtained from stochastic walks on the boundary of the medium. Other workers have used simulations of general Brownian processes for other purposes. In particular, References 6 and 7 have pointed out that the straightforward way of simulating a Brownian walk can be quite inefficient.
In their papers, they had large homogeneous regions, and the standard way of simulation is to take given time steps Δt, which must be sufficiently small that the walker remains in a region of constant drift and diffusion coefficients with a probability close to unity, according to the accuracy desired. The step length is then sampled from a Gaussian distribution, the step direction being uniformly distributed over the area of a sphere. For most applications, this ends up being too demanding, and the simulation times become unacceptably large even for modest accuracies 6,7 because the requirement that the time step remain small is too stringent. In order, therefore, to simulate the random walk more efficiently, 6,7 we need to take as large a step as the homogeneity of the diffusion coefficients allows (in their case), or as the constancy of the drift vector v and diffusion tensor D allows in the more general case. In fact, we need to convert from a fixed time step method to a process with fixed step length ΔR. This means that we need to compute the distribution of the hitting time T and hitting place X_T on a convenient surface when a particle with drift and diffusion is released from a point at the center of the volume enclosed by the surface, assuming sufficient symmetry. The simulation may then proceed by choosing a point on the surface with the specified hitting place distribution, and marking a random time as having elapsed, chosen according to the hitting time distribution. The simulation then continues in a similar fashion from this point on. We denote the surface by 𝒮 and the closure of its interior by 𝒮̄. The purpose of this paper is to generalize, to the case of a drift velocity field (and an anisotropic diffusion tensor field), the results on hitting time in Reference 6, which were restricted to the standard Wiener process of isotropic diffusion in the absence of drift.
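The fixed-step-length scheme just described can be sketched in code. The sketch below is a minimal illustration for the homogeneous, isotropic, zero-drift case only, assuming the engineer's convention ∂u/∂t = D∇²u (the paper's own conventions carry an extra factor of 2); all function names are our own, and the hitting-time sampler inverts the small-time (Poisson-resummed) form of the classical zero-drift distribution by bisection.

```python
import math
import random

def hit_time_cdf(tau, terms=50):
    """P(T <= tau) in dimensionless time tau = D*T/R^2, zero drift, 3D sphere:
    the Poisson-resummed form (2/sqrt(pi*tau)) * sum_{k>=1} exp(-(k-1/2)^2/tau)."""
    s = sum(math.exp(-(k - 0.5) ** 2 / tau) for k in range(1, terms + 1))
    return 2.0 / math.sqrt(math.pi * tau) * s

def sample_hit_time(rng):
    """Inverse-transform sample of the dimensionless hitting time by bisection."""
    u = rng.random()
    lo, hi = 1e-6, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hit_time_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_direction(rng):
    """Uniform direction on the unit sphere."""
    z = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def walk(rng, D, R, domain_radius):
    """Fixed-step-length walk from the origin until the walker's surrounding
    sphere of radius R no longer fits inside the domain ball."""
    x = [0.0, 0.0, 0.0]
    t = 0.0
    while math.dist(x, (0.0, 0.0, 0.0)) < domain_radius - R:
        d = sample_direction(rng)
        t += sample_hit_time(rng) * R * R / D   # restore dimensions
        x = [xi + R * di for xi, di in zip(x, d)]
    return x, t
```

Each iteration replaces many small Gaussian time steps by a single jump of length R, which is the source of the speed-up discussed in this paper.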
Once we have obtained these distributions, we need to find the inverse functions in order to simulate the walk,* at least in one standard way of generating random variables. It will turn out that the distributions (including the cases found earlier) are expressed as formal power series which are not uniformly convergent, and indeed are very poorly convergent and essentially useless for "smaller" steps, where the meaning of "smaller" will be quantified below (essentially, where the distribution is not very close to unity). We therefore develop resummed series with fast convergence by using the Poisson summation formula. To obtain random samples for simulation, we indicate one simple form for the numerical inverse that we have used extensively. However, the principal purpose of this paper is to compute the hitting time distributions in useful (rapidly convergent) form, for whatever purposes they may serve for the community.
After this work was completed, we became aware of two published papers 8,9 in this area. We reference their work in the section where we write the relevant formulas (Equations (32) and (34) in Section 5.3), which were previously obtained by these authors by other means, and we discuss their work in more detail in our Conclusion section.
We compute the distributions required in three steps. First, we review the known results on this distribution when there is no drift and the diffusion is isotropic. Then, we compute the distribution for the case where there is drift, and make some remarks on the case we are actually interested in, where the diffusion is anisotropic as well. All results are restricted to three dimensions. Then we compute a better convergent representation of the hitting time, in several ways. We build on deep results obtained by mathematicians decades ago. The principal results obtained here are the hitting distributions (Section 5.3). In the appendices, we develop some formulas that would otherwise have impeded the discussion in the main text. The notation we use in this paper is summarized in Table 1. *If a random variable T has distribution P(t) and if ←t(u) is its inverse, that is, P(←t(u)) = u, then if u is a uniformly distributed random variable in the range [0, 1], the variable ←t(u) will have the distribution P. In the language of composite functions, we would express the inverse relationship as saying that ←t∘P and P∘←t each maps its respective domain onto itself and is the identity thereon. Before we commence our study proper of the hitting time distributions, we make some remarks about our original motivation, namely applications in computational medicine to drug delivery. All formulas of the basic probabilistic solutions are deferred to Appendix A, since they are only background, and none of the results developed in this paper depend on the formulas quoted in this appendix.
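The footnote's recipe is inverse-transform sampling, which can be made concrete with a distribution whose inverse is available in closed form; the exponential distribution below is only an illustrative stand-in for the hitting time distributions of this paper.

```python
import math
import random

def cdf(t):
    """P(t) = 1 - exp(-t): cumulative distribution of a unit-rate exponential."""
    return 1.0 - math.exp(-t)

def inverse_cdf(u):
    """The inverse function: cdf(inverse_cdf(u)) == u for u in [0, 1)."""
    return -math.log(1.0 - u)

# Feeding uniform u through the inverse produces samples distributed as cdf:
rng = random.Random(42)
samples = [inverse_cdf(rng.random()) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # should be close to the exact mean, 1
```

For the hitting time distributions of this paper no closed-form inverse exists, which is why a numerical inverse is constructed later.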

Probabilistic representations for partial differential equations and their application to advective transport in biomedicine
We refer to our papers, such as Reference 4, which provide details of our application interests. The equations are those for the transport of therapeutic molecules in solutions injected directly into brain (and other) tissue, thereby carried (advected) by the flow of the fluid. The equations describing the transport of the molecule are therefore of the advection-diffusion type, along with processes that account for losses of the molecule and of excess fluid. The losses of fluid and molecule are through escape into the systemic circulation (blood vessels). In addition, molecules are lost through degradation, binding, and other mechanisms. In this paper, we include only linear loss rates. The equations are of parabolic type (the heat equation in inhomogeneous and anisotropic media); Appendix A quotes the mathematicians' probabilistic representation 5 of solutions for such equations and discusses the correspondence between the parameters, to translate from their formulas to the equations used in our application. The advection term involves the fluid velocity; in the application, therefore, the fluid velocity must be solved for before the solutions for drug distribution can be obtained. The equation for fluid flow is that of slow flow in porous media using Darcy's law, and these equations are of elliptic type (Laplace's equation for inhomogeneous and anisotropic media). The corresponding discussion of this equation and the representation used for its solution are given in a subsection of the appendix.
We have chosen probabilistic methods to solve the equations, rather than the more traditional finite element methods, because they can more directly incorporate volumetric data obtained from three-dimensional magnetic resonance imaging; they are very amenable to parallelization; they can be used to compute the solution at selected points or regions; and they have other advantageous characteristics that are mentioned in the paper quoted, with results and further discussion shown, for example, in Reference 3 and other publications referred to therein.

BROWNIAN MOTION HITTING TIME TO A SPHERE
We first define the hitting time T_H and hitting place X_H. A stochastic process is first specified, for example, the standard Brownian motion. A particle whose trajectory is defined by this process is released at the origin (or any given starting point). Then T_H is the first time when, and X_H the corresponding place (provided T_H is finite) where, the particle hits a sphere of radius R centered on the origin (or the starting point, in general). These are random variables, and in the case of standard Brownian motion, that is, isotropic diffusion without drift, Wendel 10 derived the moment generating function (up to a sign) for the probability distributions of these variables, in arbitrary dimension and with arbitrary starting point. We will only be interested in the case where the starting point is the center of the sphere in question, embedded in three-dimensional Euclidean space. Specializing his result to starting at the center, and restoring the diffusivity D † and the radius R of the sphere, his calculation yields the moment generating function for T_H. In this case, the distribution for the hitting place is obviously uniform with respect to the area measure on the sphere, and also obviously independent of the hitting time distribution. The hitting time density is therefore the inverse Laplace transform of the moment generating function, where the line of integration parallel to the imaginary axis must be taken to the right of all the singularities of the integrand. The contour must be closed to the left (so that e^{zT} → 0). The singularities are simple poles along the negative real axis. We therefore choose the real part of the contour abscissa to be slightly negative, and close the contour by a large semicircle which avoids hitting any pole. The integrand is vanishingly small for negative real part as the contour extends to infinity.
Expanding sinh(√(R²(z_n + ε)/D)) to linear order in ε, the integral is evaluated by residues, and the cumulative distribution then has a series representation. The right-hand side is a convergent series for T > 0, but (i) the number of terms required for a fixed error bound increases without limit as T → 0 (the series is thus not uniformly convergent), and (ii) the inverse function to the above, which we denote ←t₀(u), 0 ≤ u ≤ 1, and which is needed for numerical simulations, is not expressible as a standard function. The standard way of taking care of the first problem is the Poisson summation formula. 11 We examine these issues in a later section. But first we generalize to the case with drift, and then to anisotropy. The results above for isotropic diffusion have been obtained previously; 6 however, that paper omits any mention of the need for a better convergent series. † In comparing formulas, it is necessary to note a difference between the mathematician's and the engineer's use of the diffusion equation. The mathematician uses the normalized Gaussian u = exp(−x²/2t)/√(2πt), with variance equal to t, as the fundamental solution of the diffusion equation with unit diffusivity (in one dimension). Since this expression is a solution of ∂u/∂t = (1/2) ∂²u/∂x², there is a factor of 1/2 which is absent in the equation physicists and engineers use for the diffusion equation with unit diffusivity. We will use the normalized solution.
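The residue sum can be rendered numerically. The sketch below assumes the engineer's convention ∂u/∂t = D∇²u, for which the poles sit at z_n = −n²π²D/R² (the paper's normalized convention rescales the time variable); under that assumption the density and its term-by-term integral are as written in the comments, and the density should integrate to one with the classical mean exit time R²/(6D).

```python
import math

def density(t, D=1.0, R=1.0, terms=200):
    """Residue-sum hitting time density for zero drift:
    p(t) = (2 pi^2 D / R^2) * sum_{n>=1} (-1)^(n+1) n^2 exp(-n^2 pi^2 D t / R^2)."""
    lam = math.pi ** 2 * D / R ** 2
    return 2.0 * lam * sum((-1) ** (n + 1) * n * n * math.exp(-n * n * lam * t)
                           for n in range(1, terms + 1))

def cdf(t, D=1.0, R=1.0, terms=200):
    """Term-by-term integral of the density:
    P(t) = 1 + 2 * sum_{n>=1} (-1)^n exp(-n^2 pi^2 D t / R^2)."""
    lam = math.pi ** 2 * D / R ** 2
    return 1.0 + 2.0 * sum((-1) ** n * math.exp(-n * n * lam * t)
                           for n in range(1, terms + 1))

# Crude Riemann sums: the density should integrate to one, with mean R^2/(6D).
h = 1.0e-3
norm = 0.0
mean = 0.0
for k in range(1, 5001):
    p = density(k * h)
    norm += p * h
    mean += k * h * p * h
```

Note how many terms the series needs near t = 0: the alternating terms are individually large there, which is exactly the poor convergence the text describes.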

ISOTROPIC BROWNIAN MOTION WITH DRIFT
The above results concerned isotropic Brownian motion with no drift. In order to compute what we want with a deterministic drift velocity included, we begin with a special case of a theorem that Wendel proved in another paper, 12 namely that the joint distribution of hitting time and place, even when drift is included, factorizes for isotropic Brownian motion when the particle is released at the center of the sphere. As he remarks, this is a somewhat surprising result. The reason that intuition seems to suggest otherwise can perhaps be illustrated with a simple example. Suppose the particle has a strong drift toward the north pole, starting at the center. Then intuition says that the particle is more likely to exit the sphere from the north pole rather than the south, which is true. However, we sometimes confuse this with the statement that the likelihood of exiting from the north pole rather than the south is higher for small times, which is false. In fact, as is well known, diffusion is more important the shorter the time; in any case, the theorem states that the time and place distributions are independent. This theorem is very useful, indeed crucial, for simulations, since otherwise the joint distribution of hitting time and place, if not factorizable, would be too complicated for effective use. In that same paper he quotes an even then standard result, in which the left-hand side is the expectation under Brownian motion with drift v of any function F on the path, while the right-hand side is the expectation under the standard Brownian motion (without drift). We write down the generalization of this formula to the case of a general diffusivity, taking account of the factor of 2 mentioned in the earlier footnote. A suitable choice of F then yields the desired expectation. In preparation for computing the hitting time and place densities, we note from Wendel that a further identity also holds, where μ is the cosine of the polar angle of the hitting place, with the z-axis chosen along v.
We use a calligraphic font for the Legendre polynomial 𝒫_n, so that it is not confused with a symbol for probabilities. For n = 0, we of course reproduce Equation (9). This therefore gives us the distribution of hitting time and place by writing out the left-hand side. The spatial transform is then computed (in general) by the Fourier-Legendre series, where only the term with n = 0 survives. Thus we immediately obtain the density function for the hitting place, and the normalization constant is computed by requiring the density to integrate to unity. Together, Equations (13) and (14) give us the polar angle distribution. The distribution in azimuthal angle is uniform by symmetry. Plugging Equation (14) into Equation (11), we obtain the moment-generating function for the hitting time distribution, and thus the mean hitting time. The mean time approaches R/v as v becomes large, as it should. However, for finite values of vR/D, the time is always less than this limiting form by the factor coth(vR/2D) − 2D/vR, which lies between 0 and 1 as the speed increases from 0 to ∞ ("diffusion always helps"). The density function is then the inverse Laplace transform, computed using the method of residues from the previous section, and the cumulative hitting time distribution follows. Equations (13) and (19), together with the definition (14), give us the required distributions for a sphere for a particle released at the center, albeit as power series which do not converge for T = 0, v = 0. In fact, the convergence of these series is extremely poor except for very large T, where the distribution is almost unity. Before we return to this topic, we discuss the final generalization, namely where the diffusion is generally anisotropic.
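The mean-hitting-time statement above can be checked numerically. The sketch below assumes the combination a = vR/(2D), consistent with the factor quoted in the text: the mean is then (R/v)(coth a − 1/a), which reduces to the classical mean exit time R²/(6D) in the weak-drift limit and to R/v in the strong-drift limit.

```python
import math

def mean_hit_time(v, D, R):
    """Mean hitting time (R/v) * (coth(a) - 1/a), with a = v*R/(2*D),
    i.e. R/v reduced by the factor coth(vR/2D) - 2D/(vR) quoted in the text."""
    a = v * R / (2.0 * D)
    return (R / v) * (1.0 / math.tanh(a) - 1.0 / a)
```

As v → 0 the factor behaves as a/3, giving R²/(6D); as v → ∞ it tends to 1, giving R/v; and for finite speeds the mean is always below both pure limits' naive extrapolations ("diffusion always helps").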

ANISOTROPIC DIFFUSION
The joint distribution of hitting time and place on the surface of a sphere does not factorize in the case of anisotropic diffusion. This follows from a theorem proved by Pitt, 13 which asserts that if the joint distribution factorizes for standard (isotropic) Brownian motion with drift, the hitting surface must be a sphere. (Again we specialize all results to the three-dimensional case of interest.) So, let us transform from the laboratory coordinate system to one in which the diffusion tensor is diagonal. We write the diffusion tensor in terms of a rotation that renders Λ diagonal, and define new coordinates, rescaled by the eigenvalues, so that the diffusion operator becomes isotropic in these coordinates. In this system, then, the Brownian motion is isotropic, and hence, from Wendel, the joint distribution factorizes for the sphere, but, more importantly, only for the sphere (by Pitt's theorem) in this space. This sphere becomes an ellipsoid in the original coordinates, so we have shown that the distribution, while factorizing for this ellipsoid, will not factorize for the original sphere in the case of anisotropic diffusion. We therefore choose to solve a different problem. We have the freedom to do so, since our purpose is to speed up the simulation, and any convenient surface will do. We therefore solve the problem for the sphere in the ξ-space, using the formulas of the previous section for a sphere. The hitting place is mapped to the corresponding point on the ellipsoid in the rotated laboratory space, where the d_i's are the eigenvalues of the diffusion tensor, and the i'th coordinate (in the ξ-system) of the velocity is transformed correspondingly. We have thus reduced the case of anisotropic diffusion to a previously solved problem, in a special coordinate system.
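The coordinate construction above can be sketched numerically. Assuming the generator Σ D_ij ∂_i∂_j (any convention factor of 2 absorbed into D), diagonalizing D and rescaling each rotated coordinate by √d_i yields unit isotropic diffusion, and a point on the unit sphere in the new ξ-coordinates maps back to the ellipsoid xᵀD⁻¹x = 1 in the laboratory frame. The function names are ours:

```python
import numpy as np

def isotropize(Dmat, v):
    """Diagonalize the diffusion tensor, Dmat = O diag(d) O^T, and rescale so
    that diffusion is isotropic with unit diffusivity in the xi-coordinates.
    Returns (O, d, v_xi), with the drift transformed as (O^T v)_i / sqrt(d_i)."""
    d, O = np.linalg.eigh(Dmat)
    return O, d, (O.T @ v) / np.sqrt(d)

def to_lab(O, d, xi):
    """Map a point xi (e.g. a hitting place on a sphere in xi-space)
    back to the laboratory frame: x = O (sqrt(d) * xi)."""
    return O @ (np.sqrt(d) * xi)
```

A unit-sphere hitting place ξ then satisfies x = to_lab(O, d, ξ) with xᵀD⁻¹x = 1, which is the ellipsoid referred to in the text.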
To avoid any misunderstanding, we should perhaps indicate why, instead of using a coordinate transformation so that the anisotropic equation is transformed to a Laplace equation, we do not simply solve the Laplace equation with the relevant boundary conditions. We recall that it is essential to our applications that the medium (the brain) is inhomogeneous, with differing properties or metrics in different portions of the medium. In applications in computational biomedicine specifically, there is an image, for example from magnetic resonance, which has voxels of about a millimeter on each side. The conductivity, say, is constant (or, more precisely, averaged) within a voxel, but varies from voxel to voxel. The anisotropies vary as well. If one were to solve the Laplace equation (and one would do that if the entire volume of the medium were simply anisotropic but homogeneous), one would be faced with an essentially impossible problem of matching voxel to voxel. Our application papers referred to in the Introduction discuss this issue as well.

THE HITTING TIME AND PLACE DISTRIBUTIONS
In this section, we develop the formulas which will serve as the basis for simulating random walks of fixed step length and random times, in contrast to the conventional Gaussian process simulating the walk in a fixed time, with random step lengths, and in contrast to the poorly converging series for these hitting times obtained above. First, we quote the Poisson summation formula. Then, for convenience, we henceforth convert to dimensionless variables since the distributions depend only on these. After these preliminaries, we obtain the formulas.

The Poisson resummation
In order to improve the convergence of the series above for smaller times, we resort to the Poisson summation formula, which we quote here as follows. Define the Fourier transform f̃(k) = ∫ f(x) e^{−2πikx} dx. Then, for nicely behaved functions, Σ_n f(n) = Σ_k f̃(k), where the sums are over all integer values. (Any text or online source may be consulted for the conditions of validity of the Poisson summation.) However, before performing the computations, we first simplify the formulas by defining dimensionless variables.
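A quick numerical check of the Poisson summation formula, using the Gaussian f(x) = exp(−πtx²), whose Fourier transform in the convention f̃(k) = ∫ f(x) e^{−2πikx} dx is t^{−1/2} exp(−πk²/t):

```python
import math

def gauss_sum(t, terms=60):
    """Left side of Poisson summation for f(x) = exp(-pi t x^2):
    the sum of f(n) over all integers n."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms + 1))

def gauss_dual_sum(t, terms=60):
    """Right side: the sum over all integers k of the Fourier transform
    f~(k) = t^(-1/2) exp(-pi k^2 / t)."""
    s = 1.0 + 2.0 * sum(math.exp(-math.pi * k * k / t)
                        for k in range(1, terms + 1))
    return s / math.sqrt(t)
```

For small t the left side converges slowly and the right side in one or two terms, and vice versa for large t, which is precisely the trade-off exploited below for the hitting time series.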

Dimensionless variables
We define the dimensionless time and speed variables. ‡ The distributions depend on the dimensional variables D, T, R, and v only through these dimensionless combinations. The density function (since it is not itself dimensionless) also involves the Jacobian of the change of variables, which in this case just means using the chain rule.
Remark 1. We continue to use the same symbols for the distribution and densities. No confusion should result with the above defined functions, since we shall use only the dimensionless variables as the independent ones henceforth.

Calculating the distributions
We first note that the cumulative distribution for the hitting place on a sphere follows from Equations (13) and (14). This is readily inverted to yield the random variable μ, −1 ≤ μ ≤ 1, as a function of a number u between 0 and 1. Thus the hitting place distribution offers no difficulty in inversion, and hence none for sample generation either, where u is chosen to be uniformly distributed in [0, 1]. ‡ The perhaps unnecessary introduction of the factor 2 has been incorporated into our software, and so for consistency we have retained it here.
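The inversion can be sketched explicitly under the assumption (consistent with the mean-time factor quoted earlier) that the exit-angle density on μ = cos θ is proportional to exp(aμ) with a = vR/(2D); the CDF is then F(μ) = (e^{aμ} − e^{−a})/(2 sinh a), which inverts in closed form. A Monte Carlo check against the mean coth(a) − 1/a:

```python
import math
import random

def mu_inverse(u, a):
    """Inverse of the exit-angle CDF F(mu) = (exp(a*mu) - exp(-a)) / (2*sinh(a)),
    assuming the density on mu = cos(theta) is proportional to exp(a*mu)."""
    return math.log(math.exp(-a) + 2.0 * u * math.sinh(a)) / a

rng = random.Random(7)
a = 1.5
samples = [mu_inverse(rng.random(), a) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
exact_mean = 1.0 / math.tanh(a) - 1.0 / a   # mean of the exp(a*mu) density
```

The endpoints map correctly (u = 0 gives μ = −1, u = 1 gives μ = +1), and the azimuthal angle is sampled uniformly on [0, 2π) independently.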
Turning to the hitting time distributions, and using the dimensionless variables introduced in Equation (30), let us first examine the series for the density function for V = 0, already available in the literature, as we have mentioned. Of course, we have to remove the Jacobian from the dimensionful form of the density in order that the normalization ∫₀^∞ dτ p₀(τ) = 1 be satisfied. From the form of the series, the density can also be expressed as twice the sum of the series summed over positive integers only. As stated, these forms for the densities are suitable for large enough τ, when they converge rapidly, but obviously not for small τ: we have to go beyond roughly the integer part of 1/τ terms before the series begins to behave. For τ ∼ 0.1, this means going beyond 10 terms, and the situation is much worse for the distributions where the drift speed is relevant. As a warm-up for the case of actual interest (with nonzero speed), we apply the Poisson summation formula to obtain series that are rapidly convergent for small τ. Denote the y'th term within the differentiation sign in Equation (33), so that its Fourier transform follows; from Equation (29), we then find that the density function for the zero drift case may also be written in a resummed form. Noting that the series defining p₀(τ) converges best for large τ, we can write the corresponding distribution accordingly. The distribution corresponding to the Poisson-resummed density is also immediate. (The zero-limit of the expression on the right-hand side is of course zero for all k, and increasingly rapidly with the magnitude of k.) For nonzero speeds, from Equation (18), the density follows, and, using the Fourier transform and integrating by parts, so does the Poisson-resummed density. As for the zero drift speed case, the distribution corresponding to p_V(τ) may be obtained from Equation (37), now with the subscript V.
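The complementarity of the two series can be exhibited for the zero-drift cumulative distribution. In the dimensionless time τ (taken here without the paper's extra factor of 2), the large-time series and its Poisson-resummed counterpart are the same function; the resummed form needs only one or two terms when τ is small:

```python
import math

def cdf_direct(tau, terms):
    """Large-time series: P(tau) = 1 + 2 sum_{n>=1} (-1)^n exp(-n^2 pi^2 tau)."""
    return 1.0 + 2.0 * sum((-1) ** n * math.exp(-n * n * math.pi ** 2 * tau)
                           for n in range(1, terms + 1))

def cdf_resummed(tau, terms):
    """Poisson-resummed (small-time) form of the same function:
    P(tau) = (2 / sqrt(pi*tau)) * sum_{k>=1} exp(-(k - 1/2)^2 / tau)."""
    return 2.0 / math.sqrt(math.pi * tau) * sum(
        math.exp(-(k - 0.5) ** 2 / tau) for k in range(1, terms + 1))
```

At τ = 0.02, for example, a single term of the resummed series already matches ten terms of it to machine precision, while the direct series suffers severe cancellation among order-one terms.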
The expression for p_V(τ) is integrated by parts; the integrations are elementary, and after consolidation of terms we obtain the distribution. There are several ways to obtain the Poisson-resummed distribution P_V^PS, and we give three here. Perhaps the simplest is to write, as before, the distribution as an integral of the resummed density. Upon integration by parts, the second integral is also expressible in terms of standard functions, and the two contributions can be combined, where erfc is the complementary error function. A second method is of course to transform the expression for P_V(τ) directly. Let us begin with the series on the right-hand side of Equation (43), and call each term within the summation g(y). We first Fourier transform the expression inside the integral.
We now integrate over τ, differentiate, and divide the result by 2π; this gives the Fourier transform G of g. Now let us return to the full expression for the distribution (43). We have to multiply the series by 2 sinh(2V)/2V and by exp[−4V²]; the latter cancels with the corresponding exponentials with positive signs multiplying the erfc terms. We notice that the first complementary error function appears with the opposite sign from the one in Equation (45). The series sum that arises from the first term in Equation (50), upon multiplication with (1/2) sinh(2V) and with the minus sign in the term, precisely cancels the 1 in front of the series for the unsummed distribution, yielding again the series first computed.
Finally, a third method to obtain this expression is the following. It can be seen from the definitions (34) and (46) that the Fourier transform G of g satisfies a differential equation in which F has already been given in Equation (35). The general solution to this second-order differential equation can be seen to have precisely the terms in Equation (45), plus terms multiplied by the two integration constants C_{1,2} (i.e., independent of x). Since one or the other of the exponentials that these multiply blows up as x → ±∞, each constant must be chosen to cancel its respective term in the parenthesis, and so the solution of the differential equation reduces to the result obtained above. The moment generating function is given in Equation (53).
Conclusion 1. We have computed the hitting time and place distributions for Brownian motion with drift. In terms of the dimensionless variables introduced in Equation (30), the direct series representation of the hitting time (cumulative) distribution is given in Equation (43) and, what is essential for applications, its Poisson-resummed form, which converges much better over most of the useful range, in Equation (45). The moment generating function, which is useful for other purposes, is Equation (53). The corresponding density functions have also been shown above. In numerical simulations, it is useful to compute the inverse function ←t_V(u) as a function of u ∈ ]0, 1[. We employ numerical methods for this, discussed below.

NUMERICAL METHODS
The distributions calculated above are not used directly in our simulations. We have followed the path of constructing the inverse functions to obtain samples of the needed random variables: the step length of the walk and its direction. As indicated above, the direction offers no difficulty: the inverse function (32) is expressible in analytic form and can be evaluated for any input to provide the random direction. The approximate inverse t̂_V(u), however, presents greater difficulties. There is a multitude of methods for constructing such inverses, including those that exactly match the moments of the exact distribution, easily obtained by successive differentiation of Equation (53). Our first implementation, which we have used extensively in our simulations, with results that have been employed and tested in the complex heterogeneous environments of human patients with brain tumors, 14 is described here, along with its error estimates. Before we do so, we discuss which error estimates we should use.

Error measures
When comparing the use of the approximate random variable t̂_V(u) with the exact ←t_V(u), where u is the uniform random variable in the unit interval, we are faced with a variety of choices. The moments of ←t_V are of course the moments of the hitting time distribution (∫₀¹ du (←t_V(u))ⁿ = ∫₀^∞ dt tⁿ p_V(t) by a change of variables), so it might be thought sufficient for an approximation to match the moments of the true distribution up to a given order, but this would be a poor error criterion. For example, we may have an inverse that fails to allow probabilities outside a small interval (no hitting time is returned for u outside [1/2 − ε, 1/2 + ε], say) and yet matches several moments perfectly. In use, the inverse has to return times in accordance with the actual distribution for all times, though of course larger relative errors can be tolerated at either end of the interval. We have therefore chosen the relative error in the L¹ norm as our criterion. In other words, we compute the L¹ distance between the approximate and exact inverses relative to the mean hitting time (the denominator on the right-hand side being the mean hitting time), and display this as a percentage error. We have also computed the mean square error compared with the second moment, with errors that are even smaller, so we do not display these.

Inverse function fits
It is obvious from inspection, and shown more systematically in Appendix C, that the singularities of the inverse functions are logarithmic at either end of the unit interval. For V = 0, we used a piecewise definition for the inverse, and we fit the various constants to the true inverse obtained by tabulating the distribution, also defined piecewise from Equations (39) and (40) for large and small times, respectively. In practice, the resummed series serves for the entire range. We defined the distribution by careful evaluation of the series, testing convergence and switching over, as the argument increased, whenever the direct series was more economical than the Poisson-resummed series. In this regard, it may be worthwhile to point out that we used Mathematica 10 for all of the computations resulting in the inverse functions. Despite the presence of large exponents, Mathematica collects these, allowing cancellations in the exponent before evaluating expressions, and this results in very accurate evaluations of the series. Although just one or two terms in the series already yield accuracies to many decimal places, we have checked by evaluating up to 1024 terms in the series, with no underflow or overflow errors resulting from the Mathematica code. Following the fit of the zero-speed inverse with the above form, we use a simple linear fit to the logarithms for any nonzero speed. We divided the region of speeds from ln V = −5 to +5 into equally spaced intervals of a tenth (−5, −4.9, …) and found the best fit for the slopes and intercepts from the distributions for the speeds in question. Figure 1A to C shows, respectively, the slopes, the intercepts, and the percentage errors calculated as indicated above.
It must be admitted that our ad hoc approach was rather crude; nevertheless, the total relative error never exceeded 0.6%. The results on test cases where analytic results were available were excellent, the concordance being of course dependent on the number of samples used. In all cases, for a fixed number of samples, the errors were less than those obtained by direct simulation of Brownian motion using the Gaussian for the step lengths while fixing the time steps.

Implementation and performance
The simulation of the random walk is a small, albeit crucial, part of the overall process of constructing a probabilistic solver for the PDEs of interest; such a solver is in turn a small, again crucial, part of constructing simulations of the spatial distributions of molecules transported in the brain via injection of fluid in which they are suspended. We refer to References 3 and 4 for the context in which we have applied these methods. The random walk is needed to obtain the probabilistic solution to the PDEs of interest. The solution itself is an appropriately weighted sum of the boundary values specified by the PDEs, the value being chosen according to where the random walker hits the boundary, and the weighting being related to the time at which it hits the boundary (see Appendix A to this paper and the references cited). Since this paper is focused on the hitting time and place distributions, we confine our remarks to the use of these. The hitting place and time distributions are independent, as we have described above, and the hitting place offers no difficulties, as the explicit formula at the beginning of Section 5.3 has indicated. As discussed above, we use the direct and the Poisson resummed series representations of the hitting times. Then, as footnote * indicates, we obtain the desired hitting time samples by constructing the inverse function described in the previous subsections. Although the series for the hitting times are complicated, their only use is to construct the inverse functions, which are close to being lookup tables. (Indeed, we could speed up obtaining these inverse function tables, though we have not yet done so.) So each simulation step can be essentially as rapid as the construction of a Gaussian variate required for simulating a step of random length in fixed time.
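A single sample path of such a fixed-step-length walk can be sketched as follows, in the zero-drift case where the hitting place is uniform on the sphere. The callables `radius_fn`, `inv_cdf`, and `absorbed` are placeholders of ours, not the paper's implementation, and the quadratic time scaling assumes the unit-sphere, unit-diffusivity hitting-time distribution.

```python
import numpy as np

def fixed_step_walk(x0, radius_fn, inv_cdf, absorbed, rng, max_steps=100_000):
    """One sample path of a fixed-step-length walk (zero-drift sketch).
    radius_fn(x): radius of a sphere around x inside the homogeneous region;
    inv_cdf(u):   inverse of the unit-sphere, unit-diffusivity hitting-time
                  distribution; absorbed(x): True once x reaches the boundary
                  layer.  Returns the exit point and the accumulated time."""
    x, t = np.array(x0, float), 0.0
    for _ in range(max_steps):
        if absorbed(x):
            break
        r = radius_fn(x)
        # zero drift: the hitting place is uniform on the sphere of radius r
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        x = x + r * d
        # the hitting time scales as r^2 for the unit-sphere distribution
        t += (r ** 2) * inv_cdf(rng.uniform())
    return x, t
```

With `radius_fn` equal to the distance to the domain boundary this reduces to the classical walk on spheres; with drift, the hitting place would instead be sampled from the explicit place distribution of Section 5.3.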
In our simulations on a laptop, each simulation step took on average 1.11 μs for the fixed step-length method versus 0.75 μs for the more conventional fixed time-step method, the absolute numbers having, of course, no significance in themselves. The particular simulations from which these numbers were obtained are described next.
To test the performance, we solved a parabolic initial-boundary value problem (Appendix A) for a concentration field obeying drift with a velocity, diffusion, and decay, in an external (Dirichlet) boundary box of 100 × 100 × 100 mm with a spherical flux (Neumann) source of 1 mm radius at the center of the box (so that the inner boundary would reflect). (In the Appendix below, we explain only the solution of Dirichlet problems. The Neumann problems require more preparation to explain, and our approach to these is available in Reference 15.) Particles were started at (25, 0, 0), about halfway between the source and the outer boundary. There was a 100 × 100 × 100 diffusion tensor field initialized with a constant isotropic D value but treated as an arbitrary tensor field, so that no advantage was taken of its isotropy and homogeneity. There was a velocity field pointing inward toward the source, with magnitude proportional to 1∕r, where r is the distance from the particle to the source. The peak magnitude was selected so as not to be too large relative to the diffusion, so that the particle would eventually reach the outer bounding box and be absorbed. There was also a decay field (initialized with a fixed value), included just so that the decay computation would be exercised in the simulation steps. All fields were sampled at 1 mm resolution. The fixed-distance simulation was run with a maximum step size of 0.5 mm, automatically reduced to as low as 0.05 mm near the boundaries. The fixed-time simulation used a time step of 4 seconds in order to limit the likelihood of steps much larger than 0.5 mm: the average step length was then 0.144 mm, and the probability of taking a step larger than 0.5 mm was about 10⁻⁶. The results were that the average number of steps required for a single particle to be absorbed was 12 500 for the fixed step-length method and 180 000 for the fixed time-step method, both as just described.
Thus the speedup in this case was a factor greater than 10. (Obviously, the larger the region of homogeneity for a given diffusivity or speed, the larger this speedup factor will be.) For an analytic estimate with both diffusion and drift, one may use the distribution given in footnote 2 to estimate, for example, how large a time interval one may take while keeping the probability of an excursion greater than a given length L below, say, 10⁻⁶, for accuracy. For the numbers quoted, this gives an order-of-magnitude estimate agreeing with the observed one, the analytic calculation giving a factor of about 20.
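In the pure-diffusion limit this tail estimate can be sketched numerically. The function below, a sketch of ours rather than the paper's exact estimate, solves for the largest time step by bisection on the closed-form tail of the chi distribution with three degrees of freedom, using the Wiener convention (per-axis variance 2D Δt).

```python
import math

def max_dt(L, D, p=1e-6):
    """Largest time step dt such that a 3-D isotropic Brownian displacement
    (per-axis variance 2*D*dt, the Wiener convention) exceeds length L with
    probability below p.  Geometric bisection on the chi-3 tail."""
    def tail(dt):
        # P(|X| > L) = P(|Z| > z) for standard normal Z in 3-D, with
        # z = L / sqrt(2 D dt); closed form for 3 degrees of freedom:
        z = L / math.sqrt(2.0 * D * dt)
        return math.erfc(z / math.sqrt(2.0)) \
            + math.sqrt(2.0 / math.pi) * z * math.exp(-z * z / 2.0)
    lo, hi = 1e-12, 1e6          # tail(lo) ~ 0, tail(hi) ~ 1
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if tail(mid) > p:
            hi = mid
        else:
            lo = mid
    return lo
```

Since the tail depends on dt only through L²/(2D dt), the admissible time step scales exactly as L²/D, which is why the speedup grows with the size of the homogeneous region.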

CONCLUSION AND DISCUSSION
The immediate reason to work with the hitting time distributions is the one stated in the Introduction and noticed by prior researchers: the first reason to constrain a step length in the simulation of a time-homogeneous process with drift and diffusion is to stay within the region where these coefficients are constant, so that the homogeneous kernel, or Green function, may be used in the simulation. Thus it is immediately advantageous to use fixed step-length simulations, provided each step is not much more costly in time than each step of a standard simulation of the process, which works by sampling from a Gaussian distribution after a fixed time step has been chosen. If we use the latter, we have to limit the time step severely to be assured, with high probability, that the desired step length has not been exceeded. Thus many time steps have to be simulated, along with checks at each step to ensure that the region of homogeneity has not been crossed. In our implementation described above, a single step is certainly not more costly than the choice of a Gaussian random variable, because it is based on a table lookup and a linear interpolation. This results in a more efficient simulation, even for inhomogeneous media, since the above advantages pertain no matter how small or large the region of homogeneity is. There is a second reason to constrain a step length, and hence another reason to prefer this method: a step may not cross any of the boundaries of the region. With the implementation of a distance map (based on level sets 16), one has the option of setting the step length accordingly, a feature unavailable to the direct simulation. For these and other purposes it is useful first to compute the hitting time distributions, as we have done, and then to find efficient ways to sample them and test their accuracy.
We have implemented one such simple method, also described in this paper; there are many other possibilities, but the distributions and moments will always be needed to test the simulations of the process. These distributions and the moments are provided in this paper, extending the isotropic, diffusion-only case treated by earlier practitioners referenced in the Introduction.
In Section 5.3 where we summarized the formulas, we referenced work 8,9 that has priority in publishing the zero drift Poisson resummed series, and the hitting place distribution, respectively. However, our central hitting time formulas, Equations (35) to (38), which we have obtained in several different ways, have not been displayed before to our knowledge. Grebenkov, who previously obtained our Equation (34) for zero drift, perhaps because of his interest in reaction-diffusion equations, does not seem to take advantage of the central discovery of the mathematicians, namely that the hitting time and hitting place distributions are independent for drift and diffusion for a sphere and only a sphere. Indeed Grebenkov's formula (28) in his paper seems to imply that they are not independent for his problem. (We have not examined the diffusion-reaction problem and so cannot comment on this.) This renders the approach not directly useful for the case of interest in our applications, and indeed there is no counterpart to the formulas of joint distributions of our paper.
On the other hand, Sabelfeld has obtained our formula (32) for the hitting place distribution in his prior work. He does not, as mentioned, obtain our hitting time distribution central to our paper and indeed, his approach to the simulation seems entirely different, perhaps because he confined attention in that paper to static (time-independent) problems. Simply combining their work does not help obtain the hitting time formulas, which we use in time-dependent problems as well, nor the inverse method for using them in simulations. Our methods are also sufficiently different, we hope, to be of use and interest.

APPENDIX A. PARABOLIC EQUATIONS: THE CONCENTRATION PROFILE
We simply quote available results; for details see References 5, 17, and 18. Further explanation of these results may be found in Reference 15, which may be consulted in particular for a more detailed explanation of the Neumann problem solution; the mathematically minded reader can also find the solution to the Neumann problem in Reference 5. We first exhibit some equations and then explain each term following the display. The equations with asterisks are in the so-called forward form, and the coefficients therein are used in the random walk from which we construct the solutions in probabilistic representation. They are equivalent, after the translation between coefficients given below, to the backward form, which carries no asterisks. Suppose we have a partial differential equation where the differential operator L* is of the form with the initial-boundary conditions reading (Solutions to more general problems are available, but the above suffices for our purposes.) Then the solution can be represented via a stochastic process as in Reference 5 (p. 133, Theorem 2.3).
We now explain the above, including all the symbols used, in the usual three-dimensional Cartesian coordinates. We recall that we always assume that the matrix of entries for the diffusion tensor is symmetric and, further, positive definite. We now describe each line in the equation that represents the solution. We seek the solution at time t and position x, the process having started at time 0. What is constructed is a three-dimensional random walk which begins at time 0 at the point at which the solution is desired, namely x. This random walk has a drift velocity v* and a diffusivity tensor D*. So X(t) is defined to be the position of a particle at time t, starting at t = 0 at the position x, and undergoing both diffusion and drift, with the diffusivity and the drift velocity given by the starred quantities above. The initial time and position variables are meant to be understood, but will be suppressed. Thus X is subject to the stochastic process: where B* is the unique positive definite square root of the diffusivity tensor D*. The assertion on B* follows because the diffusivity itself has to be positive definite. We begin the explanation of the formula for the probabilistic representation of the solution with its first line, Equation (A7). Here T means the earliest time at which this random walk hits a point X(T) of the boundary. The indicator 1_{T<t} is the characteristic function of the predicate T < t: in other words, it is 1 if the random walk hits the boundary before the elapse of the time t at which the solution is to be computed, and zero otherwise. Thus this term contributes only if the random walk hits the boundary before time t. The Dirichlet boundary value of u is evaluated at this point of the boundary and at the time of impact. (In our applications the boundary value is independent of time, but the formula allows the more general case.) It is multiplied by the exponential, which is a measure of the "loss" of molecules.
An expectation value E_x (the subscript indicating, as already stated, that the walk is begun at the point x) is taken of this number over repeated random walks. The second term can be similarly interpreted. During the random walk, as long as it has not reached the boundary, one accumulates the loss as before (the exponential term) and then multiplies by the initial value that obtains at the point at the conclusion of the walk. The expectation value of this number is then stored. The solution is the sum of these two terms, only one of which contributes in a given sample path.
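A single update of this process can be sketched as follows, using the Gaussian-variable convention of the text (step covariance D* dt; under the Wiener convention D* would be doubled, as Remark 2 below notes). The function name and the field samplers `v_star`, `D_star`, `c_star` are placeholders of ours for the interpolated fields, not the paper's code.

```python
import numpy as np

def em_step(x, w, dt, v_star, D_star, c_star, rng):
    """One Euler-Maruyama update of the starred forward process, carrying
    the running loss weight w.  B* is the unique symmetric positive
    definite square root of D*, obtained via eigendecomposition, so the
    Gaussian increment has covariance D* dt."""
    v, D, c = v_star(x), D_star(x), c_star(x)
    lam, Q = np.linalg.eigh(D)                 # D* = Q diag(lam) Q^T
    B = Q @ np.diag(np.sqrt(lam)) @ Q.T        # SPD square root B*
    xi = rng.normal(size=3)
    x_new = x + v * dt + (B @ xi) * np.sqrt(dt)
    w_new = w * np.exp(c * dt)                 # accumulated loss factor
    return x_new, w_new
```

Averaging the weighted boundary or initial values carried by such paths then yields the two expectation terms of the representation.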
We have therefore discussed how the solution is computed. It remains to identify the pseudo-velocity, diffusion, and loss terms. It is easy to rewrite equation (1) of Reference 4 (which describes the transport of the molecule in terms of the advective velocity, the diffusivity, and the loss rate, carrying the same symbols as used above but without the asterisks) into the form of Equation (A2) required by the probabilistic representation. By pulling the differentiations through the various terms, we find (see the quoted reference)

Remark 2. The above identification of D* with D rests on the conventional use of Gaussian random variables during a stochastic simulation, rather than the Wiener process (see footnote *). If the Wiener process is used, the starred diffusion tensor will be twice as large. The same applies below for the hydraulic conductivity tensor.

A.1 Elliptic equations: the velocity profile
Another field that demands simulation is the velocity field given by Darcy's law. We solve for the pressure; the velocity field is then given by a numerical differentiation. It would be more pleasing, and potentially more numerically stable, to construct a probabilistic representation for the velocity field itself. The theory of stochastic processes does allow for such a construction, but we do not develop it here, since in our current numerical simulations we do indeed compute the pressure and then numerically differentiate. We need the stochastic representation of the solutions to the Dirichlet boundary value problem

L*(x) p(x) = 0, (A10)

L*(v*(x), D*(x), k*(x), x) p := v*(x) · ∇p + (1/2) D*(x) : ∇∇p + k*(x) p.
(We have used the same symbols L*, g as above, even when the coefficients are not time dependent; this should not cause any confusion. The second condition on the pressure is an asymptotic one: it must decay at large distances at least as fast as r⁻¹.) By comparison with equation (4) of our cited reference, we see that

v* = ∇ · K, (A13a)

in terms of the hydraulic conductivity K of the tissue, whose inhomogeneity also gives a pseudo-velocity for the simulation, and a parameter describing the loss of fluid across the walls of the blood vessels due to the higher pressure from the infusion (the paper referenced may be consulted for more details of this application). Let T be the least time for the previously described random walk (but now with time-independent drift, diffusion, and loss) to hit the boundary. The expected value of this time must be finite (but see below). Then the solution (the same references above may be consulted) is
The interpretation of this equation follows the same lines as given above and need not be repeated. However, we remark upon the case where a part of the boundary is at infinity. For example, if we have a fluid source at the origin, we may want to solve for the pressure outside the source. In this case, as is well known, the boundary condition of the differential equation must be supplemented with an asymptotic condition on the solution at infinity (e.g., that it approach a constant, usually taken to be zero). Correspondingly, the stochastic process must be such that it cannot return to the inner boundary after it has wandered off to infinity. In practice we select an outer boundary of the brain and set the pressure there to zero. Treatises such as Reference 5 may be consulted for many details of the stochastic representations, including the treatment of unbounded domains.
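In the simplest special case (zero drift, zero loss, unit ball) the representation reduces to p(x0) = E_x0[g(X_T)], which can be estimated by a walk-on-spheres sketch. The function below and its `eps` absorption layer are illustrative choices of ours, not the paper's solver, which handles drift and the loss term as well.

```python
import numpy as np

def dirichlet_estimate(x0, g, n_paths, rng, eps=1e-3):
    """Monte Carlo estimate of the harmonic Dirichlet problem in the unit
    ball via walk on spheres: p(x0) = E_x0[g(X_T)], with g the boundary
    datum and eps the thickness of the absorbing boundary layer."""
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, float)
        while 1.0 - np.linalg.norm(x) > eps:
            r = 1.0 - np.linalg.norm(x)       # distance to the boundary
            d = rng.normal(size=3)
            d /= np.linalg.norm(d)
            x = x + r * d                     # exit point is uniform on sphere
        total += g(x / np.linalg.norm(x))     # project onto the boundary
    return total / n_paths
```

Checking against a known harmonic function, say g(y) = y[0], the estimate should reproduce the first coordinate of the starting point up to Monte Carlo error.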

APPENDIX B. REMARK ON MAXIMUM ENTROPY DISTRIBUTIONS
The moment generating function (53) is, of course, easier to compute than the densities. We remark here that the density of the maximum entropy distribution with specified moments m_i, i ≥ 0, is It may be interesting to compare the distributions obtained by using the first few moments in a maximum entropy distribution with the exact density, which is an infinite series and so must be summed approximately anyway. We postpone this exercise for the moment; in any case, it will involve numerical work. If moments beyond the second are taken, then computing Z becomes numerical, and in all cases, computing the parameters in the distribution from the moments becomes numerical as well.

This immediately gives us the next approximation to the inverse function, namely It is obvious that we may successively approximate the inverse (for small x, t) in a double recursion, improving the approximation to the g and h functions in turn.
The large time series, on the other hand, has for its first nontrivial term: the inverse to which is directly Equation (C11). The expansions may be continued in this way.
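As a numerical illustration of the maximum-entropy remark of Appendix B, the Lagrange multipliers of the density p(t) ∝ exp(−Σ_j λ_j t^j) can be fit to prescribed moments by Newton iteration on a quadrature grid. This is a sketch under our own naming; the grid, the Newton scheme, and the exponential test case are not from the paper.

```python
import numpy as np

def maxent_lambdas(t, w, moments, n_iter=50):
    """Newton iteration for the multipliers of the maximum-entropy density
    p(t) = exp(-sum_j lam[j-1] * t**j) / Z on a quadrature grid (t, w),
    constrained to match the moments m_1..m_k.  The Hessian of log Z is
    the moment covariance, so each step solves a small linear system."""
    m = np.asarray(moments, float)
    k = len(m)
    powers = np.vstack([t ** j for j in range(1, k + 1)])   # shape (k, n)
    lam = np.zeros(k)
    for _ in range(n_iter):
        q = w * np.exp(-lam @ powers)        # unnormalized weights on grid
        q /= q.sum()                         # normalize (divides out Z)
        mu = powers @ q                      # current moments
        cov = (powers * q) @ powers.T - np.outer(mu, mu)
        lam -= np.linalg.solve(cov, m - mu)  # Newton step; Hessian = cov
    return lam

# With only the first moment fixed to 1 on a grid approximating [0, inf),
# the maximum-entropy density is the unit exponential: lam[0] should be
# close to 1.
t = np.linspace(0.0, 30.0, 3001)
w = np.full_like(t, t[1] - t[0])
w[0] = w[-1] = 0.5 * (t[1] - t[0])           # trapezoid weights
lam = maxent_lambdas(t, w, [1.0])
```

Whether a few-moment maximum-entropy fit competes with the resummed series over the useful range of hitting times is precisely the numerical exercise postponed in Appendix B.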