Efficient Simulation of Random Fields by Trigonometric Polynomials and Low-rank Tensors

We propose a fast and economical representation of stationary random fields by trigonometric polynomials, exploiting the power of the fast Fourier transform (FFT) and low-rank tensor approximation. With this method we are able to generate large random fields with discretization size up to 2^20, which are otherwise well beyond the capacity of PCs. We also illustrate how the representation approaches the specified properties of the random field as the rank of the tensor approximation increases.


Random fields synthesized in trigonometric polynomials
Consider a zero-mean Gaussian random field u(x, ω) on a spatial domain D = ∏_{j=1}^{d} (−Y_j/2, Y_j/2) ⊂ R^d; for each x ∈ D, u(x, ·) : Ω → R is a random variable on a suitable probability space. For a D-periodic stationary covariance function C(|x_1 − x_2|) it was established that u can be approximated by a linear combination of a finite series of trigonometric polynomials,

    u(x, ω) ≈ Σ_{k ∈ K} √λ_k ξ_k(ω) exp(i 2π Σ_{j=1}^{d} k_j x_j / Y_j),    (1)

where the ξ_k are uncorrelated standard Gaussian random variables, F stands for the Fourier transform, ∘ for the Hadamard product, and λ_k = F(C). The periodicity of C is not an essential restriction, since we can periodize an arbitrary covariance function on an extended domain and retain only the original subdomain after the synthesis.
However, periodizing the covariance function over a finite period risks producing negative Fourier spectral densities λ_k, due to the phenomenon known as "spectral leakage". The positive definiteness of the covariance can be conveniently guaranteed by instead starting the synthesis in Eq. (1) directly from an appropriate, non-negative spectral density λ_k. For many widely used covariance models, e.g. the Matérn-type covariance adopted in this work, the spectral densities are analytically known.
If x is also discretized on an equally spaced regular grid indexed by n ∈ N = {n ∈ Z^d : −N_i/2 ≤ n_i ≤ N_i/2 − 1} and we take K = N, the discrete Fourier transform for u(x_n, ω) can be evaluated swiftly by a d-dimensional FFT,

    u(x_n, ω) = FFT(√λ ∘ ξ)_n.    (2)

However, we still have to compute the n^d-sized tensor √λ ∘ ξ, with n = max_i N_i, and the FFT complexity O(n^d log n) scales exponentially in d. This "curse of dimensionality" can be mitigated by low-rank tensor approximations.
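As an illustrative sketch (grid size, correlation length, and the Matérn-like density below are assumptions, not the paper's code), the discrete synthesis u = Re FFT(√λ ∘ ξ) takes only a few lines of NumPy when the spectral density is known analytically:

```python
# Sketch of the spectral synthesis u = Re FFT(sqrt(lam) ∘ xi) on a regular
# 2-D grid; all parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 256                              # grid points per dimension (d = 2)
nu, ell = 0.5, 0.05                  # roughness and correlation length

# Integer wavenumbers of the trigonometric basis on the periodic unit domain.
k = np.fft.fftfreq(n, d=1.0 / n)
K1, K2 = np.meshgrid(k, k, indexing="ij")

# Non-negative Matérn-type spectral density ~ (2 nu / ell^2 + |k|^2)^-(nu + d/2),
# so positive definiteness of the covariance is guaranteed by construction;
# normalized to unit total spectral mass.
lam = (2 * nu / ell**2 + K1**2 + K2**2) ** (-(nu + 1.0))
lam /= lam.sum()

# Uncorrelated standard Gaussian coefficients and the FFT synthesis.
xi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = np.real(np.fft.fft2(np.sqrt(lam) * xi))
```

Under this normalization the field has pointwise variance Σ_k λ_k = 1; in higher dimensions the same pattern applies with `np.fft.fftn`.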

Low-rank tensor approximation
We use low-rank tensors to reduce the complexity of the tensor operations in Eq. (2), opting for the tensor train (TT) format [2] for its robustness in rank truncation. For a tensor V ∈ C^{n_1 × ··· × n_d}, the TT approximation reads

    V(n_1, …, n_d) ≈ Σ_{α_1, …, α_{d−1}} v^{(1)}(n_1, α_1) v^{(2)}(α_1, n_2, α_2) ⋯ v^{(d)}(α_{d−1}, n_d),

a sum of tensor products (⊗) of slices of the cores v^{(j)}. Since the d-dimensional Fourier transform factorizes over tensor products, the FFT can be applied core-wise along the mode index of each v^{(j)}. This reduces the complexity to O(d n r^2 log n), which scales only linearly in d.
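A minimal numerical check of the identity behind the core-wise FFT (an assumed example, not the paper's implementation): the d-dimensional FFT of an elementary, rank-one tensor equals the tensor product of the 1-D FFTs of its factors.

```python
# Verify that a d-dimensional FFT acts factor-wise on an elementary (rank-one)
# tensor, which is why the FFT of a TT tensor can be applied core by core.
import numpy as np

rng = np.random.default_rng(1)
n = 16
a, b, c = (rng.standard_normal(n) for _ in range(3))

# Full 3-D tensor V = a ⊗ b ⊗ c and its d-dimensional FFT: O(n^d log n) work.
V = np.einsum("i,j,k->ijk", a, b, c)
full_fft = np.fft.fftn(V)

# Factor-wise FFT: transform each 1-D factor separately, O(d n log n) here.
factored_fft = np.einsum("i,j,k->ijk", np.fft.fft(a), np.fft.fft(b), np.fft.fft(c))

assert np.allclose(full_fft, factored_fft)
```

For a general TT tensor each three-way core is transformed along its mode axis, e.g. `np.fft.fft(core, axis=1)`, giving the O(d n r^2 log n) total cost.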

Section 15: Uncertainty quantification

We tabulate the complexities of the tensor operations (FFT and Hadamard multiplication) for tensors in full format and in TT format, together with their memory storage sizes, in Table 1. For the TT format these scale only linearly in d and at most quasilinearly in n, provided the rank r is kept small. This makes the format particularly favourable for higher-dimensional problems.

Table 1. Complexities of tensor operations and storage (n points per dimension, dimension d, TT rank r).

    Format        FFT                 Hadamard multiplication    Storage size
    Full tensor   O(d n^d log n)      O(n^d)                     O(n^d)
    TT format     O(d n r^2 log n)    O(d n r^4)                 O(d n r^2)

Another issue is the efficient construction of the low-rank tensors λ and ξ. For λ the construction is based on the fact that the Matérn spectral density and its square root can be expressed by a Laplace integral, wherein the separability of the exponential function paves the way to a tensor product representation. This was established in [4] and is further extended to more general cases in this work. A fast construction of the entry-wise uncorrelated tensor ξ is developed in this work through a probabilistic factorization of Gaussian random variables; the construction renders ξ jointly Gaussian as the rank r increases. Details on these aspects can be found in [3].
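The Laplace-integral route to a separable representation can be sketched as follows (an illustration under assumed parameter values, not the construction of [4] verbatim): using x^{−s} = (1/Γ(s)) ∫_0^∞ t^{s−1} e^{−x t} dt and a trapezoidal quadrature in τ = log t, a Matérn-like density λ(ω) = (c + |ω|^2)^{−s} becomes a short sum of terms that factorize across dimensions, since e^{−t(c + ω_1^2 + ω_2^2)} = e^{−tc} e^{−t ω_1^2} e^{−t ω_2^2}.

```python
# Separable (low-rank) approximation of lam(w) = (c + |w|^2)^(-s) via the
# Laplace integral x^(-s) = 1/Gamma(s) * int_0^inf t^(s-1) e^(-x t) dt,
# discretized by the trapezoidal rule after substituting t = e^tau.
# Parameter values are illustrative assumptions.
import math
import numpy as np

s, c = 1.5, 1.0
h = 0.25
tau = np.arange(-30.0, 6.0, h)
t = np.exp(tau)
w_quad = h * t**s / math.gamma(s)      # quadrature weights (dt = t dtau)

w1 = np.linspace(-5.0, 5.0, 64)        # 1-D frequency grid per dimension
W1, W2 = np.meshgrid(w1, w1, indexing="ij")

# Direct evaluation on the 2-D grid versus the separable rank-K sum.
lam_direct = (c + W1**2 + W2**2) ** (-s)
factors = np.exp(-t[:, None] * w1[None, :] ** 2)           # K x n per dimension
lam_lowrank = np.einsum("k,ki,kj->ij", w_quad * np.exp(-t * c), factors, factors)

err = np.max(np.abs(lam_lowrank - lam_direct)) / lam_direct.max()
assert err < 1e-6
```

The rank here equals the number of quadrature nodes; in practice the exponential sum is compressed to a much smaller rank (the ranks used later are around 20).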

Numerical results
Numerical tests were made to verify the efficiency of the method. We use the Matérn covariance with roughness coefficient ν set to 1/2 and correlation length one twentieth of the domain size. In a 2-dimensional test case we are able to generate large random fields with discretization size up to n = 2^20 within a minute, with a storage size of only 3.5 gigabytes, whereas using full tensors would require 8796 gigabytes of memory, well beyond the capacity of PCs. Here we set the ranks of λ and ξ to 20 and 5 respectively, which gives a reasonable balance between computational cost and approximation accuracy.
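A quick back-of-envelope check of the quoted storage figures, assuming double-precision (8-byte) entries:

```python
# Full 2-D tensor with n = 2**20 points per dimension, float64 entries.
n, d, r = 2**20, 2, 20
full_gb = n**d * 8 / 1e9
print(full_gb)          # about 8796 gigabytes, matching the figure in the text

# Generic TT storage bound of O(d * n * r^2) entries: a few gigabytes,
# the same order of magnitude as the 3.5 GB reported.
tt_gb = d * n * r**2 * 8 / 1e9
print(tt_gb)
```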
Below we illustrate the effect of varying the rank of λ. We make one realisation of the Gaussian random field with n = 2^11 and the rank of ξ fixed to 5, and represent the field with the rank of λ set to 5, 10 and 20. The resulting fields are shown in Figure 1. One observes that with increasing rank the representation approaches the degree of "roughness" specified by the coefficient ν. It is justified in [3] that a rank of 20 is sufficient for accuracy.