Efficient spherical harmonic transforms aimed at pseudospectral numerical simulations


ISTerre, Université de Grenoble 1, CNRS, Grenoble, France. (nathanael.schaeffer@ujf-grenoble.fr)


[1] In this paper, we report on very efficient algorithms for the spherical harmonic transform (SHT). Explicitly vectorized variations of the algorithm based on the Gauss-Legendre quadrature are discussed and implemented in the SHTns library, which includes scalar and vector transforms. The main breakthrough is to achieve very efficient on-the-fly computations of the associated Legendre functions, even for very high resolutions, by taking advantage of the specific properties of the SHT and the advanced capabilities of current and future computers. This allows us to simultaneously and significantly reduce memory usage and computation time of the SHT. We measure the performance and accuracy of our algorithms. Although the complexity of the algorithms implemented in SHTns is O(N³) (where N is the maximum harmonic degree of the transform), they perform much better than any third-party implementation, including lower-complexity algorithms, even for truncations as high as N = 1023. SHTns is available at https://bitbucket.org/nschaeff/shtns as open source software.

1 Introduction

[2] Spherical harmonics are the eigenfunctions of the Laplace operator on the 2-sphere. They form a basis and are useful and convenient to describe data on a sphere in a consistent way in spectral space. Spherical harmonic transforms (SHT) are the spherical counterpart of the Fourier transform, casting spatial data to the spectral domain and vice versa. They are commonly used in various pseudospectral direct numerical simulations in spherical geometry, for simulating the Sun or the liquid core of the Earth among others [Glatzmaier, 1984; Sakuraba, 1999; Christensen et al., 2001; Brun & Rempel, 2009; Wicht & Tilgner, 2010].

[3] All numerical simulations that take advantage of spherical harmonics use the classical Gauss-Legendre algorithm (see section 2), with complexity O(N³) for a truncation at spherical harmonic degree N. As a consequence of this high computational cost when N increases, high-resolution spherical codes currently spend most of their time performing SHT. A few years ago, state-of-the-art numerical simulations used N = 255 [Sakuraba & Roberts, 2009].

[4] However, several asymptotically fast algorithms exist [Driscoll & Healy, 1994; Potts et al., 1998; Mohlenkamp, 1999; Suda & Takami, 2002; Healy et al., 2003; Tygert, 2008], but the overhead of these fast algorithms is such that their authors do not claim them to be effectively faster for N < 512. In addition, some of them lack stability (the error becomes too large even for moderate N) and flexibility (e.g., N + 1 must be a power of 2).

[5] Among the asymptotically fast algorithms, only two have open-source implementations, and the only one that seems to perform reasonably well is SpharmonicKit, based on the algorithms described by Healy et al. [2003]. Its main drawback is the need for a latitudinal grid of size 2(N + 1), while the Gauss-Legendre quadrature allows the use of only N + 1 collocation points. Thus, even if it were as fast as the Gauss-Legendre approach for the same truncation N, the overall numerical simulation would be slower because it would operate on twice as many points. These facts explain why the Gauss-Legendre algorithm is still the most efficient solution for numerical simulations.

[6] A recent paper [Dickson et al., 2011] reports that carefully tuned software could run nine times faster on the same CPU than the initial, non-optimized version, and stresses the importance of vectorization and careful optimization of the code. As the goal of this work is to speed up numerical simulations, we have written a highly optimized and explicitly vectorized version of the Gauss-Legendre SHT algorithm. The next section recalls the basics of spherical harmonic transforms. We then describe the optimizations we use and compare the performance of our transform to other SHT implementations. We conclude with a short summary and perspectives for future developments.

2 Spherical Harmonic Transform

2.1 Definitions and Properties

[7] The orthonormalized spherical harmonics of degree n and order − n ≤ m ≤ n are functions defined on the sphere as:

$$Y_n^m(\theta, \phi) = P_n^m(\cos\theta)\, e^{im\phi} \qquad (1)$$

where θ is the colatitude, φ is the longitude, and P_n^m are the associated Legendre polynomials normalized for spherical harmonics

$$P_n^m(x) = \sqrt{\frac{2n+1}{4\pi}\, \frac{(n-|m|)!}{(n+|m|)!}}\; \left(1 - x^2\right)^{|m|/2}\, \frac{d^{|m|}}{dx^{|m|}} P_n(x) \qquad (2)$$

which involve derivatives of Legendre polynomials Pn(x) defined by the following recurrence:

$$(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x), \qquad P_0(x) = 1, \quad P_1(x) = x$$
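As a quick numerical sanity check (not part of the paper), this recurrence can be verified against NumPy's own Legendre evaluation:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_P(nmax, x):
    # Evaluate P_0..P_nmax at x via the Bonnet recurrence
    # (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
    P = [np.ones_like(x), x]
    for n in range(1, nmax):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return np.array(P[:nmax + 1])

x = np.linspace(-1.0, 1.0, 7)
P = legendre_P(5, x)
# Cross-check each degree against NumPy's Legendre-series evaluation.
for n in range(6):
    c = np.zeros(n + 1); c[n] = 1.0
    assert np.allclose(P[n], legval(x, c))
```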

The spherical harmonics Y_n^m form an orthonormal basis for functions defined on the sphere:

$$\int_0^{\pi}\!\! \int_0^{2\pi} Y_n^m(\theta, \phi)\; \overline{Y_l^k(\theta, \phi)}\; \sin\theta\, d\phi\, d\theta = \delta_{nl}\, \delta_{mk} \qquad (3)$$

with δij the Kronecker symbol. By construction, they are eigenfunctions of the Laplace operator on the unit sphere:

$$\nabla^2 Y_n^m = -n(n+1)\, Y_n^m \qquad (4)$$

This property is very appealing for solving many physical problems in spherical geometry involving the Laplace operator.

2.2 Synthesis or Inverse Transform

[8] The spherical harmonic synthesis is the evaluation of the sum

$$f(\theta, \phi) = \sum_{n=0}^{N} \sum_{m=-n}^{n} f_n^m\, Y_n^m(\theta, \phi) \qquad (5)$$

up to degree n = N, given the complex coefficients f_n^m. If f(θ,φ) is a real-valued function, then $f_n^{-m} = (-1)^m\, \overline{f_n^m}$, where $\overline{z}$ stands for the complex conjugate of z.

[9] The sums can be exchanged, and using the expression of Y_n^m we can write

$$f(\theta, \phi) = \sum_{m=-N}^{N} \left( \sum_{n=|m|}^{N} f_n^m\, P_n^m(\cos\theta) \right) e^{im\phi} \qquad (6)$$

From this last expression, it appears that the summation over m is a regular Fourier transform. Hence, the remaining task is to evaluate

$$f_m(\theta) = \sum_{n=|m|}^{N} f_n^m\, P_n^m(\cos\theta) \qquad (7)$$

or its discrete version at given collocation points θj.

2.3 Analysis or Forward Transform

[10] The analysis step of the SHT consists in computing the coefficients

$$f_n^m = \int_0^{\pi}\!\! \int_0^{2\pi} f(\theta, \phi)\; \overline{Y_n^m(\theta, \phi)}\; \sin\theta\, d\phi\, d\theta \qquad (8)$$

The integral over φ is obtained using the Fourier transform:

$$f_m(\theta) = \frac{1}{2\pi} \int_0^{2\pi} f(\theta, \phi)\, e^{-im\phi}\, d\phi \qquad (9)$$

so the remaining Legendre transform reads

$$f_n^m = 2\pi \int_0^{\pi} f_m(\theta)\, P_n^m(\cos\theta)\, \sin\theta\, d\theta \qquad (10)$$

The discrete problem reduces to finding an appropriate quadrature rule to evaluate the integral (10) knowing only the values f_m(θ_j). In particular, the Gauss-Legendre quadrature replaces the integral of expression (10) by the sum

$$f_n^m = 2\pi \sum_{j=1}^{N_\theta} w_j\, f_m(\theta_j)\, P_n^m(\cos\theta_j) \qquad (11)$$

where θ_j and w_j are, respectively, the Gauss nodes and weights [Temme, 2011]. Note that the sum equals the integral if $f_m(\theta)\, P_n^m(\cos\theta)$ is a polynomial in cos θ of order 2N_θ − 1 or less. If f_m(θ) is given by expression (7), then $f_m(\theta)\, P_n^m(\cos\theta)$ is always a polynomial in cos θ, of degree at most 2N. Hence, the Gauss-Legendre quadrature is exact for N_θ ≥ N + 1.
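The exactness property of the quadrature is easy to check numerically; the following sketch (ours, using NumPy's `leggauss`) integrates a degree-15 polynomial exactly with only 8 nodes:

```python
import numpy as np

# Gauss-Legendre quadrature with Nq nodes integrates polynomials of degree
# up to 2*Nq - 1 exactly, which is why Ntheta = N + 1 nodes suffice above.
Nq = 8
x, w = np.polynomial.legendre.leggauss(Nq)
p = np.polynomial.Polynomial(np.arange(1.0, 2 * Nq + 1))  # degree 2*Nq - 1 = 15
exact = p.integ()(1.0) - p.integ()(-1.0)                  # exact integral on [-1, 1]
assert abs(np.dot(w, p(x)) - exact) < 1e-10
```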

[11] A discrete spherical harmonic transform using Gauss nodes as latitudinal grid points and a Gauss-Legendre quadrature for the analysis step is referred to as a Gauss-Legendre algorithm.
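To make the Gauss-Legendre algorithm concrete, here is a minimal, unoptimized scalar transform pair in Python/NumPy (a sketch only, not the SHTns C implementation; the function names `legendre_nm`, `synthesis`, and `analysis` are ours). It computes the orthonormalized P_n^m on the fly with a standard stable recurrence, uses a real FFT in longitude and the Gauss-Legendre quadrature in latitude, and checks that a backward-forward round trip recovers random coefficients to machine precision:

```python
import numpy as np

def legendre_nm(N, m, x):
    # Orthonormalized associated Legendre functions P_n^m(x) for n = m..N,
    # computed on the fly with the standard stable three-term recurrence.
    x = np.asarray(x, dtype=float)
    P = np.zeros((N + 1, x.size))
    pmm = np.full(x.size, 1.0 / np.sqrt(4.0 * np.pi))  # P_0^0
    for k in range(1, m + 1):                          # build up to P_m^m
        pmm *= np.sqrt((2.0 * k + 1.0) / (2.0 * k)) * np.sqrt(1.0 - x**2)
    P[m] = pmm
    for n in range(m + 1, N + 1):                      # upward recurrence in n
        a = np.sqrt((4.0 * n**2 - 1.0) / (n**2 - m**2))
        P[n] = a * x * P[n - 1]
        if n > m + 1:
            b = np.sqrt((2.0 * n + 1.0) * ((n - 1)**2 - m**2)
                        / ((2.0 * n - 3.0) * (n**2 - m**2)))
            P[n] -= b * P[n - 2]
    return P

def synthesis(flm, N, x, Nphi):
    # Inverse transform: coefficients flm[n, m] (m >= 0, real field) to grid.
    F = np.zeros((x.size, Nphi // 2 + 1), dtype=complex)
    for m in range(N + 1):
        P = legendre_nm(N, m, x)
        F[:, m] = (flm[m:, m] @ P[m:]) * Nphi   # f_m(theta_j), scaled for irfft
    return np.fft.irfft(F, n=Nphi, axis=1)      # Fourier sum over m

def analysis(f, N, x, w):
    # Forward transform: real FFT in phi, then Gauss-Legendre quadrature.
    Nphi = f.shape[1]
    F = np.fft.rfft(f, axis=1) / Nphi           # f_m(theta_j)
    flm = np.zeros((N + 1, N + 1), dtype=complex)
    for m in range(N + 1):
        P = legendre_nm(N, m, x)
        flm[m:, m] = 2.0 * np.pi * (P[m:] * w) @ F[:, m]
    return flm

# Round trip with random coefficients: N = 15, Ntheta = N + 1 Gauss nodes.
rng = np.random.default_rng(3)
N = 15
x, w = np.polynomial.legendre.leggauss(N + 1)   # x_j = cos(theta_j), weights w_j
flm = np.zeros((N + 1, N + 1), dtype=complex)
for m in range(N + 1):
    flm[m:, m] = rng.uniform(-1, 1, N + 1 - m) + 1j * rng.uniform(-1, 1, N + 1 - m)
flm[:, 0] = flm[:, 0].real                      # m = 0 coefficients of a real field
f = synthesis(flm, N, x, Nphi=2 * (N + 1))
flm2 = analysis(f, N, x, w)
assert np.max(np.abs(flm2 - flm)) < 1e-12
```

Only the m ≥ 0 half-spectrum is stored, relying on the Hermitian symmetry of real fields discussed in section 3.1.2; the quadrature is exact here because the integrand has degree at most 2N ≤ 2N_θ − 1.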

3 Optimization of the Gauss-Legendre Algorithm

3.1 Standard Optimizations

[12] Let us first recall some standard optimizations found in almost every serious implementation of the Gauss-Legendre algorithm. All the following optimizations are used in the SHTns library.

3.1.1 Use the Fast-Fourier Transform

[13] The expressions in section 2 show that part of the SHT is in fact a Fourier transform. The fast Fourier transform (FFT) should be used for this part, as it improves accuracy and speed. SHTns uses the FFTW library [Frigo & Johnson, 2005], a portable, flexible, and highly efficient FFT implementation.

3.1.2 Take Advantage of Hermitian Symmetry for Real Data

[14] When dealing with real-valued data, the spectral coefficients fulfill $f_n^{-m} = (-1)^m\, \overline{f_n^m}$, so we only need to store them for m ≥ 0. This also allows the use of faster real-valued FFTs.
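In miniature, with NumPy's real FFT (an illustration, not SHTns code):

```python
import numpy as np

# For a real signal, Fourier coefficients obey F[-m] = conj(F[m]), so a
# real-valued FFT (rfft) stores only m = 0..Nphi/2: half the work and memory.
f = np.random.default_rng(0).standard_normal(16)
F_full = np.fft.fft(f)
F_half = np.fft.rfft(f)                              # length 16 // 2 + 1 = 9
assert np.allclose(F_half, F_full[:9])
assert np.allclose(F_full[15], np.conj(F_full[1]))   # F[-1] == conj(F[1])
```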

3.1.3 Take Advantage of Mirror Symmetry

[15] Thanks to the symmetry of spherical harmonics with respect to a reflection about the equator,

$$P_n^m(\cos(\pi - \theta)) = (-1)^{n+m}\, P_n^m(\cos\theta),$$

one can reduce by a factor of 2 the operation count of both forward and inverse transforms.
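A small sketch (ours) of the even/odd splitting behind this saving, on a latitudinal grid symmetric about the equator:

```python
import numpy as np

# On a grid with theta_{Nt-1-j} = pi - theta_j, any field splits into
# equatorially symmetric and antisymmetric parts; since
# P_n^m(cos(pi - theta)) = (-1)^(n+m) P_n^m(cos theta), each spectral mode
# touches only one part, so the Legendre sums run over half the latitudes.
rng = np.random.default_rng(1)
f = rng.standard_normal(8)            # samples from north to south
fs = 0.5 * (f + f[::-1])              # symmetric part
fa = 0.5 * (f - f[::-1])              # antisymmetric part
# Northern-hemisphere values of fs and fa suffice to rebuild the full field:
north = fs[:4] + fa[:4]
south = fs[:4][::-1] - fa[:4][::-1]
assert np.allclose(np.concatenate([north, south]), f)
```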

3.1.4 Precompute Values of P_n^m(cos θ_j)

[16] The coefficients P_n^m(cos θ_j) appear in both synthesis and analysis expressions (7) and (10), and can be precomputed and stored for all (n, m, j). When performing multiple transforms, this avoids recomputing the Legendre recursion at every transform and saves some computing power, at the expense of memory bandwidth. This may or may not be efficient, as we will discuss later.

3.1.5 Polar Optimization

[17] The magnitude of high-order spherical harmonics decreases exponentially when approaching the poles, as shown in Figure 1. Hence, the integral of expression (10) can be reduced to

$$f_n^m = 2\pi \int_{\theta_0^{mn}}^{\pi - \theta_0^{mn}} f_m(\theta)\, P_n^m(\cos\theta)\, \sin\theta\, d\theta \qquad (12)$$

where θ_0^{mn} is a threshold below which P_n^m is considered to be zero. Similarly, the synthesis of f_m(θ) (equation (7)) is only needed for θ_0^{mn} ≤ θ ≤ π − θ_0^{mn}. SHTns uses a threshold that does not depend on n, which leads to around a 5% to 20% speed increase, depending on the desired accuracy and the truncation N.
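The polar decay is easy to quantify from the starting value of the recurrence, P_m^m ∝ sin^m θ (a back-of-the-envelope check, ours, matching the orders plotted in Figure 1):

```python
import numpy as np

# The starting value P_m^m is proportional to sin(theta)^m, which controls
# the polar decay: for m = 36 at colatitude 10 degrees, sin(theta)^m is
# already below 1e-27, hence safely treatable as zero in the sums.
theta = np.radians(10.0)
m = 36
assert np.sin(theta) ** m < 1e-27
```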

Figure 1.

Two associated Legendre polynomials of degree n = 40 and order m = 33 (blue) and m = 36 (red), showing the localization near the equator.

3.2 On-the-Fly Algorithms and Vectorization

[18] It can be shown that the P_n^m(x) can be computed recursively by

$$P_m^m(x) = \frac{1}{\sqrt{4\pi}} \sqrt{\prod_{k=1}^{|m|} \frac{2k+1}{2k}}\; \left(1 - x^2\right)^{|m|/2} \qquad (13)$$

$$P_{m+1}^m(x) = a_{m+1}^m\, x\, P_m^m(x) \qquad (14)$$

$$P_n^m(x) = a_n^m\, x\, P_{n-1}^m(x) + b_n^m\, P_{n-2}^m(x) \qquad (15)$$

with coefficients

$$a_n^m = \sqrt{\frac{4n^2 - 1}{n^2 - m^2}} \qquad (16)$$

$$b_n^m = -\sqrt{\frac{2n+1}{2n-3}\; \frac{(n-1)^2 - m^2}{n^2 - m^2}} \qquad (17)$$

The coefficients a_n^m and b_n^m do not depend on x and can be easily precomputed and stored in an array of (N + 1)² values. This has to be compared to the order-N³ values of P_n^m(cos θ_j), which are usually precomputed and stored in the spherical harmonic transforms implemented in numerical simulations. The amount of memory required to store all P_n^m(cos θ_j) in double precision is at least 2(N + 1)³ bytes, which gives 2 GB for N = 1023. Our on-the-fly algorithm only needs about 8(N + 1)² bytes of storage (the same size as a spectral representation f_n^m), that is, 8 MB for N = 1023. When N becomes very large, it is no longer possible to store P_n^m(cos θ_j) in memory (for N larger than a few thousand on current machines), and on-the-fly algorithms (which recompute P_n^m(cos θ_j) from the recurrence relation when needed) are then the only possibility.
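The storage figures quoted above can be checked directly (a back-of-the-envelope sketch, ours):

```python
# Storage comparison for N = 1023, in bytes (8 bytes per double):
N = 1023
precomputed = 2 * (N + 1) ** 3    # all P_n^m(cos theta_j): ~2 GB
on_the_fly = 8 * (N + 1) ** 2     # only recurrence-sized coefficient arrays: ~8 MB
assert precomputed == 2 * 1024 ** 3           # exactly 2 GiB
assert precomputed // on_the_fly == 256       # a 256x memory saving at N = 1023
```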

[19] We would like to stress that even far from that storage limit, on-the-fly algorithms can be significantly faster thanks to the vector capabilities of modern processors. Most desktop and laptop computers, as well as many high-performance computing clusters, support single-instruction, multiple-data (SIMD) operations in double precision. The SSE2 instruction set has been available since 2000 and is currently supported by almost every PC; it allows the same arithmetic operation to be performed on a vector of two double-precision numbers, effectively doubling the computing power. The recently introduced AVX instruction set increases the vector size to four double-precision numbers. This means that P_n^m(x) can be computed from the recursion relation (15) (which requires three multiplications and one addition) for two or four values of x simultaneously, which may be faster than loading precomputed values from memory. Hence, as already pointed out by Dickson et al. [2011], it is very important to use the vector capabilities of modern processors to address their full computing power. Furthermore, when running multiple transforms on the different cores of a computer, the performance of on-the-fly transforms (which use less memory bandwidth) scales much better than that of algorithms with precomputed matrices, because the memory bandwidth is shared between cores. Superscalar architectures that do not have double-precision SIMD instructions but have many computation units per core (like the POWER7 or SPARC64) could also benefit from on-the-fly transforms by saturating the many computation units with independent computations (at different x).

[20] Figure 2 shows the benefit of explicit vectorization of on-the-fly algorithms on an Intel Xeon E5-2680 (Sandy Bridge architecture with the AVX instruction set, running at 2.7 GHz) and compares on-the-fly algorithms with algorithms based on precomputed matrices. With the four-element vectors of AVX, the fastest algorithm is always on the fly, while with two-element vectors, the fastest algorithm uses precomputed matrices up to moderate truncations. In the forthcoming years, wider vector architectures are expected to become widely available, and the benefits of on-the-fly vectorized transforms will become even more important.

Figure 2.

Efficiency (N + 1)^3/(2tf) of various algorithms, where t is the execution time and f the frequency of the Xeon E5-2680 CPU (2.7 GHz). On-the-fly algorithms with two different vector sizes are compared with the algorithm using precomputed matrices. Note the influence of hardware vector size for on-the-fly algorithms (AVX vectors pack four double-precision floating point numbers where SSE3 vectors pack only two). The efficiency of the algorithm based on precomputed matrices drops above N = 127, probably due to cache size limitations.

3.2.1 Runtime Tuning

[21] We now have two different algorithms available: one uses precomputed values of P_n^m(cos θ_j), and the other computes them on the fly at each transform. At startup, the SHTns library compares the time taken by these algorithms (and variants) and chooses the fastest, similarly to what the FFTW library [Frigo & Johnson, 2005] does. The time overhead of this runtime tuning can be several orders of magnitude larger than that of a single transform. The observed performance gain varies between 10% and 30%. This is significant for numerical simulations, but runtime tuning can be entirely skipped by applications performing only a few transforms, in which case there is no noticeable overhead.
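A toy illustration of the tuning idea (the candidate names and kernels below are made up; SHTns times its real transform variants):

```python
import timeit

# Runtime tuning in miniature: benchmark each candidate kernel briefly at
# startup and keep the fastest, in the spirit of SHTns and FFTW planning.
candidates = {
    "on_the_fly": lambda: sum(i * i for i in range(1000)),
    "precomputed": lambda: sum([i * i for i in range(1000)]),
}
timings = {name: timeit.timeit(fn, number=200) for name, fn in candidates.items()}
best = min(timings, key=timings.get)
assert best in candidates
```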

3.3 Multithreaded Transform

[22] Modern computers have several computing cores. We use OpenMP to implement a multithreaded algorithm for the Legendre transform, including the above optimizations and the on-the-fly approach. The lower memory bandwidth requirement of the on-the-fly approach is an asset for a multithreaded transform: with precomputed matrices, each thread reading a different portion of a large matrix can saturate the memory bus very quickly. The multithreaded Fourier transform is left to the FFTW library.

[23] We need to decide how to share the work between threads. Because we compute the P_n^m(cos θ_j) on the fly using the recurrence relation (15), each thread can handle either different values of θ or different values of m. As the analysis step involves a sum over θ, we choose the latter option.

[24] From equation (7), we see that the number of terms involved in the sum depends on m, so the computing cost also depends on m. To achieve the best workload balance among a team of p threads, thread number i (0 ≤ i < p) handles m = i + kp ≤ N, with integer k ranging from 0 to about (N + 1)/p.
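A quick check (ours) that this round-robin assignment of orders m balances the work well:

```python
# Round-robin distribution of azimuthal orders m over p threads: thread i
# handles m = i, i + p, i + 2p, ... <= N. The Legendre sum for order m has
# N + 1 - m terms, and interleaving keeps the per-thread totals close.
N, p = 255, 8
work = [sum(N + 1 - m for m in range(i, N + 1, p)) for i in range(p)]
assert sum(work) == (N + 1) * (N + 2) // 2          # every mode handled once
assert (max(work) - min(work)) / max(work) < 0.06   # imbalance only ~5%
```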

[25] For different numbers of threads p, we have measured the times Ts(p) and Ta(p) needed for a scalar spherical harmonic synthesis and analysis, respectively (including the FFT).

[26] Figure 3 shows the speedup T(1)/T(p), where T(p) is the larger of Ts(p) and Ta(p), and T(1) is the time of the fastest single-threaded transform. It shows that there is no point in performing a parallel transform for N below 128. The speedup is good for N = 255 and above, and excellent up to eight threads for large transforms, or up to 16 threads for very large transforms (N ≥ 2047).

Figure 3.

Speedup obtained with multiple threads using OpenMP (gcc 4.6.3) on a 16-core Intel Xeon E5-2680 (Sandy Bridge architecture with AVX instruction set running at 2.7 GHz).

3.4 Performance Comparisons

[27] Table 1 reports timing measurements of two SHT libraries, compared to the optimized Gauss-Legendre implementation found in the SHTns library (this work). We compare with the Gauss-Legendre implementation of libpsht [Reinecke, 2011], a parallel spherical harmonic transform library targeting very large N, and with SpharmonicKit 2.7 (DH), which implements one of the Driscoll-Healy fast algorithms [Healy et al., 2003]. All timings are for a complete SHT, which includes the fast Fourier transform. Note that the Gauss-Legendre algorithm is by far (a factor of order 2) the fastest algorithm in the libpsht library. Note also that SpharmonicKit is limited to N + 1 being a power of 2, requires 2(N + 1) latitudinal collocation points, and crashed for N = 2047. The software library implementing the fast Legendre transform described by Mohlenkamp [1999], libftsh, has also been tested and found to be of comparable performance to SpharmonicKit, although the comparison is not straightforward because libftsh did not include the Fourier transform. Again, that fast library could not operate at N = 2047 because of memory limitations. Note finally that these measurements were performed on a machine that did not support the new AVX instruction set.

Table 1. Comparison of Execution Time for Different SHT Implementations(1)

  N                     63       127      255     511     1023    2047    4095
  libpsht (1 thread)    1.05 ms  4.7 ms   27 ms   162 ms  850 ms  4.4 s   30.5 s
  DH (fast)             1.1 ms   5.5 ms   21 ms   110 ms  600 ms  NA      NA
  SHTns (1 thread)      0.09 ms  0.60 ms  4.2 ms  28 ms   216 ms  1.6 s   11.8 s

  1. The numbers correspond to the average execution time for forward and backward scalar transforms (including the FFT) on an Intel Xeon X5650 (2.67 GHz) with 12 cores. The programs were compiled with gcc 4.4.5 and the -O3 -march=native -ffast-math compilation options.

[28] To ease the comparison, we define the efficiency of the SHT as (N + 1)^3/(2Tf), where T is the execution time (reported in Table 1) and f the frequency of the CPU. Note that (N + 1)^3/2 reflects the number of computation elements of a Gauss-Legendre algorithm [the number of modes (N + 1)(N + 2)/2 times the number of latitudinal points N + 1]. An efficiency that does not depend on N corresponds to an algorithm with execution time proportional to N^3.
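Applying this metric to the N = 1023 SHTns timing of Table 1 (using the 216 ms and 2.67 GHz figures quoted above):

```python
# Efficiency (N+1)^3 / (2 T f) for SHTns at N = 1023: T = 216 ms on a
# 2.67 GHz Xeon X5650 gives nearly one computation element per clock cycle.
N, T, f = 1023, 216e-3, 2.67e9
eff = (N + 1) ** 3 / (2 * T * f)
assert 0.9 < eff < 1.0
```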

[29] The efficiency of the tested algorithms is displayed in Figure 4. Not surprisingly, the Driscoll-Healy implementation has the largest slope, which means that its efficiency grows fastest with N, as expected for a fast algorithm. It also performs slightly better than libpsht for N ≥ 511. However, even for N = 1023 (the largest size it can compute), it is still 2.8 times slower than the Gauss-Legendre algorithm implemented in SHTns. Remarkably, SHTns achieves an efficiency very close to 1, meaning that almost one element is computed per clock cycle for N = 511 and N = 1023. Overall, SHTns is between 2 and 10 times faster than the best alternative.

Figure 4.

Efficiency (N + 1)^3/(2Tf) of the implementations from Table 1, where T is the execution time and f the frequency of the Xeon X5650 CPU (2.67 GHz) with 12 cores.

3.5 Accuracy

[30] One cannot write about an SHT implementation without addressing its accuracy. The Gauss-Legendre quadrature ensures very good accuracy, at least on par with other high-quality implementations.

[31] The recurrence relation we use (see section 3.2) is numerically stable, but for large m, the starting value P_m^m(x) can become so small that it cannot be represented by a double-precision number anymore. To avoid this underflow problem, the code dynamically rescales the values of P_n^m(x) during the recursion when they reach a given threshold. The number of rescalings is stored in an integer, which acts as an extended exponent. Our implementation of the rescaling does not impact performance negatively, as it is compensated by dynamic polar optimization: these very small values are treated as zero in the transform (equations (7) and (11)), but not in the recurrence. This technique ensures good accuracy up to N = 8191 at least, and partial transforms have been performed successfully up to N = 43,600.

[32] To quantify the error, we start with random spherical harmonic coefficients f_n^m with real and imaginary parts uniformly distributed between −1 and +1. After a backward and forward transform (with orthonormal spherical harmonics), we compare the resulting coefficients $\tilde{f}_n^m$ with the originals f_n^m. We use two different error measurements: the maximum error is defined as

$$\epsilon_{\max} = \max_{n,m} \left| f_n^m - \tilde{f}_n^m \right|$$

while the root mean square (rms) error is defined as

$$\epsilon_{\mathrm{rms}} = \sqrt{\frac{2}{(N+1)(N+2)} \sum_{n,m} \left| f_n^m - \tilde{f}_n^m \right|^2}$$

The error measurements for our on-the-fly Gauss-Legendre implementation with the default polar optimization and for various truncation degrees N are shown in Figure 5. The errors increase steadily with N and are comparable to those of other implementations. For N < 2048, we have ε_max < 10^{-11}, which is negligible compared to other sources of error in most numerical simulations.
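The two error measures can be written compactly as follows (`errors` is our helper name, not SHTns API; the arrays hold one complex entry per (n, m) mode, so the mean matches the 2/((N+1)(N+2)) normalization):

```python
import numpy as np

def errors(f, f2):
    # Maximum and root-mean-square error between original and
    # round-tripped coefficient arrays (one entry per (n, m) mode).
    d = np.abs(np.asarray(f) - np.asarray(f2))
    return d.max(), np.sqrt(np.mean(d ** 2))

orig = np.array([1.0 + 0.5j, -0.25j, 0.125 + 0.0j])
roundtrip = orig + np.array([1e-14, -1e-14j, 1e-15])
emax, erms = errors(orig, roundtrip)
assert emax < 2e-14 and erms <= emax
```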

Figure 5.

Accuracy of the on-the-fly Gauss-Legendre algorithm with the default polar optimization.

4 Conclusion and Perspectives

[33] Despite the many fast spherical harmonic transform algorithms that have been published, the few with a publicly available implementation are far from the performance of a carefully written Gauss-Legendre algorithm, as implemented in the SHTns library, even for quite large truncations (N = 1023). Explicitly vectorized on-the-fly algorithms seem able to unleash the computing power of today's and future computers without suffering too much from memory bandwidth limitations, which is an asset for multithreaded transforms.

[34] The SHTns library has already been used in various demanding computations [e.g., Schaeffer et al., 2012; Augier & Lindborg, 2013; Figueroa et al., 2013]. The versatile truncation, the various normalization conventions supported, and the scalar and vector transform routines available for C/C++, Fortran, or Python should suit most of the current and future needs in high-performance computing involving partial differential equations in spherical geometry.

[35] Thanks to the significant performance gain, as well as the much lower memory requirement of vectorized on-the-fly implementations, we should be able to run spectral geodynamo simulations at N = 1023 in the next few years. Such high-resolution simulations will operate in a regime much closer to the dynamics of the Earth's core.


[36] The author thanks Alexandre Fournier and Daniel Lemire for their comments that helped to improve the paper. Some computations were carried out at the Service Commun de Calcul Intensif de l'Observatoire de Grenoble (SCCI), and others were run on the PRACE Research Infrastructure Curie at the TGCC (grant PA1039).