Sub-gridscale parametrization from the perspective of a computer games animator

Abstract

Inspired by the techniques used by animators, it is argued that sub-gridscale parametrizations (such as those representing deep convection and mountain drag) could be computed on a finer grid than the Numerical Weather Prediction (NWP) model and use the same dynamical equations but with reduced-complexity physics and reduced-accuracy numerical techniques. This fine-scale parametrization grid would correctly capture the spatial and temporal correlation scales of the modelled processes as well as non-equilibrium aspects which are usually absent from conventional parametrization schemes (e.g. diurnal cycle in convection). Reducing the accuracy requirement at the fine scale, as well as the algorithmic complexity of multiple interacting physical processes, could bring forward in time some of the benefits of global convective-scale resolution that are beyond current computational resources. © Crown Copyright 2007. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.

1. Introduction

The design of Numerical Weather Prediction (NWP) and climate models has always been constrained by the need for speed so that a numerical forecast can be delivered in a matter of hours. Numerical techniques have been employed that permit the use of large time steps at the expense of some tolerable loss of accuracy. Animators in the computer games industry are confronted with a similar need for speed so that natural physical phenomena can be modelled convincingly from a visual perspective (e.g. representation of cloudscapes in fast-moving flight simulation games). The strong demand for realism in computer simulation of clouds and fluid motion has increasingly led programmers to use the same physical equations as those used to forecast weather and climate. However, in their case accuracy is not the primary issue: visual plausibility and computational speed are the goals.

Current sub-gridscale parametrization techniques used in numerical weather prediction and climate modelling assume local statistical equilibrium or steady-state conditions. For instance, in convection parametrization the supposed existence of a unique relationship between the state of the atmosphere in a grid column and the fluxes carried by the clouds requires there to be a large number of convective clouds in that column. For deep convection this assumption is unrealistic at current NWP resolutions, i.e. horizontal grid lengths of ∼40 km (Shutts and Palmer, 2007). In the Tropics, deep convective cloud systems span hundreds of kilometres and their dynamical influence extends beyond that, making column-based convection parametrization difficult to justify. The mutual independence of neighbouring grid columns, in so far as parametrization is concerned, makes for poor numerics and a failure to capture the long-range influence and organization of cloud systems. These deficiencies are ameliorated somewhat by the tendency of topographic and mesoscale dynamical forcing to define the pattern of parametrized convection.

Tomita et al. (2005) have carried out global atmospheric simulation on the Earth Simulator using a 3.5 km grid without parametrized convection. Its realism in terms of tropical circulation is very encouraging but the computational expense is prohibitive for climate simulation. Also, Moncrieff et al. (2005) have demonstrated that even with a grid as coarse as 10 km, explicitly simulated convection over North America shows many characteristics that are absent from runs with parametrized convection. Grabowski (2000) showed the potential benefits of the ‘super-parametrization’ approach, which embeds a two-dimensional cloud-resolving model (CRM) within each grid column of a coarse-resolution global climate model, dispensing with the need for conventional convective parametrization. The need to end the ‘parametrization deadlock’ using approaches such as Grabowski's super-parametrization was forcefully argued by Randall et al. (2003), although the computational burden is still a barrier.

The approach proposed in this paper is to find a compromise point between the currently unattainable global CRM (with say 500 m horizontal resolution) and a conventional climate model with column-based parametrization. For instance, by replacing deep convection and gravity wave drag parametrization with a fine grid computation of reduced point-wise accuracy, it should be possible to achieve a better overall statistical description of these processes than conventional parametrization. The chances of success are likely to be determined by the skill with which two-way coupling can be achieved between the NWP and fine grids.

In the short term, stochastic physics and backscatter schemes are already providing skill improvements in ensemble forecast systems (Buizza et al., 1999; Shutts, 2005) and in seasonal weather forecasting, where they improve Pacific blocking frequency (Palmer et al., 2005; Jung et al., 2005) and tropical precipitation (Weisheimer, personal communication). It is envisaged that these gains will translate into improved climate simulation.

In the longer term, it is going to be necessary to partially resolve much of what is currently parametrized. A dual-grid approach will be described in which the currently parametrized physical processes are explicitly represented on a fine grid and coupled to an NWP model through coarse-grained fluxes and relaxation terms. The computational feasibility of this method depends on the extent to which the requirement for accurate numerics and complex physics can be relaxed.

2. Pattern generators and cellular automata

The use of cellular automata (e.g. John Conway's ‘Game of Life’, see Gardner, 1970) was suggested by Palmer (1997) as a means of describing near-gridscale variability associated with unresolved physical processes such as deep convection. He envisaged a probability-based cellular automaton (CA) in which the state of a cell (alive or dead) is governed by a probability distribution function (p.d.f.) that depends on the state of the cell's nearest neighbours and is also a function of an associated specification of topographic data, e.g. land/sea, orographic height and land type. Organized convection could be modelled on this fine grid of cells by making convection (represented by a living cell) more likely if neighbouring cells were already convecting. The need for stochastic physical parametrizations in NWP and climate models was argued by Palmer (2001), and the CA approach is one of many that could be employed.

The idea of using a CA as a pattern generator was taken up by Shutts (2005) in the development of a kinetic energy backscatter scheme which replaces the energy lost to model dissipation. A time-evolving CA pattern (see Figure 1) was used in place of a scheme based on smoothed random-number fields (more commonly used in turbulent backscatter formulations, e.g. Mason and Thomson, 1992). Although not exploited in that work, the CA approach allows the possibility of a mutual coupling between the CA grid and the forecast model grid: the p.d.f. could, for instance, be made a function of model state parameters such as the convective available potential energy when modelling convection. In general, one would envisage only a weak coupling between the CA grid and the NWP grid, so that the forcing coming from the CA gently nudges the NWP model flow. As a simple illustration, the ‘Mexican wave’ rule (stand up if the person to your right is already standing up) could form the basis of an equatorial Kelvin wave forcing function that would modulate convective parametrization and drive an eastward-moving, convectively coupled wave response.
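To make this concrete, the short Python sketch below implements a CA of the kind shown in Figure 1, in which each cell carries up to 50 ‘lives’ and loses one per step. The birth rule and the initial seeding are illustrative assumptions here, not the specific rules used in Shutts (2005).

```python
import numpy as np

def ca_step(lives, max_lives=50):
    """One update of a 'Game of Life'-style CA in which each living cell
    carries a store of lives (cf. Figure 1). Illustrative rules only."""
    alive = (lives > 0).astype(int)
    # Number of living neighbours (Moore neighbourhood, doubly periodic grid).
    n = sum(np.roll(np.roll(alive, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)) - alive
    lives = np.maximum(lives - 1, 0)         # every cell loses a life per step
    born = (alive == 0) & (n == 3)           # Conway-style birth rule (assumed)
    return np.where(born, max_lives, lives)  # newborn cells start with 50 lives

rng = np.random.default_rng(1)
lives = np.where(rng.random((128, 128)) < 0.3, 50, 0)  # random initial seeding
for _ in range(200):
    lives = ca_step(lives)
# Rendering lives/50 as a grey-scale image gives patterns like Figure 1.
```

A coupling to the forecast model would enter through the rules themselves, e.g. by making the birth probability a function of the local convective available potential energy.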

Figure 1.

Pattern generated by a cellular automaton where each cell can have 50 ‘lives’, with a life being lost at every step. The grey-scale intensity is a measure of the number of lives remaining, with 50 being white and 0 being black. The resulting animated pattern is reminiscent of convective cloud organization in a cold-air outbreak over the sea. Animated figure available from the supplementary materials page.

3. Computer games animation techniques

Early computer landscape modelling or scene-generator software (e.g. Vista for the Commodore Amiga) used fractal methods to define natural scenes such as mountains, forests and clouds. Related to this is the work of Barnsley (2000) on fractal compression, which seeks a set of rules that, when applied in random combination, produces a rendering of the desired object; determining these rules is, however, a difficult and somewhat controversial task. Recent emphasis is on real-time simulation and rendering of clouds and smoke for flight simulators using fast algorithms operating on graphics processor hardware. Rendering of realistic cloudscapes from a moving viewpoint requires efficient algorithms that describe the multiple forward scattering and attenuation of sunlight (Harris, 2003).

When it comes to simulating fluid motion, however, animators have turned to the Navier–Stokes equations and have copied methods used by the computational fluid dynamics community with one important exception: they are not concerned with accuracy, only with numerical stability and visual likeness to the real phenomenon (Stam, 1999, 2003). Following current NWP practice, Stam uses the semi-Lagrangian advection algorithm with very large time steps to simulate neutrally buoyant fluid flow carrying tracers that might represent smoke or clouds. Since low-order interpolation of flow variables to the departure point acts to unnaturally smooth the fields, Fedkiw et al. (2001) adopted the ‘vorticity confinement’ technique of Steinhoff and Underhill (1994) to oppose spurious dissipation and maintain vorticity features close to the model's grid scale. Steinhoff's technique has even been used by the Hollywood film industry in the creation of realistic computer-generated smoke and fluid motion scenes.
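The heart of Stam's method is an unconditionally stable advection step: each grid point is traced backwards along the flow and the advected field is interpolated at the departure point. The sketch below, which assumes a doubly periodic grid and the low-order (bilinear) interpolation discussed above, shows why the scheme is fast but diffusive.

```python
import numpy as np

def semi_lagrangian_advect(q, u, v, dt, dx):
    """Unconditionally stable semi-Lagrangian advection of a scalar q in the
    spirit of Stam's 'stable fluids'. Minimal sketch on a doubly periodic
    grid; u, v are in m/s, dx is the grid spacing in m."""
    ny, nx = q.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    # Departure points in grid units; arbitrarily large dt is stable,
    # only accuracy suffers.
    x = (i - u * dt / dx) % nx
    y = (j - v * dt / dx) % ny
    i0, j0 = np.floor(x).astype(int), np.floor(y).astype(int)
    i1, j1 = (i0 + 1) % nx, (j0 + 1) % ny
    fx, fy = x - i0, y - j0
    # Bilinear interpolation: cheap but diffusive, hence the appeal of
    # vorticity confinement or backscatter to restore lost small scales.
    return ((1 - fx) * (1 - fy) * q[j0, i0] + fx * (1 - fy) * q[j0, i1]
            + (1 - fx) * fy * q[j1, i0] + fx * fy * q[j1, i1])
```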

Vorticity confinement can be achieved in different ways, but the simplest way of viewing it is as the addition of an apparent force to the momentum equation that is perpendicular to the local vorticity; the force circulates around vortex tubes with a strength proportional to the magnitude of the vorticity itself (see Figure 3).

Specifically, the momentum equation can be written in the form:

\[
\frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{V} = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{V} + \epsilon\,\mathbf{S} \tag{1}
\]

where V is the velocity, p is the pressure, ρ is the density and ν is the kinematic viscosity. The vorticity confinement term ϵS is defined such that ϵ is a constant and

\[
\mathbf{S} = \hat{\mathbf{n}} \times \boldsymbol{\omega} \tag{2}
\]

where ω = ∇× V is the vorticity, and the unit normal n is given by:

\[
\hat{\mathbf{n}} = \frac{\nabla|\boldsymbol{\omega}|}{\bigl|\nabla|\boldsymbol{\omega}|\bigr|} \tag{3}
\]

The constant ϵ is proportional to the horizontal grid spacing and its value is chosen by trial and error. Energetically, the vorticity confinement term tends to counterbalance numerical diffusion, and ϵ must be small enough to prevent a net energy input.
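In two dimensions, where the vorticity has a single component, Equations (1)–(3) reduce to a few array operations. The following sketch assumes a doubly periodic grid, centred differences and a small regularizing constant in the denominator of Equation (3); these discretization choices are illustrative assumptions rather than those of the original confinement papers.

```python
import numpy as np

def vorticity_confinement(u, v, dx, eps):
    """Two-dimensional vorticity confinement body force, Equations (1)-(3):
    F = eps * (n_hat x omega) with n_hat = grad|omega| / |grad|omega||.
    eps is tuned by trial and error and scales with the grid spacing."""
    def ddx(f):  # centred differences on a doubly periodic grid
        return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    def ddy(f):
        return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    w = ddx(v) - ddy(u)                       # omega = dv/dx - du/dy
    gx, gy = ddx(np.abs(w)), ddy(np.abs(w))   # grad |omega|
    mag = np.sqrt(gx**2 + gy**2) + 1e-12      # regularize where grad vanishes
    nx, ny = gx / mag, gy / mag               # unit normal, Equation (3)
    # In 2D, n_hat x (omega z_hat) = (ny*omega, -nx*omega): Equation (2).
    return eps * ny * w, -eps * nx * w
```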

An example of its effect on the simulation of fluid flow is given in Steinhoff et al. (2006) where a three-dimensional jet emerging from a flat plate is simulated with and without the vorticity confinement term in Equation (1). The simulation without confinement is rather bland and lacking small-scale turbulent eddies, but with vorticity confinement it looks more realistic and shows rolled-up Kelvin–Helmholtz eddies at the edge of the plume (Figure 2).

Figure 2.

Tracer concentration in a simulation of a jet emerging at 30 degrees to a flat plate without (left) and with (right) vorticity confinement (from Steinhoff et al., 2006). The computational grid uses 129 × 65 × 65 points and there is no cross-wind velocity. Animated figure available from the supplementary materials page.

Figure 3.

A sketch of a vortex tube with a vortex line passing along its axis. The vector triad comprises the unit normal to the vortex tube n, a vector equal to the local vorticity lying in the vortex tube surface, and the resulting vector cross product S.

Inspired by the work of Stam, a flow-past-buildings code was written to confirm the extra speed attainable when the requirement for accuracy is relaxed. The code executes sufficiently quickly for the user to be able to interact with the display and place smoke tracers into the computational domain via mouse pointer actions (see Figure 4). The authors were impressed by the efficiency and realism of the simulation and realized the potential for this approach to be used as a more sophisticated replacement for the CA pattern generator in stochastic parametrization. It is important to recognize that although the point-wise accuracy in any simulation has been sacrificed for the sake of computational speed, the statistical behaviour of the flow may be quite realistic with the inclusion of vorticity confinement and backscatter. Conventional parametrization could not provide the statistical variability or time history that would be inherent in a stochastic parametrization based on explicit simulation.

Figure 4.

A snapshot of the vorticity field from an interactive movie of flow past a series of buildings. The domain has 500 × 125 points and the buildings are imposed by masking the flow to zero. On a 2.4 GHz Pentium 4 personal computer using OpenGL graphics, the animation ran at 15 frames per second.

4. A fake cloud-resolving model using very simple physics

The convective parametrization problem remains a serious concern for climate modellers and those engaged in seasonal weather prediction. This is because deep convection in the tropics is a fundamentally multi-scale phenomenon: powered at the scale of a kilometre or less, but transporting heat, water vapour (as well as droplets and ice crystals), momentum and kinetic energy to the planetary scale. Individual cloud systems are affected by radiative fluxes, and the collective effect of the clouds in determining the climatological radiation budget is an important element in the uncertainty of global warming predictions. In view of this, there is a real need to move on from current column-based convection parametrization methods so that the tropical atmosphere is better modelled. In the spirit of developing fast, stable models that trade accuracy for speed, a two-dimensional explicit cloud simulation model has been developed with a greatly simplified description of moist processes and cloud microphysics.

A typical CRM has a complex description of the microscale processes that form cloud droplets and ice crystals together with their subsequent fall out as precipitation. The Met Office Large Eddy Model (LEM) for instance describes 34 distinct microphysical processes using up to 10 mixing-ratio or number-concentration variables. Many of these microphysical processes have associated uncertainty, e.g. the empirical formula for the fall speed of precipitation. With horizontal grid lengths of 1 km or more, simulated convective updraughts essentially occupy a single grid column and are therefore not properly resolved. In the LEM there is a tendency to produce updraught speeds that are much too large without the introduction of additional numerical diffusion. Under these circumstances, the model's near-gridscale dynamical error can be expected to be very large and it is doubtful whether some aspects of the complexity of cloud microphysical interactions can be justified.

From the viewpoint of a computer games programmer, one could attempt to ‘compress the complexity out of the cloud microphysics’ so that in some sense the error level is on a par with the error of the dynamics. This would help generate a fast code that could be coupled to a forecast model to provide the spatial and temporal coherence of convective forcing that is absent from column-based parametrization. Such a ‘fake CRM’ has been developed as a prototype to see what degree of simplification is tolerable while preserving the important large-scale structure of tropical cloud systems.

Any cloud model has to have a reasonable representation of the condensation process, which is triggered when the vapour mixing ratio exceeds the saturation value for that temperature (T) and pressure (p), i.e. qsat(T, p). In the fake CRM all non-precipitating water is held in a single variable q and all precipitating water (rain, snow, hail, etc.) is held in qR. When q exceeds qsat, it is relaxed back to qsat at a rate determined by the supersaturation q − qsat(T, p), which is deemed to be the cloud water mixing ratio. The rate of decrease of the supersaturation determines the rate of production of qR, and the resulting precipitation falls at a speed W given by the Kessler formula:

\[
W = 36.34\,(\rho q_R)^{0.1346} \tag{4}
\]

where ρqR is expressed in g cm⁻³ and W in m s⁻¹.

Given the temperature and model level, qsat is interpolated from a pre-computed look-up table, using a formula for saturation over ice if T < −15 °C and a linear interpolation between the liquid water and ice values if −15 °C < T < 0 °C. The model solves anelastic, quasi-Boussinesq equations using a second-order Runge–Kutta time integration scheme and a pre-conditioned biconjugate gradient solver for the pressure. Convection is driven by imposed cooling at a rate of 1.5 K per day up to a height of 11 km, decaying to zero by a height of 15 km. The reference potential temperature profile (and initial state) follows a moist adiabat from the surface up to a tropopause at 16 km; above that a constant (stratospheric) static stability is assumed. Surface fluxes of heat, water vapour and momentum are given by standard bulk surface transfer formulae.
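The moist physics described above amounts to only a few lines of code, which is the point of the exercise. In the sketch below the saturation curve is computed directly from assumed Magnus-type formulae rather than from the pre-computed look-up table, the relaxation time-scale tau is an assumed value, and evaporation of precipitation is omitted.

```python
import numpy as np

def qsat(T, p):
    """Saturation mixing ratio with the blending described in the text:
    over ice below -15 C, over liquid above 0 C, linearly blended between.
    Magnus-type vapour-pressure formulae are assumed for illustration;
    T in K, p in Pa."""
    Tc = T - 273.15
    e_liq = 611.2 * np.exp(17.67 * Tc / (Tc + 243.5))   # liquid water (Pa)
    e_ice = 611.2 * np.exp(21.87 * Tc / (Tc + 265.5))   # ice (Pa)
    w = np.clip(-Tc / 15.0, 0.0, 1.0)                   # ice weight
    e_s = (1.0 - w) * e_liq + w * e_ice
    return 0.622 * e_s / (p - e_s)

def moist_step(q, qR, T, p, rho, dt, tau=300.0):
    """One fake-CRM microphysics step: the supersaturation q - qsat is
    identified with cloud water and relaxed towards zero on an assumed
    time-scale tau; the removed water feeds the precipitation variable qR."""
    excess = np.maximum(q - qsat(T, p), 0.0)   # supersaturation = cloud water
    dq = excess * (1.0 - np.exp(-dt / tau))    # relaxation of q towards qsat
    q, qR = q - dq, qR + dq                    # production of precipitation
    # Kessler fall speed, Equation (4): rho*qR in g cm^-3 gives W in m s^-1.
    W = 36.34 * np.maximum(1.0e-3 * rho * qR, 1.0e-12) ** 0.1346
    return q, qR, W
```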

Figure 5 shows a vertical section of the relative humidity after 9.25 days of integration. Convection organizes on horizontal scales of thousands of kilometres and typically extends up to the top of the cooling layer, with some plumes ascending to the tropopause. The white areas are regions of supersaturated air, and are therefore cloudy if, as has been assumed, supersaturation corresponds to cloud water. With this interpretation, the areal extent of upper-level cloud is unrealistically small, and one might consider using a post-processing algorithm to map relative humidity to cloud amount. Further tuning of the rate of relaxation of q to qsat may avoid the problem.

Figure 5.

Relative humidity field taken from a simulation with the fake CRM. As percentages, purple is 0–10; dark blue is 10–20; light blue is 20–30; green is 30–50; yellow is 50–80; red is 80–100 and grey-white is 100–140. See also the animation from which this frame was taken. Animated figure available from the supplementary materials page.

So far, there has been no attempt to gain speed by degrading the accuracy of the dynamical core of the model except by relaxing the convergence tolerance on the pressure solver. The next planned stage of the work is to use a semi-Lagrangian advection scheme with bilinear interpolation and vorticity confinement to combat excessive dissipation. A semi-implicit treatment of the gravity waves will be necessary if large time steps are to be taken.

There are many questions to be addressed in connection with the way such a fake CRM could be coupled to a climate model: the next section demonstrates the basic principles underlying this coupling.

5. A dual-grid parametrization scheme

As a proof-of-concept exercise, a simple fluid dynamical problem has been modelled on a fine grid and coupled to a coarse grid, i.e. a dual-grid simulation. Coupling was attempted in several different ways, and the vorticity confinement technique was tested with different values of ϵ. Recall that the fine model can employ a very different physical model from that used by the coarse grid (e.g. a CA versus a Navier–Stokes solver). Here, the fine model is based on the same fluid dynamical equations as the coarse model but employs a different numerical advection scheme (semi-Lagrangian), chosen for speed rather than accuracy. Specifically, the grid is 10 times finer (in both directions) than that of the coarse model but uses the same time step, implying a Courant number 10 times larger, equal to 5 for the plume simulation below. The coarse model, however, is deemed to be relatively accurate and is representative of an NWP model. It used the second-order, semi-implicit Crank–Nicolson scheme for time marching and, in order to remove the need for additional viscous terms, the QUICK scheme of Leonard (1979). The resulting non-linear algebraic system was solved iteratively using a Jacobian-free Newton–Krylov method (Knoll and Keyes, 2004).
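For reference, the QUICK scheme interpolates cell-face values with an upstream-weighted quadratic, which is what removes the need for added viscosity in the coarse model. Below is a minimal one-dimensional, periodic-grid sketch; the indexing convention is an assumption made for illustration.

```python
import numpy as np

def quick_face_value(phi, u_face):
    """Third-order QUICK interpolation (Leonard, 1979) of the value at the
    face between cells i-1 and i: phi_face = (phi_C + phi_D)/2
    - (phi_U - 2*phi_C + phi_D)/8, with upstream (U), centre (C) and
    downstream (D) cells chosen by the sign of the face velocity."""
    phi_U = np.where(u_face > 0, np.roll(phi, 2), np.roll(phi, -1))
    phi_C = np.where(u_face > 0, np.roll(phi, 1), phi)
    phi_D = np.where(u_face > 0, phi, np.roll(phi, 1))
    return 0.5 * (phi_C + phi_D) - 0.125 * (phi_U - 2.0 * phi_C + phi_D)

# The advective flux at each face is then u_face * quick_face_value(phi, u_face).
```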

Coupling is achieved by forcing the coarse model with coarse-grained eddy fluxes computed from the fine model. Although unimportant for the present problem, a relaxation term was added to the fine-scale model to nudge the fine variables towards the linearly interpolated coarse ones. Some form of relaxation or control is required so that the two models do not drift too far apart over the course of a long time integration; excessive relaxation from coarse to fine, however, suppresses some of the flow structures that one would hope to pass from the fine to the coarse model.
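Schematically, the two-way coupling amounts to block-averaging fine-grid eddy tendencies onto the coarse grid while gently relaxing the fine grid towards the coarse state. In the sketch below the relaxation time-scale and the piecewise-constant prolongation (standing in for the linear interpolation used in practice) are assumptions.

```python
import numpy as np

def coarse_grain(field_fine, ratio=10):
    """Block-average a fine-grid field onto the coarse grid (10:1 ratio, as
    in the plume experiment). Fine dimensions must be multiples of ratio."""
    ny, nx = field_fine.shape
    return field_fine.reshape(ny // ratio, ratio,
                              nx // ratio, ratio).mean(axis=(1, 3))

def couple_step(phi_c, phi_f, eddy_tend_f, dt, ratio=10, tau=3600.0):
    """One illustrative coupling step: the coarse model is forced by the
    coarse-grained eddy tendency from the fine model (e.g. minus the
    divergence of the eddy fluxes), while the fine model is relaxed towards
    the coarse state on an assumed time-scale tau."""
    phi_c = phi_c + dt * coarse_grain(eddy_tend_f, ratio)   # fine -> coarse
    target = np.kron(phi_c, np.ones((ratio, ratio)))        # coarse -> fine
    phi_f = phi_f + dt * (target - phi_f) / tau             # gentle nudging
    return phi_c, phi_f
```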

The flow consists of three convective plumes ascending through an isentropic atmosphere at rest. The plumes are forced by three heat sources situated at the surface, each of which spans three grid points on the fine mesh. The heat sources are entirely sub-gridscale for the coarse model and therefore have no direct influence in the absence of diffusion and a boundary layer scheme.

Figure 6 shows the buoyancy on both the coarse and fine mesh models at some time after the plume has ascended to the model's lid. From this figure it can be seen that, even though the fine mesh solution would be considered highly inaccurate, many of the features of an accurate, converged solution are present. It can also be seen that the coarse model, which only sees the plumes through the coupling to the fine grid, reproduces most of the features of the rising plume. It is important to note that, apart from the relaxation term (which could be set to zero in this instance), there are no tunable parameters in this form of parametrization.

Figure 6.

The buoyancy field (m s⁻²) on the coarse (upper) and fine (lower) model grids after the plume has ascended to the top of the model domain. The model has 25 × 50 points and uses a 25 s time step. Animated figure available from the supplementary materials page.

It is unlikely that a column-based parametrization could generate the upscale influence found in the coupled-grid simulation (see the animation linked to Figure 6), and the additional computational expense, provided it is not prohibitive, would be worthwhile.

6. Discussion

This paper supports the growing desire in the climate and NWP modelling communities to move away from column-based physical parametrization and use some intermediate approach where higher spatial resolution can be used at an affordable computational cost.

Grabowski (2000) and Randall et al. (2003) discussed the technique of embedding conventional CRMs in the grid boxes of a climate model and found encouraging improvements, though at very high cost. Jung and Arakawa (2005) discussed the limitation caused by the use of cyclic lateral boundary conditions in super-parametrization and proposed a modified coupling method between the climate model and the CRM. Computational cost savings are proposed in this paper through the use of cheap advection and physics algorithms in the manner adopted by the computer animation industry. Devices such as vorticity confinement help to maintain statistical accuracy in spite of compromises made to the accuracy of the Lagrangian conservation and pressure solver steps.

The fake CRM described here would gain speed over a conventional CRM through the use of semi-Lagrangian advection with large time steps and bilinear interpolation rather than more expensive higher-order schemes. Also, by approximating the physics in such a way as to avoid conditional statements, the code becomes much more vectorizable. Even if a very fast fake CRM is developed, the memory requirements for running it on a global grid would be very high (e.g. about 80 gigabytes per model field for a 2 km grid with 50 vertical levels). Perhaps a less ambitious initial objective would be to couple a 10 km fine grid to a climate model with 100 or 200 km resolution. Other computational savings in the fine model could be made using adaptive mesh refinement (e.g. Hubbard and Nikiforakis, 2003) and reduced longitudinal resolution in the polar cap regions of the globe. Grid refinement could be permanently enabled for mountainous terrain and perhaps an equatorial strip (e.g. from 20 degrees north to 20 degrees south). By limiting oneself to first-order (at worst) accuracy in space and time, many of the complex programming issues related to adaptive mesh refinement are greatly simplified.

The coupling of the fake CRM to a climate model will require a post-processing step that relates the model's q field, which contains the cloud mixing ratio implicitly, to the water mixing ratio variables of the climate model. As noted earlier, the cloud amount is underestimated in the current formulation if q − qsat is interpreted literally as the cloud mixing ratio. It is also necessary to make post-processing decisions such as whether the cloud is composed of droplets or ice, and likewise for the precipitation type.

This class of coupled model is an example of the heterogeneous multiscale method (HMM) (Engquist et al., 2003; Ren and Weinan, 2005), a general methodology for efficient computation with coupled macroscopic and microscopic models. Another area of physics where HMM is relevant is the simulation of polymeric fluid flow (polymers in a solvent), where the internal stress is made up of contributions from the solvent viscosity and from the bending and stretching of the polymers. The dynamics of the immersed polymers are modelled as a microscale process (e.g. using a springy dumbbell idealization) that couples to a macroscale fluid model. This contrasts with the classical generalized Newtonian model for the viscosity of a polymeric fluid.

The schematic diagram in Figure 8 summarises two potential strategies for future parametrization. The left-hand diagram represents the proposed dual-grid approach with a post-processor lying between the coarse and fine models; the right-hand diagram represents the conventional column-based parametrization approach with the addition of a stochastic modulator. An example of the latter is the ECMWF stochastic physics scheme (Buizza et al., 1999), which multiplies the parametrization tendency fields by a random number drawn from a uniform p.d.f. on the range 0.5 to 1.5.
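The stochastic modulator in the right-hand branch of Figure 8 is almost trivial to express in code. In the sketch below a single multiplier is drawn per call, whereas the operational ECMWF scheme holds the same random number fixed over patches of space and time.

```python
import numpy as np

rng = np.random.default_rng()

def perturb_tendency(tendency):
    """ECMWF-style stochastic modulator (Buizza et al., 1999): scale a
    column's total parametrized tendency by a random number drawn from a
    uniform p.d.f. on [0.5, 1.5]."""
    return tendency * rng.uniform(0.5, 1.5)
```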

Apart from having a crude description of moist thermodynamic and cloud microphysical processes, the fine model of the dual-grid method could include a crude specification of orography, as in the buildings model discussed earlier, together with land-type or sea-state information. Global NWP model grid lengths of ∼40 km are insufficient to resolve mesoscale mountain ranges such as the Swiss Alps and the Pyrenees, and it is well known that these exert a major influence on the flow through very high drag forces. The fine model could specify such mountains quite accurately and perform a more detailed surface drag calculation than is achievable on the coarse grid. Indeed, initial simulations, such as those described in Section 5, indicate that the upscale influence of subgrid orographic features, namely the formation of large-scale von Kármán streets, is possible within the present dual-grid formalism. However, extra terms are required within the coarse model to correct for the presence of the immersed obstacle, i.e. for the fact that the volume of fluid on the coarse and fine meshes differs. This was partially rectified by the introduction of a ‘volume fraction’ weight within the coarse model formulation (see Figure 7).
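The volume-fraction weight can be illustrated in the same register: each coarse grid box is weighted by the fraction of its fine-grid cells that are fluid rather than obstacle. How the weight enters the coarse mass and momentum budgets is not reproduced here; this is a sketch of the diagnostic only.

```python
import numpy as np

def volume_fraction(fluid_mask_fine, ratio=10):
    """Fraction of each coarse grid box occupied by fluid, computed from a
    fine-grid mask (1 = fluid, 0 = obstacle). The fraction weights the
    coarse model's mass and momentum budgets so that a coarse box partly
    filled by an immersed mountain carries the correct fluid volume
    (cf. Figure 7)."""
    ny, nx = fluid_mask_fine.shape
    return fluid_mask_fine.reshape(ny // ratio, ratio,
                                   nx // ratio, ratio).mean(axis=(1, 3))
```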

Figure 7.

The downstream component of velocity u for flow past three mountain blocks, indicated by small black rectangles in the lower (fine mesh) picture. The upper picture shows the coarse model response to a combination of the coarse-grained Reynolds stresses from the fine mesh model and a correction to the mass and momentum budgets that accounts for an effective porosity of the coarse grid boxes containing the mountains. The model has inflow/outflow boundary conditions with u = 1 on the left-side inflow boundary (non-dimensional units). Animated figure available from the supplementary materials page.

Figure 8.

Parametrization strategies: (left) dual grid approach; (right) conventional column-based approach with additional stochastic terms.

Similarly, regions of islands and lakes could be better defined on the fine grid and provide more accurate coarse-grained surface energy fluxes to the coarse NWP model.

Some NWP models have the possibility of running the dynamical equations of motion and physical parametrizations on separate grids with differing resolutions (e.g. ECMWF). Recent experiments at ECMWF have demonstrated the efficacy of using a higher-resolution ‘physics grid’ than the dynamics grid (M. Miller, personal communication). In that configuration, coarser-resolution fields of wind, temperature and moisture are interpolated onto a grid of higher horizontal resolution and fed into the forecast model's parametrization schemes. The resulting parametrization tendencies are then coarse-grained back to the dynamics grid resolution and used to compute new dynamical tendencies. This is an example of a dual-grid computation in which the fine grid model has no representation of advection or direct horizontal communication.

At this stage it is difficult to say how practical the dual-grid method will be, and how much tuning will be required in both the fine model's physics scheme and in the subsequent post-processing step. More careful consideration of the coupling method will be required to ensure an optimal transmission of information between the two grids. An initial goal is to use a barotropic model with stochastic vorticity sources as a fine-scale pattern generator in a kinetic energy backscatter scheme, such as that described in Shutts (2005). A two-way relaxation between the forecast model and this barotropic model would provide the forecast model with a streamfunction forcing that acts as a dynamically filtered stochastic backscatter term. The relaxation would be carried out over a small range of heights around the mean height of the middle-latitude tropopause, so that the resultant vorticity forcing targets the jet stream regions. In an ensemble prediction context, each forecast member could have its own fine-mesh barotropic model. In this way, the fine model would provide a stochastic tropopause potential vorticity anomaly generator that would be effective in emulating model error in an ensemble prediction system.

In the context of climate modelling, the dual-grid technique would be likely to provide a better representation of the upscale influence of tropical cloud systems and coupling to equatorially trapped waves. The ability of climate models to represent low-frequency tropical variability, such as the Madden–Julian oscillation, is linked to convective parametrization since different schemes have varying impacts on the phenomenon (Slingo et al., 1996). Nevertheless, the task of coupling and tuning the dual-grid approach is likely to be a formidable one. As computational resources increase, the resolution of the fine grid can be increased and the accuracy of advection, moist thermodynamics, cloud microphysics and radiative transfer schemes improved. This strategy would be properly convergent, unlike the present state of affairs with column-based, quasi-equilibrium parametrizations.

Acknowledgements

We thank Alberto Arribas, Chris Smith, Tim Palmer, Judith Berner, Martin Miller, Roy Kershaw and Andrew Staniforth for useful comments. We are also grateful to John Steinhoff and Nicholas Lynn for providing the images and animations corresponding to Figure 2, as well as for other helpful comments.
