High-order CFD methods: current status and perspective


Correspondence to: Krzysztof Fidkowski, Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI, USA.

E-mail: kfid@umich.edu

SUMMARY

After several years of planning, the 1st International Workshop on High-Order CFD Methods was successfully held in Nashville, Tennessee, on January 7–8, 2012, just before the 50th Aerospace Sciences Meeting. The American Institute of Aeronautics and Astronautics, the Air Force Office of Scientific Research, and the German Aerospace Center provided much needed support, financial and moral. Over 70 participants from all over the world across the research spectrum of academia, government labs, and private industry attended the workshop. Many exciting results were presented. In this review article, the main motivation and major findings from the workshop are described. Pacing items requiring further effort are presented. Copyright © 2013 John Wiley & Sons, Ltd.

1 INTRODUCTION

High-order CFD methods have received considerable attention in the research community in the past two decades because of their potential in delivering higher accuracy with lower cost than low-order methods. Before we proceed any further, let us first clarify what we mean by order of accuracy and high order. Mathematically, a numerical method is said to be kth order (or order k) if the solution error e is proportional to the mesh size h to the power k, that is, e ∝ h^k. In 2007, when the first author became the chair of the CFD Algorithm Discussion Group (CFDADG) in the American Institute of Aeronautics and Astronautics Fluid Dynamics Technical Committee (AIAA FDTC), a survey about the definition of high order was sent to members of the technical committee and other researchers outside the technical committee. Amazingly, we received a unanimous definition of high order: third order or higher. This is perhaps because nearly all production codes used in the aerospace community are first- or second-order accurate. We do understand that in certain communities, only spectral methods are considered high order.

Many types of high-order methods have been developed in the CFD community to deal with a diverse range of problems. At the extremes of the accuracy spectrum, one finds the spectral method [1] as the most accurate and a first-order scheme (e.g., the Godunov method [2]) as the least accurate. Many methods were developed for structured meshes, for example, [3-8]. Other methods were developed for unstructured meshes, for example, [6, 9-18]. For a review of such methods, see [19] and [20]. The purpose of the present paper is not to review high-order methods but to measure the performance of these methods as fairly as possible. In addition, we wish to dispel some myths or beliefs regarding high-order methods.

Belief 1. High-order methods are expensive

This one is among the most widely held beliefs about high-order methods. The myth was perhaps generated when a CFD practitioner programmed a high-order method and found that obtaining a converged steady solution with the high-order method took much longer than with a low-order method on a given mesh. It is well known that a second-order method takes more CPU time to compute a steady solution than a first-order one on the same mesh. But nobody claims that first-order methods are more efficient than second-order ones: first-order methods take more CPU time than second-order ones to achieve the same level of accuracy, because a much finer mesh is usually needed. When it comes to high-order methods, the same basis of comparison must be used.

We cannot evaluate method efficiency on the basis of the cost on the same mesh. We must do it on the basis of the cost to achieve the same error. For example, if an error of one drag count (0.0001 in units of the drag coefficient) is required in an aerodynamic computation, a high-order method may be more efficient than a low-order one because the high-order method can achieve this error threshold on a much coarser mesh. Therefore, the only fair way to compare efficiency is to look at the computational cost to achieve the same level of accuracy or, given the same CPU time, the error that is produced. On this basis, high-order methods are not necessarily expensive.

Belief 2. High-order methods are not needed for engineering accuracy

CFD has undergone tremendous development as a discipline over the past three decades and is used routinely to complement the wind tunnel in the design of aircraft [21]. The workhorse production codes use second-order finite volume methods (FVM), finite difference methods (FDM), or finite element methods (FEM). They are capable of running on small clusters with overnight turnaround time to achieve engineering accuracy (e.g., 5% error) for Reynolds-averaged Navier–Stokes (RANS) simulations. There was much excitement when the CFD community moved from first-order to second-order methods, as the solution accuracy showed significant improvement. The reason is that when the mesh size and time step are reduced by half, the computational cost increases by a factor of roughly 16 (three spatial dimensions and one time dimension). Therefore, to reduce the error by a factor of 4, the cost increases by a factor of 256 for a first-order method but only 16 for a second-order one.
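To make this scaling argument explicit, consider the following back-of-the-envelope estimate (a generic count, not tied to any particular solver) for a kth-order method with explicit time stepping in d space dimensions:

$$e \propto h^k, \qquad \text{cost} \propto h^{-(d+1)}, \qquad \text{so} \quad \text{cost} \propto e^{-(d+1)/k}.$$

With d = 3, reducing the error by a factor of 4 multiplies the cost by 4⁴ = 256 at first order and by 4² = 16 at second order; a fourth-order method would require only a factor of 4.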

Whereas second-order methods have been the workhorse for CFD, there are still many flow problems that are too expensive or out of their reach. One such problem is the flow over a helicopter. The aerodynamic loading on the helicopter body is strongly influenced by the tip vortices generated by the rotor. These vortices travel many revolutions before hitting the body. It is critical that these vortices be resolved over a long distance to obtain even an engineering-accuracy prediction of the aerodynamic forces on the helicopter body. Because first-order and second-order methods strongly dissipate unsteady vortices, the mesh resolution requirement makes such a simulation too expensive even on modern supercomputers. The accurate resolution of unsteady vortices is quite a stringent requirement, similar to that encountered in computational aeroacoustics (CAA), where broadband acoustic waves need to propagate over long distances without significant numerical dissipation or dispersion errors. In the CAA community, high-order methods are used almost exclusively because of their superior accuracy and efficiency for problems requiring a high level of accuracy [22]. Thus, for vortex-dominated flows, high-order methods are needed to accurately resolve unsteady vortices. Such flows play a critical role in the aerodynamic performance of flight vehicles.

Why should we stop at second-order accuracy? There is no evidence that second-order is the sweet spot in terms of the order of accuracy. The main reason that these methods are enjoying much success in engineering applications today is because of the research investment by the CFD community from the 1970s to the 1990s in making them efficient and robust. With additional research, high-order methods could become a workhorse for future CFD. Ultimately, the most efficient approach is to let the flow field dictate the local order of accuracy and grid resolution using hp-adaptation.

Another reason that second-order methods may not be accurate enough is the following. An acceptable solution error for one variable may lead to an unacceptable solution error for another. For example, a 5% error in velocity may translate into a 20% or higher error in skin friction depending on many factors such as Reynolds number, mesh density, and the method employed. As another example, for flows over helicopters, a 5% error in the drag coefficient may require that the strength of the tip vortices be resolved within 5% error over four to eight revolutions. In short, low-order methods cannot satisfy even engineering accuracy for numerous problems.

So much for CFD myths; now let us turn to some justified concerns. The main reasons why high-order methods are not used in the design process include the following:

  • They are more complicated than low-order methods.

  • They are less robust and slower to converge to steady state because of the much reduced numerical dissipation.

  • They have a high memory requirement if implicit time stepping is employed.

  • Robust high-order mesh generators are not readily available.

In short, in spite of their potential, much remains to be done before high-order methods become a workhorse for CFD.

The main goals of the workshop on high-order methods are (i) to evaluate high-order and second-order methods in a fair manner for comparison and (ii) to identify remaining difficulties or pacing items. Concerning (i), we measure performance by comparing computational costs to achieve the same error. The workshop identified test cases and defined error and cost for a wide variety of methods and computers. In many cases, computational meshes were also provided.

The remainder of the paper is organized as follows. In Section 2, the motivation and history of the workshop are described. After that, the definitions of error and work units are given in Section 3, and the benchmark cases are presented in Section 4. Section 5 presents some representative results from the workshop to illustrate the current status in the development of high-order methods. Concluding remarks and pacing items for future work are described in Section 6.

2 MOTIVATION AND HISTORY OF THE WORKSHOP

After decades of development, second-order methods became robust and affordable for RANS simulations on small CPU clusters during the 1990s. They are now widely used in product design in aerospace, automotive, micro-electronics, and many other industries. In aerospace engineering, RANS codes had difficulties in predicting vortex-dominated flows, for example, during aircraft takeoff and landing with high-lift configurations, and in computing aeroacoustic noise generated by landing gear. In fact, many believe that the RANS equations break down for such problems. This difficulty prompted the development of more powerful CFD tools based on LES, hybrid LES/RANS, and high-order methods, especially those that can handle complex geometries. In the United States, the computational mathematics program of the Air Force Office of Scientific Research has provided perhaps the most support for the development of high-order methods in academia as well as industry. In Europe, the Adaptive Higher-order Variational Methods for Aerodynamic Applications in Industry (ADIGMA) project [23] supported a consortium consisting of 22 organizations, which included the main European aircraft manufacturers, the major European research establishments, and several universities, all with well-proven expertise in CFD. The goal of ADIGMA was the development and utilization of innovative adaptive higher-order methods for the compressible flow equations enabling reliable, mesh independent numerical solutions for large-scale aerodynamic applications in aircraft industry.

In early 2007, several authors became members of the CFDADG in the AIAA FDTC. In the first meeting of the CFDADG, the members discussed and passed the following charter:

To coordinate research and promote discussion for the improvement of CFD algorithms with a particular focus on the following:

  • High-order spatial discretizations

  • Error estimates, grid adaptation, and methods capable of handling skewed grids

  • Efficient time marching and iterative solution methods for steady and unsteady flow

  • Benchmark and challenge problems for the aforementioned methods

To solicit input and start an active discussion, a survey was distributed to researchers active in CFD algorithm development. The objectives of the survey were as follows:

  • To identify the pacing items in research on high-order methods

  • To produce a position paper for funding agencies and to inspire graduate students who are interested in pursuing research in high-order methods

  • To propose a mechanism to assess the performance of high-order methods relative to low-order methods

  • To identify application areas best suited for high-order methods

Seven questions were asked in the survey. The following is a short summary of the survey results.

  1. High-order means different things to different people. Do you agree with the definition that third and higher orders are high order?

    Yes, the consensus in the aerospace community is that high-order methods start at third order.

  2. What applications require high-order methods? Obvious ones include wave propagation problems and vortex-dominated flows. Are there others you can think of?

    The benefits of high-order methods for aeroacoustic and electromagnetic wave propagation have been conclusively demonstrated in the past two decades. More recently, high-order methods have also shown higher resolution of unsteady vortices. Because of their higher resolution, high-order methods are routinely chosen for LES and DNS of turbulent flows.

  3. Do you believe that high-order methods will replace low-order ones, or rather complement low-order ones in the future?  

    Many believe that high-order methods will complement low-order methods in the future. The optimal solution is the use of hp-adaptations (mesh and polynomial order adaptations) to achieve the best accuracy with minimum cost. In smooth regions, p-adaptation is preferred, whereas in discontinuous regions, h-adaptation is favored.

  4. Do you anticipate high-order methods to be used heavily with Reynolds-averaged Navier–Stokes computations? Why or why not?

    There was a difference in opinion on whether high-order methods will elicit significant benefits for RANS solutions. However, many believe that high-order methods should be implemented and evaluated for RANS simulations. The turbulence model equations should be solved with high-order discretizations. Defects in turbulence models may be exposed in this exercise.

  5. Do you believe a robust high-order limiter will be found that will capture a shock wave while resolving a smooth (acoustic) wave passing through the shock with high-order accuracy?

    This is a very challenging research area, and currently, we are not aware of limiters that are completely satisfactory. It is believed that subcell resolution may be necessary in shocked cells to preserve accuracy. Another possibility is through hp-adaptations to achieve high resolution for shocks and high accuracy for smooth features.

  6. What error estimators do you use for adaptive high-order methods? What are the challenges?

    For functional output such as lift and drag, the adjoint-based adaptation has been shown to be effective for many problems. On adaptation, robust anisotropic mesh adaptation is a significant challenge. More rigorous error estimators suitable for hp-adaptations need to be developed.

  7. Could you give three of the most urgent pacing items in high-order method research?

    • Robust, compact, accuracy-preserving, and convergent limiters

    • High-order viscous grid generation and adaptation with clustering near curved boundaries, error estimation, and anisotropic hp-adaptation

    • Low-storage, efficient iterative solution methods for both steady and unsteady flow problems

There was an immediate consensus in the CFDADG to plan for a workshop on high-order methods with a set of benchmark problems to assess the performance of these methods relative to mature low-order methods used in production codes. Twice-yearly discussions were held at the winter and summer AIAA conferences to identify the benchmark cases and to define error, cost, and convergence criteria, all of which were important for an objective comparison. It was evident from the discussions that obtaining numerical convergence with high-order methods was more important than comparing with experimental data. Each participant would be asked to obtain an hp-independent solution (within a certain error threshold) to define the error, either in aerodynamic outputs of interest or in an entropy norm for inviscid flows. To attract the most participation from the CFD community, the CFDADG decided to have a wide variety of cases in terms of the level of difficulty, from two-dimensional (2D) steady inviscid to three-dimensional (3D) unsteady turbulent flow problems. It is of course much easier to demonstrate hp-independence with 2D problems. A close collaborative relationship was established between the CFDADG and the ADIGMA teams in finalizing the details of the workshop. In fact, many of the benchmark cases were chosen from the test cases adopted in the ADIGMA project. An international organizing committee was then formed, with each member responsible for one case. Technical details of the workshop are described next.

3 ERROR AND COST DEFINITION

A main objective of the workshop is to assess if high-order methods can obtain a numerical solution more efficiently than low-order ones, given the same error threshold. For steady problems, a way to define iterative convergence must be identified. Convergence is often related to the reduction of the global residual. Because we do not want to exclude any numerical methods, the definition of residual must be as general as possible. Furthermore, a universal means of measuring cost and error must be identified. It took more than 3 years of discussions to finally have all the pieces of the puzzle.

3.1 Definition of residual and the L2 norm

It is not trivial to define a residual easily computable for all methods. Let us do it first for an FVM and then extend the definition to other methods. Consider the Euler or the Navier–Stokes equations written in conservative form,

$$\frac{\partial Q}{\partial t} + \nabla \cdot \vec{F}(Q) = 0, \qquad (1)$$

where Q is the vector of state variables and $\vec{F}$ is the vector of fluxes in all coordinate directions. Integration of the equation on element Vi yields

$$\frac{d\bar{Q}_i}{dt}\,|V_i| + \oint_{\partial V_i} \vec{F}(Q)\cdot\vec{n}\,dS = 0, \qquad (2)$$

where $\bar{Q}_i$ is the cell-averaged state. Now, replacing the normal flux term with any Riemann flux as the numerical flux, we obtain

$$\frac{d\bar{Q}_i}{dt}\,|V_i| + \oint_{\partial V_i} \tilde{F}\!\left(Q_i, Q_{i+}, \vec{n}\right)dS = 0, \qquad (3)$$

where $Q_i$ is the reconstructed approximate solution on $V_i$ and $Q_{i+}$ is the solution outside $V_i$. The element residual is then defined as

$$Res_i = -\frac{1}{|V_i|}\oint_{\partial V_i} \tilde{F}\!\left(Q_i, Q_{i+}, \vec{n}\right)dS. \qquad (4)$$

The L2 norm of the residual is then defined as

$$\|Res\|_{L_2} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} Res_i^2}, \qquad (5)$$

where N is the total number of elements or cells. The extension of this definition to weighted residual-based methods is quite straightforward. In a DG method (DGM), for example, given the local DOFs $\vec{q}_i$, the final update equation can be written as

$$M\,\frac{d\vec{q}_i}{dt} = \vec{R}_i(Q), \qquad (6)$$

where M is the mass matrix and $\vec{R}_i$ consists of the volume and surface integral terms. Then, the residual can be defined as

$$Res_i = M^{-1}\vec{R}_i(Q). \qquad (7)$$

Similarly, for a nodal finite difference type method, the differential equation is solved at each node i according to

$$\frac{dQ_i}{dt} = R_i(Q), \qquad (8)$$

where $R_i(Q)$ is the discrete approximation of $-\nabla\cdot\vec{F}(Q)$ at node i. Then, the residual at the node is

$$Res_i = R_i(Q). \qquad (9)$$

To avoid scaling issues in the convergence criterion, the residual associated with density is used to monitor convergence in the workshop. However, for the flat plate boundary layer case, the density residual with a uniform free stream is machine zero. Therefore, the energy residual is recommended in the future.
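As an illustration, here is a minimal Python sketch of the convergence monitor based on Eq. (5), assuming the per-element density residuals have already been assembled into an array (that assembly is solver specific and not part of the workshop definition):

```python
import numpy as np

def residual_l2_norm(res):
    """L2 norm of per-element residuals, Eq. (5): sqrt(sum(res_i^2) / N)."""
    res = np.asarray(res, dtype=float)
    return np.sqrt(np.mean(res**2))

def converged(density_residuals, initial_norm, drop_orders=10.0):
    """Declare convergence once the density-residual norm has dropped by the
    prescribed number of orders of magnitude relative to its initial value."""
    return residual_l2_norm(density_residuals) < initial_norm * 10.0**(-drop_orders)
```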

3.2 Definition of cost and error

Cost was perhaps not only the most difficult measure to quantify but also a crucial one. Many factors can affect the cost of a steady simulation, such as the initial condition, the convergence criterion, the solution algorithm and its associated parameters, and obviously the computer. To nondimensionalize cost, the TauBench code [24] was adopted to measure computer performance. TauBench mimics the European production code TAU, widely used by aircraft manufacturers. The work unit is equivalent to the CPU time taken to run the TauBench code for ten steps with 250,000 DOFs. To nondimensionalize the cost of parallel runs, the CPU time was first multiplied by the number of processors used. For codes with imperfect algorithmic scalability, it was thus advantageous to run on the fewest processors possible. Although the use of the TauBench code is not a fool-proof approach, our hope is that the work units provide a reasonable rough estimate of cost.
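A minimal sketch of the cost normalization described above follows (Python; the variable names and the 9.5 s TauBench timing in the example are illustrative placeholders, not part of the workshop definition):

```python
def work_units(wall_time_s, n_processors, taubench_time_s):
    """Nondimensionalize cost: wall time multiplied by the processor count,
    divided by the time taken to run TauBench (10 steps, 250,000 DOFs) on the
    same machine."""
    return wall_time_s * n_processors / taubench_time_s

# Example: a 2-hour run on 64 cores, with TauBench taking 9.5 s on this machine
# (the 9.5 s figure is purely illustrative).
wu = work_units(wall_time_s=7200.0, n_processors=64, taubench_time_s=9.5)
print(f"{wu:.1f} work units")
```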

For problems with an analytical solution in any variable, for example, entropy for inviscid flows, a global error in that variable can be used to define error. To accommodate different methods, the workshop provided three options for error computation.

Option 1. For any solution variable (preferably nondimensional) s, the L2 error is defined as

$$\|e\|_{L_2} = \sqrt{\frac{\sum_i \int_{V_i} \left(s - s_{exact}\right)^2 dV}{\sum_i |V_i|}}. \qquad (10)$$

For an element-based or cell-based method (FV, DG, etc.), where a solution distribution is available on the element, the element integral should be computed with a quadrature formula of sufficient precision, such that the error is nearly independent of the quadrature rule. Note that for an FVM, the reconstructed solution should be the same as that used in the actual residual evaluation.

Option 2. For a finite difference scheme, if the transformation Jacobian matrix is available, that is,

$$J_i = \left|\frac{\partial(x, y, z)}{\partial(\xi, \eta, \zeta)}\right|_i, \qquad (11)$$

the L2 error is defined as (Option 2a)

$$\|e\|_{L_2} = \sqrt{\frac{\sum_i \left(s_i - s_{exact,i}\right)^2 J_i}{\sum_i J_i}}. \qquad (12)$$

Otherwise, the L2 error is defined as (Option 2b)

$$\|e\|_{L_2} = \sqrt{\frac{1}{N}\sum_i \left(s_i - s_{exact,i}\right)^2}, \qquad (13)$$

where N is the total number of nodes.

Option 3. For some numerical methods, an error defined on the basis of the cell-averaged solution may reveal superconvergence properties. In such cases, the following definition is adopted (Option 3a):

$$\|e\|_{L_2} = \sqrt{\frac{\sum_i \left(\bar{s}_i - \bar{s}_{exact,i}\right)^2 |V_i|}{\sum_i |V_i|}}, \qquad (14)$$

where $\bar{s}_i$ and $\bar{s}_{exact,i}$ denote the cell averages of the numerical and exact solutions on $V_i$. In this definition, one can also drop the volume in a similar fashion to the definition for finite difference type methods, that is, (Option 3b)

$$\|e\|_{L_2} = \sqrt{\frac{1}{N}\sum_i \left(\bar{s}_i - \bar{s}_{exact,i}\right)^2}. \qquad (15)$$

Each benchmark case can choose one of the options in error definition.
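As an illustration of Option 1, the following is a minimal sketch (assuming a per-element nodal representation on the reference square and a tensor-product Gauss–Legendre rule; the function names and data layout are illustrative, not prescribed by the workshop):

```python
import numpy as np

def l2_error_option1(elements, s_exact, nq=6):
    """Workshop Option 1 error: sqrt( sum_i int_{V_i} (s - s_exact)^2 dV / sum_i V_i ).

    'elements' is a list of (s_num, x_of, jac) tuples, where, on the reference
    square [-1, 1]^2,
      s_num(xi, eta) -> numerical solution value,
      x_of(xi, eta)  -> physical coordinates (x, y),
      jac(xi, eta)   -> mapping Jacobian determinant.
    A tensor-product Gauss-Legendre rule with nq points per direction is used;
    nq should be large enough that the error is quadrature independent.
    """
    xi, w = np.polynomial.legendre.leggauss(nq)
    num, den = 0.0, 0.0
    for s_num, x_of, jac in elements:
        for a, wa in zip(xi, w):
            for b, wb in zip(xi, w):
                J = jac(a, b)
                x, y = x_of(a, b)
                diff = s_num(a, b) - s_exact(x, y)
                num += wa * wb * J * diff**2
                den += wa * wb * J   # accumulates the element volumes
    return np.sqrt(num / den)
```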

For problems without an analytical solution, nondimensional integrated forces such as lift and drag coefficients are used in the error definition. In the workshop, a precision of 0.01 counts, or 10⁻⁶, in cl and cd was required. In other words, hp-independent cl and cd satisfying the precision requirement need to be computed either with global refinement or hp-adaptations. Then, these converged values are used to compute the cl or cd errors. For external flow problems, the location of the far-field boundary should be far enough that its effect on cl and cd is within 0.01 counts.

4 BENCHMARK CASES

A total of 15 benchmark cases was adopted in the workshop, divided into three difficulty categories: easy (C1), intermediate (C2), and difficult (C3). Although they can be found in the workshop website (http://zjwang.com/hiocfd.html), they are described in this paper for the sake of completeness. The requirements are omitted to save space.

4.1 Problem C1.1. Inviscid flow through a channel with a smooth bump

Overview

This problem aims at testing high-order methods for the computation of internal flow with a high-order curved boundary representation. In this subsonic flow problem, the geometry is smooth and so is the flow. Entropy should be constant in the flow field. The L2 norm of the entropy error is then used as the indicator of solution accuracy because the analytical solution is unknown.
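Because the flow is homentropic, a natural error measure (written here as one common choice; the exact normalization used is given in the case definition) is the deviation of the entropy from its inflow value,

$$e_s = \frac{S - S_\infty}{S_\infty}, \qquad S = \frac{p}{\rho^\gamma},$$

with the L2 norm of e_s computed as in Option 1 of Section 3.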

Governing equations

The flow is governed by the 2D Euler equations with a constant ratio of specific heats of 1.4.

Flow conditions

The inflow Mach number is 0.5 at zero angle of attack.

Geometry

The computational domain is bounded between x = − 1.5 and x = 1.5 and between the bump and y = 0.8, as shown in Figure 1. The bump is defined as

display math
Figure 1. Channel with a smooth bump.

Boundary conditions

  • Left boundary: subsonic inflow

  • Right boundary: subsonic exit

  • Top boundary: symmetry

  • Bottom boundary: slip wall

4.2 Problem C1.2. Ringleb problem

Overview

This problem tests the spatial accuracy of high-order methods. The flow is transonic and smooth. The geometry is also smooth, and high-order curved boundary representation appears to be critical. The exact solution is known via hodograph transformation [25, 26].

Governing equations

The flow is governed by the 2D Euler equations with γ = 1.4.

Geometry

Let k be the streamline parameter, that is, k = constant on each streamline. The streamlines corresponding to the two wall boundaries are k = kmax = 1.5 for the inner wall and k = kmin = 0.7 for the outer wall. Let q be the velocity magnitude. For each fixed k, kmin ≤ k ≤ kmax, the variable q varies between q0 = 0.5 and k. For each q, define the speed of sound a, density ρ, pressure p, and a quantity J by

display math(16)

For each pair (q,k), set

display math(17)

Again, q0 = 0.5, kmin = 0.7, kmax = 1.5, and the four boundaries are as follows: (i) inflow, q = q0, kmin ≤ k ≤ kmax, and y > 0; (ii) outflow, q = q0, kmin ≤ k ≤ kmax, and y < 0; (iii) inner wall, k = kmax and q0 ≤ q ≤ k; and (iv) outer wall, k = kmin. See Figure 2.

Figure 2. Ringleb geometry; thick curves: walls; thin curves: inflow and outflow boundaries.

Exact solution

The exact solution is given by (16) and (17). The flow is irrotational and isentropic. It reaches a supersonic speed of Mach number 1.5 at location y = 0 of the inner wall. Entropy should be a constant in the flow field.

4.3 Problem C1.3. Flow over a NACA0012 airfoil

Overview

This problem aims at testing high-order methods for the computation of external flow with a high-order curved boundary representation. Both inviscid and viscous, and subsonic and transonic flow conditions will be simulated. The transonic problem will also test various methods’ shock-capturing ability. The lift and drag coefficients will be computed and compared with those obtained with low-order methods.

Governing equations

The governing equations are 2D Euler and Navier–Stokes with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. For the viscous flow problem, the viscosity is assumed a constant.

Flow conditions

Three different flow conditions are considered:

  1. Subsonic inviscid flow with M ∞  = 0.5 and angle of attack α = 2°

  2. Inviscid transonic flow with M ∞  = 0.8 and α = 1.25°

  3. Subsonic viscous flow with M∞ = 0.5, α = 1°, and Reynolds number (based on the chord length) Re = 5000

Geometry

The geometry is a NACA0012 airfoil, modified to close the trailing edge. The resulting analytical expression for the airfoil surface is

display math

The airfoil geometry is shown in Figure 3.

Figure 3. NACA0012 airfoil with closed trailing edge.

Boundary conditions

Subsonic inflow and outflow on the far-field boundary. Slip wall for inviscid flow or no-slip adiabatic wall for viscous flow.

4.4 Problem C1.4. Laminar boundary layer on a flat plate

Overview

This problem aims at testing high-order methods for viscous boundary layers, where highly clustered meshes are employed to resolve the steep velocity gradient. The drag coefficient will be computed and compared with that obtained with low-order methods.

Governing equations

The flow is governed by the 2D Navier–Stokes equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. The dynamic viscosity is also a constant.

Flow conditions

M∞ = 0.5 and angle of attack α = 0°. Reynolds number (based on the plate length) ReL = 10⁶.

Geometry

The plate length L is assumed to be 1. The computational domain has two other length scales, LH and LV, as shown in Figure 4. Participants should assess the influence of these length scales on the numerical results and select values large enough that the numerical results are not affected by them.

Figure 4. Computational domain for the flat plate boundary layer.

Boundary conditions

As depicted in Figure 4.

4.5 Problem C1.5. Radial expansion wave (2D or 3D)

Overview


Governing equations

The governing equation is the 2D or 3D Euler equations with a constant ratio of specific heats of γ = 1.4.

Flow conditions

The initial condition is defined everywhere. The flow is cylindrically (2D) or spherically (3D) symmetric but computed on a Cartesian grid. The flow field is purely radial. The initial distribution of the radial velocity q is infinitely differentiable:

display math

The Cartesian velocity components are related to q at any time and place by

display math

The initial distribution of the speed of sound derives from q(r,0)

display math

Here, the ratio of specific heats may be chosen by the user from the interval 1 < γ < 3, for example, 7/5 (aero), 5/3 (astro), and 2 (civil, atmospheric). For this workshop, γ = 7 ∕ 5. The density initially equals γ at the origin and further follows from assuming uniform entropy

display math

The initial values of any other flow quantity derive from the aforementioned equations and the perfect gas law, for example, pressure:

display math

Because the flow is entirely determined by the initial values, these should be carefully discretized. For instance, if the numerical method is an FV scheme that updates the cell averages of the conserved flow quantities, these must be computed from the analytical initial values by a sufficiently accurate Gaussian quadrature. The same holds for the integrals needed to compute the weights of the basis functions in a DG discretization. If superconvergence is anticipated, the order of accuracy of the Gaussian quadrature must be taken high enough so as not to obscure the truncation error of the scheme with an initialization error. At outflow, the Mach number is 2, which means supersonic outflow normal to all domain boundaries, in both 2D and 3D.
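A minimal sketch of such an initialization for an FV scheme on a uniform Cartesian grid follows (Python; the quadrature order and helper names are illustrative assumptions, and q_init stands in for the analytical initial data, which is not reproduced here):

```python
import numpy as np

def cell_average(f, x0, y0, h, nq=5):
    """Average of f(x, y) over the square cell [x0, x0+h] x [y0, y0+h],
    computed with a tensor-product Gauss-Legendre rule of nq points per
    direction (exact for polynomials up to degree 2*nq - 1)."""
    xi, w = np.polynomial.legendre.leggauss(nq)   # nodes/weights on [-1, 1]
    x = x0 + 0.5 * h * (xi + 1.0)                 # map nodes into the cell
    y = y0 + 0.5 * h * (xi + 1.0)
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = np.outer(w, w) * (0.5 * h) ** 2           # physical-space weights
    return np.sum(W * f(X, Y)) / h**2

# Example (q_init is a placeholder for the analytical initial field of the case):
# h = 1.0 / 8.0
# qbar = np.array([[cell_average(q_init, -4 + i * h, -4 + j * h, h)
#                   for j in range(int(8 / h))] for i in range(int(8 / h))])
```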

Geometry

The computational domain is a square (cube) [−4,4] × [−4,4] ( × [−4,4]), uniformly divided into cells with Δx = Δy ( = Δz) = h, where h takes the values 1/8 (grid 1), 1/16 (grid 2), 1/32 (grid 3), and so on. The simulations should be run from t = 0 to t = 3. The largest wave speed appearing in the problem is 3 ∕ γ, which may be helpful in setting the time step. However, tests should be performed to ensure that the time step is small enough (or the order of time integration is high enough) that temporal resolution has minimal effect on the results.

Boundary conditions

Supersonic outflow everywhere.

4.6 Problem C1.6. Vortex transport by uniform flow

Overview

This problem aims at testing a high-order method's capability to preserve vorticity in an unsteady inviscid flow. Accurate transport of vortices at all speeds (including Mach number much less than 1) is very important for LES and detached-eddy simulations, possibly the workhorse of future industrial CFD simulations, as well as for aeronautics/rotorcraft applications.

Governing equations

The governing equations are the unsteady (2D) Euler equations, with a constant ratio of specific heats of γ = 1.4 and gas constant Rgas = 287.15 J/(kg·K).

Flow conditions

The domain is first initialized with a uniform flow of pressure P ∞  , temperature T ∞ , a given Mach number (see Testing Conditions below), and a vortical movement of characteristic radius R and strength β superimposed around the point at coordinates (Xc,Yc):

display math

where

display math

Note that U∞ is the speed of the unperturbed flow. The pressure, temperature, and density are prescribed such that the superimposed vortex is a steady solution of the stagnant (i.e., without uniform transport) flow situation:

display math

Pressure is computed as

display math

The superimposed vortex should be transported without distortion by the flow. Thus, the initial flow solution can be used to assess the accuracy of the computational method (see requirements that follow).

Geometry

The computational domain is rectangular, with (x,y) ∈ [0,Lx] × [0,Ly].

Boundary conditions

Translational periodic boundary conditions are imposed for the left/right and top/bottom boundaries, respectively.

Testing conditions

Assume that the computational domain dimensions (in meters) are Lx = 0.1 and Ly = 0.1, and set Xc = 0.05 m, Yc = 0.05 m (marking the center of the computational domain), P∞ = 10⁵ N/m², and T∞ = 300 K.

Consider the following flow configurations:

  1. Slow vortex: M ∞  = 0.05, β = 1 ∕ 50, and R = 0.005

  2. Fast vortex: M ∞  = 0.5, β = 1 ∕ 5, and R = 0.005

Define the period T as T = Lx ∕ U ∞  and perform a long simulation, where the solution is advanced in time for 50 periods. That is, simulate the vortex evolution in time for 50T.

4.7 Problem C2.1. Unsteady viscous flow over tandem airfoils

Overview

This problem aims to test the unsteady interaction of a vortex with a solid wall. Specifically, several cases are defined that address both unsteady pressure generation and viscous separation in a 2D framework by using a pair of airfoils. The geometry remains relatively simple and provides a test bed for several different types of analysis. In general, the time history of the lift coefficient on the aft airfoil will be used as a metric. Other quantities to assess would be the pressure distribution on the two airfoils and the total circulation in the problem obtained by integrating the vorticity throughout the domain.

Governing equations

The governing equations for this problem are the 2D compressible Navier–Stokes equations with a constant ratio of specific heats equal to 1.4 and a Prandtl number of 0.72. Compressibility is not anticipated to be a significant player in these problems. Specific conditions will be provided with each subcase.

Flow conditions

Two different problems are defined. In each of these cases, the Mach number (M∞) is 0.2, the angle of attack (α) is 0°, and the Reynolds number based on the chord of one of the airfoils is 10⁴.

  • Case A examines the evolution of the flow field from a prescribed initial solution that is C¹ continuous. In this condition, d is the distance to the closest wall, a quantity often required for turbulence models. Density and pressure are initialized to their free-stream values.

    display math(18)

    with δ1 = 0.05.

  • Case B is closer to an impulsive start condition. Let s(r) = (r − (δ2,0)) ⋅ (cos α, sin α) with δ2 = 0.5. Density and pressure are again set to uniform free-stream values.

    display math(19)

    with δ3 = 0.1.

Geometry

The basic geometry for this case is two relatively positioned NACA0012 airfoils (Figure 5), as described for problem C1.3. The leading airfoil is rotated by δ about (0.25c, 0), whereas the trailing airfoil is translated by (1 + dsep, − doff)c. For the present case, take δ = 10°, dsep = 0.5, and doff = 0. The far-field boundary can be placed such that the steady-state lift coefficient of both airfoils varies by less than 0.01 counts.

Figure 5. Geometry for tandem airfoils.
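A minimal sketch of the two placement transforms described above follows (Python; airfoil_pts stands for an array of NACA0012 surface coordinates and is an assumed input, not part of the case definition, and the positive rotation direction is likewise an assumption):

```python
import numpy as np

def place_tandem(airfoil_pts, delta_deg=10.0, d_sep=0.5, d_off=0.0, c=1.0):
    """Return (leading, trailing) airfoil coordinates for case C2.1.

    The leading airfoil is rotated by delta about (0.25c, 0); the trailing
    airfoil is translated by (1 + d_sep, -d_off) * c.
    """
    pts = np.asarray(airfoil_pts, dtype=float)     # shape (n, 2), chord length c
    d = np.radians(delta_deg)
    rot = np.array([[np.cos(d), -np.sin(d)],
                    [np.sin(d),  np.cos(d)]])      # counter-clockwise rotation (assumed sign)
    pivot = np.array([0.25 * c, 0.0])
    leading = (pts - pivot) @ rot.T + pivot        # rotate about the quarter chord
    trailing = pts + np.array([(1.0 + d_sep) * c, -d_off * c])
    return leading, trailing
```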

4.8 Problem C2.2. Steady turbulent transonic flow over an airfoil

Overview

This problem aims at testing high-order methods for a 2D turbulent flow under transonic conditions with weak shock-boundary layer interaction effects. The test case is the RAE2822 airfoil Case 9, for which an extensive experimental database exists [27]. The test case has also been investigated numerically by many authors using low-order methods. It was also used in the European project ADIGMA (test case MTC5). The target quantities of interest are the lift and drag coefficients and the skin friction distribution at one free-stream condition, as described in the following text.

Governing equations

The governing equations are the 2D RANS equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.71. The dynamic viscosity is also a constant. The choice of turbulence model is left up to the participants; recommended suggestions are (i) the Spalart–Allmaras (SA) model and (ii) the Wilcox k-ω model or an explicit algebraic Reynolds stress model (EARSM).

Flow conditions

Only Case 9 is retained for this workshop. The original flow conditions in the wind tunnel experiment are M∞ = 0.730, angle of attack α = 3.19°, and Reynolds number (based on the reference chord) Re = 6.5 × 10⁶. However, to take into account the wind tunnel corrections for comparison with experimental data, the computations for the workshop have to be made with corrected flow conditions, namely Mach number M∞ = 0.734, angle of attack α = 2.79°, and the same Reynolds number. Laminar to turbulent transition is fixed at 3% of the chord on both pressure and suction sides. No further wind tunnel effects are to be modeled.

Geometry

Originally, the geometry is defined with a set of points. These points are then used to define a high-order geometry, which is available online at the workshop web site.

Boundary conditions

Adiabatic no-slip wall on the airfoil surface and free stream at the farfield (subsonic inflow/outflow). A sensitivity study must be performed to find a far-field boundary location whose effect on the lift and drag coefficients is less than 0.01 counts, that is, 10⁻⁶.

4.9 Problem C2.3. Analytical 3D body of revolution

Overview

This problem, defined in ADIGMA as test case BTC0 [23], is aimed at testing high-order methods for the computation of external flow with a high-order curved boundary representation in 3D. Inviscid, viscous (laminar), and turbulent flow conditions will be simulated.

Governing equations

The governing equations for inviscid and laminar flows are the 3D Euler and Navier–Stokes equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. For the laminar flow problem, the viscosity is assumed a constant.

Flow conditions

display math

Geometry

The geometry is a streamlined body based on a 10% thick airfoil with boundaries constructed by a surface of revolution (Figure 6). The airfoil is constructed by an elliptical leading edge and straight lines.

Figure 6. 3D body of revolution.

Half model: inline image:

display math

inline image:

display math

Reference values

  • Reference area: 0.1 (full model)

  • Reference moment length: 1.0

  • Moment line: quarter chord

Boundary conditions

  • Far-field boundary: subsonic inflow and outflow

  • Wing surface: no-slip adiabatic wall

4.10 Problem C2.4. Laminar flow around a delta wing

Overview

This problem aims at predicting vortex-dominated flows. A laminar flow at high angle of attack around a delta wing with a sharp leading edge and a blunt trailing edge is selected (see also [23]). As the flow passes the leading edge, it rolls up and creates a primary vortex together with a secondary vortex. The vortex system persists over a long distance behind the wing. This problem also aims at testing high-order and adaptive methods for the computation of vortex-dominated external flows. Note that methods that show high order on smooth solutions will show only about second order on this test case because of the reduced smoothness (e.g., at the sharp edges) of the flow solution. Finally, h-adaptive and hp-adaptive computations may also be submitted for this test case.

Governing equations

The governing equations are the 3D Navier–Stokes equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. The viscosity is assumed to be constant.

Flow conditions

Subsonic viscous flow with M∞ = 0.3, α = 12.5°, and Reynolds number (based on the mean chord length) Re = 4000.

Geometry

The geometry is a delta wing with a sloped and sharp leading edge and a blunt trailing edge. A precise definition of the geometry can be found in [28, 29]. Figure 7 shows the top, bottom, and side views of a half model of this wing.

Figure 7. Left: top, bottom, and side views of the half model of the delta wing; the grid was provided by NLR within the ADIGMA project. Right: streamlines and Mach number isosurfaces of the flow solution over the left half of the wing and Mach number slices over the right half. The figures are taken from [30].

Reference values

  • Reference area: 0.133974596 (half model)

  • Reference moment length: 1.0

  • Moment line: quarter chord

Boundary conditions

  • Far-field boundary: subsonic inflow and outflow

  • Wing surface: no-slip isothermal wall with Twall = T ∞ 

4.11 Problem C3.1. Turbulent flow over a 2D multielement airfoil

Overview

This problem aims at testing high-order methods for a 2D turbulent flow with a complex configuration. It has been investigated previously with low-order methods as part of a NASA Langley workshop. The target quantities of interest are the lift and drag coefficients at one free-stream condition, as described in the following text.

Governing equations

The governing equations are the 2D RANS equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.71. The dynamic viscosity is also a constant. The choice of turbulence model is left up to the participants; recommended suggestions are (i) the SA model and (ii) the Wilcox k-ω model.

Flow conditions

Mach number M∞ = 0.2, angle of attack α = 16°, and Reynolds number (based on the reference chord) Re = 9 × 10⁶. The boundary layer is assumed fully turbulent, and no wind tunnel effects are to be modeled.

Geometry

The multielement airfoil geometry is shown in Figure 8. Originally, the geometry is defined with a set of points. These points are then used to define a high-order geometry, which is available online at the workshop web site. The reference chord length is 0.5588 m.

Figure 8. MDA 30P-30N multielement airfoil geometry.

Boundary conditions

Adiabatic no-slip wall on the airfoil surface and free stream at the farfield.

4.12 Problem C3.2. Turbulent flow over the DPW III wing alone case

Overview

This problem aims at testing high-order methods for a 3D wing case with turbulent boundary layers at transonic conditions. This problem has been investigated previously with low-order methods as part of the AIAA Drag Prediction Workshop [31] (see DPW-W1). The target quantity of interest is the drag coefficient at one free-stream condition, as described in the following text.

Governing equations

The governing equations are the 3D RANS equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.71. The dynamic viscosity is also a constant. The choice of turbulence model is left up to the participants; recommended suggestions are (i) the SA model and (ii) the Wilcox k-ω model.

Flow conditions

Mach number M∞ = 0.76, angle of attack α = 0.5°, and Reynolds number (based on the reference chord) Re = 5 × 10⁶. The boundary layer is assumed fully turbulent, and no wind tunnel effects are to be modeled.

Geometry

The wing geometry, illustrated in Figure 9 with pressure contours, is a simple trapezoidal planform with modern supercritical airfoils. The precise geometry definition is available online at http://aaac.larc.nasa.gov/tsab/cfdlarc/aiaa-dpw/Workshop3/.

Figure 9. The wing geometry for the drag prediction workshop.

Reference values

  • Planform area: Sref = 290,322 mm² = 450 in²

  • Chord: cref = 197.556 mm = 7.778 in

  • Span: b = 1,524 mm = 60 in

Boundary conditions

Adiabatic no-slip wall on the wing, symmetry at the wing root, and free stream at the farfield.

4.13 Problem C3.3. Transitional flow over a SD7003 wing

Overview

This test case aims at characterizing the accuracy and performance of high-order solvers for the prediction of complex unsteady transitional flows over a wing section under low Reynolds number conditions. Of particular interest is the evaluation of so-called implicit LES approaches for handling, in a seamless fashion, the mixed laminar, transitional, and turbulent flow regions encountered in these low Reynolds applications. The unsteady flow is characterized by laminar separation, the formation of a transitional shear layer followed by turbulent reattachment. In a time-averaged sense, a laminar separation bubble (LSB) is formed over the airfoil.

Governing equations

The governing equations are the full 3D compressible Navier–Stokes equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. Solutions obtained employing the fully incompressible Navier–Stokes equations are also desired. Given the low value of the Reynolds number being considered, emphasis is placed on implicit LES approaches; however, methodologies that incorporate dynamic subgrid scale models are also of interest.

Flow conditions

  • Mach number M = 0.1

  • Reynolds number based on wing chord, Rec = 60,000

  • Angle of attack:

    • Case 1. α = 4°, which corresponds to a relatively long LSB

    • Case 2. α = 8°, which corresponds to a shorter LSB

Geometry

The wing section is based on the Selig SD7003 airfoil profile shown in Figure 10. This airfoil, which was originally designed for low Reynolds number operation (Rec ∼ 10⁵), has a maximum thickness of 8.5% and a maximum camber of 1.45% at x ∕ c = 0.35. The original sharp trailing edge has been rounded with a very small circular arc of radius r ∕ c ∼ 0.0004 to facilitate the use of an O-mesh topology. The precise profile of the geometry is provided in [32]. The flow is considered to be homogeneous in the spanwise direction, with periodic boundary conditions being imposed over a width s ∕ c = 0.2.

Figure 10. The SD7003 wing.

Boundary conditions

  • Far-field boundary: subsonic inflow and outflow. This boundary should be located very far from the wing at a distance of ∼ 100 chords

  • Airfoil surface: no-slip isothermal wall conditions with Twall ∕ T∞ = 1.002.

4.14 Problem C3.4. Heaving and pitching airfoil in wake

Overview

This problem aims at testing the accuracy and performance of high-order flow solvers for problems with deforming domains. An oscillating cylinder produces vortices that interact with an airfoil performing a typical flapping motion. The time histories of the drag coefficient on both the cylinder and on the airfoil are used as metrics, and two test cases corresponding to different streamwise positions of the airfoil are studied.

Governing equations

The governing equations for this problem are the 2D compressible Navier–Stokes equations with a constant ratio of specific heats equal to 1.4 and a Prandtl number of 0.72.

Flow conditions

The free stream has the magnitude U ∞  = 1, a Mach number (M ∞ ) of 0.2, and an angle of attack (α) of 0°. The Reynolds number based on the chord of the airfoil is 1000 (or, equivalently, 500 based on the diameter of the cylinder).

Geometry

The geometry consists of a cylinder of diameter d = 0.5 centered at the origin and a NACA0012 airfoil with chord length c = 1 positioned downstream of the cylinder. The airfoil geometry is given by the modification of the x⁴-coefficient to give zero trailing edge thickness, as described for case C1.3. The center of the cylinder and the 1 ∕ 3-chord of the airfoil are separated by a distance s. Both bodies heave in an oscillating motion h(t) = A sin(ωt), where A = 0.25 and ω = 2π ⋅ 0.4 (in the workshop, the angular velocity ω = 2π ⋅ 0.2 was used). In addition, the airfoil pitches about its 1 ∕ 3-chord by an angle θ(t) = a sin(ωt + ϕ), where the amplitude a = π ∕ 6 and the phase shift ϕ = π ∕ 2 (Figure 11).

Figure 11. The heaving and pitching airfoil problem.
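A minimal sketch of the prescribed motion follows (Python; it only evaluates the heave position and pitch angle as functions of time, which a mesh-deformation routine would then apply):

```python
import numpy as np

# Prescribed-motion parameters from the case definition.
A = 0.25                   # heave amplitude
omega = 2.0 * np.pi * 0.4  # angular frequency (the workshop itself used 2*pi*0.2)
a = np.pi / 6.0            # pitch amplitude about the 1/3-chord
phi = np.pi / 2.0          # phase shift between heave and pitch

def heave(t):
    """Vertical displacement of both bodies: h(t) = A sin(omega t)."""
    return A * np.sin(omega * t)

def pitch(t):
    """Pitch angle of the airfoil about its 1/3-chord: theta(t) = a sin(omega t + phi)."""
    return a * np.sin(omega * t + phi)

# Example: sample one heave/pitch period at 100 instants.
t = np.linspace(0.0, 2.0 * np.pi / omega, 100)
h, theta = heave(t), pitch(t)
```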

4.15 Problem C3.5. DNS of the Taylor–Green vortex at Re = 1600

Overview

This problem is aimed at testing the accuracy and the performance of high-order methods on the DNS of transitional flows. The test case concerns a 3D periodic and transitional flow defined by a simple initial condition: the Taylor–Green vortex. The initial flow field is given by

$$\begin{aligned}
u &= V_0 \sin\!\left(\frac{x}{L}\right)\cos\!\left(\frac{y}{L}\right)\cos\!\left(\frac{z}{L}\right),\\
v &= -V_0 \cos\!\left(\frac{x}{L}\right)\sin\!\left(\frac{y}{L}\right)\cos\!\left(\frac{z}{L}\right),\\
w &= 0,\\
p &= p_0 + \frac{\rho_0 V_0^2}{16}\left[\cos\!\left(\frac{2x}{L}\right) + \cos\!\left(\frac{2y}{L}\right)\right]\left[\cos\!\left(\frac{2z}{L}\right) + 2\right].
\end{aligned}$$

This flow transitions to turbulence with the creation of small scales, followed by a decay phase similar to homogeneous isotropic turbulence (see Figure 12).

Figure 12. Illustration of the Taylor–Green vortex at t = 0 (left) and at tfinal = 20 tc (right): isosurfaces of the z-component of the dimensionless vorticity.

Governing equations

The flow is governed by the 3D incompressible Navier–Stokes, or alternatively, the 3D compressible Navier–Stokes equations at low Mach number. In both cases, the physical parameters of the model are taken to be constant.

Flow conditions

The Reynolds number of the flow is defined as Re = ρ0 V0 L ∕ μ and is equal to 1600. If modeling compressible flow, the fluid is assumed a perfect gas with γ = cp ∕ cv = 1.4, a constant Prandtl number Pr, and zero bulk viscosity: μv = 0. In addition, the Mach number is taken as M0 = V0 ∕ c0, where c0 is the speed of sound corresponding to the temperature T0, and the initial temperature field is assumed uniform: T = T0, so that the initial density field is taken as ρ = ρ0 = p0 ∕ (R T0).

The physical duration of the computation is based on the characteristic convective time tc = L ∕ V0 and is set to tfinal = 20 tc. As the maximum of the dissipation (and thus the smallest turbulent structures) occurs at t ≈ 8 tc, participants can also decide to compute the flow only up to t = 10 tc and report solely on those results.

Geometry

The flow is computed within a periodic cube defined as − πL ≤ x,y,z ≤ πL.

Grids

Participants were encouraged to perform a grid or order convergence study if feasible. At least one computation was required on a baseline resolution of approximately 256³ DOFs: for example, for a DGM using polynomials of degree 3 (fourth-order accuracy), this corresponds to 64³ elements.
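The DOF count behind this equivalence (per equation, ignoring any additional auxiliary variables) is simply

$$N_{DOF} = N_e^3\,(p+1)^3, \qquad 64^3 \times 4^3 = 256^3,$$

where $N_e$ is the number of elements per coordinate direction and p the polynomial degree.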

Mandatory results

Each participant was requested to provide the following data:

  • The temporal evolution of the kinetic energy integrated on the domain Ω:

    $$E_k = \frac{1}{\rho_0 |\Omega|} \int_\Omega \rho\, \frac{\mathbf{v}\cdot\mathbf{v}}{2}\, d\Omega$$

  • The temporal evolution of the kinetic energy dissipation rate, ε = −dEk ∕ dt. The typical evolution of the dissipation rate is illustrated in Figure 13.

  • The temporal evolution of the enstrophy integrated on the domain Ω:

    $$\mathcal{E} = \frac{1}{\rho_0 |\Omega|} \int_\Omega \rho\, \frac{\boldsymbol{\omega}\cdot\boldsymbol{\omega}}{2}\, d\Omega$$

    This is indeed an important diagnostic, as ε is also exactly equal to 2μℰ ∕ ρ0 for incompressible flow and approximately so for compressible flow at low Mach number. A sketch of both diagnostics follows this list.

  • The vorticity norm on one periodic face of the domain at a specified time, for comparison to the reference results. An illustration is given in Figure 14. For brevity, the results of this comparison are not treated in this paper.
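As referenced above, here is a minimal sketch of these diagnostics for a compressible solution sampled on a uniform periodic grid (Python; the spectral-derivative helper and the field layout are illustrative assumptions, not part of the case definition):

```python
import numpy as np

def ddx(f, axis, L=1.0):
    """Spectral derivative of a periodic field along one axis of a uniform grid
    covering [-pi*L, pi*L) with n points along that axis."""
    n = f.shape[axis]
    k = np.fft.fftfreq(n, d=2.0 * np.pi * L / n) * 2.0 * np.pi  # angular wavenumbers
    shape = [1, 1, 1]; shape[axis] = n
    return np.real(np.fft.ifft(1j * k.reshape(shape) * np.fft.fft(f, axis=axis), axis=axis))

def tgv_diagnostics(rho, u, v, w, rho0, L=1.0):
    """Volume-averaged kinetic energy and enstrophy, as defined for case C3.5.
    Arrays are indexed as [x, y, z]."""
    ke = np.mean(rho * (u**2 + v**2 + w**2) / 2.0) / rho0
    wx = ddx(w, 1, L) - ddx(v, 2, L)   # vorticity components
    wy = ddx(u, 2, L) - ddx(w, 0, L)
    wz = ddx(v, 0, L) - ddx(u, 1, L)
    ens = np.mean(rho * (wx**2 + wy**2 + wz**2) / 2.0) / rho0
    return ke, ens
```

The dissipation rate ε can then be obtained by differentiating the time history of Ek or, for incompressible flow, from 2μℰ ∕ ρ0.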

Figure 13. Evolution of the dimensionless energy dissipation rate as a function of dimensionless time: results of the pseudo-spectral code and of variants of a DG code.

Figure 14. Isocontours of the dimensionless vorticity norm on a subset of a periodic face at the comparison time: results obtained using the pseudo-spectral code (black) and a DG code with p = 3 on a 96³ mesh (red).

Reference data

The results are compared with a reference incompressible flow solution. This solution was obtained using a dealiased pseudo-spectral code (developed at Université Catholique de Louvain) for which, spatially, neither numerical dissipation nor numerical dispersion errors occur; the time integration is performed using a low-storage three-step Runge–Kutta scheme [33], with a dimensionless time step of 1.0 × 10⁻³. These results have been grid converged on a 512³ grid (a grid convergence study for a spectral discretization has also been carried out by van Rees et al. in [34]); this means that all Fourier modes up to the 256th harmonic with respect to the domain length have been captured exactly (apart from the time integration error of the Runge–Kutta scheme). However, an adequate resolution is already obtained at 256³, which is the baseline requirement for the test case.

5 CURRENT STATUS—SAMPLE RESULTS

In this section, we present a subset of the results analyzed and reviewed in the course of the workshop. Not all cases are included, but the ones chosen form a representative sampling that supports several conclusions. In presenting the results, we will make use of several acronyms, which are defined in Table 1.

Table 1. Acronyms used in the results presentation.

  Acronym   Definition
  FVM       Finite volume method
  DG        Discontinuous Galerkin
  CPR       Correction procedure via reconstruction
  SEM       Spectral element method
  RK4       Fourth-order Runge–Kutta
  RDG       Reconstructed discontinuous Galerkin
  DRP       Dispersion relation preserving (finite difference)

5.1 Inviscid flow through a channel with a smooth bump

This is an example of a smooth flow for which high-order schemes are expected to perform very well compared with low-order schemes. Ten groups submitted full or partial results for this case, using a variety of methods, as listed in Table 2. With the exception of the MIT group, all participants used a sequence of nested uniformly refined meshes. The MIT group generated output-based optimized meshes at several fixed DOFs.

Table 2. Inviscid flow through a channel with a smooth bump: summary of participants.

  Group    Affiliation                               Authors                    Method
  UTenn    University of Tennessee at Chattanooga    Wang, Anderson, Erwin      DG
  UM       University of Michigan                    Khosravi, Fidkowski        DG
  UBC      University of British Columbia            Ollivier-Gooch             High-order FV
  MIT      Massachusetts Institute of Technology     Yano, Darmofal             DG
  ONERA    ONERA                                     S. Gérald                  DG
  Twente   Universities of Twente and Edinburgh      van der Weide and Svärd    FD
  ISU      Iowa State University                     Li, Wang                   CPR–DG
  UC       University of Cincinnati and AFRL         Galbraith, Orkwis, Benek   DG
  Glenn    NASA Glenn                                Huynh                      DG

A subset of the results for entropy error convergence is presented in Figure 15. Shown are the data points for all the groups at approximation orders p = 1 and p = 3, that is, formally second and fourth order, respectively. Although additional orders were compared in the workshop, only these two are shown here for brevity; p = 1 was chosen because it is expected to be similar to traditional second-order schemes, and p = 3 was chosen as a representative high-order result. Results from a second-order reconstructed FV scheme, obtained using the Tau code, are shown for reference on all of the plots. We immediately see that high order (p = 3) is, across the board, more efficient in DOFs than p = 1. Among the participants, the spread in error versus h is smaller for p = 3 compared with p = 1. The spread in work units is large for both orders shown, indicating that implementation efficiency and coding practices vary significantly.

Figure 15. Inviscid flow through a channel with a smooth bump: entropy error norm versus mesh size and work units, shown for two different approximation orders.

On average, p = 1 results are similar to the reference FV solution. On the other hand, p = 3 results from nearly all of the participants outperform the reference FVM solution in terms of DOFs and work units. Furthermore, as expected, the ideal asymptotic convergence rates are attained for this smooth problem.

5.2 Flow over the NACA0012 airfoil, inviscid and viscous, subsonic and transonic

In this section, we present results from all subcases of case 1.3: these are (i) inviscid subsonic, (ii) transonic flow, and (iii) viscous subsonic. Participants that submitted results for this case are listed in Table 3. We note again that the MIT group generated output-based optimized meshes at fixed DOFs. The UW group used an hp-refinement strategy, whereas all other participants used sets of uniformly refined meshes. The MIT and UW groups, as well as the UBC group, used triangular meshes, whereas the remaining participants used a quadrilateral mesh topology. The DLR and UM groups, as well as the ISU group, used the quadrilateral meshes provided to participants via the conference website, whereas the remaining groups generated meshes independently.

Table 3. Flow over the NACA0012 airfoil: summary of participants.

  Group    Affiliation                               Authors                    Method
  MIT      Massachusetts Institute of Technology     Yano, Darmofal             DG
  UM       University of Michigan                    Fidkowski                  DG
  UBC      University of British Columbia            Ollivier-Gooch             High-order FV
  DLR      DLR                                       Hartmann                   DG
  Twente   Universities of Twente and Edinburgh      van der Weide and Svärd    FD
  ISU      Iowa State University                     Zhou, Wang                 CPR–DG
  UW       University of Wyoming                     Burgess, Mavriplis         DG

It should be appreciated that no exact analytical solutions were used to measure the error for this test case. Instead, each participant provided a convergence study based on a reference solution that he or she generated at higher resolution (finer grid and/or using polynomials of higher degree). However, there were no common standards by which the reference solution was generated, and furthermore, not all participants provided the details of their individual approach. The reference solutions that were used by the participants showed a scatter on a level similar to that approached by the errors in mesh refinement. Consequently, results for this test case should be interpreted as qualitative in nature.

We show a representative subset of the results submitted by the participants. For both the subsonic and the transonic test cases, we compare results for p = 3 with results for p = 1, obtained with the same respective method. Furthermore, we again compare the results with those from a second-order reconstructed FVM, computed by the DLR Tau code.

Results for the subcritical case are shown in Figures 16 and 17. High order (p = 3) is observed to be more efficient in DOFs and work units than low order (p = 1). The p = 1 results, on average, are similar to the reference FVM solution, whereas the p = 3 results, for almost all simulations, outperform the reference FVM solution in terms of DOFs and work units. This is true even for moderate levels of error. The spread in error versus h is smaller for p = 3 compared with p = 1, whereas the spread in work units is fairly large for both orders. It should be noted that not all participants have optimized their solvers with respect to runtime. Therefore, a somewhat larger scatter in work units is not surprising.

Figure 16.

Inviscid subcritical flow over a NACA0012 airfoil: lift and drag coefficient error convergence for p = 1.

Figure 17.

Inviscid subcritical flow over a NACA0012 airfoil: lift and drag coefficient error convergence for p = 3.

Asymptotic high-order convergence rates are not attained with uniform refinement at p = 3 because of a singularity at the trailing edge of the airfoil. The highest convergence rates, at least for drag, are observed with the fixed-DOF, output-based optimization strategy employed by the MIT group. This strategy is effectively able to isolate the trailing edge singularity with appropriately small elements, in contrast to uniform refinement of a fixed starting mesh. Note that the results from the DLR group and the UM group are very similar, when plotted against h, as these two groups use the same discretization method (DG), and furthermore, both groups used the meshes provided by the conference organizers.
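
The convergence rates discussed here are typically extracted from successive levels of a refinement study via the relation e ∝ h^k. A minimal sketch of that standard calculation, with illustrative variable names, is:

```python
import numpy as np

def observed_order(h, err):
    """Observed convergence rate k between successive meshes, assuming e ~ C*h^k.

    h, err : representative mesh sizes and the corresponding errors,
             ordered from coarse to fine.
    Returns one rate per consecutive pair of meshes.
    """
    h = np.asarray(h, dtype=float)
    err = np.asarray(err, dtype=float)
    return np.log(err[1:] / err[:-1]) / np.log(h[1:] / h[:-1])
```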

Results for the transonic case are shown in Figures 18 and 19 for p = 1 and p = 2, respectively. The plots show the reference FVM solution (TAU), a uniform refinement study (UM), and two adaptive studies (MIT and UW). It should be noted that the UW group, which contributed one of the adaptive solutions, actually performed hp-adaptation using polynomial degrees from p = 1 to p = 4; because they provided a single convergence study, the same UW results appear in both Figures 18 and 19. For the transonic case, the benefit of high order (p = 2 here, chosen because a larger data set was submitted at this order) over p = 1 is not as great as for smooth problems. In general, however, p = 2 fares no worse than p = 1 in terms of DOFs or even work units. Adaptive refinement generally performs better than uniform refinement, as expected, although the convergence histories are not as regular as for smooth problems.

Figure 18.

Transonic flow over a NACA0012 airfoil: lift and drag coefficient convergence for p = 1.

Figure 19.

Transonic flow over a NACA0012 airfoil: lift and drag coefficient convergence for p = 2.

Results for the viscous case are shown in Figures 20 and 21. We note that the UW group again submitted a result using an hp-refinement strategy. In addition, the residual convergence criterion for this case was relaxed from ten to eight orders of magnitude. The plots again include a reference FVM solution generated by the Tau code. In terms of both DOFs and work units, most of the p = 1 submissions are comparable with the reference FVM solution. For p = 3, the submitted results show generally better performance than the FVM solution, especially when high accuracy is required.

Figure 20.

Viscous flow over a NACA0012 airfoil: lift and drag coefficient convergence for p = 1.

Figure 21.

Viscous flow over a NACA0012 airfoil: lift and drag coefficient convergence for p = 3.

5.3 Vortex transport by uniform flow

This unsteady case, the last of the easy set, is relevant to various unsteady flows involving vortex propagation and resolution. The participants are listed in Table 4. We note that only DG results were submitted.

Table 4. Vortex transport by uniform flow: summary of participants.
Group   | Affiliation               | Authors          | Method
ISU     | Iowa State University     | Li, Wang         | CPR–DG
UM      | University of Michigan    | Fidkowski        | DG
Bergamo | University of Bergamo     | Bassi            | DG
JSU     | Jackson State University  | Tu, Pang, Xiang  | DG

A subset of the submitted results is shown in Figure 22. For comparison, a result using the second-order accurate STAR-CD code from CD-adapco is shown and labeled as Star. High order in both space and time is important for observing optimal rates in solution convergence, as measured by the velocity error norm at the final time. We see that high-order results, p = 3, significantly outperform the low-order, p = 1, results in this case.

Figure 22.

Vortex transport by uniform flow: convergence of velocity error with h and work units.
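
To illustrate how the final-time velocity error can be evaluated for this case, the sketch below assumes a doubly periodic domain [0, Lx) × [0, Ly), so that the exact solution is simply the initial vortex translated by the free-stream velocity; the specific vortex profile and boundary treatment prescribed by the workshop are not reproduced here.

```python
import numpy as np

def vortex_velocity_error(x, y, u, v, t_final, u_inf, v_inf, init_uv,
                          Lx, Ly, weights):
    """Discrete L2 velocity error at the final time (assumed periodic setup).

    init_uv(x, y) returns the initial (u, v) field; on a periodic domain the
    exact solution at t_final is that field translated by (u_inf, v_inf)*t_final,
    with the shift wrapped back into [0, Lx) x [0, Ly).
    """
    xs = np.mod(x - u_inf * t_final, Lx)
    ys = np.mod(y - v_inf * t_final, Ly)
    ue, ve = init_uv(xs, ys)
    err2 = (u - ue)**2 + (v - ve)**2
    return np.sqrt(np.sum(weights * err2) / np.sum(weights))
```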

5.4 Turbulent flow over an RAE airfoil

Three groups submitted results to this moderately difficult case, which involved simulation of a Reynolds-averaged boundary layer and a transonic shock. A summary of the participants is given in Table 5. Two of the groups used the SA turbulence model. The Bergamo group submitted two data sets, one with the EARSM and one with the Wilcox k − ω model. The MIT group submitted results using adaptive mesh optimization at fixed DOFs, whereas the other groups submitted uniform refinement results. All groups used DG for the discretization.

Table 5. Turbulent flow over an RAE airfoil: summary of participants.
Group   | Affiliation                           | Authors         | Method/turbulence model
MIT     | Massachusetts Institute of Technology | Yano, Darmofal  | DG/SA
UM      | University of Michigan                | Fidkowski       | DG/SA
Bergamo | University of Bergamo                 | Bassi           | DG/EARSM and k − ω

The results for drag and lift coefficients are shown in Figure 23. The submitted results show a large spread among the groups and virtually no improvement from high order under uniform mesh refinement. Part of the spread in the uniform refinement results may be attributed to inadequate wake alignment in the initial mesh. In this case, given the relative complexity of the flow, in particular the use of a RANS turbulence model, and the difficulty of constructing a good a priori initial mesh, adaptive mesh optimization is markedly superior. Another cause of the spread may be the choice of turbulence model, as three different models were used by the participants.

Figure 23.

Turbulent transonic flow over an RAE airfoil: drag and lift coefficient errors versus mesh size and work units, shown for two different approximation orders.

5.5 Delta wing at low Reynolds number

The participants for this case are listed in Table 6. We note that only DG results were submitted.

The results are shown in Figure 24. High order shows some benefit in DOFs, but in terms of work units, high order is comparable with low order. The singularity at the leading edge of the delta wing precludes high-order convergence rates and likely masks the benefit of high order with uniform mesh refinement. Some of the spread in the results can be ascribed to the different mesh sequences used by the groups (e.g., the UTenn meshes differed from those of the other participants) and to different approximation spaces (e.g., Bergamo used a full-order space instead of a tensor-product space). The results of UM and DLR are almost identical in accuracy versus DOFs because both groups used the same numerical scheme (BR2 on a tensor-product space), representing a perfect cross-code verification; the required work units differ substantially, however, because of the different solver strategies used.

Table 6. Delta wing at low Reynolds number: summary of participants.
Group   | Affiliation                            | Authors               | Method
DLR     | DLR                                    | Hartmann              | DG
UM      | University of Michigan                 | Fidkowski             | DG
Bergamo | University of Bergamo                  | Bassi                 | DG
UTenn   | University of Tennessee at Chattanooga | Wang, Anderson, Erwin | DG
Figure 24.

Delta wing: lift coefficient errors versus mesh size and work units, shown for various orders.

5.6 DNS of the Taylor–Green vortex at Re = 1600

Starting from a simple analytic initial solution, the transition of the Taylor–Green vortex at Re = 1600 generates very complex 3D (anisotropic) turbulent flow features. The transition is sufficiently robust that it is not triggered by truncation errors. Therefore, a detailed temporal comparison of the obtained solutions is possible.

The convergence analysis is based on the kinetic energy dissipation rate. A grid-converged reference computation was obtained using an incompressible pseudo-spectral method. The resolved spectrum contained up to the 256th spatial harmonic based on the width of the computational domain (i.e., 512³ DOFs per variable). Participants were required to perform at least one computation with an equivalent resolution of 256³ subdivisions of the domain; based on a convergence study with the pseudo-spectral code, this resolution appears to correspond to the Nyquist criterion for the relevant spatial scales of this test case. Compressible codes were required to run at an equivalent Mach number of 0.1, based on the reference velocity, to avoid significant compressibility effects.
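
For reference, the analytic initial condition of the Taylor–Green vortex is commonly written as below; the nondimensionalization (values of V0, L, ρ0, p0) and the isothermal density initialization are illustrative choices rather than the workshop's exact prescription.

```python
import numpy as np

def taylor_green_ic(x, y, z, V0=1.0, L=1.0, rho0=1.0, p0=100.0):
    """Commonly used Taylor-Green vortex initial condition on [-pi*L, pi*L]^3.

    The Mach number is set by the ratio of V0 to the speed of sound implied by
    p0 and rho0, and the viscosity follows from Re = rho0*V0*L/mu = 1600.
    """
    u =  V0 * np.sin(x / L) * np.cos(y / L) * np.cos(z / L)
    v = -V0 * np.cos(x / L) * np.sin(y / L) * np.cos(z / L)
    w = np.zeros_like(u)
    p = p0 + (rho0 * V0**2 / 16.0) * (np.cos(2*x/L) + np.cos(2*y/L)) * (np.cos(2*z/L) + 2.0)
    rho = rho0 * p / p0  # isothermal initialization, one common choice
    return rho, u, v, w, p
```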

Two errors with respect to the reference results were defined. The first concerns the time derivative of the computed kinetic energy, whereas the second concerns the error in the theoretical dissipation rate, based on the integral of the enstrophy. A third error is defined as the difference between the temporal derivative of the kinetic energy and the numerically evaluated theoretical dissipation rate. This error is independent of the reference solution and may as such be applied to more complex flows; a potential drawback is that it should be zero for any truly kinetic-energy-preserving method, such as the skew-symmetric central FDM. In practice, even such methods usually introduce numerical dissipation through dealiasing filters.
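
A minimal sketch of how these three error measures could be evaluated from time histories is given below. The incompressible relation ε = 2ν × (volume-averaged enstrophy) is assumed for the theoretical dissipation rate, and a simple root-mean-square over the sampled times is used; the precise norm adopted in the workshop is not restated here.

```python
import numpy as np

def dissipation_rate_errors(t, Ek, enstrophy, nu, eps_ref):
    """Sketch of the three error measures (assumed RMS-in-time form).

    t, Ek     : time samples and volume-averaged kinetic energy
    enstrophy : volume-averaged enstrophy 0.5*<omega . omega> at the same times
    nu        : kinematic viscosity
    eps_ref   : reference (pseudo-spectral) dissipation rate at the same times
    """
    eps_measured = -np.gradient(np.asarray(Ek, float), np.asarray(t, float))
    eps_theory = 2.0 * nu * np.asarray(enstrophy, float)
    eps_ref = np.asarray(eps_ref, float)
    err1 = np.sqrt(np.mean((eps_measured - eps_ref)**2))     # vs. reference
    err2 = np.sqrt(np.mean((eps_theory - eps_ref)**2))       # vs. reference
    err3 = np.sqrt(np.mean((eps_measured - eps_theory)**2))  # reference-free
    return err1, err2, err3
```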

This test case allows, because of its simple setup and geometry, for a detailed comparison between all types of computational methods. This was reflected in the diversity of the contributions, as shown in Table 7.

Table 7. Taylor–Green vortex: summary of participants.
Group      | Affiliation                      | Authors                          | Method
IAG–DG     | University of Stuttgart and IAG  | Beck, Gassner                    | DGSEM, RK5
Cenaero–DG | Cenaero                          | Carton, Hillewaert               | DG/IP, RK4
ONERA–DG   | ONERA                            | Chapelier, Gerald, Renac, Plata  | DG/BR2, RK4
Glenn–FD   | NASA Glenn                       | Debonis                          | FD, RK4
Glenn–DRP  | NASA Glenn                       | Debonis                          | DRP, RK4
ISU–CPR    | Iowa State University            | Wang, Haga                       | DG–CPR, RK4
ONERA–FV   | ONERA                            | Le Gouez                         | FV/Recon, RK3
UM–DG      | University of Michigan           | Varadan, Hara, Johnsen, Van Leer | DG/RDG, RK4

A. Beck (University of Stuttgart) provided an order convergence study using the DG spectral element method (DGSEM). C. Carton de Wiart (Cenaero) provided a grid convergence study using a fourth-order accurate DG/symmetric interior penalty method. J. Debonis (NASA Glenn) provided grid convergence studies with the fourth-order, eighth-order, and 12th-order standard central FDMs, as well as with the fourth-order dispersion relation-preserving (DRP) FDM of Bogey and Bailly. We note that DRP methods [35] are based on a different philosophy compared with standard high-order methods: they attempt to minimize dispersion errors instead of directly addressing the formal order of accuracy. T. Haga (Iowa State University) provided an order convergence study with the CPR/DG method. J.-M. Le Gouez (Onera) provided computations using third-order and fourth-order accurate reconstructed FVMs (NXO). Although the method is formally applicable to unstructured meshes, the code used for these computations explicitly takes the structure of the mesh into account. J.-B. Chapelier (Onera) provided computations using a fourth-order accurate DGM with the BR2 approach for diffusion (DG/BR2). S. Varadan (University of Michigan) provided computations with a method based on DG (fifth-order accurate) for the convective terms and an eighth-order recovery scheme for the viscous terms. High-resolution FDMs have long been used for the study of canonical turbulence benchmarks and hence provide a very good reference for the newer unstructured methods.

Figures 25 and 26 show the evolution of the errors with respect to this reference solution, whereas Figure 27 shows the evolution of the error related to the difference between theoretical and measured dissipation rate.

Figure 25.

Convergence of the measured dissipation rate.

Figure 26.

Convergence of the theoretical dissipation rate.

Figure 27.

Convergence of the difference between measured and theoretical dissipation rates.

We see that the error convergence of the high-order FDMs does not attain the projected (up to 12th) order of convergence for any of the error measures. This is also the case for the finite volume scheme. The DG-like schemes, however, do seem to attain the correct order of convergence for the first and third error types; the error of the second type evolves suboptimally for all schemes. The errors of the DG-related schemes are on the same level as those of the best-performing finite difference scheme.

We see that the error of the 12th-order accurate central finite difference scheme is comparable with the error obtained by the DRP scheme, which is formally only fourth-order accurate; the error of the corresponding standard fourth-order scheme is much higher. This seems to indicate that the dispersion error already dominates the convergence, which is consistent with the previous point, as DG schemes are dissipative but have particularly good dispersive accuracy.

At the resolution proposed for the test case, there is a large difference between the structured methods (i.e., those that exploit the mesh topology during the computation), namely the finite difference and finite volume schemes, and the truly unstructured schemes. The effort-based rate of convergence of the unstructured methods equals that of the most accurate FDMs, but it is shifted towards higher cost and higher precision at similar resolution. This means that for sufficiently stringent accuracy requirements, unstructured methods are competitive with FDMs.

A. Beck performed an order convergence study, keeping the equivalent resolution constant. The results show spectral spatial convergence, at a constant computational cost, thus leading to a much higher efficiency than the finite difference schemes. This conclusion is only valid for the explicit time integration used for this test case and cannot be generalized to implicit time integration because of the very different scaling of the computational effort as a function of order of convergence.

Finally, we note that these results have focused on spatial errors. Temporal errors are also of concern in unsteady simulations, and for efficiency, high-order time integration schemes should be used with spatially high-order methods. There are several choices of high-order time integration schemes [36, 37], and this topic will be explored in future workshops.

6 CONCLUSIONS AND PACING ITEMS

The workshop provided a platform to compare numerical methods on the basis of error versus cost. We believe that this is a more objective way to measure different methods than comparing the error on a given mesh or even at the same number of DOFs. To accommodate different types of methods, the error is defined as independently of the methods as possible. Likewise, the cost in work units is computed on the basis of the TauBench benchmark to minimize the effect of different computers. We caution that this definition of cost is not foolproof: we found that the costs for the same case run on different computers can differ by as much as 30%. Therefore, one should not draw conclusions on efficiency when the cost of one method differs from another by less than 30%. For steady problems, the iterative solution approach is as critical as the spatial discretization scheme in determining the overall efficiency and accuracy of the numerical method. Equally important is the definition of convergence. For most cases, we used an eight or ten order-of-magnitude reduction in the density residual as the convergence criterion.
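
For clarity, the work-unit normalization amounts to dividing the measured solver cost by the runtime of the TauBench benchmark on the same machine; a schematic version is sketched below, with the caveat that the exact workshop formula (e.g., the treatment of parallel runs) may differ in detail.

```python
def work_units(solver_wall_time, taubench_wall_time, n_cores=1):
    """Machine-normalized cost: solver time expressed in TauBench units.

    solver_wall_time   : wall-clock time of the CFD run, in seconds
    taubench_wall_time : wall-clock time of TauBench on one core of the same
                         machine, in seconds
    n_cores            : number of cores used by the CFD run (assumed scaling)
    """
    return n_cores * solver_wall_time / taubench_wall_time
```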

With the workshop results, we can draw the following conclusions:

  1. For problems with smooth solutions and geometries, high-order methods are able to demonstrate high-order accuracy with h and p refinement. For cases C1.1 and C1.3, results from the TAU code based on the second-order FVM are provided by DLR for comparison. High-order methods demonstrated better performance than the second-order FVM for both steady and unsteady problems on the basis of error versus cost. High-order methods therefore perform better than low-order methods for these 2D cases. A similar conclusion is expected in three dimensions, although we require additional test cases to demonstrate this.

  2. For problems with nonsmooth solutions or geometries, high-order methods cannot achieve high-order accuracy, as expected. There is not enough evidence to state whether high-order methods perform better or worse than low-order methods. More comparisons are necessary. High-order methods, however, might require more computer memory to perform the simulations.

  3. Solution-based hp-adaptations have been shown to be very effective in minimizing the computational cost to achieve a given level of accuracy. The advantage is much more pronounced when the solution or geometry is nonsmooth. h-Adaptation near the solution or derivative discontinuity appears to be the most efficient means to reduce solution error.

  4. For RANS simulations, high-order methods are still not as robust as low-order methods in converging to the steady state solution. It is believed that this behavior is related to nonsmoothness introduced in the turbulence models. Ad hoc fixes have been used to alleviate the problem, but smooth turbulence models are needed to remedy the problem.

Although impressive progress has been made in high-order methods, much remains to be done. For these methods to impact the design process, significant progress in the following pacing items is required:

  1. High-order mesh generation: to achieve high-order accuracy, curved geometries need to be represented with high-order polynomials. At present, we are not aware of any commercial grid generators capable of generating coarse high-order meshes, although a public domain package, Gmsh [38], is available. Coarse high-order meshes are needed to exploit the potential of higher-order methods. The generation of unstructured, highly clustered viscous meshes near high-order boundaries requires further research to improve the robustness. The main difficulty is that cells near the curved geometries can overlap each other.

  2. Solution-based hp-adaptations: the promise of high-order methods can be best realized with hp-adaptations using an appropriate adaptation criterion. Output adjoint-based, entropy-based, and residual-based error indicators [39] have demonstrated their effectiveness. Efficient and robust 3D implementation requires further development. Furthermore, it is desirable to remove any user-specified parameters and have an error estimate on key flow parameters such as force coefficients in the adaptation process.

  3. Scalable, memory-efficient time integrators for RANS and hybrid RANS/LES approaches: for high Reynolds number problems, much research is still needed on how to handle the stiffness generated by highly anisotropic viscous meshes near the wall. Line-implicit approaches have been shown to be effective. However, the memory requirement scales as p⁶, with p the polynomial order (see the sketch after this list). Clever ways to reduce the memory requirement are needed to make higher-order methods practical in 3D. Large-scale turbulent flow simulations will be performed on massively parallel computers. Much effort is needed in developing scalable solvers and effective preconditioners for these architectures.

  4. Robust, accuracy-preserving, and parameter-free shock capturing: two main approaches have been employed to capture discontinuities in high-order methods: artificial viscosity and solution limiting. The former involves user-specified parameters, and the latter often causes iterative convergence to stall. Another desirable feature of any shock-capturing approach is accuracy preservation away from discontinuities. Can we develop an approach that is robust, parameter free, accuracy preserving, and convergent for steady flow problems? That is one of the challenges facing us.
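
The p⁶ memory scaling mentioned in item 3 follows from the size of the dense element Jacobian blocks stored by implicit DG-type solvers in 3D; the following sketch, with illustrative parameter names, makes the counting explicit.

```python
def dg_jacobian_block_bytes(p, dim=3, n_eq=5, bytes_per_entry=8):
    """Rough storage estimate for one dense element Jacobian block.

    With a tensor-product basis there are (p+1)**dim modes per equation, so the
    block is N x N with N = n_eq*(p+1)**dim; its storage grows like
    (p+1)**(2*dim), i.e., like p^6 in 3D.  This only illustrates the scaling
    argument and does not model any particular solver.
    """
    n = n_eq * (p + 1)**dim
    return n * n * bytes_per_entry
```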

ACKNOWLEDGEMENTS

We gratefully acknowledge the support by the Air Force Office of Scientific Research, AIAA FDTC, and DLR.
