Assembly of finite element methods on graphics processors

Authors

  • Cris Cecka,

    1. Institute for Computational and Mathematical Engineering, Stanford University, CA, U.S.A.
  • Adrian J. Lew,

    Corresponding author
    1. Institute for Computational and Mathematical Engineering, Stanford University, CA, U.S.A.
    2. Department of Mechanical Engineering, Stanford University, CA, U.S.A.
    • Mechanics and Computation, 496 Lomita Mall, Durand Building, Stanford University, Stanford, CA 94305-4040, U.S.A.
  • E. Darve

    1. Institute for Computational and Mathematical Engineering, Stanford University, CA, U.S.A.
    2. Department of Mechanical Engineering, Stanford University, CA, U.S.A.

Abstract

Recently, graphics processing units (GPUs) have had great success in accelerating many numerical computations. We present their application to computations on unstructured meshes, such as those arising in finite element methods. Multiple approaches to assembling and solving sparse linear systems with NVIDIA GPUs and the Compute Unified Device Architecture (CUDA) are developed and analyzed. Strategies for efficient use of global, shared, and local memory, methods to achieve memory coalescing, and the optimal choice of parameters are introduced. We find that with appropriate preprocessing and arrangement of support data, the GPU coprocessor using single-precision arithmetic achieves speedups of 30 or more in comparison with a well-optimized, double-precision, single-core implementation. We also find that the optimal assembly strategy depends on the order of polynomials used in the finite element discretization. Copyright © 2010 John Wiley & Sons, Ltd.
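
For illustration only, the sketch below shows one simple assembly-by-element strategy of the general kind the abstract refers to: one CUDA thread per element scattering a precomputed local element matrix into a global stiffness matrix stored in CSR format, using atomic additions to resolve write conflicts between neighboring elements. This is a hedged sketch, not the authors' implementation; the element type (linear triangles), the kernel name assemble_by_element, and all parameter names are assumptions, and the paper's shared-memory staging and coalescing optimizations are omitted.

    // Minimal sketch of a GPU assembly-by-element kernel (CUDA).
    // Illustrative only: names, layout, and element type are assumptions,
    // not taken from the paper.
    #include <cuda_runtime.h>

    #define NODES_PER_ELEM 3  // linear triangles, assumed for illustration

    // elem_nodes : [num_elems * NODES_PER_ELEM] global node index per local node
    // elem_mats  : [num_elems * NODES_PER_ELEM * NODES_PER_ELEM] precomputed
    //              local element matrices, row-major per element
    // csr_row_ptr, csr_col_idx, csr_vals : global matrix in CSR format
    __global__ void assemble_by_element(const int*   elem_nodes,
                                        const float* elem_mats,
                                        const int*   csr_row_ptr,
                                        const int*   csr_col_idx,
                                        float*       csr_vals,
                                        int          num_elems)
    {
        int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= num_elems) return;

        const int*   nodes = elem_nodes + e * NODES_PER_ELEM;
        const float* ke    = elem_mats  + e * NODES_PER_ELEM * NODES_PER_ELEM;

        // Scatter each local entry (a, b) into the global CSR matrix.
        for (int a = 0; a < NODES_PER_ELEM; ++a) {
            int row = nodes[a];
            for (int b = 0; b < NODES_PER_ELEM; ++b) {
                int col = nodes[b];
                // Linear search for the column within this row's nonzeros.
                for (int k = csr_row_ptr[row]; k < csr_row_ptr[row + 1]; ++k) {
                    if (csr_col_idx[k] == col) {
                        // Several elements may touch the same entry, so the
                        // update must be atomic.
                        atomicAdd(&csr_vals[k], ke[a * NODES_PER_ELEM + b]);
                        break;
                    }
                }
            }
        }
    }

After copying the mesh connectivity and CSR arrays to the device, a host-side launch such as assemble_by_element<<<(num_elems + 255) / 256, 256>>>(...) would run the scatter; single-precision storage is assumed here, consistent with the abstract's single-precision results.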
