High-performance computing applied to numerical methods in physics and engineering enables the simulation and design of many devices and systems. Continuing to extend the range of applicability of computational methods to field and device modeling relies on increases in computational power and in the efficiency of numerical methods. In the past, the scaling of computational power often relied on increasing the clock speed of central processing units (CPUs). Parallel systems were also developed to further boost computational performance. More recently, CPU clock speeds have saturated because of physical limitations related to power dissipation and losses. Further scaling of modern computing systems relies on increasing their complexity in terms of the architecture and the number of processing units. Recent advances in multi-processor computer architectures provide many exciting opportunities in computational science and engineering.

Multi-core CPU computing systems have replaced older single-core configurations, and new architectures have emerged. Graphics processing units (GPUs) represent a particularly successful type of massively parallel hardware architecture. Modern GPUs contain several thousand stream processors, offering massive parallelization at a fraction of the cost and power consumption of comparable CPU clusters. Multiple GPUs can be installed in a single computer node or incorporated into multi-node GPU clusters. Heterogeneous multi-core/multi-CPU/multi-GPU systems provide the flexibility to exploit the strengths of both GPU and CPU architecture types. The use of GPUs has led to major advances in many areas of scientific computing, including molecular dynamics, medical imaging, fluid dynamics, seismic imaging, and computational finance, to name a few. GPU computing also holds great promise for field and device modeling.

There exist many challenges in porting present-day algorithms onto, or developing new algorithms for, GPUs. Indeed, there are a number of differences between GPU and CPU architectures in terms of processor and memory arrangements. Algorithms that execute efficiently on one of these architectures may be ineffective on the other. The challenges are further exacerbated when developing algorithms for heterogeneous GPU-CPU systems. Additional questions arise because of the several choices available for code development. The two major platforms for developing GPU codes are CUDA and OpenCL. CUDA was the first general-purpose platform and has a relatively larger number of adopters, but it can be used only with Nvidia GPUs. OpenCL is a more recent platform and can be used with most modern GPU types. It is important to be able to understand and use these platforms in an optimal way. Identifying and extending the range of applicability of GPU-accelerated or CPU-GPU hybrid methods is key to unlocking the full potential of this new computing paradigm.
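As a minimal illustration of the data-parallel programming model underlying these platforms, the CUDA sketch below (assuming an Nvidia device; the kernel name, problem size, and block size are chosen purely for illustration) assigns one lightweight GPU thread to each element of a vector addition. An OpenCL version would follow the same structure with different host-side setup code.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each GPU thread computes one element of c = a + b.
    __global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                      // illustrative problem size
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified (managed) memory keeps host/device transfers implicit.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        const int threads = 256;                        // threads per block
        const int blocks = (n + threads - 1) / threads; // enough blocks to cover n
        vectorAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }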

The objective of this special issue is to report on recent advances in the area of GPU programming aimed at analyzing engineering problems. The papers present GPU algorithms and simulations in several areas of computational physics, engineering, and device modeling. The papers report on GPU computing in computational electromagnetics, micromagnetics, fluid dynamics, electromechanical modeling, and general partial differential equation solvers.

De Donno et al. discuss GPU implementations of the finite-difference time-domain method and the mixed-potential integral equation method for solving electromagnetic scattering problems.

Attardo et al. outline how fast Fourier transform (FFT)-based integral equation codes can be ported to GPUs. They obtain high acceleration rates and demonstrate the use of their codes in solving the electric field integral equation for modeling electromagnetic scattering from impenetrable metallic objects.

Hamada compares boundary element methods implemented on GPUs for analyzing realistic human voxel models. The implementation includes a solver augmented by three versions of the fast multipole method implemented on GPUs. The comparisons highlight the strengths of CPU and GPU computing applied to N-body problems and electromagnetic codes.

Stefanski et al. present GPU codes for solving Maxwell's equations by the finite-difference time-domain (FDTD) method, implemented using the OpenCL framework with MPI for multi-GPU computing systems. The presented codes are incorporated into commercial software, and electromagnetic simulation results for realistic human models are demonstrated.

Vansteenkiste et al. present their high-performance GPU-accelerated micromagnetic code. This code uses the finite-difference method with FFT acceleration of the magnetostatic field to solve the Landau–Lifshitz–Gilbert equation describing the magnetization dynamics in magnetic nanostructures.
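For reference, a standard Gilbert form of this equation is sketched below (the paper's exact formulation and sign/unit conventions may differ):

\begin{equation}
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t},
\end{equation}

where \(\gamma\) is the gyromagnetic ratio, \(\alpha\) the dimensionless damping constant, \(M_s\) the saturation magnetization, and \(\mathbf{H}_{\mathrm{eff}}\) the effective field, whose magnetostatic contribution is the part evaluated with the FFT.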

Musolino et al. introduce GPU computing to the field of electromechanical device modeling, using GPUs to accelerate a low-frequency integral formulation of Maxwell's equations coupled with the rigid-body dynamics equations.

Zhang describes work on implementing a particle model for fluid simulations on multi-GPU systems, outlining approaches for balancing the workload among several GPUs and for optimizing the code on each GPU.

Finally, Hutchcraft et al. present their work on using GPU computing for solving partial differential equations with radial basis functions. This work is applicable to a range of areas of computational physics and engineering.

The guest editor thanks the authors and reviewers for their excellent work in creating and improving the manuscripts. The guest editor also thanks the Editor-in-Chief Eric Michielssen and Alice Wood at Wiley for their efforts in handling the submissions and organizing the special issue.

List of papers:

  • D. De Donno, A. Esposito, G. Monti, L. Catarinucci, L. Tarricone, “GPU-based Acceleration of Computational Electromagnetics Codes”
  • E. Attardo, M. Francavilla, F. Vipiana, G. Vecchi, “Investigation on accelerating FFT-based methods for the EFIE on graphics processors”
  • S. Hamada, “Performance comparison of three types of GPU-accelerated indirect boundary element method for voxel model analysis”
  • T. Stefanski, S. Benkler, N. Chavannes, N. Kuster, “OpenCL-based acceleration of the FDTD method in Computational Electromagnetics”
  • A. Vansteenkiste, B. Van de Wiele, L. Dupré, B. Van Waeyenberge, D. De Zutter, “Implementation of a finite-difference micromagnetic model on GPU hardware”
  • A. Musolino, R. Rizzo, E. Tripodi, M. Toni, “Modelling of Electromechanical Devices by a GPU Accelerated Integral Formulation”
  • F. Zhang, “A Particle Model for Fluid Simulation on the Multi-GPU”
  • W. Hutchcraft, M. Woolsey, R. Gordon, “On the Acceleration of the Numerical Solution of Partial Differential Equations Using Radial Basis Functions and Graphics Processing Units”


Biography:
Vitaliy Lomakin obtained his MS in Electrical Engineering from Kharkov National University in 1996 and his PhD in Electrical Engineering from Tel Aviv University in 2003. From 2002 to 2005, he was a Postdoctoral Associate and Visiting Assistant Professor in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. He joined the Department of Electrical and Computer Engineering at the University of California, San Diego in 2005, where he currently holds the position of Associate Professor. His research interests include computational electromagnetics, computational micromagnetics/nanomagnetics, the electromagnetic analysis and design of photonic and microwave structures, and the micromagnetic analysis and design of magnetic nanostructures and devices.