Performance of Windows multicore systems on threading and MPI

Authors

  • Judy Qiu,

    Corresponding author
    1. Pervasive Technology Institute, Indiana University, Bloomington, IN 47408, U.S.A.
    2. School of Informatics and Computing, Indiana University, Bloomington, IN 47408, U.S.A.
  • Seung-Hee Bae

    1. Pervasive Technology Institute, Indiana University, Bloomington, IN 47408, U.S.A.
    2. School of Informatics and Computing, Indiana University, Bloomington, IN 47408, U.S.A.

SUMMARY

We present performance results on a Windows cluster with up to 768 cores using the Message Passing Interface (MPI) and two variants of threading: the Concurrency and Coordination Runtime (CCR) and the Task Parallel Library (TPL). CCR presents a message-based interface, while TPL allows loops to be parallelized automatically. MPI is used between the cluster nodes (up to 32), with either threading or MPI providing parallelism across the 24 cores of each node. We examine the performance of two significant bioinformatics applications: gene clustering and dimension reduction. We find that the two threading runtimes offer similar performance; MPI outperforms both at low levels of parallelism, but threading performs much better when the grain size (problem size per process/thread) is small. We develop simple models for the performance of the clustering code. Copyright © 2011 John Wiley & Sons, Ltd.
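The intra-node scheme the summary describes (a loop partitioned across the cores of one node, one grain of work per thread) can be sketched in Python rather than the authors' .NET runtimes; the names here (`partial_sum`, `parallel_sum`, `grain`) are illustrative and not from the paper, and the toy workload (a sum of squares) simply stands in for one clustering iteration.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(data):
    # Work done by one thread on its grain (chunk) of the problem.
    return sum(x * x for x in data)

def parallel_sum(data, n_threads=4):
    # Partition the data into one grain per thread, mirroring how a
    # node-level runtime (CCR, TPL) would assign a slice of the
    # problem to each of the node's cores. Smaller grains mean more
    # scheduling overhead relative to useful work, which is where the
    # paper finds threading overtakes MPI.
    grain = (len(data) + n_threads - 1) // n_threads
    chunks = [data[i:i + grain] for i in range(0, len(data), grain)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # same result as the serial sum
```

In the paper's hybrid configuration, a construct like `parallel_sum` would run inside each MPI rank, with MPI handling the reduction of the per-node partial results across the (up to 32) cluster nodes.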
