Scheduling algorithms-by-blocks on small clusters
Article first published online: 28 MAR 2012
Copyright © 2012 John Wiley & Sons, Ltd.
Concurrency and Computation: Practice and Experience
Volume 25, Issue 3, pages 367–384, 10 March 2013
How to Cite
Igual, F. D., Quintana-Ortí, G. and van de Geijn, R. (2013), Scheduling algorithms-by-blocks on small clusters. Concurrency Computat.: Pract. Exper., 25: 367–384. doi: 10.1002/cpe.2842
- Issue published online: 8 FEB 2013
- Manuscript Accepted: 11 MAR 2012
- Manuscript Revised: 6 MAR 2012
- Manuscript Received: 27 JUN 2011
Keywords
- matrix computations
- novel parallel architectures
- automatic parallelization
The arrival of multicore architectures has generated interest in reformulating dense matrix computations as algorithms-by-blocks, where submatrices are the units of data and computations with those blocks are the units of computation. Rather than directly executing such an algorithm, a directed acyclic graph of tasks is generated at runtime and then scheduled by a runtime system such as SuperMatrix. The benefit is a clear separation of concerns between the library and the heuristics for scheduling. In this paper, we show that this approach can be taken one step further, using the same methodology and an ad hoc runtime to map algorithms-by-blocks to small clusters. With no change to the library code or to the application that uses it, the computational power of such small clusters can be exploited. Impressive performance on a number of small clusters is reported. As proof of the flexibility of the solution, we report performance results on accelerated clusters based on graphics processors. We believe this to be a possible step towards programming many-core architectures, as demonstrated by a port of the solution to Intel's Single-chip Cloud Computer (Intel, Santa Clara, CA, USA). Copyright © 2012 John Wiley & Sons, Ltd.
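The core idea the abstract describes — treating submatrix blocks as units of data, generating a task DAG at runtime, and handing it to a scheduler — can be illustrated with a minimal sketch. The sketch below is purely illustrative and is not the SuperMatrix API: the task naming, the DAG representation, and the tiny sequential scheduler are all assumptions introduced here. It builds the dependency graph for a blocked matrix product (successive updates to the same output block are serialized) and executes the tasks in a dependency-respecting order found with Kahn's algorithm.

```python
from collections import defaultdict, deque

def blocked_matmul_dag(p):
    """Build a task DAG for C = A * B partitioned into p x p blocks.

    Task ('gemm', i, j, k) computes C[i][j] += A[i][k] * B[k][j].
    Two updates to the same C block conflict, so task k depends on
    task k-1 for the same (i, j) -- the only dependence in this kernel.
    """
    deps = defaultdict(set)
    for i in range(p):
        for j in range(p):
            for k in range(p):
                t = ('gemm', i, j, k)
                deps[t]  # register the node even if it has no predecessors
                if k > 0:
                    deps[t].add(('gemm', i, j, k - 1))
    return deps

def schedule(deps):
    """Kahn's algorithm: return any order that respects the DAG edges.

    A real runtime would dispatch 'ready' tasks to workers in parallel;
    here we simply linearize them to keep the sketch sequential.
    """
    indeg = {t: len(d) for t, d in deps.items()}
    succ = defaultdict(list)
    for t, d in deps.items():
        for u in d:
            succ[u].append(t)
    ready = deque(t for t, n in indeg.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

def execute(order, A, B, p):
    """Run the scheduled tasks; each 'block' is a scalar here for brevity."""
    C = [[0] * p for _ in range(p)]
    for (_, i, j, k) in order:
        C[i][j] += A[i][k] * B[k][j]
    return C
```

The separation of concerns the paper argues for is visible even at this scale: `blocked_matmul_dag` encodes only the algorithm-by-blocks, while `schedule` knows nothing about linear algebra, so either side can be replaced (e.g. by a cluster-aware scheduler) without touching the other.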