Optimizing the distribution of large data sets in theory and practice

Authors

F. Rauch, C. Kurmann, T. M. Stricker

  • The original version of this article was first published as ‘Rauch F, Kurmann C, Stricker TM. Optimizing the distribution of large data sets in theory and practice. Euro-Par 2000—Parallel Processing (Lecture Notes in Computer Science, vol. 1900), Bode A, Ludwig T, Karl W, Wismüller R (eds.). Springer, 2000; 1118–1131’, and is reproduced here by kind permission of the publisher.

Abstract

Multicasting large amounts of data efficiently to all nodes of a PC cluster is an important operation. In the form of a partition cast it can be used to replicate entire software installations by cloning. Optimizing a partition cast for a given cluster of PCs reveals some interesting architectural trade-offs, since the fastest solution depends not only on the network speed and topology, but is also highly sensitive to other resources such as the disk speed, the memory system performance and the processing power of the participating nodes. We present an analytical model that guides an implementation towards an optimal configuration for any given PC cluster. The model is validated by measurements on our cluster using Gigabit- and Fast-Ethernet links. The resulting simple software tool, Dolly, can replicate an entire 2 GB Windows NT image onto 24 machines in less than 5 min. Copyright © 2002 John Wiley & Sons, Ltd.
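As a rough plausibility check (this is a minimal sketch, not the paper's actual analytical model), the 5-minute figure is consistent with a simple bottleneck argument: when the image is streamed to all nodes concurrently, the total time is roughly the image size divided by the slowest per-node resource, regardless of node count. All throughput numbers below are assumptions chosen for illustration only.

```python
# Back-of-the-envelope bottleneck estimate (illustrative sketch, not the
# paper's model): the sustained per-node replication rate is bounded by the
# slowest of the disk write speed, the per-link network bandwidth, and the
# memory-copy rate needed to receive, store and forward the stream.

def bottleneck_rate_mb_s(disk_mb_s: float, net_mb_s: float, mem_copy_mb_s: float) -> float:
    """Sustained per-node streaming rate under a simple bottleneck model."""
    return min(disk_mb_s, net_mb_s, mem_copy_mb_s)

def replication_time_s(image_mb: float, rate_mb_s: float) -> float:
    """Time to replicate one image when all nodes receive the stream concurrently."""
    return image_mb / rate_mb_s

# Hypothetical throughput figures in a plausible order of magnitude for a
# late-1990s cluster with Fast Ethernet and IDE disks (assumed values):
rate = bottleneck_rate_mb_s(disk_mb_s=9.0, net_mb_s=11.0, mem_copy_mb_s=60.0)
minutes = replication_time_s(2048, rate) / 60
print(f"estimated time for a 2 GB image: {minutes:.1f} min")  # ~3.8 min
```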
