Special Issue: Euro-Par 2012

This special issue of Concurrency and Computation: Practice and Experience contains revised and extended versions of selected papers presented at the conference Euro-Par 2012.

Euro-Par—the European Conference on Parallel Computing—is an annual series of international conferences dedicated to the promotion and advancement of all aspects of parallel and distributed computing. Euro-Par covers a wide spectrum of topics from algorithms and theory to software technology and hardware-related issues, with application areas ranging from scientific to mobile and cloud computing. The major part of the Euro-Par audience consists of researchers in academic institutions, government laboratories, and industrial organizations.

Euro-Par 2012, the 18th conference in the Euro-Par series, was organized by the Computer Technology Institute & Press ‘Diophantus’ (CTI) in Patras, and held on Rhodes Island, Greece.

Sixteen broad topics were defined and advertised, covering a large variety of aspects of parallel and distributed computing. The call for papers attracted a total of 228 submissions. Each submission was reviewed by at least three and, in many cases, four reviewers (3.83 reviews on average). A total of 75 papers were accepted for publication, yielding an overall acceptance rate of 32.9%. The authors of the accepted papers come from 29 countries, with the four main contributing countries (the USA, Germany, Spain, and France) accounting for about 52% of them. The distribution of papers follows the pattern typical of a Euro-Par conference: 65% are authored by academic researchers, 24% by students, and 11% by other authors (industry, NGO, and government).

Based on the review results and the majority opinion of the respective topic program committees, several papers were recommended for a special journal issue. The authors were contacted at the conference and invited to submit revised and extended versions of their papers. Each new version was reviewed independently by three reviewers, two of whom had also reviewed the conference version and one who had not. Eventually, five papers were accepted for publication.

Topic 2 on Performance Prediction and Evaluation is represented by the paper Adaptive Sampling for Performance Characterization of Application Kernels authored by Pablo de Oliveira Castro, Eric Petit, Asma Farjallah, and William Jalby [1]. Its subject is the open-source Adaptive Sampling Kit (ASK), which characterizes performance trade-offs in large design spaces with relatively few samples. To this end, the kit is equipped with a number of adaptive sampling strategies. The authors add a new strategy, hierarchical variance sampling, and present the performance characterization of three problems: memory stride accesses, a Jacobi (2D) stencil code, and industrial seismic modeling with a 3D stencil.
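
As a rough illustration of the idea behind variance-driven adaptive sampling, the following Java program concentrates its sampling budget on the regions of a one-dimensional design space where the measured response varies most, and splits those regions as evidence accumulates. This is a simplified sketch under our own assumptions, not ASK's actual algorithm or interface; the class name and the synthetic "kernel" are hypothetical.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    // Simplified, hypothetical sketch of variance-driven adaptive sampling over
    // a 1-D design space; this is not ASK's actual algorithm or code.
    public class VarianceDrivenSampling {

        static final class Region {
            final double lo, hi;
            final List<Double> responses = new ArrayList<>();
            Region(double lo, double hi) { this.lo = lo; this.hi = hi; }

            // Sample variance of the observed responses; unexplored regions get
            // "infinite" variance so that they are sampled first.
            double variance() {
                if (responses.size() < 2) return Double.MAX_VALUE;
                double mean = responses.stream().mapToDouble(Double::doubleValue).average().orElse(0);
                return responses.stream().mapToDouble(r -> (r - mean) * (r - mean)).sum()
                        / (responses.size() - 1);
            }
        }

        public static void main(String[] args) {
            // Synthetic performance response with a sharp feature near x = 0.7.
            DoubleUnaryOperator kernel = x -> Math.abs(x - 0.7) < 0.05 ? 10.0 : 1.0 + 0.1 * x;
            Random rng = new Random(42);

            List<Region> regions = new ArrayList<>();
            regions.add(new Region(0.0, 1.0));
            int budget = 200, batch = 10;

            for (int used = 0; used < budget; used += batch) {
                // Spend the next batch in the region whose response varies most ...
                Region target = Collections.max(regions, Comparator.comparingDouble(Region::variance));
                for (int i = 0; i < batch; i++) {
                    double x = target.lo + rng.nextDouble() * (target.hi - target.lo);
                    target.responses.add(kernel.applyAsDouble(x));
                }
                // ... and split it, so later batches zoom in on the variable part.
                if (target.responses.size() >= 2 * batch) {
                    double mid = (target.lo + target.hi) / 2;
                    regions.remove(target);
                    regions.add(new Region(target.lo, mid));
                    regions.add(new Region(mid, target.hi));
                }
            }
            regions.sort(Comparator.comparingDouble((Region r) -> r.lo));
            for (Region r : regions)
                System.out.printf("[%.3f, %.3f): %d samples%n", r.lo, r.hi, r.responses.size());
        }
    }

Running the sketch shows most of the budget ending up in narrow regions around the feature at 0.7, which is the qualitative effect a variance-based strategy aims for.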

Topic 8 on Distributed Systems and Algorithms is represented by the paper Applying the Dynamics of Evolution to Achieve Reliability in Master-Worker Computing authored by Evgenia Christoforou, Antonio Fernandez Anta, Chryssis Georgiou, Miguel A. Mosteiro, and Angel Sánchez [2]. This is an original and interesting approach to unreliable Internet-based master-worker computing. The assumption is that the workers cannot be trusted to return correct results and that their interests may even change during the course of the computation. The authors employ the dynamics of evolution to study the conditions under which the master can obtain reliable results. Through reinforcement learning, the workers are given incentives to become truthful within a bounded time.

Topic 9 on Parallel and Distributed Programming is represented by the paper Extending the Scope of the Checkpoint-on-Failure Protocol for Forward Recovery in Standard MPI authored by Wesley Bland, Peng Du, Aurelien Bouteiller, Thomas Herault, George Bosilca, and Jack J. Dongarra [3]. Checkpoint-on-Failure (CoF) is a new protocol for fault-tolerant MPI applications that avoids the need for periodic checkpointing and that, unlike other approaches, requires only standard MPI. When a failure occurs, the application takes a checkpoint as its last action before aborting. This checkpoint is then loaded by a new MPI application, which recovers from the failure. The validity and performance of this approach are demonstrated using QR factorization as an example.

Topic 11 on Multicore and Manycore Programming is represented by the paper Efficient Support for In-Place Metadata in Java Software Transactional Memory authored by Ricardo J. Dias, Tiago M. Vale, and João M. Lourenço [4]. Accesses in software transactional memories are governed by metadata. There are two strategies for associating these metadata with their memory locations: in the out-place strategy, a separate table is constructed; in the in-place strategy, the metadata are stored adjacent to the memory cell. A fair comparison of the two strategies is difficult because implementations usually have a strong bias toward one of them. The authors propose an in-place strategy that facilitates an unbiased implementation and provide one such implementation in the Deuce system. This enables them to conduct a fair comparison of the two metadata management strategies.
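
To make the distinction concrete, the toy Java sketch below contrasts the two placements of per-location metadata. The classes and the version-lock word are hypothetical illustrations of the general idea; they do not reflect the authors' design or Deuce's actual implementation.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Toy model of the two metadata-placement strategies for an STM; hypothetical,
    // not the authors' design and not Deuce's actual implementation.
    public class MetadataPlacement {

        // Per-location STM metadata: here, just a versioned lock word.
        static final class Metadata {
            final AtomicLong versionedLock = new AtomicLong();
        }

        // Out-place: metadata lives in an external table keyed by the location,
        // so every transactional access pays a lookup (and possible insertion).
        static final class OutPlaceAccount {
            long balance;
            static final Map<OutPlaceAccount, Metadata> TABLE = new ConcurrentHashMap<>();
            Metadata metadata() {
                return TABLE.computeIfAbsent(this, k -> new Metadata());
            }
        }

        // In-place: metadata is stored adjacent to the data it guards, e.g. as an
        // extra field injected into the object, so reaching it is a field access.
        static final class InPlaceAccount {
            long balance;
            final Metadata metadata = new Metadata();
        }

        public static void main(String[] args) {
            OutPlaceAccount a = new OutPlaceAccount();
            InPlaceAccount b = new InPlaceAccount();
            // Both strategies expose the same logical metadata; they differ only
            // in how a transactional read or write reaches it.
            System.out.println("out-place version: " + a.metadata().versionedLock.get());
            System.out.println("in-place version:  " + b.metadata.versionedLock.get());
        }
    }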

Topic 13 on High-Performance Network and Communication is represented by the paper Tailoring the Network to the Problem: Topology Configuration in Hybrid EPS/OCS Interconnects authored by Kostas Christodoulopoulos, Kostas Katrinis, Marco Ruffini, and Donal O'Mahony [5]. The authors consider a hybrid electronic packet switching (EPS) and reconfigurable optical circuit switching (OCS) network for future high-performance computing and data center systems. Their objective is to map the task communication graph of an application to the compute resources and, thereby, find an optimal configuration of the optical circuit. They prove the NP-completeness of this problem and offer two algorithms: one for small and one for large problem scales. The small-scale algorithm is based on integer linear programming and finds an optimal solution; the large-scale algorithm is based on a simulated annealing heuristic and trades off performance for responsiveness. The reviewers were particularly pleased with the thoroughness of this treatment.

Concluding this preface, we would like to thank Prof. Geoffrey Fox and Prof. Luc Moreau, editors of this journal, for their support of this special issue. We would also like to thank our peers who assisted us in reviewing the papers and helped strengthen the final versions. Last, but not least, we appreciate the support of Springer, which agreed to the publication of extended versions of articles that originally appeared in the Lecture Notes in Computer Science series.
