2. Programming View and UPC Data Types

  1. Tarek El-Ghazawi (1),
  2. William Carlson (2),
  3. Thomas Sterling (3) and
  4. Katherine Yelick (4)

Published Online: 27 JAN 2005

DOI: 10.1002/0471478369.ch2

UPC: Distributed Shared Memory Programming


How to Cite

El-Ghazawi, T., Carlson, W., Sterling, T. and Yelick, K. (2005) Programming View and UPC Data Types, in UPC: Distributed Shared Memory Programming, John Wiley & Sons, Inc., Hoboken, NJ, USA. doi: 10.1002/0471478369.ch2

Author Information

  1. The George Washington University, USA

  2. IDA Center for Computing Sciences, USA

  3. California Institute of Technology, USA

  4. University of California at Berkeley, USA

Publication History

  1. Published Online: 27 JAN 2005
  2. Published Print: 13 MAY 2005

Book Series:

  1. Wiley Series on Parallel and Distributed Computing

Book Series Editors:

  1. Albert Y. Zomaya

ISBN Information

Print ISBN: 9780471220480

Online ISBN: 9780471478362



Keywords:

  • programming model;
  • shared arrays;
  • private arrays


UPC is an extension of ISO C that follows the distributed shared memory programming model. All threads can access the entire shared space; however, the shared space is logically partitioned, and each partition has affinity to one of the threads. This enables programmers to co-locate data and processing, exploiting data locality for improved performance. In addition, each thread has a private memory space, which is accessed by that thread only. UPC follows an SPMD model of execution in which all threads execute the main() function. No synchronization is implied except that all threads start and finish the same program together. Programmers determine at declaration time where data will reside with respect to the threads and the private and shared spaces. Private variable declarations result in one instance of the variable in the private space of each thread. Shared scalar variables have affinity to thread 0. Shared array elements are distributed by default in a round-robin fashion across the threads. This default can be changed by specifying a block size, in which case the array elements are distributed across the threads in round-robin fashion by blocks of elements. Thus, the data layout can be defined in a way that reduces the overall number of remote shared memory references. Shared variables and arrays cannot have dynamic scope.