Scalable Task Parallel Programming in the Partitioned Global Address Space

Dinan, James S.


2010, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.

Applications that exhibit irregular, dynamic, and unbalanced parallelism are growing in number and importance in the computational science and engineering communities. These applications span many domains, including computational chemistry, physics, biology, and data mining. In such applications, the units of computation are often irregular in size, and the availability of work may depend on the dynamic, often recursive, behavior of the program. Because of these properties, it is challenging for such programs to achieve high performance and scalability on modern high performance clusters.

A new family of programming models, called the Partitioned Global Address Space (PGAS) family, provides the programmer with a global view of shared data and allows asynchronous, one-sided access to data regardless of where it is physically stored. In this model, the global address space is distributed across the memories of multiple nodes and, for any given node, is partitioned into local patches that have high affinity and low access cost and remote patches that have a high access cost due to communication. The PGAS data model relaxes conventional two-sided communication semantics and allows the programmer to access remote data without the cooperation of the remote processor. This makes the model attractive for supporting irregular and dynamic applications on distributed memory clusters. However, in spite of the flexible data model, PGAS execution models require the programmer to explicitly partition the computation across processes, yielding a process-centric style of execution.
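
The one-sided access at the core of the PGAS model can be sketched in a few lines. The example below is a minimal illustration, not the dissertation's actual runtime: it uses MPI-3 RMA as a stand-in for a PGAS layer, with each process exposing a local patch of a logically global array that any other process can read without the owner's cooperation. The patch size and all names are assumptions made for the sketch.

    /* Minimal one-sided access sketch, using MPI-3 RMA as a stand-in
       for a PGAS runtime.  Each process exposes its local patch of a
       logically global array; any process may read a remote patch
       without the owner's participation. */
    #include <mpi.h>
    #include <stdio.h>

    #define PATCH 1024   /* elements per local patch (assumed size) */

    int main(int argc, char **argv) {
        int rank, nproc;
        double *local;   /* this process's patch of the global array */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        /* Expose the local patch for one-sided access. */
        MPI_Win_allocate((MPI_Aint)(PATCH * sizeof(double)), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &local, &win);
        for (int i = 0; i < PATCH; i++)
            local[i] = rank;           /* tag the patch with its owner */

        MPI_Barrier(MPI_COMM_WORLD);   /* all patches initialized before reads */
        MPI_Win_lock_all(0, win);      /* passive target: no owner involvement */

        /* Fetch one element from the next process's patch. */
        double val;
        int target = (rank + 1) % nproc;
        MPI_Get(&val, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
        MPI_Win_flush(target, win);    /* complete the get locally */

        printf("rank %d read %.0f from rank %d\n", rank, val, target);

        MPI_Win_unlock_all(win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

The passive-target epoch (MPI_Win_lock_all) is what captures the PGAS property the paragraph describes: the get completes without the target process posting a matching receive.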

In this work, we build a complete environment to support irregular and dynamic parallel computations on large scale clusters by extending the PGAS data model with a task parallel execution model. Task parallelism allows the programmer to express their computation as a dynamic collection of tasks. The execution of these tasks is managed by a scalable and efficient runtime system that performs dynamic load balancing, enhances locality, and provides opportunities for efficient recovery from faults. Experimental results indicate that this system is scalable to over 8192 processor cores, can achieve extremely high efficiency even in the presence of highly unbalanced and dynamic computations, and can also be leveraged to enable rapid recovery from failures.
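
As a rough illustration of the task parallel execution model described above, the sketch below implements a minimal task pool on a single process: tasks are plain descriptors, executing a task may spawn new tasks, and execution continues until the pool drains. The names and structure are assumptions for the sketch, not the dissertation's API; in a distributed runtime, idle processes would additionally obtain descriptors from remote pools (e.g., by work stealing) to balance load.

    /* Minimal task-pool sketch: a dynamic collection of tasks whose
       execution spawns further tasks.  Counts the leaves of the fib(n)
       call tree by spawning subtasks instead of recursing. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct task { int n; struct task *next; } task_t;

    static task_t *pool = NULL;   /* LIFO local pool (a linked stack) */
    static long result = 0;

    static void push(int n) {
        task_t *t = malloc(sizeof *t);
        t->n = n; t->next = pool; pool = t;
    }

    static task_t *pop(void) {
        task_t *t = pool;
        if (t) pool = t->next;
        return t;
    }

    /* Executing one task either contributes to the result (leaf)
       or grows the pool with two child tasks. */
    static void execute(task_t *t) {
        if (t->n < 2) result += 1;
        else { push(t->n - 1); push(t->n - 2); }
        free(t);
    }

    int main(void) {
        push(10);                    /* seed the pool with a root task */
        task_t *t;
        while ((t = pop()) != NULL)  /* drain until no work remains */
            execute(t);
        printf("fib(10) = %ld\n", result);  /* prints 89 */
        return 0;
    }

The recursive, data-dependent growth of the pool is exactly the kind of irregular parallelism the abstract targets: task sizes and counts are unknown in advance, which is why the runtime's dynamic load balancing matters.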

Ponnuswamy Sadayappan, PhD (Advisor)
Paolo Sivilotti, PhD (Committee Member)
Atanas Rountev, PhD (Committee Member)
121 p.

Recommended Citations

  • Dinan, J. S. (2010). Scalable Task Parallel Programming in the Partitioned Global Address Space [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275418061

    APA Style (7th edition)

  • Dinan, James. Scalable Task Parallel Programming in the Partitioned Global Address Space. 2010. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1275418061.

    MLA Style (8th edition)

  • Dinan, James. "Scalable Task Parallel Programming in the Partitioned Global Address Space." Doctoral dissertation, Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275418061

    Chicago Manual of Style (17th edition)