Files
Dissertation_BinRen.pdf (1.62 MB)
Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms
Author Info
Ren, Bin
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu1397753127
Abstract Details
Year and Degree
2014, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Abstract
SIMD accelerators and many-core coprocessors, which offer both coarse-grained and fine-grained parallelism, have become increasingly popular. Streaming SIMD Extensions (SSE), Graphics Processing Units (GPUs), and the Intel Xeon Phi (MIC) can provide orders-of-magnitude better performance and efficiency for parallel workloads compared to single-core CPUs. However, parallelizing irregular applications involving dynamic data structures and irregular memory access on these platforms is not straightforward, due to their intensive control-flow dependence and lack of memory locality. Our efforts focus on three classes of irregular applications: irregular tree and graph traversals, irregular reductions, and dynamically allocated arrays and lists. We explore mechanisms for parallelizing them on various parallel architectures from both fine-grained and coarse-grained perspectives. We first focus on the traversal of irregular trees and graphs, more specifically, a class of applications involving the traversal of many pointer-intensive data structures, e.g., random forests and regular expressions, on various fine-grained SIMD architectures such as SSE and GPUs. We address this problem by developing an intermediate language for specifying such traversals, followed by a run-time scheduler that maps traversals to SIMD units. A key idea in our run-time scheme is converting branches to arithmetic operations, which then allows us to use SIMD hardware. However, different SIMD architectures have different features, so a significant challenge beyond this initial work is automatically optimizing applications for various architectures, i.e., achieving performance portability. Moreover, the memory hierarchy is one of the first architectural features programmers consider when optimizing their applications.
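The branch-to-arithmetic conversion mentioned above can be illustrated with a minimal Python sketch (illustrative only, not the dissertation's actual intermediate language or scheduler; all names are hypothetical). A data-dependent `if` is replaced by a 0/1 mask and a multiply-add, the form that SIMD blend instructions implement, so every lane executes the same instruction stream:

```python
def select_branchy(x, threshold, a, b):
    # Control-flow version: each element takes a data-dependent branch,
    # which serializes execution on SIMD hardware.
    return [ai if xi > threshold else bi for xi, ai, bi in zip(x, a, b)]

def select_predicated(x, threshold, a, b):
    # Branch converted to arithmetic: mask is 1 where the condition
    # holds and 0 elsewhere; the branch becomes a multiply-add that
    # every lane can execute unconditionally.
    masks = [int(xi > threshold) for xi in x]
    return [m * ai + (1 - m) * bi for m, ai, bi in zip(masks, a, b)]

# Both versions compute the same result; only the predicated one
# maps directly onto SIMD lanes.
out = select_predicated([0.5, 2.0, 3.5, 1.0], 1.5,
                        [10, 11, 12, 13], [20, 21, 22, 23])
```

The payoff is that a traversal step over many independent structures becomes a sequence of uniform vector operations rather than divergent branches.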
Thus, we design a portable optimization engine for accelerating irregular data traversal applications on various SIMD architectures, emphasizing improved data locality and hidden memory latency. We next explore the possibility of efficiently parallelizing two irregular reduction applications, via data layout optimization, on the Intel Xeon Phi, an emerging many-core coprocessor architecture with long SIMD vectors. During this process, we also identify a general data management problem in the CPU-coprocessor programming model: automating and optimizing the transfer of dynamically allocated data structures between the CPU and coprocessors. For dynamic multi-dimensional arrays, we design a set of compile-time solutions involving heap layout transformation, while for other irregular data structures, such as linked lists, we improve an existing shared-memory runtime solution to reduce transfer costs. Dynamically allocated data structures such as lists are also commonly used in high-level programming languages like Python, where they support the dynamic, flexible features that increase programming productivity. To parallelize applications written in such high-level languages on both coarse-grained and fine-grained parallel platforms, we design a compilation system that linearizes dynamic data structures into arrays and invokes low-level multi-core and many-core libraries. A critical issue with our linearization method is the extra data-structure transformation overhead it incurs, especially for irregular data structures that are not reused frequently. To address this challenge, we design a set of transformation optimization algorithms, including an inter-procedural Partial Redundancy Elimination (PRE) algorithm, to minimize the data transformation overhead automatically.
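The linearization idea described above can be sketched as follows (a simplified illustration under assumed conventions, not the dissertation's compilation system; `Node` and `linearize` are hypothetical names). A pointer-linked list is flattened into a contiguous value array plus a successor-index array, a layout that can be shipped to an accelerator in one bulk transfer:

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def linearize(head):
    # Walk the pointer chain once, copying values into a contiguous
    # array and recording each node's successor as an index into that
    # array (-1 marks the end of the list). The two flat arrays replace
    # the pointer-based structure for offloading.
    values, nexts = [], []
    node = head
    while node is not None:
        values.append(node.value)
        nexts.append(len(values) if node.next is not None else -1)
        node = node.next
    return values, nexts

# A three-node list 3 -> 1 -> 4 becomes two flat arrays.
vals, nxt = linearize(Node(3, Node(1, Node(4))))
```

The transformation itself has a cost proportional to the structure's size, which is why, as the abstract notes, redundant re-linearizations are worth eliminating when a structure is transferred more than once.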
Committee
Gagan Agrawal (Advisor)
Ponnuswamy Sadayappan (Committee Member)
Radu Teodorescu (Committee Member)
Pages
211 p.
Subject Headings
Computer Science
Keywords
Irregular Data Structure; Fine Grained Parallelism; SIMD; MIMD; SSE; GPUs; Xeon Phi; Static Analysis; Runtime Analysis; Offloading; Python; Redundancy Elimination
Recommended Citations
APA Style (7th edition)
Ren, B. (2014). Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397753127
MLA Style (8th edition)
Ren, Bin. Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms. 2014. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1397753127.
Chicago Manual of Style (17th edition)
Ren, Bin. "Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms." Doctoral dissertation, Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397753127
Document number:
osu1397753127
Download Count:
1,206
Copyright Info
© 2014, all rights reserved.
This open access ETD is published by The Ohio State University and OhioLINK.