Sparse Matrix-Vector Multiplication on GPU

Ashari, Arash


2014, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Sparse matrix-vector multiplication (SpMV) is one of the key operations in linear algebra. Overcoming thread divergence, load imbalance, and uncoalesced and indirect memory accesses caused by sparsity and irregularity are the central challenges in optimizing SpMV on GPUs. This dissertation develops solutions that address these challenges effectively.

The first part of this dissertation presents a new blocked row-column (BRC) storage format with a two-dimensional blocking mechanism. It reduces thread divergence by reordering the rows of the input matrix and mapping rows with nearly equal numbers of non-zero elements onto the same execution units (i.e., warps). BRC improves load balance by partitioning rows into blocks with a constant number of non-zeros, so that different warps perform the same amount of work. We also present an approach that optimizes BRC performance through judicious selection of the block size based on the sparsity characteristics of the matrix.

The most commonly used format for a sparse matrix is CSR (Compressed Sparse Row), but a number of other representations have recently been developed that achieve higher SpMV performance. However, these alternative representations typically impose a significant preprocessing overhead. While a high preprocessing overhead can be amortized by applications requiring many iterative invocations of SpMV on the same matrix, that is not always feasible -- for instance, when analyzing large, dynamically evolving graphs. The second part of this dissertation presents ACSR, an adaptive SpMV algorithm that uses the standard CSR format but reduces thread divergence by grouping rows with a similar number of non-zero elements into bins. Further, for rows in bins that span a wide range of non-zero counts, dynamic parallelism is leveraged. A significant benefit of ACSR over other proposed SpMV approaches is that it works directly on the standard CSR format and thus avoids significant preprocessing overheads.
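The row-binning idea behind ACSR can be sketched on the host side as follows. This is a minimal NumPy/SciPy illustration of the grouping strategy only, not the GPU implementation: the bin keys and the serial per-bin loop stand in for the sized kernel launches the dissertation describes.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy sparse matrix kept in the standard CSR format (no format conversion).
rng = np.random.default_rng(0)
A = sparse_random(64, 64, density=0.1, format="csr", random_state=0)
x = rng.standard_normal(64)

# Row lengths come straight from the CSR row-pointer array.
row_nnz = np.diff(A.indptr)

# Bin rows by ceil(log2(nnz + 1)) so each bin holds rows of similar length;
# on a GPU, each bin would be dispatched to a kernel sized for that length.
bins = {}
for row, nnz in enumerate(row_nnz):
    key = 0 if nnz == 0 else int(np.ceil(np.log2(nnz + 1)))
    bins.setdefault(key, []).append(row)

# SpMV executed bin by bin (serially here; concurrent streams on a GPU),
# reading the raw CSR arrays directly.
y = np.zeros(64)
for rows in bins.values():
    for r in rows:
        lo, hi = A.indptr[r], A.indptr[r + 1]
        y[r] = A.data[lo:hi] @ x[A.indices[lo:hi]]

assert np.allclose(y, A @ x)  # matches the reference SpMV
```

Because the bins are computed from `indptr` alone, this grouping is cheap enough to redo whenever the matrix changes, which is what makes the approach attractive for dynamic graphs.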
We also demonstrate the use of ACSR for the analysis of dynamic graphs, where the improvement over extant approaches is even higher.

SpMV is commonly found in a wide class of data analytics and machine learning (ML) algorithms, such as linear regression (LR), generalized linear models (GLM), binomial/multinomial logistic regression (LogReg), and support vector machines (SVM), and it is one of the major performance bottlenecks in these algorithms. In the third part of this dissertation, we study the application of SpMV in scalable ML. In particular, we identify the generic pattern of computation y = alpha * t(A) * (z .* (A * x)) + beta * y and its various instantiations. This pattern involves two SpMV computations. By fusing these computations, we develop a pipelined approach that optimizes the complete pattern on GPUs. This approach not only reduces the cost of data loads through improved temporal locality, but also enables other optimizations such as coarsening and hierarchical aggregation of partial results.
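The generic pattern can be spelled out, unfused, in a few lines. This NumPy/SciPy sketch shows only the mathematical pattern and its two SpMV computations; the dissertation's contribution is fusing them into one pipelined GPU kernel, which this host-side code does not attempt. The dimensions and coefficients are arbitrary toy values.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(1)
A = sparse_random(50, 30, density=0.2, format="csr", random_state=1)
x = rng.standard_normal(30)
y = rng.standard_normal(30)
z = rng.standard_normal(50)
alpha, beta = 2.0, 0.5

# y = alpha * t(A) * (z .* (A * x)) + beta * y, evaluated as two SpMVs:
u = A @ x                             # first SpMV: u = A x
v = z * u                             # elementwise scaling: v = z .* u
y_new = alpha * (A.T @ v) + beta * y  # second SpMV, with the transpose
```

Note that `u` (and hence `v`) is produced by the first SpMV and immediately consumed by the second; fusing the two avoids writing `u` to global memory and re-reading it, which is the temporal-locality gain described above.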
P. Sadayappan (Advisor)
Srinivasan Parthasarathy (Committee Member)
Atanas Rountev (Committee Member)
153 p.

Recommended Citations

  • Ashari, A. (2014). Sparse Matrix-Vector Multiplication on GPU [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100

    APA Style (7th edition)

  • Ashari, Arash. Sparse Matrix-Vector Multiplication on GPU. 2014. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

    MLA Style (8th edition)

  • Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." Doctoral dissertation, Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100

    Chicago Manual of Style (17th edition)