SORS/WomenInBSC: "Accelerating Sparse Matrix Computations on Tensor Cores"

Date: 09/May/2024 Time: 10:30

Place:

[ONSITE] Room Severo Ochoa, Capella, BSC.



Abstract

Sparse matrix computations such as sparse matrix-vector multiplication (SpMV) and sparse matrix-matrix multiplication (SpGEMM) play a key role in computational science and engineering, graph processing, and machine learning applications. Although much work has been devoted to optimizing sparse matrix computations, the latest major hardware features, i.e., tensor core units and their low-precision compute power, have not been sufficiently exploited to accelerate them. This talk will present our two recent studies: (1) DASP, a tensor core-accelerated SpMV algorithm that makes the irregular data layout of sparse matrices regular so that tensor core units can be exploited efficiently; and (2) AmgT, a new algebraic multigrid (AMG) solver that utilizes tensor cores and their mixed-precision capability for the SpGEMM and SpMV operations throughout the entire AMG procedure. Our experiments show that the two studies break the limitations of the traditional GEMM computation pattern supported by tensor cores and exploit these dense matrix multiply-accumulate units to accelerate sparse kernels.
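To make the core difficulty concrete, the sketch below shows a minimal SpMV over the standard CSR (compressed sparse row) format in plain Python. It is not the DASP algorithm itself, only an illustration of the irregular per-row workload that tensor-core approaches must reorganize into fixed-shape dense tiles; all names and the example matrix are illustrative.

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """Compute y = A @ x for A stored in CSR arrays (row_ptr, col_idx, vals)."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # Each row holds a different number of nonzeros, so the inner loop
        # length varies per row. This irregularity is what tensor cores cannot
        # consume directly: they operate on fixed-size dense tiles, which is
        # why methods like DASP first regularize the sparse data layout.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# 3x3 example: A = [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [4.0, 1.0, 2.0, 3.0, 5.0]
x = [1.0, 1.0, 1.0]
print(csr_spmv(row_ptr, col_idx, vals, x))  # [5.0, 2.0, 8.0]
```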

Short Bio

Yuechen Lu is currently a second-year PhD student at China University of Petroleum-Beijing, under the supervision of Prof. Weifeng Liu. She earned her Bachelor of Engineering degree in Computer Science and Technology in 2020 from China University of Petroleum-Beijing. Her research interests are in high-performance computing, parallel computing, and sparse matrix computation. Her research focuses on accelerating sparse linear algebra kernels on CPUs and GPUs, and she proposed DASP, a new method for general sparse matrix-vector multiplication, which has been experimentally shown to achieve better performance on most matrices in the SuiteSparse Matrix Collection. She is a Student Member of both IEEE and ACM.


Speakers

Speaker: Yuechen Lu. Second-year PhD student at China University of Petroleum-Beijing
Host: Marc Casas. Leading researcher, SOftware research and development vehicles for New ARchitectures (SONAR), Computer Sciences, BSC.