Performance analysis of sparse matrix-vector multiplication (SpMV) on graphics processing units (GPUs)

dc.contributorAalto Universityen
dc.contributor.authorAlahmadi, Sarahen_US
dc.contributor.authorMohammed, Thahaen_US
dc.contributor.authorAlbeshri, Aiiaden_US
dc.contributor.authorKatib, Iyaden_US
dc.contributor.authorMehmood, Rashiden_US
dc.contributor.departmentDepartment of Computer Scienceen
dc.contributor.groupauthorProfessorship Di Francesco Marioen
dc.contributor.organizationTaibah Universityen_US
dc.contributor.organizationKing Abdulaziz Universityen_US
dc.description.abstractGraphics processing units (GPUs) have delivered remarkable performance for a variety of high performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector multiplication (SpMV), which is central to many scientific, engineering, and other applications, including machine learning. No single SpMV storage or computation scheme provides consistent and sufficiently high performance for all matrices due to their varying sparsity patterns. An extensive literature review reveals that the performance of SpMV techniques on GPUs has not been studied in sufficient detail. In this paper, we provide a detailed analysis of SpMV performance on GPUs using four notable sparse matrix storage schemes (compressed sparse row (CSR), ELLPACK (ELL), hybrid ELL/COO (HYB), and compressed sparse row 5 (CSR5)), five performance metrics (execution time, giga floating point operations per second (GFLOPS), achieved occupancy, instructions per warp, and warp execution efficiency), five matrix sparsity features (nnz, anpr, npr variance, maxnpr, and distavg), and 17 sparse matrices from 10 application domains (chemical simulations, computational fluid dynamics (CFD), electromagnetics, linear programming, economics, etc.). Subsequently, based on the deeper insights gained through the detailed performance analysis, we propose a technique called the heterogeneous CPU–GPU Hybrid (HCGHYB) scheme. It utilizes both the CPU and GPU in parallel and outperforms the HYB format by an average speedup of 1.7x. Heterogeneous computing is an important direction for SpMV and other application areas. Moreover, to the best of our knowledge, this is the first work where SpMV performance on GPUs has been discussed in such depth. We believe that this work on SpMV performance analysis and the heterogeneous scheme will open up many new directions and improvements for the SpMV computing field in the future.en
dc.description.versionPeer revieweden
dc.identifier.citationAlahmadi, S, Mohammed, T, Albeshri, A, Katib, I & Mehmood, R 2020, 'Performance analysis of sparse matrix-vector multiplication (SpMV) on graphics processing units (GPUs)', Electronics (Switzerland), vol. 9, no. 10, 1675, pp. 1-30.
dc.identifier.otherPURE UUID: c35727bc-13b8-4ab8-8d98-2c4aaba45d84en_US
dc.publisherMDPI AG
dc.relation.ispartofseriesElectronics (Switzerland)en
dc.relation.ispartofseriesVolume 9, issue 10en
dc.subject.keywordGraphics processing units (GPUs)en_US
dc.subject.keywordHeterogeneous computingen_US
dc.subject.keywordHigh performance computing (HPC)en_US
dc.subject.keywordSparse matrix storageen_US
dc.subject.keywordSparse matrix-vector multiplication (SpMV)en_US
dc.titlePerformance analysis of sparse matrix-vector multiplication (SpMV) on graphics processing units (GPUs)en
dc.typeA1 Original article in a scientific journalen