Bachelor's thesis presentation. U-Jin is advised by Hayden Liu Weng.
Previous talks at the SCCS Colloquium
U-Jin Hong: Mixed-Precision in Sparse Iterative Linear Solver Performance
Floating-point numbers, typically represented as 32-bit float or 64-bit double, offer a trade-off between computational speed, memory usage, and numerical accuracy. Lower precision generally leads to faster computations and reduced storage, but sacrifices accuracy. This thesis investigates the impact of mixed precision on solving large, sparse linear systems with several iterative methods, aiming to identify strategies that maximize speedup while preserving acceptable accuracy. We benchmarked a diverse set of matrices from a sparse matrix collection, varying the degree to which mixed precision was applied, primarily within Iterative Refinement and Multigrid methods combined with the Conjugate Gradient solver. Our results clearly demonstrate the significant benefits of lower and mixed precision for sparse matrix-vector (SpMV) products, yielding speedups of up to 87% for half precision compared to double. Furthermore, consistent speedup tendencies were identified across the iterative methods. The achieved accuracy often remained above the numerical limits of the lower-precision data types, indicating that mixed precision can be strategically employed to obtain substantial speedups without a large loss of accuracy. However, due to persistent server outages, the selection of matrices on which the solvers could be tested was limited.
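The core idea behind Iterative Refinement as described above can be sketched in a few lines: perform the expensive inner solve in low precision, then compute the residual and accumulate the correction in double precision. This is only an illustrative sketch, not the thesis implementation -- the thesis pairs refinement with sparse iterative solvers (Conjugate Gradient, Multigrid), whereas here a dense NumPy solve in `float32` stands in for the low-precision inner solver, and the function name `mixed_precision_refinement` is invented for the example.

```python
import numpy as np

def mixed_precision_refinement(A, b, iters=10):
    """Iterative refinement: cheap float32 inner solves, float64 residuals.

    Illustrative sketch only. A dense np.linalg.solve in single
    precision stands in for the low-precision iterative solver
    (e.g. CG or Multigrid) used in the actual benchmarks.
    """
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)
    A32 = A64.astype(np.float32)  # low-precision copy of the system matrix

    # Initial solve entirely in single precision
    x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)

    for _ in range(iters):
        r = b64 - A64 @ x  # residual computed in double precision
        # Correction solve in single precision on the double-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))
        x += d.astype(np.float64)  # accumulate the update in double
    return x
```

Because only the residual and the solution accumulator live in double precision, most of the arithmetic runs at the faster, lower precision, while the refinement loop recovers accuracy close to full double precision for well-conditioned systems.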