CACHING IN WITH MULTIGRID ALGORITHMS: PROBLEMS IN TWO DIMENSIONS
- 1 June 1996
- journal article
- research article
- Published by Taylor & Francis in Parallel Algorithms and Applications
- Vol. 9 (3-4), 195-204
- https://doi.org/10.1080/10637199608915575
Abstract
Multigrid methods combine a number of standard sparse matrix techniques. Typical implementations separate the individual components (e.g., an iterative method, residual computation, and interpolation between grids) into neatly structured routines. However, many computers today employ quite sophisticated and potentially large caches whose correct use is instrumental in attaining much of the processors' peak performance. We investigate when it makes sense to combine several of the multigrid components into one, using block-oriented algorithms. We determine how large (or small) the blocks must be so that the data in a block just fits into the processor's primary cache. By re-using the data in cache several times, a saving in run time can be predicted. This is analyzed for a set of examples.
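The blocking idea described in the abstract can be illustrated with a minimal sketch (not the paper's code): two multigrid components, residual computation and full-weighting restriction, are fused and swept over the fine grid in B×B blocks so that the residual is still resident in the primary cache when it is restricted. The 5-point Laplacian, Dirichlet boundary, grid size N, block edge B, and the names u, f, fc, and residual_and_restrict are all illustrative assumptions.

```c
/* Hypothetical sketch: fused, cache-blocked residual + restriction
   for -Laplace(u) = f on a uniform 2D grid with mesh width h.       */
#define N 1024   /* fine-grid interior points per dimension (N divisible by B) */
#define B 16     /* block edge; ~3*(B+2)^2*8 bytes ~ 8 KB, sized for a small L1 */

static double u[N + 2][N + 2];            /* fine-grid approximation (with boundary) */
static double f[N + 2][N + 2];            /* fine-grid right-hand side               */
static double fc[N / 2 + 2][N / 2 + 2];   /* coarse-grid RHS = restricted residual   */

void residual_and_restrict(double h)
{
    const double invh2 = 1.0 / (h * h);
    double r[B + 2][B + 2];               /* block-local residual with one-point halo */

    for (int jb = 1; jb <= N; jb += B) {
        for (int ib = 1; ib <= N; ib += B) {

            /* Residual r = f - A u on the block plus a one-point halo; halo
               points on the Dirichlet boundary simply carry a zero residual. */
            for (int j = jb - 1; j <= jb + B; j++) {
                for (int i = ib - 1; i <= ib + B; i++) {
                    double rij = 0.0;
                    if (i >= 1 && i <= N && j >= 1 && j <= N)
                        rij = f[j][i] - invh2 * (4.0 * u[j][i]
                                                 - u[j][i - 1] - u[j][i + 1]
                                                 - u[j - 1][i] - u[j + 1][i]);
                    r[j - jb + 1][i - ib + 1] = rij;
                }
            }

            /* Full-weighting restriction of the still-cached residual:
               coarse point (I,J) sits over fine point (2I,2J).          */
            for (int J = jb / 2 + 1; J <= (jb + B - 1) / 2; J++) {
                for (int I = ib / 2 + 1; I <= (ib + B - 1) / 2; I++) {
                    int lj = 2 * J - jb + 1;   /* block-local fine indices */
                    int li = 2 * I - ib + 1;
                    fc[J][I] = 0.0625 * (4.0 * r[lj][li]
                               + 2.0 * (r[lj][li - 1] + r[lj][li + 1]
                                      + r[lj - 1][li] + r[lj + 1][li])
                               + r[lj - 1][li - 1] + r[lj - 1][li + 1]
                               + r[lj + 1][li - 1] + r[lj + 1][li + 1]);
                }
            }
        }
    }
}
```

In a conventional implementation the residual sweep and the restriction sweep would each traverse the whole fine grid, so the residual would be written to and re-read from memory; here the block edge B is the tuning parameter that keeps the block's working set of roughly 3·(B+2)²·8 bytes inside the primary cache, in the spirit of the sizing question the paper analyzes.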