SLAMM - Automating memory analysis for numerical algorithms

John M. Dennis, Elizabeth R. Jessup, William M. Waite

Research output: Contribution to journal › Article › peer-review

Abstract

Memory efficiency is overtaking the number of floating-point operations as a performance determinant for numerical algorithms. Integrating memory efficiency into an algorithm from the start is made easier by computational tools that can quantify its memory traffic. The Sparse Linear Algebra Memory Model (SLAMM) is implemented by a source-to-source translator that accepts a MATLAB specification of an algorithm and adds code to predict memory traffic. Our tests on numerous small kernels and complete implementations of algorithms for solving sparse linear systems show that SLAMM predicts the amount of data loaded from the memory hierarchy to the L1 cache to within 20% error on three different compute platforms. SLAMM allows us to evaluate the memory efficiency of particular design choices rapidly during the design phase of an iterative algorithm, and it provides an automated mechanism for tuning existing implementations. It reduces the time to perform a priori memory analysis from as long as several days to 20 minutes.

Original language: English
Pages (from-to): 89-104
Number of pages: 16
Journal: Electronic Notes in Theoretical Computer Science
Volume: 253
Issue number: 7
DOIs
State: Published - Sep 17 2010

Keywords

  • MATLAB
  • memory analysis
  • source-to-source translation
  • sparse linear algebra

