Performance variation across benchmark suites
- 1 July 1991
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGARCH Computer Architecture News
- Vol. 19 (4), pp. 30-36
- https://doi.org/10.1145/122576.122579
Abstract
The performance ratio between two systems tends to vary across different benchmarks. Here we study this variation as a "signature" or "fingerprint" of the systems under consideration. This "fingerprint" can be used to guess the performance of programs not represented in a benchmark suite, assess the breadth and credibility of the benchmark suite, and infer details of the system design.
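A minimal sketch of the idea in the abstract, assuming invented benchmark names and run times: the vector of per-benchmark performance ratios between two systems is one simple way to express the "fingerprint", and the spread of those ratios shows how benchmark-dependent the comparison is. This is an illustration only, not the paper's actual analysis.

```python
# Hypothetical sketch: per-benchmark speedup ratios as a "fingerprint".
# Benchmark names and timings are invented for illustration.
from statistics import geometric_mean, mean, pstdev

# Hypothetical run times in seconds (lower is better) on two systems.
times_a = {"compile": 41.0, "sort": 12.5, "fft": 8.0, "parse": 30.0}
times_b = {"compile": 50.0, "sort": 10.0, "fft": 16.0, "parse": 33.0}

# Speedup of system A over system B on each benchmark; the whole vector,
# not a single summary number, is the "fingerprint".
fingerprint = {name: times_b[name] / times_a[name] for name in times_a}

# Summaries: the geometric mean gives an overall ratio, while the
# coefficient of variation measures how much the ratio varies by benchmark.
overall = geometric_mean(fingerprint.values())
spread = pstdev(fingerprint.values()) / mean(fingerprint.values())

for name, ratio in sorted(fingerprint.items()):
    print(f"{name:8s} speedup of A over B: {ratio:.2f}x")
print(f"geometric mean: {overall:.2f}x, coefficient of variation: {spread:.2f}")
```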