Continuous program optimization
- 1 July 2003
- Journal article
- Published by the Association for Computing Machinery (ACM) in ACM Transactions on Programming Languages and Systems
- Vol. 25 (4), 500-548
- https://doi.org/10.1145/778559.778562
Abstract
Much of the software in everyday operation is not making optimal use of the hardware on which it actually runs. Among the reasons for this discrepancy are hardware/software mismatches, modularization overheads introduced by software engineering considerations, and the inability of systems to adapt to users' behaviors.

A solution to these problems is to delay code generation until load time. This is the earliest point at which a piece of software can be fine-tuned to the actual capabilities of the hardware on which it is about to be executed, and also the earliest point at which modularization overheads can be overcome by global optimization.

A still better match between software and hardware can be achieved by replacing the already executing software at regular intervals with new versions constructed on the fly by a background code re-optimizer. This not only enables the use of live profiling data to guide optimization decisions, but also facilitates adaptation to changing usage patterns and the late addition of dynamic link libraries.

This paper presents a system that provides code generation at load time and continuous program optimization at run time. First, the architecture of the system is presented. Then, two optimization techniques are discussed that were developed specifically in the context of continuous optimization. The first continually adjusts the storage layouts of dynamic data structures to maximize data cache locality, while the second performs profile-driven instruction re-scheduling to increase instruction-level parallelism. These two optimizations have very different cost/benefit ratios, which are quantified in a series of benchmarks. The paper concludes with an outlook on future research directions and an enumeration of some remaining research problems.

The empirical results presented in this paper make a case in favor of continuous optimization, but indicate that it needs to be applied judiciously. In many situations, the costs of dynamic optimizations outweigh their benefits, so that no break-even point is ever reached. In favorable circumstances, on the other hand, speed-ups of over 120% have been observed. The main beneficiaries of continuous optimization appear to be shared libraries, which at different times can be optimized in the context of the currently dominant client application.
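As a rough illustration of the principle behind the first technique (not the paper's actual mechanism, which reorders objects and fields automatically at run time using live profile data), the following C sketch contrasts two storage layouts for the same data. All names and field sizes here are hypothetical assumptions chosen only to make the cache effect visible: interleaving rarely used "cold" data with frequently accessed "hot" fields wastes cache capacity, while packing the hot fields contiguously lets each cache line serve many accesses.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* element count; arbitrary for illustration */

/* Interleaved layout: each 8-byte hot field is followed by 56 bytes
 * of cold data, so a 64-byte cache line holds only ONE hot field. */
struct interleaved {
    double hot;       /* accessed on every traversal      */
    char   cold[56];  /* rarely accessed bookkeeping data */
};

/* Split ("hot/cold") layout: hot fields are packed contiguously,
 * so a 64-byte cache line holds EIGHT of them. */
struct split {
    double hot[N];
    char   cold[N][56];
};

int main(void) {
    struct interleaved *a = calloc(N, sizeof *a);
    struct split       *b = calloc(1, sizeof *b);
    if (!a || !b) return 1;

    double s1 = 0.0, s2 = 0.0;

    /* Same logical work; the second loop touches 1/8 as many cache lines. */
    for (size_t i = 0; i < N; i++) s1 += a[i].hot;
    for (size_t i = 0; i < N; i++) s2 += b->hot[i];

    printf("%f %f\n", s1, s2);
    free(a);
    free(b);
    return 0;
}
```

In this sketch the programmer chooses the layout by hand; the point of the system described in the paper is that such layout decisions can instead be made, and revised, by the run-time re-optimizer as the observed access patterns change.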