Abstract
The goal of a dynamic power management policy is to reduce the power consumption of an electronic system by putting system components into different states, each representing a certain performance and power consumption level. The policy determines the type and timing of these transitions based on the system history, workload, and performance constraints. In this paper we propose a new abstract model of a power-managed electronic system. We formulate the problem of system-level power management as a policy optimization problem based on the theories of continuous-time Markov decision processes and stochastic networks. This problem is solved exactly using linear programming or heuristically using "policy iteration." Our method is compared with existing heuristic methods for different workload statistics. Experimental results show that the power management method based on a Markov decision process outperforms heuristic methods by as much as 44% in terms of power dissipation savings for a given level of system performance.
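As an illustration of the policy-iteration approach mentioned above, the sketch below runs standard policy iteration on a toy discrete-time power-management MDP. It is not the paper's method: the paper uses continuous-time Markov decision processes, and all states, actions, transition probabilities, and power costs here are hypothetical numbers chosen only to make the example run.

```python
# Illustrative sketch only: discrete-time policy iteration on a toy
# power-management MDP. The paper itself formulates a continuous-time
# MDP; every number below (transitions, costs) is a made-up example.
import numpy as np

states = ["busy", "idle"]       # observed workload condition
actions = ["active", "sleep"]   # power state chosen by the policy
gamma = 0.95                    # discount factor

# P[a][s, s']: transition probabilities; C[a][s]: power cost per step
P = {
    "active": np.array([[0.7, 0.3],
                        [0.4, 0.6]]),
    "sleep":  np.array([[0.7, 0.3],
                        [0.1, 0.9]]),
}
C = {
    "active": np.array([2.0, 1.5]),  # high power, no latency penalty
    "sleep":  np.array([5.0, 0.2]),  # cheap when idle, wake-up cost when busy
}

def policy_iteration():
    n = len(states)
    policy = np.zeros(n, dtype=int)  # start from "always active"
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = C_pi exactly
        P_pi = np.array([P[actions[policy[s]]][s] for s in range(n)])
        C_pi = np.array([C[actions[policy[s]]][s] for s in range(n)])
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, C_pi)
        # Policy improvement: pick the action minimizing expected cost
        Q = np.array([[C[a][s] + gamma * P[a][s] @ V for a in actions]
                      for s in range(n)])
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
print([actions[a] for a in policy])  # → ['active', 'sleep']
```

With these example numbers, the iteration converges to the intuitive policy: stay active when busy, sleep when idle. The same problem could instead be solved exactly as a linear program over the value function, the alternative the abstract mentions.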