Abstract
Many stochastic resource allocation problems may be formulated as families of alternative bandit processes. One example is the classical one-armed bandit problem recently studied by Kumar and Seidman. Optimal strategies for such problems are known to be determined by a collection of dynamic allocation indices (DAIs). The aim of this note is to bring this important result to the attention of control theorists and to give a new proof of it. Applications and related work are also discussed.