Computing optimal (s, S) policies in inventory models with continuous demands
- 1 June 1985
- research article
- Published by Cambridge University Press (CUP) in Advances in Applied Probability
- Vol. 17 (2), 424-442
- https://doi.org/10.2307/1427149
Abstract
Special algorithms have been developed to compute an optimal (s, S) policy for an inventory model with discrete demand and under standard assumptions (stationary data, a well-behaved one-period cost function, full backlogging and the average cost criterion). We present here an iterative algorithm for continuous demand distributions which avoids any form of prior discretization. The method can be viewed as a modified form of policy iteration applied to a Markov decision process with continuous state space. For phase-type distributions, the calculations can be done in closed form.
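To make the setting concrete, the sketch below simulates the periodic-review (s, S) model the abstract describes (full backlogging, stationary continuous demand, average cost criterion) and estimates the long-run average cost of a given policy by Monte Carlo. It is an illustration of the model only, not the paper's policy-iteration algorithm; all cost parameters and the exponential demand distribution are illustrative assumptions.

```python
import random

def average_cost(s, S, periods=10_000, seed=0,
                 K=32.0, h=1.0, p=5.0, mean_demand=10.0):
    """Estimate the long-run average cost per period of an (s, S) policy.

    Periodic review, full backlogging, exponential demand.
    K: fixed ordering cost, h: per-unit holding cost,
    p: per-unit backlog penalty. All values are illustrative,
    not taken from the paper.
    """
    rng = random.Random(seed)
    x = S  # inventory position, starting at the order-up-to level
    total = 0.0
    for _ in range(periods):
        # Review: if the position has dropped to s or below, order up to S.
        if x <= s:
            total += K
            x = S
        # Demand realizes; unmet demand is backlogged (x may go negative).
        x -= rng.expovariate(1.0 / mean_demand)
        # One-period holding / backlog cost.
        total += h * max(x, 0.0) + p * max(-x, 0.0)
    return total / periods
```

A crude grid search over (s, S) pairs using this estimator stands in for the optimization; the paper's contribution is to replace such brute force with a policy-iteration scheme that, for phase-type demand, evaluates each policy in closed form rather than by simulation or discretization.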