Some graphical aids for deciding when to stop testing software
- 1 February 1990
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Journal on Selected Areas in Communications
- Vol. 8 (2), 169-175
- https://doi.org/10.1109/49.46868
Abstract
Developers of large software systems must decide how much the software should be tested before releasing it. An explicit tradeoff between the costs of testing and of releasing is considered: the former may include the opportunity cost of continued testing, and the latter may include the cost of customer dissatisfaction and of fixing faults found in the field. Exact stopping rules were obtained by Dalal and Mallows (J. Amer. Statist. Assoc., vol. 83, p. 872, 1988) under the assumption that the distribution of the fault-finding rate is known. Here, two important variants in which the fault-finding distribution is not completely known are considered: (i) the distribution is exponential with unknown mean, and (ii) the distribution is locally exponential with the rate changing smoothly over time. New procedures are presented for both cases. In case (i), it is shown how to incorporate information from related projects and subjective inputs. Several novel graphical procedures that are easy to implement are proposed, and these are illustrated with data from a large telecommunications software system.
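The paper's exact stopping rules are developed in Dalal and Mallows (1988) and extended here to partially known fault-finding distributions. As an illustrative sketch only, and not the authors' procedure, the cost tradeoff can be seen in a simplified setting: assume an exponential-decay (Goel-Okumoto style) detection model with *known* parameters `a` (total expected faults) and `b` (detection rate), a per-unit-time testing cost, and a per-fault field-fix cost, and stop testing once another increment of testing is expected to cost more than the field fixes it would avoid.

```python
import math

def expected_remaining_faults(a, b, t):
    """Expected faults not yet found at time t under an exponential
    detection model, where cumulative detections follow
    m(t) = a * (1 - exp(-b*t)); the remainder is a * exp(-b*t)."""
    return a * math.exp(-b * t)

def net_gain_of_extra_testing(a, b, t, dt, cost_per_time, cost_per_field_fault):
    """Field-fix cost saved by faults expected to be caught in (t, t+dt),
    minus the cost of testing for dt more time units."""
    caught = (expected_remaining_faults(a, b, t)
              - expected_remaining_faults(a, b, t + dt))
    return caught * cost_per_field_fault - cost_per_time * dt

def stopping_time(a, b, dt, cost_per_time, cost_per_field_fault, t_max=1e6):
    """First time (on a grid of width dt) at which another increment of
    testing no longer pays for itself."""
    t = 0.0
    while t < t_max:
        if net_gain_of_extra_testing(a, b, t, dt,
                                     cost_per_time, cost_per_field_fault) <= 0:
            return t
        t += dt
    return t_max
```

For example, with `a=100`, `b=0.1`, unit testing cost, and a field fix ten times as expensive as a unit of testing, the rule stops once the detection curve has flattened enough that fewer than 0.1 faults are expected per unit of further testing. The paper's actual contribution addresses the harder cases where `a` and `b` (or their analogues) are unknown, using graphical diagnostics and inputs from related projects.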
This publication has 5 references indexed in Scilit:
- Quantifying software validation: when to stop testing? (IEEE Software, 1989)
- When Should One Stop Testing Software? (Journal of the American Statistical Association, 1988)
- A Unification of Some Software Reliability Models (SIAM Journal on Scientific and Statistical Computing, 1985)
- Validity of Execution-Time Theory of Software Reliability (IEEE Transactions on Reliability, 1979)
- Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures (IEEE Transactions on Reliability, 1979)