Quantifying Software Validity by Sampling
- 1 June 1980
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Reliability
- Vol. R-29 (2) , 141-144
- https://doi.org/10.1109/tr.1980.5220756
Abstract
The point of all validation techniques is to raise assurance about the program under study, but no current method can realistically be thought to give 100% assurance that a validated program will perform correctly. There is currently no useful way to quantify how well-validated a program is. One measure of program correctness is the proportion of elements in the program's input domain for which it fails to execute correctly, since this proportion is zero if and only if the program is correct. The proportion can be estimated statistically from the results of program tests and from prior subjective assessments of the program's correctness. Three examples are presented of methods for determining s-confidence bounds on the failure proportion. It is shown that there are reasonable conditions (for programs with a finite number of paths) under which ensuring the testing of all paths does not give better assurance of program correctness.
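The idea of bounding the failure proportion from test outcomes can be sketched with the standard binomial argument: if n independent tests drawn at random from the input domain all succeed, the largest failure proportion still consistent with that outcome at a given s-confidence level solves (1 − θ)^n = 1 − C. This is a minimal illustration under an assumed uniform random-testing model, not necessarily one of the paper's three methods (the function name and parameters here are hypothetical):

```python
def failure_proportion_upper_bound(n_tests: int, confidence: float) -> float:
    """Upper s-confidence bound on the failure proportion theta after
    n_tests independent random tests with zero observed failures.

    Solves (1 - theta)**n_tests = 1 - confidence: the largest theta
    that would still produce n_tests failure-free runs with
    probability at least 1 - confidence.
    """
    if n_tests < 1 or not (0.0 < confidence < 1.0):
        raise ValueError("need n_tests >= 1 and 0 < confidence < 1")
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

# e.g. 1000 failure-free random tests at the 95% s-confidence level
bound = failure_proportion_upper_bound(1000, 0.95)
```

Note how slowly the bound tightens: even 1000 failure-free tests only bound the failure proportion near 0.3% at 95% s-confidence, which is consistent with the abstract's point that testing alone cannot realistically deliver 100% assurance.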