Sample Sizes for Usability Studies: Additional Considerations
- 1 June 1994
- research article
- Published by SAGE Publications in Human Factors: The Journal of the Human Factors and Ergonomics Society
- Vol. 36 (2), 368-378
- https://doi.org/10.1177/001872089403600215
Abstract
Recently, Virzi (1992) presented data that support three claims regarding sample sizes for usability studies: (1) observing four or five participants will allow a usability practitioner to discover 80% of a product's usability problems, (2) observing additional participants will reveal fewer and fewer new usability problems, and (3) more severe usability problems are easier to detect with the first few participants. Results from an independent usability study clearly support the second claim, partially support the first, but fail to support the third. Problem discovery shows diminishing returns as a function of sample size. Observing four to five participants will uncover about 80% of a product's usability problems as long as the average likelihood of problem detection ranges between 0.32 and 0.42, as in Virzi. If the average likelihood of problem detection is lower, then a practitioner will need to observe more than five participants to discover 80% of the problems. Using behavioral categories for problem severity (or impact), these data showed no correlation between problem severity (impact) and rate of discovery. The data provided evidence that the binomial probability formula may provide a good model for predicting problem discovery curves, given an estimate of the average likelihood of problem detection. Finally, data from economic simulations that estimated return on investment (ROI) under a variety of settings showed that only the average likelihood of problem detection strongly influenced the range of sample sizes for maximum ROI.
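The problem discovery model the abstract refers to is commonly written as 1 - (1 - p)^n, where p is the average likelihood of problem detection and n is the number of participants observed. The following minimal sketch assumes that standard cumulative binomial formula (the function names are illustrative, not from the paper) and shows why four to five participants are expected to uncover roughly 80% of problems when p lies between 0.32 and 0.42, and why a lower p requires a larger sample.

```python
import math

def discovery_proportion(p: float, n: int) -> float:
    """Expected proportion of problems found after observing n participants,
    assuming each problem is detected by any single participant with probability p."""
    return 1.0 - (1.0 - p) ** n

def participants_needed(p: float, target: float = 0.80) -> int:
    """Smallest n whose expected discovery proportion reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

if __name__ == "__main__":
    for p in (0.16, 0.32, 0.42):
        n = participants_needed(p)
        print(f"p = {p:.2f}: {n} participants -> "
              f"{discovery_proportion(p, n):.0%} of problems expected")
```

Under these assumptions, p = 0.42 reaches 80% with three participants and p = 0.32 with five (about 85% expected), while p = 0.16 pushes the requirement to ten participants, consistent with the abstract's caveat about lower detection likelihoods.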
References
- Refining the Test Phase of Usability Evaluation: How Many Subjects Is Enough? Human Factors: The Journal of the Human Factors and Ergonomics Society, 1992
- Case study in human factors evaluation. Information and Software Technology, 1992
- Comparison of empirical testing and walkthrough methods in user interface evaluation. Published by Association for Computing Machinery (ACM), 1992
- A cost-effective evaluation method for use by designers. International Journal of Man-Machine Studies, 1991
- The challenge of interface design for communication theory: from interaction metaphor to contexts of discovery. Interacting with Computers, 1991
- Streamlining the Design Process: Running Fewer Subjects. Proceedings of the Human Factors Society Annual Meeting, 1990
- A discussion of modes and motives for usability evaluation. IEEE Transactions on Dependable and Secure Computing, 1989
- How to Design Usable Systems. Published by Elsevier, 1988
- Design rules based on analyses of human error. Communications of the ACM, 1983
- Testing small system customer set-up. Published by American Psychological Association (APA), 1982