User Response Data: The Potential for Errors and Biases

Abstract
Surveys that require users to evaluate or make judgments about information systems and their effect on specific work activities can produce misleading results if respondents do not interpret or answer questions in the ways intended by the researcher. This paper provides a framework for understanding both the cognitive activities and the errors and biases in judgment that can result when users are asked to categorize a system, explain its effects, or predict their own future actions and preferences with respect to use of a system. Specific suggestions are offered for wording survey questions and response categories so as to elicit more precise and reliable responses. In addition, possible sources of systematic bias are discussed, using examples drawn from published IS research. Recommendations are made for further research aimed at better understanding how and to what extent judgment biases could affect the results of IS surveys.
