Validation, verification and evaluation

Abstract
This paper discusses the validation, verification, and evaluation of scientific visualization software. A "bug" usually means software doing something other than what the programmer intended. Comprehensive testing is hard, especially for software intended for use in innovative environments, and descriptions and summaries of the tests we have done are often not available to users. A different source of visualization errors is software that does something other than what the scientist thinks it does: the particular methods used to compute values in the course of creating a visualization matter to scientists, but vendors are understandably reluctant to reveal all the internals of their products. Is there a workable compromise? Visualization users are also vulnerable to choosing a technique that is less effective than others equally available. Visualization researchers and developers should give users the information they need to make good decisions among competing visualization techniques. What information is needed? What will it take to gather and distribute it? How should it be tied to visualization software?