An analytical approach to scoring model design — Application to research and development project selection

Abstract
Multiple criteria scoring models have been suggested for use in evaluating competing research and development project proposals. This model form, more than any other, affords the decision maker the opportunity to combine, in exacting fashion, both the qualitative and the quantitative factors that affect such decisions. To date, however, the project scoring models that have appeared in the R&D literature contain limitations that have caused research management to turn to economic analysis and mathematical programming models of project selection. These other models, while valuable, are severely limited in their ability to consider qualitative criteria important to the evaluation of research. This paper identifies some of the shortcomings present in standard scoring model formulations of the project selection problem and then defines a new form of scoring model, one that not only overcomes existing limitations but also exhibits several qualities desirable in a prescriptive model. A study of the literature suggests that two difficulties are in large part responsible for the observed lack of managerial interest in the development and use of multiple criteria scoring models. First, the scoring model is often thought to be considerably less accurate in its ability to process data than rate of return analysis or mathematical programming approaches to project selection. Second, because of a lack of explicit model structure and the rather arbitrary manner in which previous scoring models have been presented, it is nearly impossible to prescribe how to construct an acceptable model for a specific environment. Described herein is a scoring model defined in terms of the statistical regularities of a given research environment, one that compares favorably, in accuracy and in sensitivity to estimation error, with other more widely accepted models of project evaluation.
An analytical method for model design and verification is defined and discussed in terms of the following steps: 1) selection of evaluation criteria; 2) development of performance measures; 3) quantification of the research environment; 4) determination of criteria weights; 5) initial model specification; 6) selection of model objectives; 7) initial model verification; and 8) complete model specification and verification.
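The multiple criteria scoring-model form discussed above can be sketched in a few lines. The sketch below assumes the common additive formulation, in which each proposal receives a weighted sum of criterion scores; the criteria names, weights, and scores are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of an additive multiple-criteria scoring model for
# ranking R&D project proposals. Criteria, weights, and scores are
# hypothetical; a real model would derive them from steps 1-4 of the
# design method (criteria selection, performance measures, environment
# quantification, and criteria weighting).

def project_score(weights, scores):
    """Weighted additive score; both arguments are dicts keyed by criterion."""
    assert set(weights) == set(scores), "each criterion needs a weight and a score"
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical criteria weights (normalized to sum to 1).
weights = {"technical_merit": 0.40, "market_potential": 0.35, "staff_fit": 0.25}

# Hypothetical performance scores for two competing proposals, on a 0-10 scale.
proposals = {
    "A": {"technical_merit": 8, "market_potential": 6, "staff_fit": 9},
    "B": {"technical_merit": 5, "market_potential": 9, "staff_fit": 7},
}

# Rank proposals by total score, highest first.
ranked = sorted(proposals, key=lambda p: project_score(weights, proposals[p]),
                reverse=True)
```

A qualitative criterion such as staff fit enters the model on equal footing with quantitative ones, which is the property the abstract identifies as the scoring model's chief advantage over purely economic or mathematical programming formulations.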