Abstract
Risky decisions are often difficult to make or defend because assessments of key facts are themselves subject to uncertainty. Theory bearing on this second‐order ‘assessment uncertainty’ (AU) has found little practical application. A methodology is proposed for representing AU in a form where it can be understood, judgmentally checked and effectively used in common decision situations, including some guidance on how to elicit or indirectly measure AU. It extends well‐established personal probability logic, with primary focus on predicting shifts in first‐order factual assessments which might result from new developments — whether these are realistically possible or ‘ideal.’ The approach attempts to be prescriptive, in that its input can be readily measured, its output fits intended use and user, and the intervening procedures are logically sound without being too burdensome. Successful application is illustrated in the context of a regulatory decision on whether a reactor should be required to install a costly safety feature.
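To make the notion of assessment uncertainty concrete, here is a minimal sketch, not drawn from the paper itself: it assumes AU over a first-order probability assessment is encoded as a Beta distribution and uses SciPy to preview how a hypothetical new development (a small study) would shift that assessment. The distribution choice, the numbers, and the preposterior tabulation are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (illustrative, not the paper's method): representing assessment
# uncertainty (AU) as a second-order Beta distribution over a first-order
# probability assessment, and previewing how hypothetical new evidence
# would shift that assessment.

from scipy import stats

# First-order assessment: P(event) is judged to be about 0.2, with AU.
# A Beta(2, 8) distribution has mean 0.2; its spread encodes the AU (assumed values).
alpha, beta = 2.0, 8.0
prior = stats.beta(alpha, beta)
print(f"Current assessment: mean={prior.mean():.2f}, 90% interval="
      f"({prior.ppf(0.05):.2f}, {prior.ppf(0.95):.2f})")

# A 'new development': a hypothetical study observing the event in 3 of 10 trials.
# The posterior mean is the shifted first-order assessment.
successes, trials = 3, 10
posterior = stats.beta(alpha + successes, beta + trials - successes)
print(f"Shifted assessment after the study: mean={posterior.mean():.2f}")

# Preposterior view: before the study is run, each possible outcome k has a
# probability (beta-binomial marginal) and implies a shifted assessment.
# This tabulation expresses AU in terms of possible future assessment shifts.
for k in range(trials + 1):
    p_k = stats.betabinom(trials, alpha, beta).pmf(k)
    shifted_mean = (alpha + k) / (alpha + beta + trials)
    print(f"outcome k={k:2d}: P={p_k:.3f}, shifted assessment={shifted_mean:.2f}")
```

The beta-binomial tabulation plays the role of the 'new developments' mentioned in the abstract: each possible outcome corresponds to a realistically possible shift in the first-order assessment, with its probability attached.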
