Abstract
Here we use posterior densities based on relative entropy reference priors for two purposes. The first is to identify the data implicit in the use of an informative prior. We represent an informative prior as the posterior from an experiment with a known likelihood and a reference prior; minimizing the relative entropy distance between this posterior and the informative prior over choices of data yields a data set that can be regarded as representative of the information in the informative prior. The second purpose is obtained by replacing the informative prior with a class of densities from which one might wish to make inferences. For each density in this class, one can obtain a data set, and hence a sample size, that minimizes a relative entropy distance. The maximum of these sample sizes, as the inferential density varies over its class, can be used as an estimate of how much data is required for the desired inferences. We bound this sample size above and below by other techniques that permit it to be approximated.
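As a minimal formalization of the two optimizations sketched above (the notation here is our own, not the paper's: $w$ denotes the reference prior, $w_I$ the informative prior, $x^n$ a data set of size $n$, and $\mathcal{Q}$ the class of inferential densities; the direction of the relative entropy is likewise an assumption of this sketch):

\[
  \hat{x}^{\,n} \;=\; \operatorname*{arg\,min}_{x^n}\; D\!\left( w(\cdot \mid x^n) \,\middle\|\, w_I \right),
  \qquad\text{where}\quad
  D(p \,\|\, q) \;=\; \int p(\theta)\,\log\frac{p(\theta)}{q(\theta)}\,d\theta,
\]

so that $\hat{x}^{\,n}$ is a data set whose reference posterior best matches the informative prior. For the sample-size question, if $n(q)$ denotes the smallest $n$ admitting a data set whose reference posterior is within a prescribed relative entropy distance of the inferential density $q$, the suggested sample size is

\[
  n^{*} \;=\; \max_{q \in \mathcal{Q}}\, n(q).
\]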
