Partial knowledge, entropy, and estimation

Abstract
In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution $p \equiv (p_1, \ldots, p_n)$ by maximizing the entropy $H(p) \equiv -\sum_i p_i \log p_i$, subject to constraints on $p$ that express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: ($\alpha$) it is "conservative," and therefore good, to maximize "uncertainty," as (uniquely) represented by the entropy parameter; ($\beta$) one should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason ($\alpha$) is rejected. Reason ($\beta$) is valid when "complete ignorance" is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds.
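To make the method concrete, here is a minimal numerical sketch of maximum-entropy estimation as the abstract describes it: choose $p$ to maximize $H(p)$ subject to constraints encoding partial knowledge. The specifics are assumptions for illustration only, not from the paper: the outcomes $x = 1, \ldots, 6$, a known mean as the sole constraint (the classic "loaded die" setting), and the use of scipy's general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: outcomes of a six-sided die whose only known
# property (the "partial knowledge") is its mean, assumed here to be 4.5.
x = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    # H(p) = -sum_i p_i log p_i; we minimize its negative to maximize H.
    p = np.clip(p, 1e-12, 1.0)  # guard against log(0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},             # probabilities sum to 1
    {"type": "eq", "fun": lambda p: np.dot(p, x) - target_mean},  # mean constraint
]
bounds = [(0.0, 1.0)] * len(x)
p0 = np.full(len(x), 1.0 / len(x))  # start from the uniform distribution

res = minimize(neg_entropy, p0, bounds=bounds, constraints=constraints)
print(res.x)
```

The resulting $p$ takes the exponential form $p_i \propto e^{\lambda x_i}$ for some Lagrange multiplier $\lambda$, which is the thermodynamic analogy that justification ($\beta$) in the abstract appeals to.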
