Partial knowledge, entropy, and estimation
- 1 October 1975
- journal article
- Published in Proceedings of the National Academy of Sciences
- Vol. 72 (10), 3819-3824
- https://doi.org/10.1073/pnas.72.10.3819
Abstract
In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p ≡ (p₁, ..., pₙ) by maximizing the entropy H(p) ≡ −Σᵢ pᵢ log pᵢ, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) it is "conservative," and therefore good, to maximize "uncertainty," as (uniquely) represented by the entropy parameter; (β) one should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when "complete ignorance" is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds.
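A minimal sketch of the estimation procedure the abstract describes, not taken from the paper itself: maximize H(p) ≡ −Σᵢ pᵢ log pᵢ subject to constraints expressing partial knowledge. The constraint used here (a known mean of the outcome) and the helper name max_entropy are illustrative assumptions; the paper's applications (income, traffic, stock-price changes, brand purchases) would supply their own constraints.

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy(values, mean):
    """Entropy-maximizing distribution p over `values` with E[X] = `mean`.

    Illustrative helper (not from the paper): the assumed partial
    knowledge is a fixed mean of a discrete outcome.
    """
    values = np.asarray(values, dtype=float)
    n = len(values)

    def neg_entropy(p):
        # Negate H(p) because scipy minimizes; clip avoids log(0).
        p = np.clip(p, 1e-12, 1.0)
        return np.sum(p * np.log(p))

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # probabilities sum to 1
        {"type": "eq", "fun": lambda p: p @ values - mean},  # partial knowledge: fixed mean
    ]
    p0 = np.full(n, 1.0 / n)  # start from the uniform distribution
    res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * n,
                   constraints=constraints)
    return res.x

# Example: die faces 1..6 constrained to mean 4.5. The solution is a
# discrete exponential (Gibbs) distribution, weighted toward high faces.
p = max_entropy([1, 2, 3, 4, 5, 6], 4.5)
print(np.round(p, 4))
```

With only a mean constraint, the maximizer is the familiar Gibbs-form distribution pᵢ ∝ exp(λ·vᵢ), which is the statistical-thermodynamics connection invoked by justification (β).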