Entropy and Uncertainty

Abstract
This essay is primarily a discussion of four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory. Result 1 provides a restricted equivalence between the two: the Bayesian model for MAXENT inference uses an “a priori” probability that is uniform, and all MAXENT constraints are limited to 0–1 expectations for simple indicator variables. The other three results report the failure of attempts to extend this equivalence beyond these specialized constraints. Result 2 establishes a sensitivity of MAXENT inference to the choice of the algebra of possibilities, even though all empirical constraints imposed on the MAXENT solution are satisfied in each measure space considered: the resulting MAXENT distribution is not invariant over the choice of measure space. Thus, old and familiar problems with the Laplacian principle of Insufficient Reason also plague MAXENT theory. Result 3 builds upon the findings of Friedman and Shimony (1971; 1973) and demonstrates the absence of an exchangeable Bayesian model for predictive MAXENT distributions when the MAXENT constraints are interpreted according to Jaynes's (1978) prescription for his (1963) Brandeis Dice problem. Lastly, Result 4 generalizes the Friedman and Shimony objection to cross-entropy (Kullback-information) shifts subject to the constraint of a new odds ratio for two disjoint events.
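For orientation, the two quantities at issue can be stated in their standard textbook forms (these definitions are not given in the abstract itself and are supplied here only as background):

```latex
% Shannon entropy of a distribution p over a finite space X;
% MAXENT selects the p maximizing H(p) subject to
% expectation constraints of the form E_p[f_i] = c_i:
H(p) = -\sum_{x \in X} p(x)\,\log p(x)

% Cross-entropy (Kullback information) of p relative to a prior q;
% the cross-entropy rule selects the p minimizing K(p,q)
% subject to the same kind of constraints:
K(p, q) = \sum_{x \in X} p(x)\,\log \frac{p(x)}{q(x)}
```

In the special case where q is uniform, minimizing K(p, q) coincides with maximizing H(p), which is the setting of the restricted equivalence in Result 1.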
