Alternative approach to maximum-entropy inference

Abstract
A consistent approach to the inference of a probability distribution, given a limited number of expectation values of relevant variables, is discussed. There are two key assumptions: that the experiment can be independently repeated a finite (not necessarily large) number of times, and that the theoretical expectation values of the relevant observables are to be estimated from their measured sample averages. Three independent but complementary routes for deriving the form of the distribution from these two assumptions are reviewed. All three lead to a unique distribution, identical with the one obtained by the maximum-entropy formalism. The present derivation thus provides an alternative approach to the inference problem, one that does not invoke Shannon's notion of missing information or entropy. The approach is more limited in scope than the one proposed by Jaynes, but it has the advantage of being objective and of specifying the operational origin of the "given" expectation values.
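For reference, the distribution singled out by the maximum-entropy formalism can be written explicitly. The following is a standard textbook sketch, not quoted from the paper itself; the observables $A_r$, their given expectation values $a_r$, and the Lagrange multipliers $\lambda_r$ are generic notation assumed here. Maximizing the entropy $S = -\sum_i p_i \ln p_i$ subject to normalization $\sum_i p_i = 1$ and the constraints $\sum_i p_i A_r(i) = a_r$, $r = 1, \dots, m$, gives

\[
p_i = \exp\!\Big(-\lambda_0 - \sum_{r=1}^{m} \lambda_r A_r(i)\Big),
\qquad
e^{\lambda_0} = Z(\lambda_1, \dots, \lambda_m) = \sum_i \exp\!\Big(-\sum_{r=1}^{m} \lambda_r A_r(i)\Big),
\]

where the multipliers $\lambda_r$ are fixed by the constraint equations $-\partial \ln Z / \partial \lambda_r = a_r$. In the approach summarized above, the $a_r$ are identified with measured sample averages from finitely many repetitions of the experiment, rather than postulated as given.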