Probability Density Estimation Using Entropy Maximization
- 1 October 1998
- journal article
- Published by MIT Press in Neural Computation
- Vol. 10 (7), 1925-1938
- https://doi.org/10.1162/089976698300017205
Abstract
We propose a method for estimating probability density functions and conditional density functions by training on data produced by such distributions. The algorithm employs new stochastic variables that amount to a coding of the input, using a principle of entropy maximization. It is shown to be closely related to the maximum likelihood approach. The encoding step of the algorithm provides an estimate of the probability distribution. The decoding step serves as a generative model, producing an ensemble of data with the desired distribution. The algorithm is readily implemented by neural networks, using stochastic gradient ascent to achieve entropy maximization.
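To make the idea in the abstract concrete, here is a minimal one-dimensional sketch in the spirit of the infomax/maximum-likelihood connection the paper draws (this is an illustration, not the paper's exact algorithm): the data x are encoded through a monotone sigmoid y = σ(wx + b); stochastic gradient ascent on the output entropy, equivalently on log|dy/dx|, drives y toward a uniform distribution, so dy/dx serves as the density estimate (encoding step), while pushing uniform noise through the inverse map generates samples (decoding step). All variable names and the synthetic data are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Synthetic training data from the distribution to be estimated (assumed here).
x = rng.normal(loc=2.0, scale=0.5, size=2000)

# Entropy maximization by stochastic gradient ascent on
# log|dy/dx| = log w + log s + log(1 - s), where s = sigmoid(w*x + b).
w, b = 1.0, 0.0
lr = 0.005
for _ in range(50):  # several passes over shuffled data
    for xi in rng.permutation(x):
        s = sigmoid(w * xi + b)
        w += lr * (1.0 / w + (1.0 - 2.0 * s) * xi)
        b += lr * (1.0 - 2.0 * s)

def density(q):
    """Encoding step: the density estimate is the slope dy/dx."""
    s = sigmoid(w * q + b)
    return abs(w) * s * (1.0 - s)

def sample(n):
    """Decoding (generative) step: invert the sigmoid map on uniform noise."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=n)
    return (np.log(u / (1.0 - u)) - b) / w
```

After training, `density` peaks near the data mean and `sample` draws an ensemble with approximately the learned distribution; a multilayer network would be used in the same way for richer densities.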