Entropy and information in neural spike trains: Progress on the sampling problem

Abstract
The major problem in the information-theoretic analysis of neural responses is the reliable estimation of entropy-like quantities from small samples. We review a recently introduced Bayesian estimator of entropies designed to solve this problem, and study its performance on synthetic and experimental spike trains. The estimator performs admirably even very deep in the undersampled regime, where other techniques fail. This opens new possibilities for the information-theoretic analysis of experiments, and may be of general interest as an example of learning from limited data.
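To make the sampling problem concrete, here is a minimal illustrative sketch (our own, not taken from the paper): it compares the naive "plugin" entropy estimate with the posterior-mean entropy under a single fixed Dirichlet prior, using the Wolpert–Wolf closed form. The Bayesian (NSB) estimator the paper reviews goes further, averaging over Dirichlet priors with a hyperprior chosen to be approximately flat in entropy; the function names, the Zipf-like synthetic distribution, and the sample sizes below are all assumptions made for illustration.

```python
# Illustrative sketch of the entropy-sampling problem. The distribution,
# sample sizes, and function names are assumptions; this is NOT the full
# NSB estimator, which averages over Dirichlet priors rather than fixing one.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def plugin_entropy(counts):
    """Naive maximum-likelihood ('plugin') entropy in bits; systematically
    biased downward when the sample size is comparable to the number of
    occupied bins."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def dirichlet_entropy(counts, beta=1.0):
    """Posterior-mean entropy (bits) under a single Dirichlet(beta) prior
    (Wolpert-Wolf closed form). The NSB estimator reviewed in the paper
    instead averages over beta with a prior approximately flat in entropy."""
    k = counts.size
    n = counts.sum()
    a = counts + beta          # posterior Dirichlet parameters
    total = n + k * beta
    h_nats = digamma(total + 1) - np.sum((a / total) * digamma(a + 1))
    return h_nats / np.log(2)

# Synthetic distribution over K 'spike words' with a Zipf-like tail,
# standing in for binned spike-train responses.
K = 1024
p = 1.0 / np.arange(1, K + 1)
p /= p.sum()
h_true = -np.sum(p * np.log2(p))

for n_samples in (100, 1000, 100000):  # deep undersampling -> well sampled
    counts = np.bincount(rng.choice(K, size=n_samples, p=p), minlength=K)
    print(f"N={n_samples:6d}  true={h_true:.2f}  "
          f"plugin={plugin_entropy(counts):.2f}  "
          f"dirichlet={dirichlet_entropy(counts):.2f} bits")
```

In the undersampled runs the plugin estimate typically falls well below the true entropy, while the fixed-beta Bayesian estimate is pulled toward its prior; the point of the estimator reviewed in the paper is that averaging over beta removes much of this prior dependence.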
