Single-bit oversampled A/D conversion with exponential accuracy in the bit-rate

Abstract
We present a scheme for simple oversampled analog-to-digital conversion with single-bit quantization and exponential error decay in the bit rate. The scheme is based on recording the positions of zero-crossings of the input signal added to a deterministic dither function. This information can be represented with a bit rate that grows only logarithmically with the oversampling factor r. The bandlimited input signal can be reconstructed locally from this information, with a mean squared error inversely proportional to the square of the oversampling factor, MSE = O(1/r²). Consequently, the mean squared error of this scheme exhibits exponential decay in the bit rate.
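The exponential-decay claim follows from combining the two stated bounds: a logarithmic bit rate in r together with a polynomial (1/r²) error decay in r. A minimal LaTeX sketch of that argument, with hypothetical constants c₁ and c₂ standing in for the paper's implicit constants:

```latex
% Sketch only: shows how logarithmic rate growth plus O(1/r^2) error
% decay yields exponential accuracy in the bit rate. The constants
% c_1 and c_2 are assumptions, not values taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Suppose the bit rate satisfies $R \le c_1 \log_2 r$ and the
reconstruction error satisfies $\mathrm{MSE} \le c_2 / r^2$ for some
constants $c_1, c_2 > 0$. Inverting the first bound gives
$r \ge 2^{R/c_1}$, and substituting into the second yields
\[
  \mathrm{MSE} \;\le\; \frac{c_2}{r^2} \;\le\; c_2 \, 2^{-2R/c_1},
\]
so the mean squared error decays exponentially in the bit rate $R$.
\end{document}
```

This contrasts with classical single-bit schemes such as first-order sigma-delta modulation, whose bit rate grows linearly with r, so the same O(1/r²) error decay is only polynomial in the bit rate.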
