An optimum NLMS algorithm: performance improvement over LMS

Abstract
The authors prove that, for zero-mean white input data, the optimum algorithm that modifies the data vector in the LMS (least mean square) gradient estimate so as to achieve the lowest excess mean-square error for a given convergence rate is the normalized LMS (NLMS) algorithm. This adaptive filtering algorithm is shown to be equivalent to recursive least-squares adaptation with a known diagonal data covariance matrix. Moreover, the algorithm can be interpreted as a modified LMS algorithm in which the iterated weight vector is used to form the error estimate. Both theoretical calculations and simulations with white Gaussian data show that this NLMS algorithm performs as much as 3.6 dB better than standard LMS for such input.
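As a minimal sketch of the NLMS update the abstract describes, the following system-identification example adapts a filter to a hypothetical unknown FIR channel with zero-mean white Gaussian input; the taps `h_true`, step size `mu`, and regularizer `eps` are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system to identify (4-tap FIR filter).
h_true = np.array([0.5, -0.3, 0.2, 0.1])
M = len(h_true)

N = 5000
x = rng.standard_normal(N)       # zero-mean white Gaussian input
d = np.convolve(x, h_true)[:N]   # desired signal (noiseless, for clarity)

mu = 0.5     # NLMS step size; stable for 0 < mu < 2
eps = 1e-8   # small regularizer to avoid division by zero

w = np.zeros(M)
for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]  # current data vector, most recent sample first
    e = d[n] - w @ u              # a-priori error
    # NLMS: LMS step normalized by the instantaneous energy of the data vector,
    # which makes the effective step size independent of the input power.
    w += mu * e * u / (eps + u @ u)

print(np.linalg.norm(w - h_true))  # residual tap error after adaptation
```

The normalization by `u @ u` is what distinguishes NLMS from plain LMS, whose update `w += mu * e * u` is sensitive to the input power.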
