Kernel Estimators of the Failure-Rate Function and Density Estimation: An Analogy

Abstract
In this article we explore methods by which the rate of convergence of the bias and the mean squared error of kernel estimators of the failure-rate function can be improved. We show that if the kernel is not restricted to be nonnegative and is suitably chosen, then the bias contribution to the asymptotic mean squared error can be eliminated to any required order, and the rate of convergence of the asymptotic mean squared error can be brought as close to n^{-1} as desired. The generalized jackknife method of combining estimators provides a procedure that achieves this goal. Some of our results parallel those obtained for density estimation, and the material on constructing special kernels via the generalized jackknife is also applicable to density estimation. Wherever appropriate, we make connections between these two topics.
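To make the idea concrete, the following sketch (not code from the article; function names and the choice of a Gaussian base kernel are illustrative assumptions) combines two kernel failure-rate estimates computed at bandwidths b and c·b. The combination cancels the leading O(b^2) bias term, which is equivalent to using a fourth-order kernel that takes negative values, as the abstract describes.

```python
import numpy as np

def hazard_kernel(t, x, b):
    """Kernel estimator of the failure rate lambda(t) = f(t)/(1 - F(t)),
    smoothing the increments of the empirical cumulative hazard:
    lambda_hat(t) = sum_i K_b(t - x_(i)) / (n - i + 1).
    Gaussian kernel; `t` may be a scalar or an array."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    t = np.atleast_1d(np.asarray(t, dtype=float))
    u = (t[:, None] - x[None, :]) / b
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel values
    w = 1.0 / (n - np.arange(n))                     # 1/(n - i + 1), i = 1..n
    return (k * w).sum(axis=1) / b

def jackknifed_hazard(t, x, b, c=2.0):
    """Generalized-jackknife combination of two estimates at bandwidths
    b and c*b.  Both have leading bias proportional to (bandwidth)^2, so
    (c^2 * lam_b - lam_cb) / (c^2 - 1) cancels that term; the implied
    kernel is of fourth order and is no longer nonnegative."""
    lam_b = hazard_kernel(t, x, b)
    lam_cb = hazard_kernel(t, x, c * b)
    return (c**2 * lam_b - lam_cb) / (c**2 - 1.0)
```

For standard exponential data the true failure rate is constant (equal to 1), so both estimates should be finite and the plain estimate positive on the interior of the sample range; the jackknifed estimate may dip negative because its implied kernel is not nonnegative.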
