Abstract
Let $(X_i, Y_i)$, $i = 1, \ldots, n$, be i.i.d. random vectors with $X_i \in \mathbb{R}^p$ and $Y_i \in \mathbb{R}$. Let $r_n(x)$ denote the $k$-nearest neighbor ($k$-NN) estimator of $r(x) = E(Y \mid X = x)$. Under appropriate conditions, we derive the rates of convergence of the bias and variance of $r_n(x)$, as well as its asymptotic normality; these results share some similarities with those for $k$-NN density estimates. The technique is to condition on $R_n$, the Euclidean distance between $x$ and its $k$th nearest neighbor among the $X_j$'s. Some comparison is made between the $k$-NN and kernel methods.
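As a concrete illustration of the estimator studied here, the unweighted $k$-NN regression estimate averages the $Y$-values attached to the $k$ sample points $X_j$ closest to $x$. The sketch below is a minimal, hypothetical implementation (the function name `knn_regress` and the unweighted form are assumptions for illustration, not the paper's exact construction); it also returns $R_n$, the distance to the $k$th nearest neighbor, which the paper conditions on.

```python
import numpy as np

def knn_regress(x, X, Y, k):
    """Illustrative unweighted k-NN regression estimate of r(x) = E(Y | X = x).

    x : query point, shape (p,)
    X : sample covariates, shape (n, p)
    Y : sample responses, shape (n,)
    k : number of neighbors

    Returns (r_n(x), R_n), where R_n is the Euclidean distance
    from x to its kth nearest neighbor among the X_j's.
    """
    d = np.linalg.norm(X - x, axis=1)   # Euclidean distances ||X_j - x||
    idx = np.argsort(d)[:k]             # indices of the k nearest neighbors
    return Y[idx].mean(), d[idx[-1]]    # average their Y's; R_n = kth distance
```

For example, with $X = (0, 1, 2, 3)^\top$ (as column vectors in $\mathbb{R}^1$), $Y = (0, 1, 2, 3)$, and $k = 2$, the estimate at $x = 0$ averages the two nearest responses.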