Local Properties of k-NN Regression Estimates
- 1 September 1981
- journal article
- Published by the Society for Industrial and Applied Mathematics (SIAM) in SIAM Journal on Algebraic Discrete Methods
- Vol. 2 (3), 311-323
- https://doi.org/10.1137/0602035
Abstract
Let $(X_i, Y_i)$, $i = 1, \cdots, n$, be i.i.d. bivariate random vectors such that $X_i \in \mathbb{R}^p$, $Y_i \in \mathbb{R}^1$. Suppose $r_n(x)$ denotes the k-nearest neighbor (k-NN) estimator of $r(x) = E(Y \mid X = x)$. Under appropriate conditions, we derive the rates of convergence for the bias and variance, as well as the asymptotic normality, of $r_n(x)$. These appear to share some similarities with the k-NN density estimates. The technique relies on conditioning on $R_n$, the Euclidean distance between $x$ and its $k$th nearest neighbor among the $X_j$'s. Some comparison is made between the k-NN and kernel methods.
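The estimator $r_n(x)$ studied in the paper averages the responses of the $k$ sample points nearest to $x$. A minimal sketch of that construction (function name and test data are illustrative, not from the paper):

```python
import numpy as np

def knn_regress(x, X, Y, k):
    """k-NN regression estimate r_n(x): average the Y-values of the k
    sample points X_j closest to x in Euclidean distance."""
    d = np.linalg.norm(X - x, axis=1)   # Euclidean distances ||X_j - x||
    idx = np.argsort(d)[:k]             # indices of the k nearest neighbors
    return Y[idx].mean()                # r_n(x) = (1/k) * sum of their Y's

# Toy data in R^1: Y_i = X_i exactly, so r(x) = x.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.array([0.0, 1.0, 2.0, 3.0])
print(knn_regress(np.array([1.1]), X, Y, k=2))  # averages Y at X = 1 and X = 2 -> 1.5
```

The distance $R_n$ that the paper conditions on is `d[idx[-1]]` in this sketch, the distance from $x$ to its $k$th nearest neighbor.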