Best subset selection, persistence in high-dimensional statistical learning and optimization under $l_1$ constraint
Open Access
- 1 October 2006
- journal article
- Published by Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 34 (5), 2367-2386
- https://doi.org/10.1214/009053606000000768
Abstract
Let $(Y,X_1,...,X_m)$ be a random vector. It is desired to predict $Y$ based on $(X_1,...,X_m)$. Examples of prediction methods are regression, classification using logistic regression or separating hyperplanes, and so on. We consider the problem of best subset selection and study it in the context $m=n^{\alpha}$, $\alpha>1$, where $n$ is the number of observations. We investigate procedures that are based on empirical risk minimization. It is shown that in common cases we should aim to find the best subset among those whose size is of order $o(n/\log(n))$. It is also shown that, in some "asymptotic sense," under a certain sparsity condition there is no loss in letting $m$ be much larger than $n$, for example $m=n^{\alpha}$, $\alpha>1$, compared with starting from the "best" subset of size smaller than $n$, regardless of the value of $\alpha$. We then study conditions under which empirical risk minimization subject to an $l_1$ constraint yields nearly the best subset. These results extend some recent results obtained by Greenshtein and Ritov. Finally, we present a high-dimensional simulation study of a "boosting-type" classification procedure.
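For concreteness, the display below is a sketch of the two ingredients the abstract refers to: empirical risk minimization under an $l_1$ constraint and the persistence property in the sense of Greenshtein and Ritov. Squared-error loss is used purely as an illustrative choice, and the symbols $L$, $c_n$, $B_n$ and $\hat\beta_n$ are notation introduced here, not taken from the paper. With $n$ observations $(Y_i,X_i)$, $X_i=(X_{i1},...,X_{im})$, one considers
$$\hat\beta_n \in \operatorname*{arg\,min}_{\|\beta\|_1 \le c_n}\ \frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i-\beta^{\top}X_i\bigr)^2, \qquad L(\beta)=E\bigl(Y-\beta^{\top}X\bigr)^2,$$
and, writing $B_n=\{\beta:\|\beta\|_1\le c_n\}$, the sequence of procedures $\{\hat\beta_n\}$ is called persistent relative to $\{B_n\}$ if
$$L(\hat\beta_n)-\inf_{\beta\in B_n}L(\beta)\ \xrightarrow{\ P\ }\ 0 \quad \text{as } n\to\infty,$$
even when the number of predictors grows like $m=n^{\alpha}$, $\alpha>1$.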
References
- High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 2006.
- Some theory for Fisher's linear discriminant function, "naive Bayes", and some alternatives when there are many more variables than observations. Bernoulli, 2004.
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 2004.
- Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics, 2004.
- Least angle regression. The Annals of Statistics, 2004.
- Population theory for boosting ensembles. The Annals of Statistics, 2004.
- On the Bayes-risk consistency of regularized boosting methods. The Annals of Statistics, 2004.
- Functional aggregation for nonparametric regression. The Annals of Statistics, 2000.
- Asymptotic distribution of the errors in scalar and vector quantizers. IEEE Transactions on Information Theory, 1996.
- Robust Regression: Asymptotics, Conjectures and Monte Carlo. The Annals of Statistics, 1973.