The Dantzig selector: Statistical estimation when p is much larger than n
- Open Access
- 1 December 2007
- journal article
- Published by Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 35 (6), 2313-2351
- https://doi.org/10.1214/009053606000001523
Abstract
In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y=X\beta+z$, where $\beta\in\mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n\ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, the Dantzig selector, which is a solution to the $\ell_1$-regularization problem \[\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad\|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,\] where $r=y-X\tilde{\beta}$ is the residual vector and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, \[\|\hat{\beta}-\beta\|_{\ell_2}^2\le C^2\cdot2\log p\cdot\Bigl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Bigr).\] Our results are nonasymptotic, and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates of $\beta$ are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which can easily be recast as a convenient linear program (LP).
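The abstract's closing remark is that the $\ell_1$ problem above can be recast as a linear program. Below is a minimal sketch of that recast in Python, assuming NumPy and SciPy are available; the function name `dantzig_selector`, the default `t=3.0`, and the toy data at the bottom are illustrative choices, not part of the paper. The standard split $\tilde{\beta}=u-v$ with $u,v\ge0$ turns the $\ell_1$ objective into a linear one and the $\ell_\infty$ constraint into $2p$ linear inequalities.

```python
import numpy as np
from scipy.optimize import linprog


def dantzig_selector(X, y, sigma, t=3.0):
    """Sketch of the Dantzig selector as a linear program.

    Solves  min ||b||_1  subject to  ||X^T (y - X b)||_inf <= delta,
    with delta = (1 + 1/t) * sqrt(2 log p) * sigma, via the split
    b = u - v with u, v >= 0.
    """
    _, p = X.shape
    delta = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma
    G = X.T @ X                      # Gram matrix (p x p)
    Xty = X.T @ y
    # Objective: sum(u) + sum(v), which equals ||b||_1 at the optimum.
    c = np.ones(2 * p)
    # |Xty - G(u - v)| <= delta, split into two one-sided stacks:
    #   G(u - v) <= delta + Xty   and   -G(u - v) <= delta - Xty.
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([delta + Xty, delta - Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v


if __name__ == "__main__":
    # Toy usage: sparse beta, n << p, unit-normed Gaussian design.
    rng = np.random.default_rng(0)
    n, p, sigma = 72, 256, 1.0
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)   # unit-normed columns, as the abstract assumes
    beta = np.zeros(p)
    beta[:8] = 5.0 * rng.standard_normal(8)
    y = X @ beta + sigma * rng.standard_normal(n)
    beta_hat = dantzig_selector(X, y, sigma)
    print("estimation error:", np.linalg.norm(beta_hat - beta))
```

Note that the recast is a pure LP (linear objective, linear inequalities), which is what makes the estimator computationally convenient compared with quadratic programs; any LP solver could stand in for SciPy's `linprog` here.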