Complexity regularization via localized random penalties
Open Access
- 1 August 2004
- Journal article
- Published by Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 32 (4), 1679-1697
- https://doi.org/10.1214/009053604000000463
Abstract
In this article, model selection via penalized empirical loss minimization in nonparametric classification problems is studied. Data-dependent penalties are constructed, which are based on estimates of the complexity of a small subclass of each model class, containing only those functions with small empirical loss. The penalties are novel since those considered in the literature are typically based on the entire model class. Oracle inequalities using these penalties are established, and the advantage of the new penalties over those based on the complexity of the whole model class is demonstrated.
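The following is a minimal numerical sketch of the idea described in the abstract, not the paper's actual construction: it assumes each model class has been discretized into a finite set of ±1-valued prediction vectors, estimates the empirical Rademacher complexity of only the localized subclass whose empirical loss falls within a hypothetical `radius` of the class minimum, and uses that estimate as the penalty in penalized empirical loss minimization. All names (`localized_rademacher_penalty`, `select_model`, `radius`) are illustrative; the paper's penalties involve precise constants and concentration arguments omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)


def empirical_loss(preds, y):
    """Empirical 0-1 loss of one classifier's +/-1 predictions."""
    return np.mean(preds != y)


def localized_rademacher_penalty(pred_matrix, y, radius, n_rounds=200):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    *localized* subclass: only those functions whose empirical loss is
    within `radius` of the best in the class.

    pred_matrix: (num_functions, n) array of +/-1 predictions, one row
    per function in a (hypothetical) finite discretization of the class.
    """
    losses = np.array([empirical_loss(p, y) for p in pred_matrix])
    # Keep only the small-empirical-loss subclass; it always contains
    # the empirical minimizer, so it is never empty.
    local = pred_matrix[losses <= losses.min() + radius]
    n = y.shape[0]
    # E_sigma sup_f (1/n) sum_i sigma_i f(x_i), estimated by averaging
    # over independent draws of Rademacher signs sigma.
    vals = [np.max(local @ rng.choice([-1.0, 1.0], size=n)) / n
            for _ in range(n_rounds)]
    return float(np.mean(vals))


def select_model(model_classes, y, radius=0.05):
    """Penalized empirical loss minimization: pick the class whose best
    empirical loss plus localized penalty is smallest."""
    scores = []
    for preds in model_classes:
        losses = np.array([empirical_loss(p, y) for p in preds])
        scores.append(losses.min() +
                      localized_rademacher_penalty(preds, y, radius))
    return int(np.argmin(scores))


# Toy usage with random labels and random finite model classes
# (purely illustrative; real model classes come from the application).
n = 100
y = rng.choice([-1.0, 1.0], size=n)
classes = [rng.choice([-1.0, 1.0], size=(2 ** k, n)) for k in range(2, 7)]
print("selected class index:", select_model(classes, y))
```

The localization step is the point of the construction: functions with large empirical loss are excluded before the complexity is estimated, so the penalty can be much smaller than one computed over the entire model class, which is the advantage the abstract refers to.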