Quantification of the Impact of Feature Selection on the Variance of Cross-Validation Error Estimation
Open Access
- 1 January 2007
- Research article
- Published by Springer Nature in EURASIP Journal on Bioinformatics and Systems Biology
- Vol. 2007 (1), 1-11
- https://doi.org/10.1155/2007/16354
Abstract
Given the relatively small number of microarrays typically used in gene-expression-based classification, all of the data must be used to train a classifier, and therefore the same training data are used for error estimation. The key issue regarding the quality of an error estimator in the context of small samples is its accuracy, which is most directly analyzed via the deviation distribution of the estimator, that is, the distribution of the difference between the estimated and true errors. Past studies indicate that, given a prior set of features, cross-validation does not perform as well in this regard as some other training-data-based error estimators. The purpose of this study is to quantify the degree to which feature selection increases the variance of the deviation distribution beyond the variance present in the absence of feature selection. To this end, we propose the coefficient of relative increase in deviation dispersion (CRIDD), which gives the relative increase in the deviation-distribution variance when feature selection is used, as opposed to using an optimal feature set without feature selection. The contribution of feature selection to the variance of the deviation distribution can be significant, accounting for over half of the variance in many of the cases studied. We consider linear discriminant analysis, 3-nearest-neighbor, and linear support vector machines for classification; sequential forward selection, sequential forward floating selection, and the t-test for feature selection; and k-fold and leave-one-out cross-validation for error estimation. We apply these to three feature-label models and to patient data from a breast cancer study. In sum, the cross-validation deviation distribution is significantly flatter when feature selection is performed than when cross-validation is applied to a given feature set. This is reflected by the observed positive values of the CRIDD, which quantifies the contribution of feature selection to the deviation variance.
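The abstract describes the CRIDD only verbally. The short Monte Carlo sketch below illustrates one plausible way to estimate such a quantity, assuming the form CRIDD = (Var_fs - Var_opt) / Var_fs, where Var_fs is the variance of the cross-validation deviation (estimated minus true error) when feature selection is re-run on each training sample, and Var_opt is the same variance when the classifier is trained on a fixed, known-optimal feature set. The synthetic Gaussian feature-label model, the sample sizes, and the use of univariate F-test ranking with LDA are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a CRIDD-style Monte Carlo experiment (assumed definition,
# not the paper's exact setup).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_features, n_informative, k_select = 100, 5, 5
n_train, n_test, n_repeats = 40, 5000, 200

def sample(n):
    """Two equiprobable Gaussian classes; only the first n_informative
    features carry a mean shift (illustrative feature-label model)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, n_features))
    X[:, :n_informative] += 0.8 * y[:, None]
    return X, y

def deviation(clf, X, y, X_test, y_test):
    """Cross-validation error estimate minus true error, with the true
    error approximated on a large independent test set."""
    cv_err = 1.0 - cross_val_score(clf, X, y, cv=5).mean()
    true_err = 1.0 - clf.fit(X, y).score(X_test, y_test)
    return cv_err - true_err

X_test, y_test = sample(n_test)
dev_fs, dev_opt = [], []
for _ in range(n_repeats):
    X, y = sample(n_train)
    # Feature selection inside the pipeline, so it is repeated on every
    # cross-validation training fold.
    fs_clf = make_pipeline(SelectKBest(f_classif, k=k_select),
                           LinearDiscriminantAnalysis())
    dev_fs.append(deviation(fs_clf, X, y, X_test, y_test))
    # "Optimal" feature set assumed known in advance: no selection step.
    opt_clf = LinearDiscriminantAnalysis()
    dev_opt.append(deviation(opt_clf, X[:, :n_informative], y,
                             X_test[:, :n_informative], y_test))

var_fs, var_opt = np.var(dev_fs), np.var(dev_opt)
cridd = (var_fs - var_opt) / var_fs
print(f"deviation variance with feature selection:    {var_fs:.5f}")
print(f"deviation variance on fixed feature set:      {var_opt:.5f}")
print(f"CRIDD (assumed form):                         {cridd:.3f}")
```

A positive value of the computed ratio corresponds to the flattening of the deviation distribution reported in the abstract; the specific model parameters above only serve to make the sketch self-contained.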