Asymptotic improvement of supervised learning by utilizing additional unlabeled samples: normal mixture density case

Abstract
This paper studies the effect of additional unlabeled samples on improving the supervised learning process. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that, under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning always yields better estimates than supervised learning alone. Experimental results are provided to verify the theoretical results.

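As a rough illustration of the combined supervised-unsupervised setting described above, the following minimal sketch estimates the parameters of a two-class, one-dimensional normal mixture from a small labeled set plus a larger unlabeled set using an EM-style procedure: labeled samples keep hard class responsibilities, unlabeled samples receive soft posteriors in the E-step, and the M-step pools both. The function name, the one-dimensional setting, and all parameter choices are illustrative assumptions, not the paper's estimator or bounds.

```python
import numpy as np

def semisupervised_em_1d(x_lab, y_lab, x_unl, n_iter=50):
    """Estimate a two-class 1-D normal mixture from labeled and unlabeled data.

    Illustrative sketch (not the paper's derivation): labeled samples have
    fixed 0/1 responsibilities; unlabeled samples get soft responsibilities.
    """
    # Initialize from the labeled data alone (the purely supervised estimate).
    pri = np.array([np.mean(y_lab == k) for k in (0, 1)])
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    var = np.array([x_lab[y_lab == k].var() + 1e-6 for k in (0, 1)])

    x_all = np.concatenate([x_lab, x_unl])
    # Hard responsibilities for labeled samples (class memberships are known).
    r_lab = np.eye(2)[y_lab]

    for _ in range(n_iter):
        # E-step: posterior class probabilities for the unlabeled samples.
        dens = np.stack([
            pri[k] * np.exp(-0.5 * (x_unl - mu[k]) ** 2 / var[k])
            / np.sqrt(2 * np.pi * var[k])
            for k in (0, 1)
        ], axis=1)
        r_unl = dens / dens.sum(axis=1, keepdims=True)

        # M-step: update priors, means, and variances from the pooled data.
        r = np.vstack([r_lab, r_unl])
        nk = r.sum(axis=0)
        pri = nk / nk.sum()
        mu = (r * x_all[:, None]).sum(axis=0) / nk
        var = (r * (x_all[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6

    return pri, mu, var

# Example usage with synthetic data: 40 labeled and 1000 unlabeled samples.
rng = np.random.default_rng(0)
x_lab = np.concatenate([rng.normal(-1, 1, 20), rng.normal(2, 1, 20)])
y_lab = np.repeat([0, 1], 20)
x_unl = np.concatenate([rng.normal(-1, 1, 500), rng.normal(2, 1, 500)])
print(semisupervised_em_1d(x_lab, y_lab, x_unl))
```

Comparing the variance of the resulting estimates against those obtained from the labeled samples alone gives an empirical feel for the asymptotic improvement the paper quantifies.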