Abstract
A new computational method for the maximum likelihood solution in factor analysis is presented. This method takes into account the fact that the likelihood function may not attain its maximum at a point of the parameter space where all unique variances are positive; instead, the maximum may be attained on the boundary of the parameter space, where one or more of the unique variances are zero. It is demonstrated that such improper (Heywood) solutions occur more often than is usually expected, and a general procedure for dealing with them is proposed. The proposed methods are illustrated using two small sets of empirical data, and results obtained from the analyses of many other data sets are reported. These analyses verify that the new computational method converges rapidly and that the maximum likelihood solution can be determined very accurately. A by-product of the method is a large-sample estimate of the variance-covariance matrix of the estimated unique variances, which can be used to construct approximate confidence intervals for communalities and unique variances.
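To fix ideas, the following sketch states the standard maximum likelihood factor analysis setup that the abstract refers to. The notation ($\Lambda$ for the loading matrix, $\Psi$ for the unique variances, $S$ for the sample covariance matrix) is the conventional one and is assumed here rather than quoted from the paper.

% Conventional ML factor analysis model: Lambda is the p x k matrix of
% factor loadings, Psi the diagonal matrix of unique variances.
\[
  \Sigma = \Lambda\Lambda^{\top} + \Psi,
  \qquad
  \Psi = \operatorname{diag}(\psi_1,\dots,\psi_p),
  \qquad
  \psi_i \ge 0 .
\]
% Maximum likelihood estimation under multivariate normality is
% equivalent to minimizing the discrepancy function
\[
  F(\Lambda,\Psi)
  = \log\lvert\Sigma\rvert
  + \operatorname{tr}\!\bigl(S\Sigma^{-1}\bigr)
  - \log\lvert S\rvert
  - p ,
\]
% where S is the sample covariance matrix of the p observed variables.
% A Heywood (improper) solution is one where the minimum of F lies on
% the boundary of the parameter space, i.e. \hat{\psi}_i = 0 for some i.

In this formulation, the boundary maxima described in the abstract correspond to minima of $F$ at which one or more constraints $\psi_i \ge 0$ are active; for standardized variables the communality of variable $i$ is $h_i^2 = 1 - \psi_i$, so an approximate confidence interval for $\psi_i$ translates directly into one for $h_i^2$.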