The Monte Carlo Newton-Raphson Algorithm

Abstract
It is shown that the Monte Carlo Newton-Raphson algorithm is a viable alternative to the Monte Carlo EM algorithm for finding maximum likelihood estimates based on incomplete data. Both Monte Carlo procedures require simulation from the conditional distribution of the missing data given the observed data, using methods such as Gibbs sampling and rejective sampling. The Newton-Raphson algorithm is computationally more efficient than the EM algorithm because it converges faster. We further refine the procedure to make it more stable numerically. Our stopping criterion is based on a chi-square test for zero gradient. We control the type II error by working out the number of Monte Carlo replications required to make the non-centrality parameter sufficiently large. The procedure is validated and illustrated using three examples involving binary, survival and count data. In the last example, the Monte Carlo Newton-Raphson procedure is eight times faster than a modified version of the Monte Carlo EM algorithm.
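
To make the idea concrete, the following is a minimal illustrative sketch of a Monte Carlo Newton-Raphson iteration on a toy incomplete-data problem (right-censored exponential lifetimes); this toy model, the function name mcnr, and the simulation sizes are assumptions for illustration only and are not one of the paper's three examples or the authors' implementation. The observed-data score is approximated by averaging the complete-data score over imputed missing data (Fisher's identity), the observed information by Louis' formula, iteration uses a simple step-halving safeguard for numerical stability, and stopping is by a chi-square test for zero gradient as described in the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy incomplete-data problem: exponential survival times with right censoring.
# Observed data: times y_i and indicators d_i (1 = event observed, 0 = censored).
true_rate = 0.5
n = 200
t = rng.exponential(1.0 / true_rate, size=n)   # latent survival times
c = rng.exponential(2.0, size=n)               # censoring times
y = np.minimum(t, c)
d = (t <= c).astype(float)

def mcnr(y, d, rate, n_sims=500, max_iter=50, alpha=0.05):
    """Monte Carlo Newton-Raphson for the exponential rate parameter (illustrative sketch).

    The observed-data score is approximated by the Monte Carlo average of the
    complete-data score over imputed censored lifetimes, and the observed
    information by Louis' formula.  Iteration stops when a chi-square test
    cannot reject a zero gradient; a larger n_sims reduces the Monte Carlo
    noise and hence the type II error of that test.
    """
    n = len(y)
    for it in range(max_iter):
        # Simulate the missing data: by memorylessness, a censored lifetime
        # given survival past y_i is y_i plus a fresh exponential draw.
        extra = rng.exponential(1.0 / rate, size=(n_sims, n)) * (1.0 - d)
        t_sim = y + extra                              # (n_sims, n) imputed complete data

        scores = n / rate - t_sim.sum(axis=1)          # complete-data score, one per replication
        s_hat = scores.mean()                          # Monte Carlo estimate of the observed-data score
        info_hat = n / rate**2 - scores.var(ddof=1)    # Louis' formula: E[I_comp] - Var[S_comp]

        # Chi-square test for zero gradient: the standardized Monte Carlo score
        # is approximately chi-square with 1 degree of freedom under H0.
        chi2_stat = n_sims * s_hat**2 / scores.var(ddof=1)
        if chi2_stat < stats.chi2.ppf(1 - alpha, df=1):
            return rate, it

        # Newton-Raphson update with step halving to keep the rate positive.
        step = s_hat / info_hat
        while rate + step <= 0:
            step /= 2.0
        rate = rate + step
    return rate, max_iter

est, iters = mcnr(y, d, rate=1.0)
print(f"MCNR estimate: {est:.3f} after {iters} iterations "
      f"(closed-form MLE: {d.sum() / y.sum():.3f})")

In this toy model the maximum likelihood estimate has a closed form (events divided by total follow-up time), which makes it easy to check that the Monte Carlo Newton-Raphson iterations stop in its neighbourhood; the chi-square stopping rule accepts once the remaining gradient is indistinguishable from Monte Carlo noise, which is why the abstract ties the number of replications to the non-centrality parameter of that test.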