Abstract
In the early 1960s, a Massachusetts program for testing neonates for phenylketonuria became the first organized effort to screen newborns for genetic or metabolic disease in order to identify treatable disorders before they became symptomatic. Since that time, newborn-screening programs have expanded to include additional genetic and nongenetic conditions and have been implemented in all U.S. states, as well as in other countries. Although the importance and clinical successes of such screening are well recognized, many issues in newborn-screening policy and practice remain controversial.1,2 Newborn screening in the United States is mandated and regulated by the states, with little . . .