An analytical approach to quantitative reconstruction of non-uniform attenuated brain SPECT

Abstract
An analytical approach to quantitative brain SPECT (single-photon-emission computed tomography) with non-uniform attenuation is developed. The approach accurately formulates the projection-transform equation as a summation of primary- and scatter-photon contributions. The scatter contribution can be estimated from multiple-energy-window samples and removed from the primary-energy-window data by subtraction. With the central-ray approximation, the approach models the primary contribution as a convolution of the attenuated source with the detector-response kernel at a constant depth from the detector. The attenuated Radon transform of the source can be efficiently deconvolved using the depth-frequency relation. The approach inverts the attenuated Radon transform exactly by Fourier transforms and series expansions. The performance of the analytical approach was studied for both uniform- and non-uniform-attenuation cases and compared to the conventional FBP (filtered-backprojection) method by computer simulations. A patient brain X-ray image was acquired by a CT (computed-tomography) scanner and converted to an object-specific attenuation map for the 140 keV photon energy. The mathematical Hoffman brain phantom was used to simulate the emission source and was resized so that it was completely surrounded by the skull of the CT attenuation map. The detector-response kernel was obtained from measurements of a point source at several depths in air from a parallel-hole collimator of a SPECT camera. The projection data were simulated from the object-specific attenuating source, including the depth-dependent detector response. A quantitative improvement (>5%) in reconstructing the data was demonstrated with non-uniform attenuation compensation, as compared to uniform attenuation correction and the conventional FBP reconstruction. The computing time was less than 5 min on an HP/730 desktop computer for an image array of 128² × 32 reconstructed from 128 projections of 128 × 32 size.
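
As a minimal sketch of the projection model summarized above (the notation here is assumed for illustration and is not taken from the paper): with detector-bin coordinate $s$, view angle $\theta$, emission source $f$, and attenuation map $\mu$, the measured projection may be written as
$$
p(s,\theta) \;=\; p_{\mathrm{pri}}(s,\theta) \;+\; p_{\mathrm{sca}}(s,\theta),
$$
where the scatter term $p_{\mathrm{sca}}$ is estimated from the additional energy windows and subtracted, and the primary term is modeled, under the central-ray approximation, as a one-dimensional convolution of the attenuated Radon transform of the source with the detector-response kernel $h$ taken at a constant depth $d_0$ from the detector:
$$
p_{\mathrm{pri}}(s,\theta) \;\approx\; \int h(s - s';\, d_0)\,\big[\mathcal{R}_{\mu} f\big](s',\theta)\,\mathrm{d}s',
\qquad
\big[\mathcal{R}_{\mu} f\big](s,\theta) \;=\; \int f\big(\mathbf{x}(s,t)\big)\,
\exp\!\Big(-\!\int_{t}^{\infty} \mu\big(\mathbf{x}(s,t')\big)\,\mathrm{d}t'\Big)\,\mathrm{d}t .
$$
Under these assumptions, it is the attenuated Radon transform $\mathcal{R}_{\mu} f$ that the approach recovers by deconvolution (via the depth-frequency relation) and then inverts exactly by Fourier transforms and series expansions.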