Real-time computation of depth from defocus
- 19 January 1996
- proceedings article
- Published by SPIE-Intl Soc Optical Eng
- Vol. 2599, 14-26
- https://doi.org/10.1117/12.230388
Abstract
A new range sensing method based on depth from defocus is described. It projects an illumination pattern to give texture to the object surface. The image of the scene is then split into two images with different focus settings and sensed simultaneously. The contrast maps of the two images are computed and compared pixel by pixel to produce a dense depth map. The illumination pattern and the focus operator that extracts the contrast map are designed to achieve the finest spatial resolution of the computed depth map and to maximize the response of the focus operator. Because the algorithm uses only local operations such as convolution and lookup tables, the depth map can be computed rapidly on data-flow image processing hardware. Since the sensor projects the illumination pattern and detects the two differently focused images from exactly the same direction, it avoids the shadowing and occlusion problems of triangulation-based methods and stereo. Its speed and accuracy are demonstrated using a prototype system, which generates 512 by 480 depth maps at 30 frames/sec with a depth resolution of 0.3% relative to the object distance. The proposed sensor is composed of off-the-shelf components and outperforms commercial range sensors through its ability to produce complete three-dimensional shape information at video rate.
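The per-pixel pipeline the abstract describes (convolve a focus operator over each of the two differently focused images, then compare the resulting contrast maps pixel by pixel) can be sketched as below. This is a minimal illustration, not the paper's implementation: the 3x3 Laplacian focus operator and the normalized contrast ratio are assumptions, and the paper's calibrated lookup table from contrast ratio to metric depth is replaced here by the raw ratio.

```python
import numpy as np

def focus_measure(img, eps=1e-6):
    """Contrast map: absolute response of a 3x3 discrete Laplacian
    focus operator (an illustrative choice, not the paper's operator)."""
    p = np.pad(img, 1, mode="edge")  # pad so output matches input size
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap) + eps  # eps avoids division by zero downstream

def depth_from_defocus(near_img, far_img):
    """Pixel-wise comparison of the two contrast maps.

    Returns the normalized contrast ratio in (-1, 1): near +1 where the
    near-focused image is sharper, near -1 where the far-focused image
    is sharper. A calibrated lookup table would map this ratio to depth.
    """
    c_near = focus_measure(near_img)
    c_far = focus_measure(far_img)
    return (c_near - c_far) / (c_near + c_far)
```

Because both the convolution and the final per-pixel comparison are purely local, this maps directly onto the kind of data-flow image processing hardware the abstract mentions.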