An overview of geometric modeling using active sensing

Abstract
A brief overview is given of techniques for constructing descriptions of 3D solid objects based on active sensing, in which scene illumination is manipulated by controlling the light source or the imaging geometry. The light source can be controlled to cast light in a particular spatial pattern that reveals the surface structure of the imaged objects. The pattern can be a single point, a single line segment, a set of parallel line segments, or an orthogonal grid; the spatially encoded scene is then observed from one or more camera positions. Two types of analysis can relate the encoded images to the 3D configuration of objects. In the first, the relative position of the light source and the camera is obtained through a calibration process, correspondences between projector features and image features are established, and the 3D positions of the encoded surface points are recovered by triangulation. In the second, the orientation of the projected stripes in the image plane is related to surface orientation and structure without feature correspondences being established.
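To make the first type of analysis concrete, the following is a minimal sketch of light-stripe triangulation, assuming a calibrated pinhole camera and a single projected light plane whose equation in camera coordinates is known from calibration. The function name, the intrinsic matrix, and the plane parameters are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def triangulate_stripe_point(pixel, K, plane_n, plane_d):
    """Recover the 3D point where the projected light plane meets the surface.

    pixel   : (u, v) image coordinates of a point on the imaged stripe
    K       : 3x3 camera intrinsic matrix (from calibration)
    plane_n : unit normal of the light plane in camera coordinates
    plane_d : offset so that the plane satisfies plane_n . X + plane_d = 0
    """
    # Back-project the pixel into a viewing ray through the camera center.
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # Intersect the ray X = t * ray with the calibrated light plane.
    t = -plane_d / (plane_n @ ray)
    return t * ray

# Hypothetical calibration values, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
plane_n = np.array([0.707, 0.0, -0.707])  # light-plane normal (camera frame)
plane_d = 0.5                             # plane offset in metres

point_3d = triangulate_stripe_point((350.0, 260.0), K, plane_n, plane_d)
print(point_3d)
```

Repeating this intersection for every stripe pixel, and for every stripe in the pattern, yields the sampled 3D surface points; the second type of analysis avoids this per-point correspondence by reasoning directly about how stripe orientation deforms in the image.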
