Representation space: an approach to the integration of visual information

Abstract
An approach to representing objects viewed over long periods of time and with changing resolutions is presented. The basic strategy is to apply different representations as they become appropriate. As a result, the model of an object typically goes through a sequence of representations as new data are gathered and processed. One such sequence might start with a crude blob description of an initially detected object, include a detailed structural model derived from a set of high-resolution images, and end with a semantic label based on the object's description and the sensor system's task. This evolution in representation is guided by a structure referred to as representation space: a lattice of representations that is traversed as new information about an object becomes available. A representation is associated with an object only after it has been judged to be valid. One approach to evaluating the validity of an object's description is described, based on the temporal stability of the description. These ideas are illustrated with results from a system that constructs and refines models of outdoor objects detected in sequences of range data.
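As a rough illustration of the traversal idea (not the authors' implementation), the Python sketch below advances through a hypothetical blob → structural → semantic lattice only after a description has remained unchanged for a few frames, echoing the temporal-stability validity test mentioned above; the lattice, window size, and function names are all assumptions.

```python
# Illustrative sketch only; the paper gives no code. The lattice, the
# three-frame stability window, and all identifiers are assumptions.

def is_stable(history, window=3):
    """Temporal-stability test: the last `window` descriptions are identical."""
    return len(history) >= window and len(set(history[-window:])) == 1


def traverse(lattice, start, frames, window=3):
    """Advance to a finer representation only once the current description
    has been judged valid (stable over recent frames)."""
    state, history = start, []
    for description in frames:
        history.append(description)
        if is_stable(history, window) and lattice.get(state):
            state = lattice[state][0]   # move one step through the lattice
            history = []                # begin validating the new representation
    return state


# Hypothetical sequence: crude blob -> structural model -> semantic label
lattice = {"blob": ["structural"], "structural": ["semantic"], "semantic": []}
print(traverse(lattice, "blob", ["b", "b", "b", "s", "s", "s"]))  # -> "semantic"
```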
