A model for fusion of spatial information in dynamic vision

Abstract
Fusion of 3-D spatial information obtained by a dynamic vision system is addressed. The representation of this information proves critical to forming a solution. An object-centered representation is proposed that encodes, in image-registered maps, the relative distances of objects to a set of scene referents. This representation facilitates fusion of dynamic spatial information without direct use or knowledge of camera motion and points the way toward a new model for dynamic vision.
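The core idea of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the class, the choice of depth ratios as the "relative distance", and the simple weighted-average fusion rule are all assumptions made for illustration. The point it demonstrates is that per-pixel relative distances to scene referents can be stored and fused on the image grid without any camera-motion information.

```python
# Illustrative sketch (assumed, not from the paper): an image-registered map
# storing, per pixel, relative distances to K scene referents.
from dataclasses import dataclass, field

@dataclass
class RelativeDepthMap:
    """Per-pixel relative distances to K scene referents, on the image grid."""
    width: int
    height: int
    referent_depths: list  # assumed known depths of the K scene referents
    maps: list = field(init=False)  # maps[k][y][x] = depth(pixel)/depth(referent k)

    def __post_init__(self):
        k = len(self.referent_depths)
        self.maps = [[[None] * self.width for _ in range(self.height)]
                     for _ in range(k)]

    def update(self, x, y, pixel_depth):
        """Record the relative (ratio) distance of one pixel to every referent."""
        for k, d_ref in enumerate(self.referent_depths):
            self.maps[k][y][x] = pixel_depth / d_ref

    def fuse(self, x, y, pixel_depth, weight=0.5):
        """Blend a new relative-distance observation with the stored one.
        No camera pose or motion enters this computation."""
        for k, d_ref in enumerate(self.referent_depths):
            new = pixel_depth / d_ref
            old = self.maps[k][y][x]
            self.maps[k][y][x] = new if old is None else (1 - weight) * old + weight * new

# Usage: two observations of the same pixel are fused without any camera pose.
m = RelativeDepthMap(width=4, height=3, referent_depths=[2.0, 5.0])
m.update(1, 2, pixel_depth=4.0)   # ratio to referent 0 is 4.0/2.0 = 2.0
m.fuse(1, 2, pixel_depth=6.0)     # blended: 0.5*2.0 + 0.5*3.0 = 2.5
print(m.maps[0][2][1])            # → 2.5
```

The ratio-based encoding is one plausible reading of "relative distances"; a difference-based encoding (pixel depth minus referent depth) would fit the same map structure.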
