Computing visual attention from scene depth

Abstract
Visual attention is the ability to rapidly detect the interesting parts of a given scene. Inspired by biological vision, the principle of visual attention is used with a similar goal in computer vision. Several previous works deal with the computation of visual attention from images provided by standard video cameras, but little attention has been devoted so far to scene depth as a source for visual attention. The investigation presented in this paper aims to extend the visual attention model to the scene depth component. The first part of the paper is devoted to the integration of depth into the computational model built around conspicuity and saliency maps. The second part is devoted to experimental work in which results of visual attention, obtained from the extended model for various 3D scenes, are presented. The results speak for the usefulness of the enhanced computational model.
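The abstract refers to a computational model built around conspicuity and saliency maps, extended with a depth channel. The paper's own implementation is not reproduced here; the following is a minimal sketch of the general idea (function names, scales, and channel weights are all assumptions): each feature channel, including depth, is mapped to a conspicuity map by a center-surround (difference-of-Gaussians) operator and normalized, and the saliency map is a weighted sum of the conspicuity maps.

```python
import numpy as np

def _gauss_kernel(sigma):
    """1D Gaussian kernel, truncated at 3 sigma and normalized."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = _gauss_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def conspicuity(channel, center_sigma=2.0, surround_sigma=8.0):
    """Center-surround (difference-of-Gaussians) response, normalized to [0, 1].

    The two scales are illustrative choices, not values from the paper.
    """
    m = np.abs(blur(channel, center_sigma) - blur(channel, surround_sigma))
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def saliency(intensity, depth, w_intensity=0.5, w_depth=0.5):
    """Saliency map as a weighted sum of conspicuity maps.

    Depth enters as just another channel alongside intensity; equal
    weights are an assumption for illustration.
    """
    return (w_intensity * conspicuity(intensity)
            + w_depth * conspicuity(depth))
```

For example, a small bright patch in the intensity image and a nearby object in the depth map each produce a local peak in the combined saliency map, so depth can make an object salient even when it has no intensity contrast.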