Simulating biological vision with hybrid neural networks

Abstract
We present an example of how vision systems can be modeled and designed by integrating a top-down, computationally-based approach with a bottom-up, biologically-motivated architecture. The specific visual processing task we address is occlusion-based object segmentation: the discrimination of objects using cues derived from object interposition. We construct a model of object segmentation using hybrid neural networks, distributed parallel systems consisting of neural units modeled at different levels of abstraction. We show that such networks are particularly useful for systems which can be modeled using the combined top-down/bottom-up approach. Our hybrid model is capable of discriminating objects and stratifying them in relative depth. In addition, our system can account for several classes of human perceptual phenomena, such as illusory contours. We conclude that hybrid systems serve as a powerful paradigm for understanding the information processing strategies of biological vision and for constructing artificial vision-based applications.
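As a rough illustration of the kind of architecture the abstract describes, the following Python sketch pairs a bottom-up layer of simple rate-coded units with a top-down module that orders two regions in relative depth. It is a minimal sketch under stated assumptions, not the paper's model: the function names (bottom_up_edge_units, top_down_depth_ordering), the contrast-based units, and the boundary-strength interposition rule are all illustrative choices of the editor.

import numpy as np

# Illustrative sketch only; the paper's actual architecture is not specified here.
# "Hybrid" is rendered as two interacting parts at different levels of abstraction:
#   1. a bottom-up layer of simple rate-coded units responding to local contrast
#      (a stand-in for biologically-motivated feature detectors), and
#   2. a top-down computational rule that stratifies two regions in relative depth
#      from a crude occlusion (interposition) cue.

def bottom_up_edge_units(image, threshold=0.2):
    """Rate-coded units: sigmoid response to local image contrast."""
    gy, gx = np.gradient(image.astype(float))
    contrast = np.hypot(gx, gy)
    return 1.0 / (1.0 + np.exp(-(contrast - threshold) * 10.0))

def top_down_depth_ordering(region_a, region_b, edge_response):
    """Computational rule (assumed, not from the paper): the region whose support
    carries the stronger boundary response is taken to lie in front."""
    a_strength = edge_response[region_a].mean()
    b_strength = edge_response[region_b].mean()
    return "A in front of B" if a_strength > b_strength else "B in front of A"

if __name__ == "__main__":
    # Toy two-object scene: a bright square partially occluding a dimmer one.
    image = np.zeros((32, 32))
    image[8:24, 4:20] = 0.5      # far object
    image[12:28, 12:28] = 1.0    # near (occluding) object
    edges = bottom_up_edge_units(image)
    region_a = (slice(12, 28), slice(12, 28))  # near object's support
    region_b = (slice(8, 24), slice(4, 20))    # far object's support
    print(top_down_depth_ordering(region_a, region_b, edges))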
