Impossible Objects and the things we do first in vision

Abstract
Interpreting an image involves postulating that it represents a structure drawn from some class C. Attention tends to focus on methods where C is large (e.g. "rigid structures") or small (e.g. "faces"). However, scene analysis programs where C is intermediate have interesting properties. In particular, it is often neither easy nor worthwhile to establish the full consequences of the postulates underlying analysis. The price of this incompleteness is that inconsistent interpretations are occasionally accepted. This parallels the psychological phenomenon of seeing "Impossible Objects", so Impossible Objects may indicate that human vision too uses intermediate postulates and develops only a key subset of their implications. If this interpretation is correct, and demonstrations suggest that it is, then the particular pictures which cause us such problems should indicate what the relevant postulates are and to what level they are developed. Experiments and demonstrations suggest that the postulates concern angles between edges, and that implications about edges' orientations, but not their depths, are automatically derived and checked. This makes sense ecologically and computationally. Ecologically, precise depth information would help only in cases involving improbable alignments. Computationally, a description of edge orientation paves the way to obtaining various other kinds of information as required.
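To make the idea of "developing only a key subset of the implications" concrete, the following is a minimal illustrative sketch, not the paper's actual procedure: edges carry candidate orientation labels, junctions impose hypothetical compatibility tables standing in for the angle postulates, and a local constraint filter prunes labels. Such a filter can accept label assignments that have no globally consistent realization, which is the sense in which an inconsistent interpretation (an Impossible Object) slips through. All names, labels, and constraint tables below are invented for illustration.

```python
def filter_edge_labels(edges, junctions, allowed):
    """Prune edge-label domains until every junction still admits a tuple.

    edges:     {edge_id: set of candidate orientation labels}
    junctions: {junction_id: [edge_id, ...]}          # edges meeting there
    allowed:   {junction_id: set of label tuples}     # hypothetical tables
               # encoding which label combinations the angle postulates permit
    Returns the pruned domains, or None if some edge loses every label,
    i.e. the drawing is locally inconsistent under the postulates.
    """
    domains = {e: set(labels) for e, labels in edges.items()}
    changed = True
    while changed:
        changed = False
        for j, edge_ids in junctions.items():
            for i, e in enumerate(edge_ids):
                # keep a label only if some allowed tuple at j supports it
                supported = set()
                for tup in allowed[j]:
                    if tup[i] in domains[e] and all(
                        tup[k] in domains[edge_ids[k]]
                        for k in range(len(edge_ids))
                    ):
                        supported.add(tup[i])
                if supported != domains[e]:
                    domains[e] = supported
                    changed = True
                    if not supported:
                        return None  # no interpretation survives the check
    return domains


# Toy usage with made-up labels and constraint tables:
edges = {"a": {"up", "flat"}, "b": {"up", "flat"}, "c": {"up", "flat"}}
junctions = {"J1": ["a", "b"], "J2": ["b", "c"]}
allowed = {"J1": {("up", "flat")}, "J2": {("up", "up"), ("flat", "flat")}}
print(filter_edge_labels(edges, junctions, allowed))
# -> {'a': {'up'}, 'b': {'flat'}, 'c': {'flat'}}
```

Because the filter checks only orientation labels locally (and says nothing about depth), it is cheap, which mirrors the abstract's claim that deriving and checking edge orientations, but not depths, is the ecologically and computationally sensible subset of implications to develop.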
