LEARNING STRUCTURED VISUAL DETECTORS FROM USER INPUT AT MULTIPLE LEVELS
- 1 July 2001
- journal article
- Published by World Scientific Pub Co Pte Ltd in International Journal of Image and Graphics
- Vol. 1 (3), 415-444
- https://doi.org/10.1142/s0219467801000256
Abstract
In this paper, we propose a new framework for the dynamic construction of structured visual object/scene detectors for content-based retrieval. In our system, the Visual Apprentice, a user defines visual object/scene models via a multiple-level Definition Hierarchy: a scene consists of objects, which consist of object-parts, which consist of perceptual-areas, which consist of regions. The user trains the system by providing example images/videos and labeling their components according to the hierarchy she defines (e.g., an image of two people shaking hands contains two faces and a handshake). As the user trains the system, visual features (e.g., color, texture, motion) are extracted from each example provided, for each node of the user-defined hierarchy. Various machine learning algorithms are then applied to the training data at each node to learn classifiers. The best classifiers and features are then automatically selected for each node (using cross-validation on the training data). The process yields a Visual Object/Scene Detector (e.g., for a handshake), which consists of a hierarchy of classifiers, as defined by the user. The Visual Detector classifies new images/videos by first automatically segmenting them and then applying the classifiers according to the hierarchy: regions are classified first, followed by perceptual-areas, object-parts, and objects. We discuss how the concept of Recurrent Visual Semantics can be used to identify domains in which learning techniques such as the one presented can be applied. We then present experimental results using several hierarchies for classifying images and video shots (e.g., baseball video, and images that contain handshakes, skies, etc.). These results, which show good performance, demonstrate the feasibility and usefulness of dynamic approaches for constructing structured visual object/scene detectors from user input at multiple levels.
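To make the hierarchy-of-classifiers idea concrete, below is a minimal sketch, not the authors' implementation: it assumes scikit-learn-style classifiers, pre-computed per-node feature vectors, and a hypothetical `HierarchyNode` class; the candidate learners, the gating rule (a node fires only if its own classifier and all of its children fire), and the synthetic data are illustrative assumptions only.

```python
# Sketch of a Visual Apprentice-style hierarchy of classifiers.
# Assumptions (not from the paper): scikit-learn estimators, per-node
# feature vectors already extracted, and the HierarchyNode structure below.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier


class HierarchyNode:
    """One node of the user-defined Definition Hierarchy
    (region, perceptual-area, object-part, object, or scene)."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.classifier = None

    def train(self, X, y):
        """Select the best classifier for this node by cross-validation
        on the labeled examples the user provided for this node."""
        candidates = [DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=3)]
        best_score = -np.inf
        for clf in candidates:
            score = cross_val_score(clf, X, y, cv=5).mean()
            if score > best_score:
                best_score, self.classifier = score, clf
        self.classifier.fit(X, y)

    def classify(self, features_by_node):
        """Bottom-up classification: children (e.g., regions) are classified
        before their parent (e.g., a perceptual-area); here a node fires only
        if its own classifier and all of its child detectors fire."""
        for child in self.children:
            if not child.classify(features_by_node):
                return False
        x = np.asarray(features_by_node[self.name]).reshape(1, -1)
        return bool(self.classifier.predict(x)[0])


# Stand-in training data (random features/labels) just to make the sketch run.
rng = np.random.default_rng(0)
face = HierarchyNode("face")
hands = HierarchyNode("hands")
handshake = HierarchyNode("handshake", children=[face, hands])
for node in (face, hands, handshake):
    node.train(rng.normal(size=(40, 8)), rng.integers(0, 2, size=40))

query = {name: rng.normal(size=8) for name in ("face", "hands", "handshake")}
print(handshake.classify(query))
```

In the actual framework each node may combine its children's outputs more richly than the simple all-children gate used here; the sketch only illustrates the per-node classifier selection via cross-validation and the bottom-up order of classification described in the abstract.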