Building qualitative event models automatically from visual input
- 27 November 2002
- proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 1303, 350-355
- https://doi.org/10.1109/iccv.1998.710742
Abstract
We describe an implemented technique for generating event models automatically based on qualitative reasoning and a statistical analysis of video input. Using an existing tracking program which generates labelled contours for objects in every frame, the view from a fixed camera is partitioned into semantically relevant regions based on the paths followed by moving objects. The paths are indexed with temporal information so objects moving along the same path at different speeds can be distinguished. Using a notion of proximity based on the speed of the moving objects and qualitative spatial reasoning techniques, event models describing the behaviour of pairs of objects can be built, again using statistical methods. The system has been tested on a traffic domain and learns various event models, expressed in the qualitative calculus, which represent human-observable events. The system can then be used to recognise subsequent selected event occurrences or unusual behaviours.
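As an illustration of the kind of pairwise, speed-dependent qualitative description the abstract refers to, the sketch below derives a symbolic state (proximity plus relative motion) for two tracked objects in each frame and collapses repeated states into a short event history. It is a minimal Python sketch under assumed inputs (a centroid and velocity per frame); the `Track` class, the one-second proximity horizon and the state names are illustrative choices, not the representation or qualitative calculus used in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """Per-frame state of one tracked object (illustrative format, not the paper's)."""
    x: float
    y: float
    vx: float
    vy: float

def proximity(a: Track, b: Track, secs: float = 1.0) -> str:
    """Speed-scaled proximity: 'close' if either object could cover the gap
    between them within `secs` seconds at its current speed."""
    gap = math.hypot(a.x - b.x, a.y - b.y)
    reach = secs * max(math.hypot(a.vx, a.vy), math.hypot(b.vx, b.vy))
    return "close" if gap <= reach else "distant"

def relative_motion(a: Track, b: Track) -> str:
    """Qualitative relative motion: the dot product of relative position and
    relative velocity tells whether the separation is shrinking or growing."""
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    closing = dx * dvx + dy * dvy
    if closing < -1e-6:
        return "approaching"
    if closing > 1e-6:
        return "receding"
    return "stable"

def event_history(frames_a, frames_b):
    """Run-length-code the per-frame qualitative states into a symbolic trace,
    the sort of sequence a statistical event model could be learned from."""
    history = []
    for a, b in zip(frames_a, frames_b):
        state = (proximity(a, b), relative_motion(a, b))
        if not history or history[-1] != state:
            history.append(state)
    return history

# Toy usage: one stationary object, one object driving towards and past it.
still = [Track(0.0, 0.0, 0.0, 0.0)] * 6
mover = [Track(50.0 - 12.0 * t, 0.0, -12.0, 0.0) for t in range(6)]
print(event_history(still, mover))
```

On this toy trace the history reduces to distant/approaching, then close/approaching, then close/receding, i.e. a compact symbolic sequence describing the behaviour of the pair rather than raw coordinates.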