Prototype of a vision-based gaze-driven man-machine interface

Abstract
This paper describes preliminary work on a non-intrusive, gaze-driven interface for man-machine interaction based on vision techniques. The computer screen location currently pointed at by the user is estimated using a linear model of the image-to-screen mapping, which can be easily auto-calibrated at run time from the current relative position of camera and user and from the camera settings. A simple active deformable model of the eye is defined, which is used both to track the user's movements and to estimate the current position of the user's pupil in the image in a decoupled fashion. Experiments show that the proposed approach is fast and accurate enough for the design of low-cost man-machine interfaces, with applications ranging from assistance to disabled people to multimedia systems.
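As a rough illustration of the kind of linear image-to-screen mapping described above, the following Python sketch fits an affine map from pupil image coordinates to screen coordinates by least squares, then uses it to predict the gazed-at screen point. The calibration points, function names, and screen resolution are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

# Hypothetical calibration data: pupil-centre positions in the image (pixels)
# recorded while the user fixates known on-screen targets. Values are
# illustrative assumptions only.
pupil_pts = np.array([[102.0, 88.0], [310.0, 90.0],
                      [105.0, 240.0], [308.0, 238.0]])
screen_pts = np.array([[0.0, 0.0], [1279.0, 0.0],
                       [0.0, 1023.0], [1279.0, 1023.0]])

def fit_linear_map(image_xy, screen_xy):
    """Fit an affine image-to-screen map s = A @ [x, y, 1] by least squares."""
    n = len(image_xy)
    design = np.hstack([image_xy, np.ones((n, 1))])  # homogeneous image coords
    # Solve design @ A.T ~= screen_xy for the 2x3 matrix A.
    a_t, *_ = np.linalg.lstsq(design, screen_xy, rcond=None)
    return a_t.T

def gaze_to_screen(A, pupil_xy):
    """Map a pupil position in the image to a screen location."""
    return A @ np.array([pupil_xy[0], pupil_xy[1], 1.0])

A = fit_linear_map(pupil_pts, screen_pts)
print(gaze_to_screen(A, (205.0, 164.0)))  # roughly the screen centre
```

In this reading, re-running the fit on freshly gathered fixation pairs is what "auto-calibration at run time" would amount to: whenever camera or user moves, a handful of new target fixations suffices to re-estimate the map.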
