Vision-based multisensor machine perception system for autonomous aircraft landing approach

Abstract
A machine perception system for aircraft and helicopters that uses multiple sensor data streams for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, the artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its position relative to runways or helicopter landing spots. The vision data are required to improve position estimates when GPS is available only in the S/A (selective availability) mode. The architectural design of the machine perception system allows the connection of further processing modules, for example a radar sensor, through the pre-defined interface structure. The system presented also incorporates a control module which uses the estimated vehicle states for navigation and control in order to conduct automatic flight and landing. The system has been tested in real time within a hardware-in-the-loop simulation: simulated aircraft measurements corrupted by noise and other characteristic sensor errors were fed into the machine perception system, while the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
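The abstract does not specify the estimator used to fuse the inertial, GPS, and vision measurements. The following is a minimal illustrative sketch, assuming a standard linear Kalman filter in one dimension: a dead-reckoned prediction from accelerometer data is corrected by a noisy position fix (e.g. an S/A-degraded GPS or vision-derived measurement). All function names and noise parameters here are hypothetical, not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, accel, z, dt, q=0.5, r=25.0):
    """One predict/update cycle for state x = [position, velocity].

    accel: accelerometer input, z: position measurement,
    q: process-noise scale, r: measurement-noise variance.
    (Illustrative values; the paper does not give its filter design.)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.outer(B, B)                  # process noise via accel channel
    # Predict from inertial data
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update with the position fix
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + r                     # innovation covariance
    K = (P @ H.T) / S                       # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage: track a craft descending at 2 m/s from 100 m altitude,
# with position fixes corrupted by 5 m (1-sigma) noise.
rng = np.random.default_rng(0)
x, P = np.array([100.0, 0.0]), np.eye(2) * 10.0
true_pos, true_vel, dt = 100.0, -2.0, 0.1
for _ in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 5.0)
    x, P = kalman_step(x, P, accel=0.0, z=z, dt=dt)
print(round(x[0], 1))  # filtered altitude, close to the true 90 m
```

In the full system described by the abstract, the same predict/update pattern would run over the complete vehicle state, with separate update steps for each sensor (GPS, artificial horizon, vision-based relative position) as measurements arrive.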