Visual road following without 3D reconstruction

Abstract
The traditional approach to visual road following involves reconstructing a 3D model of the road. The model is in a world- or vehicle-centered coordinate system, and it is symbolic, iconic, or a combination of both. Road-following commands (as well as other commands, e.g., for obstacle avoidance) are then generated from this 3D model. Here we discuss an alternative approach in which a minimal road model is generated. The model contains only task-relevant information, and a minimum of vision processing is performed to extract this information in the form of visual cues represented in the 2D image coordinate system. This approach leads to rapid and continuous update of the road model from the visual data, and it results in inexpensive, fast, and robust computations. Road following is achieved by servoing on the visual cues in the 2D model, yielding a tight coupling of perception and action. In this paper, two specific examples of road following that use this approach are presented. In the first example, we show that road-following commands can be generated from visual cues consisting of the projection into the image of the tangent point on the edge of the road, along with the optical flow of this point. Using these cues, the resulting servo loop is very simple and fast. In the second example, we show that lane markings can be robustly tracked in real time while confining all processing to the 2D image plane. Neither knowledge of vehicle motion nor a calibrated camera is required. This system has been used to drive a vehicle at up to 80 km/h under various road conditions. The algorithm runs at a 15 Hz update rate.
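The servo loop in the first example could be sketched roughly as follows. This is a minimal illustration only: the function names, gains, and sign conventions are assumptions for exposition, not the paper's actual control law, which is not specified in the abstract. The idea is to steer on the tangent point's horizontal offset in the image (position term) and its optical flow (rate term), with no 3D reconstruction.

```python
def steering_command(tangent_x: float, flow_x: float,
                     desired_x: float = 0.0,
                     k_p: float = 0.8, k_d: float = 0.3) -> float:
    """Hypothetical 2D image-plane servo for road following.

    tangent_x : horizontal image coordinate of the projected tangent
                point on the road edge (normalized, 0 = image center).
    flow_x    : horizontal optical flow of that point.
    Returns a steering command; the sign convention (positive = steer
    toward positive x) is an assumption for this sketch.
    """
    error = tangent_x - desired_x
    # Position term drives the tangent point toward the desired image
    # location; the flow term damps the response, as in a PD controller.
    return -(k_p * error + k_d * flow_x)

# Example: tangent point displaced to the right, no flow yet.
cmd = steering_command(0.2, 0.0)
```

Because the cue and the command both live in the 2D image plane, each iteration of such a loop needs only the tracked point and its flow, which is consistent with the fast, continuous update rate reported in the abstract.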
