Abstract
The paper presents a new, quantitative, vision-based approach to road following. It is grounded in the theoretical framework of the recently developed optical flow-based visual field theory. Building on this theory, the authors show that motion commands can be generated from a visual feature, or cue, consisting of the projection into the image of the tangent point on the edge of the road, together with the optical flow of that point. Using this cue, they propose several vision-based control approaches. The visual cue offers several advantages: (1) it is extracted directly from the image, i.e., there is no need to reconstruct the scene; (2) it can be used in a tight perception-action loop to generate action commands directly; (3) it is sufficient for many road-following situations; (4) it has a scientific basis; and (5) the related computations are relatively simple and thus suitable for real-time applications. For each control approach, the authors derive the corresponding steering commands.
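As a rough illustration of how such a cue might drive steering, the sketch below implements a generic proportional-derivative law on the tangent point's image bearing and its optical flow. The function name, gains, and sign conventions are illustrative assumptions, not the commands derived in the paper.

```python
def steering_rate(theta_tp, theta_tp_dot, theta_ref=0.0, k_p=0.8, k_d=0.3):
    """Illustrative steering law based on the tangent-point cue (assumption,
    not the paper's derived controller).

    theta_tp      -- bearing (rad) of the road-edge tangent point in the image,
                     positive to the right of the heading direction
    theta_tp_dot  -- optical flow of the tangent point, i.e. d(theta_tp)/dt (rad/s)
    theta_ref     -- desired tangent-point bearing (0 = straight ahead)
    k_p, k_d      -- hypothetical gains; positive output means steer right
    """
    error = theta_tp - theta_ref
    # Proportional term pulls the tangent point toward the reference bearing;
    # the flow term adds damping/anticipation from the cue's image motion.
    return k_p * error + k_d * theta_tp_dot


# Example: tangent point 0.15 rad to the right, drifting slowly back toward center.
print(steering_rate(0.15, -0.01))  # ~0.117 rad/s, i.e. steer gently to the right
```

Note that both inputs come straight from the image (the cue's position and its optical flow), so no scene reconstruction is needed, consistent with advantage (1) above.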
