
Subject: [DPRG] simpleminded monocular road detection
From: Michael Menefee mikem at mnetwork.org
Date: Sat May 6 13:15:54 CDT 2006

Here's another way to do it:

-----Original Message-----
From: dprglist-bounces at dprg.org [mailto:dprglist-bounces at dprg.org] On Behalf
Of Chris Jang
Sent: Wednesday, May 03, 2006 8:52 PM
To: dprglist at dprg.org
Subject: [DPRG] simpleminded monocular road detection

I tried hacking a very simple monocular road detection algorithm for a 
robot test drive yesterday. It is incredibly primitive. Everything is done 
in post-processing to visualize what can work. But as there is 
little computation involved, no major issues are foreseen for an onboard 
implementation.
http://golem5.org/robot1/video/drive02.mpg (about 2.1 MB)

Upper left image - unmodified video from network camera on robot
Lower left image - quantized to 8 colors monochrome and then oil painted
Right side image - rough distance map of road edge from robot
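
The quantization step in the lower-left panel can be sketched roughly like 
this (assuming an 8-bit monochrome frame as a NumPy array; the oil-paint 
smoothing filter is omitted, and the function name is illustrative):

```python
import numpy as np

def quantize_gray(gray, levels=8):
    """Quantize an 8-bit monochrome image down to `levels` gray values.

    Each pixel is snapped to the bottom of its bucket, e.g. with 8
    levels the buckets are 0, 32, 64, ... 224.
    """
    step = 256 // levels
    return (gray // step) * step
```

The coarse banding this produces makes large uniform regions (like a road 
surface) stand out, which is presumably why it helps the visualization.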

The sensor readings are adjusted for about one second lag in the video 
stream off the camera. Yes, this is horrible. But in principle, any vision 
system, even an isochronous one, will have some lag from what the camera 
sees in real time and when the robot has processed the camera images 
sufficiently to make decisions. In practice, this means that a robot must 
compensate by feeding forward from what it perceives through cameras.
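
One simple way to feed forward, assuming the robot has wheel odometry, is 
to dead-reckon its pose ahead by the known camera lag before acting on a 
stale frame. This is only a sketch under a constant-velocity assumption; 
all the parameter names are illustrative, not from the original post:

```python
import math

def compensate_lag(x, y, heading, v, omega, lag_s):
    """Dead-reckon the robot pose forward by `lag_s` seconds.

    x, y       -- position when the frame was captured
    heading    -- heading in radians at capture time
    v, omega   -- linear and angular velocity, assumed constant over the lag
    Returns the predicted pose at decision time.
    """
    x += v * math.cos(heading) * lag_s
    y += v * math.sin(heading) * lag_s
    heading += omega * lag_s
    return x, y, heading
```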

I think it's pretty obvious what this simpleminded scheme is. The video 
images are segmented into blobs. Each vertical column of pixels 
corresponds to a radial direction from the camera. The algorithm searches 
from the bottom of the frame upwards until the color intensity decreases. 
That height is presumed to mark the edge between the brighter road and the 
grass and dirt. In other words, it looks for the first local maximum in 
each radial direction from the camera. There is a lot of noise, as this 
does not attempt to compensate for shadows, lens flare, or ground clutter. 
There is no attempt to perform any kind of spectral transform. The only 
thing going on is pixel counting.
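
The per-column scan described above might look something like this (a 
minimal sketch, assuming a monochrome frame; the blob segmentation is not 
shown, and the `min_drop` threshold is a hypothetical noise guard):

```python
import numpy as np

def road_edge_rows(gray, min_drop=20):
    """For each pixel column, scan upward from the bottom of the frame and
    return the row where brightness first drops by at least `min_drop`,
    taken as the road/grass boundary in that radial direction.

    `gray` is an HxW array of 0-255 intensities; rows increase downward,
    so scanning upward means decreasing row index.
    """
    h, w = gray.shape
    edges = np.zeros(w, dtype=int)
    for col in range(w):
        column = gray[:, col].astype(int)
        edge = 0  # default: no drop found, road fills the whole column
        for row in range(h - 1, 0, -1):            # bottom to top
            if column[row] - column[row - 1] >= min_drop:
                edge = row                         # first local maximum
                break
        edges[col] = edge
    return edges
```

Since each column yields one edge row, the output is a W-element profile 
that maps directly onto the radial distance map shown in the video.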

Anyway, it can work pretty well (if the lighting is right and the road is 
reasonably uniform - unrealistic assumption?). Later in the video, the 
distance map picks up my car on the left and a rowing boat on the right. It 
will be interesting to see how much performance can be improved with 
better algorithms.

What's effectively going on is a projection from the camera image pixels 
down to the ground plane. If accurate topography were available from 
laser rangefinders, etc., then the projection could be quite accurate, as 
the ground shape would be known. Then it would really be an inverse 
texture mapping.
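
Under the simpler flat-ground assumption, the projection reduces to basic 
trigonometry with a pinhole camera model. A sketch (all parameter names 
here - horizon row, focal length in pixels, camera height - are 
illustrative assumptions, not values from the original post):

```python
import math

def pixel_row_to_ground_distance(row, horizon_row, focal_px, cam_height_m):
    """Map an image row below the horizon to a ground distance (meters)
    in front of the camera, assuming flat ground and a pinhole model.

    Rows increase downward, so rows below the horizon have row > horizon_row.
    """
    dr = row - horizon_row
    if dr <= 0:
        return math.inf  # at or above the horizon: no ground intersection
    angle = math.atan2(dr, focal_px)      # depression angle below horizon
    return cam_height_m / math.tan(angle)
```

For example, with the camera 1 m off the ground and a 100-pixel focal 
length, a pixel 100 rows below the horizon looks down at 45 degrees and 
projects to a point 1 m ahead.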

Maybe this is overly optimistic. But I'm encouraged. It is possible to 
gather useful information from a single low-cost optical video camera 
under natural light. I hope to improve the road detection to the point 
where the robot can follow a road. I believe that for indoor conditions 
allowing controlled or structured lighting, cameras become very 
practical. Outdoors, it is more difficult. Some combination of sensor 
systems, each complementing the weaknesses of the others, is probably 
best (just as with many of the DARPA Grand Challenge robots).