DPRG: Questions for the experts

Subject: DPRG: Questions for the experts
From: Tom Raz Tom.Raz at email.swmed.edu
Date: Mon Nov 1 09:57:07 CST 1999

Thanks David!
The odometry tracking really simplifies the design.  This is why I'm going with
the "walking with eyes closed" approach.  The world model tells the robot where
all the toys go and helps it figure out its starting position.

Here's an example:  setting the table

Model database:  House is mapped.  "Table set" state is mapped.  "Ordered"
state is mapped, telling the robot where each object belongs so it can find them.

Visual comparison (or via beacons):  "Where am I?"

Model database search:  "Find path to kitchen", a series of objectives that
get the robot to the kitchen.

Odometry tracking with sonar collision avoidance: "Go to X", where X is the
next objective, just like T-time objectives.  The robot needs to know what to
do if there are too many obstacles in the way to reach an objective (e.g., the
dog knocked over a lamp and it's completely blocking the way).

At the kitchen, the robot takes a known position, then opens the cabinet
and picks up a dish (unbreakable I hope).  This motion is programmed,
maybe by using a special input gripper to record motion, or a wand to
identify areas for manipulation.

Model database search: "Find path to dining room".

Odometry tracking with sonar collision avoidance to dining room.

Learned motion to place the dish.

Back to kitchen and repeat until human says "Go away slowpoke,
I'll set the table myself!"
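
Here's a rough sketch of how I imagine that sequence hanging together in
code.  Every function name below is a placeholder for a subsystem described
above (model database, odometry navigation, recorded motions); none of it
exists yet:

    /* Hypothetical sketch of the "set the table" sequence.  All of these
     * functions are placeholders for the world-model database, odometry
     * navigation, and recorded-motion subsystems described above. */

    #include <stdbool.h>

    bool table_is_set(void);             /* compare current state to the "table set" model */
    void localize(void);                 /* vision or beacons: "Where am I?"                */
    int  find_path(const char *room, double waypoints[][2]);  /* model database search      */
    bool drive_to(double x, double y);   /* odometry + sonar collision avoidance            */
    void play_motion(const char *name);  /* recorded gripper motion (pick/place dish)       */

    void set_the_table(void)
    {
        double path[32][2];
        int n, i;

        localize();
        while (!table_is_set()) {
            n = find_path("kitchen", path);
            for (i = 0; i < n; i++)
                if (!drive_to(path[i][0], path[i][1]))
                    return;              /* path completely blocked (the fallen-lamp case) */
            play_motion("pick_dish");

            n = find_path("dining_room", path);
            for (i = 0; i < n; i++)
                if (!drive_to(path[i][0], path[i][1]))
                    return;
            play_motion("place_dish");
        }
    }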

Do you think this is possible?  I'm trying to get all these ducks in a row
before I start purchasing parts and assembling the machine.  The
software development will be a perpetual thing but I want to have enough
figured out so that I don't have to re-invent the hardware too much.
--Tom

>>> David Philip Anderson <dpa at io.isem.smu.edu> 10/29/99 06:58PM >>>

Howdy

Tom Raz writes:

> I haven't built anything with computer vision yet, but have been watching
> sites doing research.  My current plan is not real-time vision.  It will use
> snapshots of the area to compute an internal model.  Then the robot
> will navigate and manipulate objects determined by the model.  This is
> like peeking at a room, then walking with your eyes closed.  However,
> as the robot moves around it will compare sonar ranges with simulated
> sonar of the model to correct itself in real-time.  At various points, it
> will stop and take more shots from new perspectives.
> 
> (Until I saw SR04 navigate T-time at the contest, I was skeptical about
> doing it this way.  I wondered if the errors accruing in between snapshots
> would be too much.  The cool thing about watching SR04 navigate T-time
> was that it did not follow walls. Instead it plotted a course to a destination
> and adjusted itself along the way whenever it encountered an obstruction.
> This gave me much hope)

What SR04 is doing is actually much simpler than what is described above.

SR04 has no world model.  Instead it uses a reactive model of navigation,
which produces approximately the same effect, although with considerably 
more robustness in terms of a dynamic environment.  Perhaps a blow-by-blow
description of a simple navigation task might make this more clear.

Reactive Navigation

Assume two sensor behaviors are running.  The first attracts the robot
to bright lights, and the second pushes the robot away from IR reflections.
Both behaviors run at 20 Hz; that is, both test their sensors and output
commands to the robot motor subsystem 20 times per second.  The IR behavior
has higher priority than the photo behavior, so the photo commands are only
passed along to the motors when the IR is not active.  The photo behavior
attempts to point the robot toward a visible light source so that light falls
evenly on two photo cells mounted in the center of the robot, pointed
left and right of straight forward.
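
In rough code, the photo behavior might look something like the sketch below
(the sensor calls and the deadband threshold are made up for illustration,
not SR04's actual source):

    /* Hypothetical photo behavior: steer toward the brighter side so that
     * light falls evenly on the left and right photocells.  Runs at 20 Hz. */

    #define PHOTO_DEADBAND 10      /* assumed tolerance on the left/right difference */

    enum turn { NO_REQUEST, TURN_LEFT, TURN_RIGHT };

    extern int read_photo_left(void);    /* assumed A/D read, brighter = larger value */
    extern int read_photo_right(void);

    enum turn photo_behavior(void)
    {
        int diff = read_photo_left() - read_photo_right();

        if (diff >  PHOTO_DEADBAND) return TURN_LEFT;    /* light is off to the left  */
        if (diff < -PHOTO_DEADBAND) return TURN_RIGHT;   /* light is off to the right */
        return NO_REQUEST;                               /* balanced: no command      */
    }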

                    /---\
                   /     \              Robot moves straight ahead when
                  | SR04  |      <----- Photo detects nothing and
                  |       |      <----- IR detects nothing
                 -----------    



                     ^
                   /   \
                  /     \
                 /       \       <---- obstacle
                /         \
               /           \
                    
                     0           <----- bright lamp



Initially the robot is pointed directly at the light, so the photo behavior
generates no commands, and the obstacle is too far away for the IR to sense,
so the IR behavior generates no commands.  In this case, the robot falls
back on its default behavior, which is to go straight.
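
That priority scheme amounts to something like the following loop (again a
sketch of the structure, not SR04's actual source): the highest-priority
behavior with an active request wins, and the default, driving straight,
runs only when every behavior is silent.

    /* Hypothetical 20 Hz arbitration loop: IR outranks photo, and the
     * default (drive straight) is used only when both are silent. */

    enum turn { NO_REQUEST, TURN_LEFT, TURN_RIGHT };

    extern enum turn ir_behavior(void);      /* higher priority                            */
    extern enum turn photo_behavior(void);   /* lower priority                             */
    extern void      drive(enum turn t);     /* motors: turn, or go straight on NO_REQUEST */
    extern void      wait_for_tick(void);    /* 1/20 second scheduler tick                 */

    void arbitrate(void)
    {
        for (;;) {
            enum turn cmd = ir_behavior();    /* IR gets first say                     */
            if (cmd == NO_REQUEST)
                cmd = photo_behavior();       /* then photo                            */
            drive(cmd);                       /* NO_REQUEST = default: straight ahead  */
            wait_for_tick();
        }
    }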

As it approaches the obstacle, infrared light from the IR LEDs mounted on
the robot is reflected off of the obstacle and into the IR sensors, and the
IR behavior issues a turn command, either left or right depending on which
of the two IR sensors saw the reflections first.  Let's assume it turns left.
Now, 20 times per second the IR behavior will request a left turn until 
eventually the robot has turned far enough that the sensors can no longer 
"see" the obstacle, and the IR behavior stops generating commands.

During this time, the difference between the two visible light readings from 
the photo cells increases and the photo behavior begins to output a turn right
request, back toward the light.  This request is ignored, because the IR
request to turn left has higher priority.  

                                  
                               --- \
                     ^       \      \
                   /   \      \ SR04 \   <---- IR says turn left
                  /     \      \      \  <---- Photo says turn right (ignored)
                 /       \      
                /         \
               /           \
                    
                     0
                    

When the IR requests have rotated the robot far enough that the IR sensors
stop detecting the reflected light, the IR behavior becomes inactive and
stops requesting a turn.  The photo behavior's request to turn right, which
has been active this whole time, now becomes the highest-priority active
request.  It is passed along to the motors, and the robot begins to turn
right, towards the light.

After turning right only a small amount the IR sensors see the obstacle again
and again enforce a small turn to the left.  In this manner the robot follows
the edge of the obstacle, with the photo pulling the robot towards the lamp
and the IR pushing it away from the obstacle.  It proceeds in this manner
along the edge of the obstacle until the obstacle has been cleared and the
IR detections cease.


                     ^
                   /   \
                  /     \
                 /       \ 
                /         \
               /           \
                                  /  ---
                     0           /       /   <---- IR says nothing.
                                /  SR04 /    <---- Photo says turn right
                                       /


Once the obstruction has been cleared, the photo behavior, now uninterrupted
by the IR detections, turns the robot until it faces the light source.  At this
point the robot again defaults to going straight ahead, towards the light. 
Depending on how tightly it can turn, it will either run into the light
source or circle it like a moth.

Now this behavior looks for all the world as if the robot has an internal
model of the space it is in, the location of the light source, and the
location and size of the obstacle, and has skillfully plotted a course
around the obstacle to the target.  In fact, it "knows" none of the above,
and is only responding in a reactive fashion to its sensor inputs.  But the
result is the same.  In fact, the reactive navigation is more robust because
it treats dynamic and static obstacles the same.  The V-shaped wall in the
above example could just as easily have been my legs stepping out in front of
the robot, even walking along with it to block its path.  The robot would
continue to turn away from the IR detections until they ceased, and then
turn back towards the lamp.

The T-Time Task

For the T-Time task that Tom references above, the behavior is identical
with the exception that the photo behavior is replaced by the odometry
behavior.  This works very much like the photo behavior.  The robot calculates
its position as the global variables X and Y (in inches), and its rotation as
the global variable THETA (in degrees or radians) from shaft encoders mounted
on the drive motors.  This position is updated 20 times per second, relative
to the robot's position when it was last turned on or reset.  This calculated
position is our "sensor" input.
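
For reference, the usual differential-drive odometry update looks roughly
like this (the encoder calls and constants are assumptions; the real code
surely differs in detail):

    /* Sketch of a standard differential-drive odometry update, called 20
     * times per second.  X, Y in inches, THETA in radians. */

    #include <math.h>

    #define TICKS_PER_INCH 100.0    /* assumed encoder resolution       */
    #define WHEEL_BASE      10.0    /* assumed wheel separation, inches */

    double X, Y, THETA;             /* global position estimate         */

    extern long read_left_encoder_delta(void);     /* ticks since last call */
    extern long read_right_encoder_delta(void);

    void update_odometry(void)
    {
        double left  = read_left_encoder_delta()  / TICKS_PER_INCH;
        double right = read_right_encoder_delta() / TICKS_PER_INCH;

        double distance = (left + right) / 2.0;          /* forward travel    */
        double dtheta   = (right - left) / WHEEL_BASE;   /* change in heading */

        THETA += dtheta;
        X += distance * cos(THETA);
        Y += distance * sin(THETA);
    }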

The lamp in the above example is replaced by a "target" position specified
in inches as target_X and target_Y.  Again, 20 times per second, the robot
determines if the target is to the left or right of its current location and
heading, or straight ahead.
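
That left/right/straight decision is just a comparison of the current heading
with the bearing to the target.  A sketch, assuming THETA in radians and a
small deadband:

    /* Sketch of the odometry behavior's steering decision: compare the
     * bearing to (target_X, target_Y) with the current heading THETA. */

    #include <math.h>

    #define HEADING_DEADBAND 0.05   /* radians; assumed "close enough" tolerance */
    #define PI 3.14159265358979

    enum turn { NO_REQUEST, TURN_LEFT, TURN_RIGHT };

    extern double X, Y, THETA;           /* from the odometry update */
    extern double target_X, target_Y;

    enum turn odometry_behavior(void)
    {
        double bearing = atan2(target_Y - Y, target_X - X);
        double error   = bearing - THETA;

        while (error >  PI) error -= 2.0 * PI;    /* wrap into -pi..pi */
        while (error < -PI) error += 2.0 * PI;

        if (error >  HEADING_DEADBAND) return TURN_LEFT;    /* target off to the left  */
        if (error < -HEADING_DEADBAND) return TURN_RIGHT;   /* target off to the right */
        return NO_REQUEST;                                  /* straight ahead          */
    }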

Initially the target is straight ahead as before, the odometry and IR
behaviors issue no commands, and the robot defaults to moving straight
ahead.  The robot nears the target and the IR senses the obstacle and
issues a turn left command, as before.  The robot begins to turn away
from the obstacle and the odometry behavior, "sensing" that the target
is now to its right, issues a turn right command, which is ignored as long
as the higher priority IR behavior senses the obstacle.  Thereafter, the
two behaviors contend for control of the motors, with the odometry pulling
the robot toward the target and the higher priority IR pushing it away from
the obstacle, until the obstacle is cleared.

Once again, when the obstacle has been cleared, the odometry behavior, which
has been issuing turn right commands all along, is allowed to control the
motors, as the IR behavior falls silent.  Only at this point do the photo
and odometry behaviors differ.  Since the odometry knows the actual distance
to the target, it can request a deceleration from the motors so that the robot
slows down as it approaches the target.  When the target has been reached,
the navigation software can then select a new target from a global target
list, update the target_X and target_Y variables, and off we go again.
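
A sketch of that last piece, with the radii, speeds, and waypoints invented
for illustration:

    /* Hypothetical speed control and target sequencing for the odometry
     * behavior: slow down near the target, then advance to the next one. */

    #include <math.h>

    #define SLOW_RADIUS   24.0    /* inches: assumed distance at which to decelerate */
    #define ARRIVE_RADIUS  3.0    /* inches: assumed "close enough" to the target    */
    #define CRUISE_SPEED 100
    #define CREEP_SPEED   30

    extern double X, Y, target_X, target_Y;
    extern void set_speed(int speed);             /* assumed motor-subsystem call */

    static const double target_list[][2] = {      /* hypothetical waypoints, in inches */
        { 96.0, 0.0 }, { 96.0, 48.0 }, { 0.0, 48.0 },
    };
    static int target_index;

    void update_target(void)
    {
        int count = (int)(sizeof target_list / sizeof target_list[0]);
        double dx = target_X - X, dy = target_Y - Y;
        double distance = sqrt(dx * dx + dy * dy);

        if (distance < ARRIVE_RADIUS) {           /* target reached: pick the next one */
            target_index = (target_index + 1) % count;
            target_X = target_list[target_index][0];
            target_Y = target_list[target_index][1];
        } else {
            set_speed(distance < SLOW_RADIUS ? CREEP_SPEED : CRUISE_SPEED);
        }
    }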

In no sense does the robot "know" anything about its environment, or try to
build and maintain an internal world model, nor does it need to discriminate
moving from static objects in order to navigate successfully.   My wife and
I took SR04 down to the Science Place museum during the State Fair and
allowed it to explore the exhibit areas which were crowded with fair visitors.
It was really cool to see it nimbly picking its way through the moving sea of
feet and legs, seeking out a target.  It only got kicked once!  Seems like
this would be difficult if not impossible to do with a world model approach:
attempting to navigate by matching sensor inputs to a constantly changing
internal representation.  It's just too hard to do it that way, and not at all
necessary.

Not that it wouldn't be fun to try...   I'd like to hear if anyone has made
it work.  But I have my doubts.  Like our friend Bill James (who boo'd me 
last week when I finished the CanCan contest, and he was one of the judges!) 
I think you might very well end up with a lovely robot sculpture that will 
never actually do anything.  (nudge-nudge-wink-wink-say-no-more)  Looks good
in the newspaper, though.  As long as you can keep those pesky mpeg movies
off the web page, nobody will ever know!   :-)

Cruise missiles use this kind of navigation, or used to, with two important
caveats.  First, the world model is given to them in extreme detail, rather
than generated from sensor inputs.  Second, the features they are recognizing
are large scale geographic features like rivers, roads, and railways, and
they are viewed from above from a known perspective and viewing angle.  I've
been told that the newest versions just use GPS (Global Positioning System)
satellite data and have abandoned the internal-model geographic recognition
approach as too difficult and unreliable.  That doesn't mean we can't do
it.  It just means it's probably really hard.  Not a bite-sized task.

peace,
dpa
