
Subject: [DPRG] nBot progress
From: David Anderson davida at smu.edu
Date: Thu Jul 5 02:23:43 CDT 2012

Don,

Yes, that would be the way to do it. And a rotating head on nBot 
would certainly look cool, especially a little R2D2-style hemisphere.

But what it really gets you, it seems to me, is a wider field of 
view. The robot can look back and forth and see things it can't see 
looking straight ahead. That can also be accomplished by just adding 
more fixed sensors, without all the added mechanical and algorithmic 
complexity of a rotating head. I'd argue that's probably a more 
robust solution as well.


I've actually thought about this quite a bit. Suppose you have a 
sensor head on a full two-axis, servo-controlled platform, maybe with 
a bunch of high-bandwidth sensors (cameras, radar, etc.) that you 
could then aim at things of interest. If you were controlling the 
camera mount yourself, it would be simple and fairly intuitive to 
know where to point it while the robot navigates an unknown 
environment, because you have an enormous amount of knowledge about 
the world and how it works.


Translating that intuition into algorithms that let a robot do the 
same thing autonomously has turned out to be an incredibly difficult 
task that is, at least, beyond me and my meager abilities. What this 
takes us into is the high-level concept that psychologists call 
"attention." How do we humans decide what to pay attention to? How 
should a robot decide? It's a pretty deep subject.


The one obvious application I've seen of that so far is the CMUCam's 
ability to track a color blob (an orange traffic cone) by driving a 
two-axis, servo-driven mount with signals generated from the camera, 
which keep the blob in the center of its frame. The robot then just 
reads the servo angles as the camera tracks the blob, and so knows 
the azimuth and elevation and, by extension, the distance to the 
target.
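
As a back-of-the-envelope illustration (my own sketch, not the CMUCam 
code, and the camera height and angle conventions here are 
assumptions): if the camera sits at a known height and the cone base 
is on the floor, the range falls straight out of the tilt angle:

    #include <math.h>
    #include <stdio.h>

    #define CAM_HEIGHT_CM 30.0  /* assumed camera height above floor */

    /* tilt_deg: tilt servo angle, 0 = level, negative = looking down */
    double ground_range_cm(double tilt_deg)
    {
        double depression = -tilt_deg * M_PI / 180.0; /* rads below level */
        if (depression <= 0.0)
            return -1.0;      /* looking level or up: no floor intersect */
        return CAM_HEIGHT_CM / tan(depression);  /* range = h/tan(theta) */
    }

    int main(void)
    {
        /* camera panned 20 deg left (azimuth), tilted 15 deg down */
        printf("azimuth %g deg, range %.1f cm\n",
               20.0, ground_range_cm(-15.0));
        return 0;
    }

The pan servo angle is the azimuth directly; only the tilt angle is 
needed for the range.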


So in that case, the robot drives around with the camera head 
rotating independently of the robot and its fixed sensors, seeking 
out and tracking orange blobs as the objects of its "attention." And 
the robot navigates towards a cone, once it's found, by following 
what the camera servos are doing. In other words, you let the head do 
what it wants and then rotate the body around underneath it, trying 
to keep it aligned with the head, whenever higher-priority avoidance 
behaviors will allow. It works because we have defined "attention" in 
a very narrow way.
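
Here's a hedged sketch of that body-under-the-head idea (the behavior 
names, gains, and arbitration scheme are my own illustration, not 
nBot's actual code): a follow-the-head behavior steers to zero out 
the pan servo angle, and a simple fixed-priority arbiter lets 
avoidance win:

    typedef struct {
        int active;  /* does this behavior want control? */
        int speed;   /* forward speed command            */
        int turn;    /* turn command, + = rotate left    */
    } motor_request;

    #define KP_TURN 2   /* proportional steering gain (assumed) */
    #define CRUISE 40   /* cruise speed (assumed)               */

    /* rotate the body underneath the head: steer until pan = 0 */
    motor_request follow_head(int pan_deg, int target_seen)
    {
        motor_request r = { 0, 0, 0 };
        if (target_seen) {
            r.active = 1;
            r.speed = CRUISE;
            r.turn = KP_TURN * pan_deg;
        }
        return r;
    }

    /* fixed-priority arbiter: avoidance beats head-following */
    motor_request arbitrate(motor_request avoid, motor_request follow)
    {
        motor_request stop = { 1, 0, 0 };
        if (avoid.active) return avoid;
        if (follow.active) return follow;
        return stop;
    }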


But for nBot's general navigation in an unknown environment, I think 
a multi-directional array of fixed sensors probably has advantages 
over a rotating sensor head in everything but the "coolness" factor. 
It would be cool. I've thought a bit about mounting the sensors on 
servos to keep them level (rotating in the vertical plane), driven by 
the inverse of the robot's IMU tilt angle. That would be useful to 
keep the sensors parallel to the ground as the robot tilts back and 
forth while maneuvering. But so far I haven't needed it, which is 
just as well, as it would probably make the robot look a little like 
one of those bobble-head dolls...
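
For what it's worth, the leveling itself would be nearly a one-liner. 
A minimal sketch (the servo neutral and scale are assumptions, not 
nBot's numbers): drive the sensor's tilt servo with the negative of 
the IMU pitch, so the beam stays parallel to the floor:

    #define SERVO_NEUTRAL_US 1500  /* assumed pulse width, sensor level */
    #define US_PER_DEG 10          /* assumed servo usec per degree     */

    /* robot pitches forward +theta -> tilt the sensor back -theta */
    int level_servo_pulse(double imu_pitch_deg)
    {
        int us = SERVO_NEUTRAL_US - (int)(imu_pitch_deg * US_PER_DEG);

        if (us < 1000) us = 1000;  /* clamp to typical servo travel */
        if (us > 2000) us = 2000;
        return us;
    }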


happy roboting
David




On 07/04/2012 10:02 AM, don clay wrote:
> To maintain your fixed angle, couldn't you keep track of how much 
> you've rotated and add that to your angles?
>
>
> On 6/24/2012 5:55 PM, David Anderson wrote:
