
Subject: DPRG: Deep Thoughts and rambling
From: Jim Brown jgbrown at spd.dsccc.com
Date: Mon Jan 12 09:30:55 CST 1998

> > If one were to make a robot based on these observations, their
> > robot would go through a baby state, and start out with no
> > knowledge.  It would lay there and just look around.  Then as
> > time progresses (perhaps in microseconds) the baby robot would
> > begin to gain knowledge.
> 
> Why should it look around? What makes it look around? And
> at what is it going to look? And why? That we can call
> intuitive, all right!

Looking around could probably be attributed to an inherent
behavior.  The act of looking around would be experimentation.
The result of looking around would be information gathering.
The result of information gathering should be pattern analysis,
and the result of pattern analysis should be that they learn
their surroundings (familiarity).
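The chain above (innate behavior, experimentation, information
gathering, pattern analysis, familiarity) could be sketched as a
toy loop.  This is just an illustration of the idea, not any real
robot API; all the names here are made up.

```python
# Innate behavior triggers sensing; repeated observations
# accumulate, and "familiarity" falls out of how often a thing
# has been seen.  Illustrative sketch only.
from collections import Counter

class BabyRobot:
    def __init__(self):
        self.observations = Counter()  # information gathered so far

    def look_around(self, scene):
        """Inherent behavior: observe everything currently in view."""
        for thing in scene:
            self.observations[thing] += 1  # information gathering

    def familiarity(self, thing):
        """Pattern analysis: how often has this been seen before?"""
        total = sum(self.observations.values())
        return self.observations[thing] / total if total else 0.0

robot = BabyRobot()
for _ in range(5):
    robot.look_around(["crib", "mobile", "ceiling"])
robot.look_around(["stranger"])

print(robot.familiarity("ceiling"))   # seen often -> familiar
print(robot.familiarity("stranger"))  # seen once -> unfamiliar
```

With microsecond loop cycles, as suggested above, such a robot
would "learn its surroundings" very quickly compared to a baby.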
   
> I was lucky to have a small child in my house for the last two
> weeks. The child is about one year old and I am observing her.

A one-year-old will be much different from what I am talking about.
 
> The baby used to cry when I made my first attempt to get near.
> I am sure I did not cause any pain. What was it then?
> Can you please explain?

I think you're stuck on the pain principle.  I wouldn't
call that cry based on the pain principle that a newborn
would have.  Newborns work basically on pain and feeling
well.  After even a couple of weeks, they not only cry
because of pain, but they also cry because they do not
get what they want, etc.  So, when you're talking about
a baby older than a newborn, the rules change.  A baby learns
after a few weeks that when it cries it gets what it wants.
Then that transfers to other things they want, like toys.
They don't feel pain for a toy, but because they don't
have what they want, they have a frustration, which is a
perceived pain.  So they cry until their frustration is
relieved.  As time goes by, they grow accustomed to the
things around them.  When something startles them, they
have fear (innate), which might be another perceived pain.
So they cry.  When the thing that startles them (you?)
becomes familiar to them, they no longer have a fear of
you.  So they no longer cry when you come near.  (familiarity)
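That model of crying (real pain, frustration as perceived pain,
innate fear that fades with familiarity) is simple enough to put
in a few lines of toy code.  The thresholds and decay rule here
are made-up illustrative values, not anything measured.

```python
# Cry on real pain, on an unmet want (frustration), or on fear
# of the unfamiliar; fear decays as a thing becomes familiar.
class Baby:
    def __init__(self):
        self.familiar = {}  # thing -> number of exposures

    def encounter(self, thing):
        self.familiar[thing] = self.familiar.get(thing, 0) + 1

    def fear_of(self, thing):
        # Innate fear of the unfamiliar, fading with exposure.
        exposures = self.familiar.get(thing, 0)
        return 1.0 / (1 + exposures)

    def cries(self, pain=False, want_unmet=False, thing=None):
        fear = self.fear_of(thing) if thing else 0.0
        return pain or want_unmet or fear > 0.5

baby = Baby()
print(baby.cries(thing="uncle"))    # unfamiliar -> fear -> cries
for _ in range(10):
    baby.encounter("uncle")         # familiarity builds up
print(baby.cries(thing="uncle"))    # familiar now -> no cry
print(baby.cries(want_unmet=True))  # frustration: perceived pain
```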

> She has become friendly to me and now always jumps in joy
> when I return from my research laboratory to home. I am
> taking her to the gardens and also enjoy playing with her.
> 
> This child always wants to repeat everything she has first
> experienced, and I get mad when she asks me to come to
> the garden every time she sees me. She will cry if
> I don't follow her demands. Why? Am I causing some pain to
> her knowledge about me?

She wants you to do something.  If you don't do it and she
doesn't get her way, it is a frustration to her.  Frustration
makes her cry.  Frustration would be a perceived pain.

> I had a difficult time this morning when she wanted me to
> take her to the garden just when I picked up my briefcase to
> come to the research laboratory. What is this now?
> 
> Will she learn to understand that she should ask me for
> only the required level of help?

Someday, but obviously not yet.

> How do we make the robot do so when we have no way to
> alter the logic? This is my real feeling about the CORE
> logic as well, or any other that I myself may try. I always
> feel that the logic should work as I wish, while intelligent
> systems work as they think. We want them to go to Alaska
> and for sure not to Mexico Maringe dance clubs. What to do
> there? As far as safety devices are concerned, we can always
> have logic to keep the robot away from things till it gets
> a reasonable understanding. Like the blue movies are all
> for elders and not for children (I don't look at them).

Well, if it's a neural network based robot, of course it could
be trained, but that's another issue.

I build a robot to carry out my purposes, not just as an exercise
to see if it can learn.  Learning is great, but robotic artificial
intelligence doesn't always require it.

For example, suppose you built a robot that could learn how to
mow a lawn.  After it learned how to mow the lawn, and you wanted
to mass-produce this robot by just copying the intelligence to
another robot, even if the new robots couldn't do any more
learning, would you think less of the intelligence the robot had?
The only difference would be that new contingencies might not
be covered, but it would probably work 99% of the time.
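The mass-production idea could be sketched like this: train one
"brain," then clone its learned state into copies that have
learning switched off.  Here the brain is just a lookup policy;
the names are illustrative, not from any real robot software.

```python
# Train one mower brain, then deep-copy it into a frozen clone.
# Unknown situations are the "new contingencies" a frozen copy
# cannot cover; everything already learned still works.
import copy

class MowerBrain:
    def __init__(self, can_learn=True):
        self.policy = {}        # situation -> action, learned by trial
        self.can_learn = can_learn

    def learn(self, situation, action):
        if self.can_learn:
            self.policy[situation] = action

    def act(self, situation):
        return self.policy.get(situation, "stop and wait")

trained = MowerBrain()
trained.learn("open grass", "mow forward")
trained.learn("edge of lawn", "turn around")

clone = copy.deepcopy(trained)
clone.can_learn = False             # mass-produced copy: learning off

print(clone.act("open grass"))      # learned behavior carries over
print(clone.act("sprinkler head"))  # new contingency -> safe fallback
```

The safe fallback action is one way the frozen copy could still
"work 99% of the time" without handling every new contingency.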

So what's the difference between programming a robot to
do a specific task and letting it learn how to do the specific
task?  One way, you're assured that it will do what you wanted
(provided you programmed it right).  The other way would be
cool, but with so many variables, how could you be sure what
you've ended up with?

I think, in the short term, the preprogrammed robot is the
way to go.  It will do what you want.

I think, in the long term, the AI robot is what everyone
is shooting for.  It will do what it wants.  Then you have
to try to make it obey.

> You will never be able to build a complete learning system
> for sure. There is no need to question this point.

hmm.

1.  A system that learns?
#1, probably so!!!

2.  A system that learns as well as a human?
#2, maybe not in our lifetime.

3.  A system that learns as well as angels?
#3, how would we know if it did?

4.  A system that learns everything and all things?
#4, probably not!!!  <---- the complete learning system???  ;-)

- - - ____ - - - - - ___ - - - - - - - - - - - - - - - - - - - - - - - - -
    \/\_\@ ____   /  /\ __  ___          ___  http://www.dprg.org (Jan 17)
    / //\ / / /\ /--/ //\_\/\_/\ /\/\/\ /\_/\ http://users.why.net/jbrown
/__/ // // / / //__/ // / /__/ //_/_/ // // /(972)519-2868, (972)495-3821
\__\/ \/ \/\/\/ \__\/ \/  \__\/ \_\_\/ \/ \/jgbrown at spdmail.spd.dsccc.com

My employer won't claim these opinions so I'm giving them away for free.
