
Subject: [DPRG] neural nets fundamental flaw
From: paradug paradug at gmail.com
Date: Tue May 27 16:39:43 CDT 2014

David,
     Their experiments showed that, for a trained neural network, it was
possible to take a correctly classified image, modify it slightly in a
way that humans cannot detect, and produce an "adversarial negative"
that the network misclassifies.  Their conclusion was that these
adversarial negatives are low enough in probability that they are
unlikely to appear in the network's training set; however, they exist
in enough quantity to always be present when the system is in use.
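
For concreteness, here is a minimal sketch of that kind of
perturbation. It is not the paper's method (they used a box-constrained
L-BFGS search on a deep net); this toy uses a linear classifier, where
the worst-case direction is simply the sign of the weight vector, and
every name in it is made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)      # stand-in for trained weights
    x = rng.normal(size=784)      # an input the model has classified

    def classify(v):
        return 1 if w @ v > 0 else 0

    label = classify(x)

    # Nudge every pixel a tiny amount in the direction that most hurts
    # the current label; for a linear model that is -sign(w) when the
    # label is 1, +sign(w) when it is 0.
    step = (-1 if label == 1 else 1) * 0.01 * np.sign(w)
    x_adv = x.copy()
    while classify(x_adv) == label:
        x_adv += step

    # The label flips even though no pixel moved more than a few
    # hundredths -- many imperceptible changes add up across dimensions.
    print(label, classify(x_adv), np.abs(x_adv - x).max())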

They stated, "So far, unit-level inspection methods had relatively
little utility beyond confirming certain intuitions regarding the
complexity of the representations learned by a deep neural network."
This is similar to my conclusions about the defect-identification
system that I worked with in the past. I believe the question they
were looking to answer was "Why?"

   My reading of the paper didn't suggest that they addressed a
system based on multiple samplings of an "object" from different
perspectives, or a network that makes decisions based on a continually
updating sample set.

  That is why I said that an agent with no single point of failure
(or single image), such as a robot or a human, may be better at
overcoming this type of classification failure than a network
classifying single images of objects.
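
A rough sketch of that multiple-sampling idea, with a made-up
classifier and agreement threshold; the point is only that no single
frame decides the outcome:

    from collections import Counter

    def classify_object(views, classify_view, min_agreement=0.8):
        # Classify each view independently and require broad agreement
        # before committing; one adversarial frame cannot swing the vote.
        votes = Counter(classify_view(v) for v in views)
        label, count = votes.most_common(1)[0]
        if count / len(views) >= min_agreement:
            return label
        return None   # undecided: go gather more views

    # Toy usage: nine honest views and one adversarial frame.
    views = ["cat"] * 9 + ["dog"]
    print(classify_object(views, classify_view=lambda v: v))   # cat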



Regards,
Doug P.

-----Original Message----- 
From: David Anderson
Sent: Tuesday, May 27, 2014 2:12 PM
To: dprglist at dprg.org
Subject: Re: [DPRG] neural nets fundamental flaw

Ed, that's essentially what they are doing.  The interesting thing is
that images which look identical to a human are not even recognized as
the same object by the neural net.  We are obviously doing something
very different.

Doug, I think the point of the paper is that neural nets, as currently
conceived, are NOT useful for robotics.  Too easily fooled.
Especially for something as potentially dangerous as self-driving cars, etc.

Bud, since these are flat 2D images, the eye movement "saccades" really
won't help.  But you're right, we're clearly in need of a new or refined
approach.

I sometimes recognize a friend at a distance by their characteristic
movement, rather than their features.  That means time is involved, and
changes over time, which these systems don't deal with.  Lots still to
know...
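
One cheap way to fold time in, sketched with made-up names and a
made-up persistence threshold: debounce the per-frame labels, so a
single fooled frame only delays the decision rather than deciding it.

    def track_label(frame_labels, need=5):
        # Commit only after the same label wins `need` frames in a row;
        # an adversarial blip just resets the counter.
        run_label, run_len = None, 0
        for lab in frame_labels:
            if lab == run_label:
                run_len += 1
            else:
                run_label, run_len = lab, 1
            if run_len >= need:
                return run_label
        return None   # stream ended before any label persisted

    labels = [0] * 4 + [1] + [0] * 10   # one fooled frame in a stream
    print(track_label(labels))          # 0 -- the blip only delays it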

dpa



On 05/27/2014 12:47 PM, Ed Koffeman wrote:
> I wonder if introducing some noise and doing the match a number of
> times with different noise would improve the reliability. It's a
> little similar to how adding noise to an A2D converter and then
> averaging the result can give more effective bits of resolution.
>
> Ed Koffeman
>
> On 5/27/2014 8:34 AM, David P. Anderson wrote:
>> Via slashdot:
>>
>>
>> <http://slashdot.org/story/14/05/27/1326219/the-flaw-lurking-in-every-deep-neural-net>
>>
>>
>> dpa
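
Ed's noise-averaging suggestion above amounts to dithering at
classification time: perturb the input several times and pool the
outputs. A minimal sketch, assuming a classifier that returns
per-class scores (every name here is hypothetical):

    import numpy as np

    def classify_with_dither(scores_fn, x, n=32, sigma=0.05, rng=None):
        # Score n noisy copies of the input and average, the same way
        # dither plus averaging buys an A2D extra effective bits.
        rng = rng or np.random.default_rng()
        noisy = [x + rng.normal(scale=sigma, size=x.shape)
                 for _ in range(n)]
        mean_scores = np.mean([scores_fn(v) for v in noisy], axis=0)
        return int(np.argmax(mean_scores))

    # Toy usage with a hypothetical two-class linear scorer.
    rng = np.random.default_rng(1)
    W = rng.normal(size=(2, 784))
    x = rng.normal(size=784)
    print(classify_with_dither(lambda v: W @ v, x, rng=rng))

Whether that actually defeats adversarial negatives is left open in
the thread; at best it blurs the thin regions of input space the
paper describes.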

_______________________________________________
DPRGlist mailing list
DPRGlist at dprg.org
http://list.dprg.org/mailman/listinfo/dprglist 
