PerlMonks
AI NEWBIE -- Neural Net problem/question... um Tutorial request?? :)

by talwyn (Monk)
on May 24, 2004 at 16:53 UTC ( #355979=perlquestion )
talwyn has asked for the wisdom of the Perl Monks concerning the following question:

Hey all... I'm playing with AI for the first time, so I got AI::NeuralNet::BackProp to play with. I thought: how about trying to get the thing to identify even numbers? Well, I failed miserably. I've attached my code and the sample output it produced. It got numbers wrong that I explicitly gave it in its pattern set to learn. I don't understand the problem with the code, and since I was trying to learn how the neural net thingy works, I'm a bit mystified. Am I missing something? Any help or suggestions welcome.

CODE

use AI::NeuralNet::BackProp;

my $net = new AI::NeuralNet::BackProp(2,5,1);

# Add a small amount of randomness to the network
$net->random(0.001);
$net->range([0,1]);
$net->debug(4);

# Create a data set to learn
my @set = (
    [ 0,0,0,0,0 ], [ 1 ],
    [ 0,0,0,1,0 ], [ 1 ],
    [ 0,0,1,0,0 ], [ 1 ],
    [ 0,0,1,1,0 ], [ 1 ],
    [ 0,0,0,0,1 ], [ 0 ],
    [ 0,0,0,1,1 ], [ 0 ],
    [ 0,0,1,0,1 ], [ 0 ],
    [ 0,0,1,1,1 ], [ 0 ],
);

my $f = $net->learn_set(\@set);
print "Forgetfulness: $f unit\n";

# Run a test phrase through the network
for ( $i = 0 ; $i < 17 ; $i++ ) {
    @bit = (0,0,0,0,0);   # reset bits
    $bit[0] = 1 if ( ($i & 16) == 16 );
    $bit[1] = 1 if ( ($i &  8) ==  8 );
    $bit[2] = 1 if ( ($i &  4) ==  4 );
    $bit[3] = 1 if ( ($i &  2) ==  2 );
    $bit[4] = 1 if ( ($i &  1) ==  1 );
    print "Testing $i @bit -> ";
    my $result = $net->run( \@bit );
    my $answer = $$result[0];
    print "$answer\n";
}
RESULTS
Forgetfulness: 0.0000000000
Testing 0 0 0 0 0 0 -> 1
Testing 1 0 0 0 0 1 -> 1
Testing 2 0 0 0 1 0 -> 1
Testing 3 0 0 0 1 1 -> 0
Testing 4 0 0 1 0 0 -> 1
Testing 5 0 0 1 0 1 -> 0
Testing 6 0 0 1 1 0 -> 0
Testing 7 0 0 1 1 1 -> 0
Testing 8 0 1 0 0 0 -> 1
Testing 9 0 1 0 0 1 -> 0
Testing 10 0 1 0 1 0 -> 0
Testing 11 0 1 0 1 1 -> 0
Testing 12 0 1 1 0 0 -> 0
Testing 13 0 1 1 0 1 -> 0
Testing 14 0 1 1 1 0 -> 0
Testing 15 0 1 1 1 1 -> 0
Testing 16 1 0 0 0 0 -> 1

Re: AI NEWBIE -- Neural Net problem/question... um Tutorial request?? :)
by jdporter (Canon) on May 24, 2004 at 17:02 UTC
    It got numbers wrong that I explicitly gave it in its pattern set to learn.
    That should be no surprise. A NN does not memorize all of the inputs. It's trying, in effect, to deduce a formula for converting inputs into correct outputs. How well it can do that depends on the extent, or richness, of the training inputs. With the tiny training set you gave, it's no wonder the NN wasn't able to deduce an accurate formula.
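    One easy fix suggested by this point is simply to give the network every 5-bit pattern instead of eight of them. A minimal sketch (plain Perl, no module needed) that builds such a set in the flat input/output layout learn_set() expects, using the post's labelling convention (1 = even, 0 = odd):

```perl
use strict;
use warnings;

# Build a training pair for every 5-bit number instead of just eight.
# Labels follow the post's convention: 1 = even, 0 = odd.
my @set;
for my $n (0 .. 31) {
    my @bits = map { ($n >> $_) & 1 } reverse 0 .. 4;   # MSB first, as in the post
    push @set, [@bits], [ $n % 2 == 0 ? 1 : 0 ];
}

my $pairs = @set / 2;   # learn_set() takes input/output pairs
print "$pairs pairs\n";
```

    Whether 32 examples are "rich" enough is still an empirical question, but at least this covers the entire input space the test loop probes.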
Re: AI NEWBIE -- Neural Net problem/question... um Tutorial request?? :)
by halley (Prior) on May 24, 2004 at 17:50 UTC
    Neural nets require a LOT of reinforcing training time. If you have a child, you understand this process intuitively. People can learn simple things by rote, but they can extrapolate and understand things only if they are shown enough examples to reinforce their forming theory about a governing rule.

    A 2- or 3-year old kid needs to be exposed to a rule hundreds of times before they really can apply it (unless it somehow sparks their imagination and interest). A 10-year old probably only needs a few dozen math problems before they can start to apply the method on similar problems. An adult should be able to recognize and extrapolate on a pattern after a few examples.

    In the case of your neural net that learns even/odd number rules, you are hoping to train the network to do two things at once: develop a direct 1:1 relationship between the lowest binary input bit and the output answer bit, while simultaneously burning out or dismissing all value from every other binary input bit. That seems simple to you, but not to a 3-year-old.
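    The target rule described here is worth writing out explicitly, because it shows how little of the input actually matters; a sketch of the function the network is being asked to approximate (labels as in the post, 1 = even):

```perl
use strict;
use warnings;

# The rule the network must discover: "even" is entirely determined by
# the lowest-order bit; the other four inputs carry no information, so
# training has to drive their weights toward zero.
sub is_even_label {
    my @bits = @_;              # five input bits, lowest-order bit last
    return $bits[-1] ? 0 : 1;   # 1 = even, 0 = odd, as in the post
}

print is_even_label(0,0,1,1,0), "\n";   # 6 -> 1 (even)
print is_even_label(0,0,1,1,1), "\n";   # 7 -> 0 (odd)
```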

    --
    [ e d @ h a l l e y . c c ]

      Thanks. I inserted a feedback step that fires when an answer is demonstrably wrong. After running the feedback pattern once, it produced the correct data. I was under the impression that the learn function should perform "off-line" learning. With the feedback, aren't I performing "on-line" learning?

      I tried adding the problem pattern as the last pattern in its training set... but it still fails to identify 1 as an odd number the first time it encounters it. After that it identifies all the evens and odds correctly. Is it possible to set this up so it learns what it needs from the initial training set?

      my $answer = $$result[0];
      if ( ($answer == 1) && ( $i % 2 != 0 ) ) {
          print "Got it wrong!\n";
          # Learn the pattern
          $net->learn(\@bit, [0]);
          $i = -1;   # Start over
      }
      print "$answer\n";
      Results
      Testing 0 0 0 0 0 0 -> 1
      Testing 1 0 0 0 0 1 -> Got it wrong!
      1
      Testing 0 0 0 0 0 0 -> 1
      Testing 1 0 0 0 0 1 -> 0
      Testing 2 0 0 0 1 0 -> 1
      Testing 3 0 0 0 1 1 -> 0
      Testing 4 0 0 1 0 0 -> 1
      Testing 5 0 0 1 0 1 -> 0
      Testing 6 0 0 1 1 0 -> 1
      Testing 7 0 0 1 1 1 -> 0
      Testing 8 0 1 0 0 0 -> 1
      Testing 9 0 1 0 0 1 -> 0
      Testing 10 0 1 0 1 0 -> 1
      Testing 11 0 1 0 1 1 -> 0
      Testing 12 0 1 1 0 0 -> 1
      Testing 13 0 1 1 0 1 -> 0
      Testing 14 0 1 1 1 0 -> 1
      Testing 15 0 1 1 1 1 -> 0
      Testing 16 1 0 0 0 0 -> 1
Re: AI NEWBIE -- Neural Net problem/question... um Tutorial request?? :)
by gjb (Vicar) on May 24, 2004 at 18:59 UTC

    The reason this isn't working is quite simple: you're asking the NN to do something that is impossible.

    The network you use consists of 5 input units and 1 output unit. Let's simplify this to 2 input units and 1 output unit so that we can visualize what's happening. This implies:

      0 0 -> 1
      1 0 -> 0
      0 1 -> 0
      1 1 -> 1
    
    Now let's draw this:
         |
      1  0- - - -1
         |
         |       |
         |
      0  1-------0--
         0       1
    
    This is a visual representation of the table above: (0,0) yields 1, (0,1) yields 0, (1,0) yields 0, (1,1) yields 1.

    Now there's one more thing you should realize and that's the maths behind the thing. For the two input units, one output unit case, the output of the network is given by: o = f(x_1 w_1 + x_2 w_2) where x_1, x_2 represent the values of the first and second input unit, w_1 and w_2 are the weights to be determined by the backpropagation algorithm during the training phase of the network, f is some transfer function, typically nonlinear but monotone and o is the output value.

    Now since the transfer function f is monotone, the expression x_1 w_1 + x_2 w_2 above essentially defines a line in the plane of the plot above. All input tuples that are to be mapped to 1 should be on one side of that line, all those to be mapped to 0 on the other. Aha, that's exactly the root of the problem! Just try to draw a line that separates the 1s and 0s in the drawing above, you simply can't!

         |
      1  0- - - -1
        \|
         \       |
         |\
      0  1-\-----0--
         0  \     1
    
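    The "you simply can't" can also be checked mechanically. A small brute-force sketch that sweeps a coarse grid of weights for a single threshold unit and finds no line separating the four points (the grid is an illustration, not a proof; since f is monotone, only the sign of the weighted sum matters, so a step function stands in for f):

```perl
use strict;
use warnings;

# Brute-force check: no single unit o = step(w1*x1 + w2*x2 + b)
# reproduces the table (0,0)->1, (0,1)->0, (1,0)->0, (1,1)->1.
# Each row of @table is [x1, x2, wanted output].
my @table = ([0,0,1], [0,1,0], [1,0,0], [1,1,1]);
my $found = 0;

for my $w1 (map { $_ / 2 } -10 .. 10) {
    for my $w2 (map { $_ / 2 } -10 .. 10) {
        for my $b (map { $_ / 2 } -10 .. 10) {
            my $ok = 1;
            for my $row (@table) {
                my ($x1, $x2, $want) = @$row;
                my $out = ($w1 * $x1 + $w2 * $x2 + $b > 0) ? 1 : 0;
                if ($out != $want) { $ok = 0; last }
            }
            $found = 1 if $ok;   # never happens for this table
        }
    }
}
print $found ? "separable\n" : "not linearly separable\n";
```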

    The problem you're trying to solve simply can't be solved by a two layer network. You'll need one with three layers, that will (almost) do the trick. The reason is that now the plane is not divided by one line, but rather by two. Now map those points that are to the left of both lines to 1, those that are to the right of both lines to 1 and everything in between to 0 and you're done.

       \ |   \
      1 \0- - \ -1
         \     \
         |\     \|
         | \     \
      0  1--\----0\--
         0   \    1\
    
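    The two lines in the sketch can be written down directly as two hidden units; the weights below are hand-picked for illustration, not anything a training run actually produced:

```perl
use strict;
use warnings;

# A 2-2-1 network computing the table (0,0)->1, (1,1)->1, otherwise 0.
sub step { $_[0] > 0 ? 1 : 0 }

sub net {
    my ($x1, $x2) = @_;
    my $h1 = step($x1 + $x2 - 0.5);        # first line: fires on (0,1),(1,0),(1,1)
    my $h2 = step($x1 + $x2 - 1.5);        # second line: fires only on (1,1)
    return step(-1 * $h1 + 2 * $h2 + 0.5); # 1 left of both lines or right of both
}

print net(0,0), net(0,1), net(1,0), net(1,1), "\n";   # prints 1001
```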

    So

    my $net = new AI::NeuralNet::BackProp(3,5,1);
    will do better, but there's still an extra trick to apply to get good results. The problem is too symmetric and will be very hard to learn, so the trick is to break the symmetry in the input, and that's simple. Rather than using the network above, use
    my $net = new AI::NeuralNet::BackProp(3,6,1);
    and pad the inputs you have with a 1, so
    my @set = (
        [ 0,0,0,0,0,1 ], [ 1 ],
        [ 0,0,0,1,0,1 ], [ 1 ],
        [ 0,0,1,0,0,1 ], [ 1 ],
        [ 0,0,1,1,0,1 ], [ 1 ],
        [ 0,0,0,0,1,1 ], [ 0 ],
        [ 0,0,0,1,1,1 ], [ 0 ],
        [ 0,0,1,0,1,1 ], [ 0 ],
        [ 0,0,1,1,1,1 ], [ 0 ],
    );
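    The padding can also be done programmatically, which avoids retyping the table; a sketch that appends the constant 1 to each input of the original eight pairs:

```perl
use strict;
use warnings;

# The original eight training pairs from the post.
my @set = (
    [0,0,0,0,0], [1],  [0,0,0,1,0], [1],
    [0,0,1,0,0], [1],  [0,0,1,1,0], [1],
    [0,0,0,0,1], [0],  [0,0,0,1,1], [0],
    [0,0,1,0,1], [0],  [0,0,1,1,1], [0],
);

# Inputs sit at even indices, outputs at odd ones;
# pad only the inputs with a constant bias of 1.
for (my $i = 0; $i < @set; $i += 2) {
    push @{ $set[$i] }, 1;
}

print scalar @{ $set[0] }, "\n";   # each input now has 6 elements
```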

    Note however that it will still take a considerable number of learning steps to train this very hard example (yes, you picked a tough one). It could take a few thousand steps, depending on the version of the backprop algorithm used.

    One further point: it would be better to use +/- 1 values rather than 0/1, but don't try that since I don't think this implementation supports that kind of data.

    For tutorials you might have a look at http://www.calresco.org/tutorial.htm, but especially at Denni Rögnvaldsson's lecture notes, which I very much enjoyed (and I shamelessly adapted some of his slides for the course I teach on this subject ;).

    Hope this helps, -gjb-

    PS: not that it particularly matters in this context, but things get much more interesting when the transfer function is not monotone.

Node Type: perlquestion [id://355979]
Approved by Happy-the-monk
Front-paged by broquaint