
RFC: AI::NeuralNet::Simple

by Ovid (Cardinal)
on Oct 31, 2003 at 18:22 UTC ( [id://303643] )

Frequently, I get heavily into a project and wind up burning out on it. It's frustrating, and (like many of us, I suspect) I hate having a pile of half-finished projects around. One of them is a simple neural network. Rather than let it languish, unfinished, I thought I would put this half-finished beast in front of you to see if it's interesting enough that people would want to play with it.

In researching neural networks, I repeatedly found myself frustrated because existing neural network modules have documentation that assumes quite a bit of knowledge about neural nets. In working with them, I also found that the pure Perl networks were so slow while training as to be almost useless to me. So I did a bit of research of my own and came up with a module, AI::NeuralNet::Simple. The core network is written in C, with the interface in Perl. You can download it from my Web site.

The intent is to have a *simple* neural network that someone new to neural nets can just pick up, read the docs, and play with. Later they can move to a more robust solution, if desired. If people would provide me with feedback on the docs, I would appreciate it.

Some caveats:

  • Somehow, I managed not to include the error-measuring function. Whoops! This function lets the user measure the error rate to see how training is going. I'm not sure how I missed that, but I'm just tossing this out there for people while at work, so I'll write it later and issue a new version if anyone is interested. (A sketch of such a measure appears after this list.)
  • There is currently no way to load or save a network. I'm not sure if this is an issue as this is only designed to be something to play with.
  • You cannot have more than one network at a time because of my rotten C code -- I am *not* a C programmer -- but I'll probably fix this later if it's a problem.
  • The docs do not make it clear that this module works best for "winner take all" results.
  • The docs need a lot more work. Let me know if I'm on the right track.
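
For the curious, the missing error measure is typically just the mean squared difference between what the network produced and what was expected. Here's a minimal sketch of such a function (hypothetical; it is not yet in the module):

  # Hypothetical error measure -- not yet part of AI::NeuralNet::Simple.
  # Returns the mean squared error between expected and actual outputs.
  sub mean_squared_error {
      my ($expected, $actual) = @_;    # two array refs of equal length
      my $sum = 0;
      $sum += ($expected->[$_] - $actual->[$_]) ** 2 for 0 .. $#$expected;
      return $sum / @$expected;
  }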

I have two examples in the "examples" directory. The "game_ai.pl" example is borrowed from the book "AI Application Programming" (with the permission of the author). I included it because it's fun. It simulates the behavior of a game character based upon the character's health, weapons, and number of enemies.

And the docs:



NAME

AI::NeuralNet::Simple - A simple learning module for building neural nets.


SYNOPSIS

  use AI::NeuralNet::Simple;
  my $net = AI::NeuralNet::Simple->new(2,1,2);
  # teach it logical 'or'
  for (1 .. 10000) {
      $net->train([1,1], [0,1]);
      $net->train([1,0], [0,1]);
      $net->train([0,1], [0,1]);
      $net->train([0,0], [1,0]);
  }
  printf "Answer: %d\n",   $net->winner([1,1]);
  printf "Answer: %d\n",   $net->winner([1,0]);
  printf "Answer: %d\n",   $net->winner([0,1]);
  printf "Answer: %d\n\n", $net->winner([0,0]);


ABSTRACT

This module is a simple neural net learning tool designed for those who have an interest in artificial intelligence but need a "gentle" introduction. This is not intended to replace any of the neural net modules currently available on the CPAN. Instead, the documentation is designed to be easy enough for a beginner to understand.


DESCRIPTION

The Disclaimer

Please note that the following information is terribly incomplete. That's deliberate. Anyone familiar with neural networks is going to laugh themselves silly at how simplistic the following information is and the astute reader will notice that I've raised far more questions than I've answered.

So why am I doing this? Because I'm giving just enough information for someone new to neural networks to have enough of an idea of what's going on so they can actually use this module and then move on to something more powerful, if interested.

The Biology

A neural network, at its simplest, is merely an attempt to mimic nature's ``design'' of a brain. Like many successful ventures in the field of artificial intelligence, we find that blatantly ripping off natural designs has allowed us to solve many problems that otherwise might prove intractable. Fortunately, Mother Nature has not chosen to apply for patents.

Our brains are composed of neurons connected to one another by axons. The axon makes the actual connection to a neuron via a synapse. When neurons receive information, they process it and feed this information to other neurons, who in turn process the information and send it further until eventually commands are sent to various parts of the body and muscles twitch, emotions are felt, and we start eyeing our neighbor's popcorn in the movie theater, wondering if they'll notice if we snatch some while they're watching the movie.

A simple example of a neuron

Now that you have a solid biology background (uh, no), how does this work when we're trying to simulate a neural network? The simplest part of the network is the neuron (also known as a node or, sometimes, a neurode). We might think of a neuron as follows (OK, so I won't make a living as an ASCII artist):

  Input neurons   Synapses     Neuron    Output

                            ----
  n1            ---w1----> /    \
  n2            ---w2---->|  n4  |---w4---->
  n3            ---w3----> \    /
                            ----

(Note that the above doesn't quite match what's in the C code for this module, but it's close enough for you to get the idea. This is one of the many oversimplifications that have been made).

In the above example, we have three input neurons (n1, n2, and n3). These neurons feed whatever output they have through the three synapses (w1, w2, w3) to the neuron in question, n4. The three synapses each have a ``weight'', which is an amount that the input neurons' output is multiplied by.

The neuron n4 computes its output with something similar to the following:

  output = 0
  foreach (input.neuron)
      output += input.neuron.output * input.neuron.synapse.weight
  output = activation_function(output)

The ``activation function'' is a special function that is applied to the inputs to generate the actual output. There are a variety of activation functions available, with three of the most common being the linear, sigmoid, and tanh activation functions. For technical reasons, the linear activation function cannot be used with the type of network that AI::NeuralNet::Simple employs. This module uses the sigmoid activation function. (More information about these can be found by reading the information in the SEE ALSO section or by just searching with Google.)
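
To make that concrete, here is a minimal Perl sketch of the sigmoid function (for illustration only; the module itself computes this in C):

  # A minimal sketch of the sigmoid activation function, for illustration
  # only; the module computes this in its C code.
  sub sigmoid {
      my $x = shift;
      return 1 / (1 + exp(-$x));   # squashes any input into the range (0, 1)
  }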

Once the activation function is applied, the output is then sent through the next synapse, where it will be multiplied by w4 and the process will continue.

AI::NeuralNet::Simple architecture

The architecture used by this module has (at present) 3 fixed layers of neurons: an input, hidden, and output layer. In practice, a 3 layer network is applicable to many problems for which a neural network is appropriate, but this is not always the case. In this module, we've settled on a fixed 3 layer network for simplicity.

Here's how a three layer network might learn ``logical or''. First, we need to determine how many inputs and outputs we'll have. The inputs are simple: we'll choose two inputs, as this is the minimum necessary to teach a network this concept. For the outputs, we'll also choose two neurons, with whichever neuron has the higher output value indicating the ``true'' or ``false'' response we are looking for. We'll have only one neuron in the hidden layer. Thus, we get a network that resembles the following:

         Input  Hidden  Output
 input1  ---->n1\    /---->n4---> output1
                 \  /
                  n3
                 /  \
 input2  ---->n2/    \---->n5---> output2

Let's say that output 1 will correspond to ``false'' and output 2 will correspond to ``true''. If we feed a 1 (or true) to either input 1 or input 2 (or both), we hope that output 2 will be true and output 1 will be false. The following table should illustrate the expected results:

 input   output
 1   2   1    2
 -----   ------
 1   1   0    1
 1   0   0    1
 0   1   0    1
 0   0   0    0

The type of network we use is a feed-forward, back error propagation network, referred to as a backpropagation network for short. The way it works is simple. When we feed in our input, it travels from the input layer to the hidden layer and then to the output layer. This is the ``feed forward'' part. We then compare the output to the expected results and measure how far off we are. We then adjust the weights on the ``output to hidden'' synapses, measure the error on the hidden nodes, and then adjust the weights on the ``hidden to input'' synapses. This is what is referred to as ``back error propagation''.
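
As a rough sketch of what a single weight adjustment looks like (illustrative only; the actual arithmetic lives in this module's C code, and every variable name and value below is made up for the example):

  # Rough sketch of one output-layer weight update in backpropagation.
  # Illustrative only: the real work happens in this module's C code.
  my ($expected, $actual) = (1, 0.8);        # target output vs. what the net produced
  my ($weight, $hidden_output) = (0.5, 0.9); # one synapse and the node feeding it
  my $learn_rate = 0.2;                      # hardcoded in the C at present (value made up)

  my $error = $expected - $actual;               # how far off this output was
  my $delta = $error * $actual * (1 - $actual);  # error scaled by the sigmoid's slope
  $weight  += $learn_rate * $delta * $hidden_output;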

We continue this process until the amount of error is small enough that we are satisfied. In reality, we will rarely if ever get precise results from the network, but we learn various strategies to interpret the results. In the example above, we use a ``winner takes all'' strategy: whichever of the output nodes has the greatest value will be the ``winner'', and thus the answer.

In the examples directory, you will find a program named ``logical_or.pl'' which demonstrates the above process.

Building a network

In creating a new neural network, there are three basic steps:

  1. Designing

     This is choosing the number of layers and the number of neurons per layer. In AI::NeuralNet::Simple, the number of layers is fixed.

     With more complete neural net packages, you can also pick which activation functions you wish to use and the ``learn rate'' of the neurons.

  2. Training

     This involves feeding the neural network enough data until the error rate is low enough to be acceptable. Often we have a large data set and merely keep iterating until the desired error rate is achieved.

  3. Measuring results

     One frequent mistake made with neural networks is failing to test the network with data different from the training data. It's quite possible for a backpropagation network to hit what is known as a ``local minimum'', which is not truly where it should be. This will cause false results. To check for this, after training we often feed in other known-good data for verification; a minimal sketch of such a check follows this list. If the results are not satisfactory, perhaps a different number of neurons per layer should be tried, or a different set of training data should be supplied.
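
Here's a minimal sketch of such a verification pass, assuming a trained $net and a hypothetical held-out set @test_data (neither the data nor the loop is part of the module):

  # A minimal verification sketch. Assumes $net has already been trained;
  # @test_data is a hypothetical held-out set, not part of this module.
  my @test_data = (
      [ [1,1], 1 ],    # [ inputs, index of the expected winning output ]
      [ [0,0], 0 ],
  );
  for my $case (@test_data) {
      my ($inputs, $expected) = @$case;
      my $got = $net->winner($inputs);
      print "inputs (@$inputs): got $got, expected $expected\n";
  }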


Programming AI::NeuralNet::Simple

new($input, $hidden, $output)

new() accepts three integers. These numbers represent the number of nodes in the input, hidden, and output layers, respectively. To create the ``logical or'' network described earlier:

  my $net = AI::NeuralNet::Simple->new(2,1,2);

train(\@input, \@output)

This method trains the network to associate the input data set with the output data set. Teaching the network ``logical or'' looks like this:

  $net->train([1,1], [0,1]);
  $net->train([1,0], [0,1]);
  $net->train([0,1], [0,1]);
  $net->train([0,0], [1,0]);

Note that one pass through the data is seldom sufficient to train a network. In the example ``logical or'' program, we actually run this data through the network ten thousand times:

  for (1 .. 10000) {
      $net->train([1,1], [0,1]);
      $net->train([1,0], [0,1]);
      $net->train([0,1], [0,1]);
      $net->train([0,0], [1,0]);
  }

train_set(\@dataset, [$iterations])

Similar to train, this method allows us to train an entire data set at once. It is typically faster than calling individual ``train'' methods. The first argument is expected to be an array ref of pairs of input and output array refs. The second argument is the number of iterations to train the set. If this argument is not provided here, you may use the iterations() method to set it (prior to calling train_set(), of course). A default of 10,000 will be provided if not set.

  $net->train_set([
      [1,1], [0,1],
      [1,0], [0,1],
      [0,1], [0,1],
      [0,0], [1,0],
  ], 10000);

iterations([$integer])

If called with a positive integer argument, this method lets you set the number of iterations that train_set() will use. If called without an argument, it returns the number of iterations it was set to.

  $net->iterations(100000); # let's have lots more iterations!
  $net->iterations;         # returns 100000

  my @training_data = (
      [1,1], [0,1],
      [1,0], [0,1],
      [0,1], [0,1],
      [0,0], [1,0],
  );
  $net->train_set(\@training_data);

infer(\@input)

This method, if provided with an input array reference, will return an array reference corresponding to the output values that it is guessing. Note that these values will generally be close, but not exact. For example, with the ``logical or'' program, you might expect results similar to:

  use Data::Dumper;
  print Dumper $net->infer([1,1]);

  $VAR1 = [
            '0.00993729281477686',
            '0.990100297418451'
          ];

The second output item is clearly close to 1, so as a helper method for use with a ``winner take all'' strategy, we have ...

winner(\@input)

This method returns the index of the highest value from inferred results:

  print $net->winner([1,1]); # will likely print "1"
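
Put another way, winner() is roughly just picking the index of the largest element of infer()'s result. A sketch of the equivalent (illustrative; not necessarily the module's actual implementation):

  # What winner() effectively computes -- illustrative only, not
  # necessarily the module's actual implementation.
  my @output = @{ $net->infer([1,1]) };
  my $winner = 0;
  for my $i (1 .. $#output) {
      $winner = $i if $output[$i] > $output[$winner];
  }
  print $winner;   # 1 for the trained ``logical or'' network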

For a more comprehensive example of how this is used, see the ``examples/game_ai.pl'' program.

EXPORT

None by default.


SEE ALSO

``AI Application Programming'', by M. Tim Jones, copyright (c) by Charles River Media, Inc.

The C code in this module is based heavily upon Mr. Jones' backpropagation network in the book. The ``game ai'' example in the examples directory is based upon an example he has graciously allowed me to use. I *had* to use it because it's more fun than many of the dry examples out there :)

``Naturally Intelligent Systems'', by Maureen Caudill and Charles Butler, copyright (c) 1990 by Massachusetts Institute of Technology.

This book is a decent introduction to neural networks in general. The feed-forward back error propagation network is but one of many types.


AUTHOR

Curtis ``Ovid'' Poe, <poec@yahoo.com>


COPYRIGHT AND LICENSE

Copyright 2003 by Curtis ``Ovid'' Poe

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Update: Many thanks to kvale for helping me out when I was first playing with NNs.

Cheers,
Ovid

New address of my CGI Course.


Replies are listed 'Best First'.
Re: RFC: AI::NeuralNet::Simple
by gjb (Vicar) on Oct 31, 2003 at 21:37 UTC

    The docs say you implement backpropagation which is a method that typically involves a parameter that is often called the learning rate. In my experience it's important for the performance of the learning algorithm to tune it since either the network will converge very slowly (learning rate too small) or the training process will diverge (learning rate too high).

    According to the docs, there's no way to control the learning rate, so how do you deal with this?

    I do like the idea, it's good to have something simple to play around with. I should even have a Perl implementation of this around somewhere which I whipped together to do a quick test in a project I once did. It would have been nice if it were around at the time.

    Just my two cents, -gjb-

      D'oh! You're right. I've hardcoded the learning rate in the C, but it's trivial to expose that to the Perl.

      Cheers,
      Ovid

      New address of my CGI Course.

Re: RFC: AI::NeuralNet::Simple
by hardburn (Abbot) on Oct 31, 2003 at 21:59 UTC

    Little readability note:

    $net->train_set([
        [1,1], [0,1],
        [1,0], [0,1],
        [0,1], [0,1],
        [0,0], [1,0],
    ], 10000);

    Since the arrayrefs on the left and right columns are associated, the above might be better as:

    $net->train_set([
        [1,1] => [0,1],
        [1,0] => [0,1],
        [0,1] => [0,1],
        [0,0] => [1,0],
    ], 10000);

    ----
    I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
    -- Schemer

    : () { :|:& };:

    Note: All code is untested, unless otherwise stated

Re: RFC: AI::NeuralNet::Simple
by SpritusMaximus (Sexton) on Nov 02, 2003 at 05:57 UTC
    I tried to download this, and all I got were the examples. What about the interface and the C code? Am I missing something?
      Did you look at the source for AI::NeuralNet::Simple.pm? It uses Inline::C.
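
      For the curious, Inline::C lets a module embed C directly in its Perl source, with the C functions becoming callable as ordinary Perl subs. A minimal sketch (illustrative only; not this module's actual code):

          # Minimal Inline::C sketch -- the C function below becomes a
          # Perl-callable sub. Not AI::NeuralNet::Simple's actual code.
          use Inline C => 'int add(int x, int y) { return x + y; }';

          print add(2, 3), "\n";   # prints 5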
