Speeding things up

I know this is a rather late addition to this topic, but I've been doing a lot of work on nnflex since it was first posted on CPAN at the beginning of January.

gmpassos, if you're still working on this, you might want to think about implementing a momentum term in backprop.
Momentum works by adding a fraction of the weight change from the previous learning pass to the weight change for this pass (often about half, though the right value for your network comes down to trial and error). That way, the weight changes stay quite large while the network is still a long way from converging, and become progressively smaller as it nears convergence.
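For anyone who hasn't met the idea before, here's a minimal sketch of such an update in Perl. This isn't the actual nnflex code; the names (update_weight, last_delta and so on) are just illustrative, and the gradient stands in for whatever error signal your backprop pass produces for the weight:

<code>
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch only, not the nnflex internals.
# $momentum is the fraction of the previous pass's weight change
# carried over; 0.5 is a reasonable starting point to tune from.
my $learning_rate = 0.3;
my $momentum      = 0.5;

# One weight, plus the change that was applied to it last pass.
my %weight = ( value => 0.1, last_delta => 0 );

sub update_weight {
    my ( $w, $gradient ) = @_;

    # Plain backprop step plus a fraction of the previous change.
    my $delta = $learning_rate * $gradient
              + $momentum * $w->{last_delta};

    $w->{value}      += $delta;
    $w->{last_delta}  = $delta;    # remember for the next pass
    return $delta;
}

# Two successive passes with the same gradient: the second change is
# larger, because the momentum term reinforces a consistent direction.
printf "delta pass 1: %.4f\n", update_weight( \%weight, 0.2 );
printf "delta pass 2: %.4f\n", update_weight( \%weight, 0.2 );
</code>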

My experiments with XOR suggest an improvement in learning speed (measured in the number of passes) of up to 5-fold, and improved network stability, because the network is less likely to get stuck in local minima.

You can get the code from the momentum.pm module in nnflex.

I hope that's useful.

c


