PerlMonks
The docs say you implement backpropagation, a method that typically involves a parameter usually called the learning rate. In my experience, tuning it is important for the performance of the learning algorithm: if the learning rate is too small, the network converges very slowly; if it is too high, the training process diverges. According to the docs there's no way to control the learning rate, so how do you deal with this?

That said, I do like the idea; it's good to have something simple to play around with. I should even have a Perl implementation of this lying around somewhere, which I whipped together for a quick test in a project I once did. It would have been nice if this module had been around at the time.

Just my two cents, -gjb-

In reply to Re: RFC: AI::NeuralNet::Simple
by gjb
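The convergence/divergence trade-off described above can be sketched with a toy gradient-descent loop. This is a hypothetical illustration on a one-dimensional function, not code from AI::NeuralNet::Simple; the function `descend` and the sample rates are assumptions for demonstration only.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Gradient descent on f(w) = w**2, whose gradient is f'(w) = 2*w.
# The minimum is at w = 0. Each step applies w := w - rate * f'(w).
sub descend {
    my ($rate, $steps) = @_;
    my $w = 1.0;                  # starting weight
    $w -= $rate * 2 * $w for 1 .. $steps;
    return $w;
}

# A tiny rate creeps toward 0; a moderate rate converges quickly;
# a rate above 1.0 overshoots more each step and diverges.
printf "rate=%.2f  w after 20 steps = %g\n", $_, descend( $_, 20 )
    for 0.01, 0.4, 1.1;
```

With rate 0.01 the weight has only fallen to about 0.67 after 20 steps, with 0.4 it is essentially zero, and with 1.1 it has grown past its starting value — which is why a fixed, uncontrollable learning rate can be a real limitation.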