http://www.perlmonks.org?node_id=441158


in reply to AI::NNFlex::Backprop error not decreasing

Hi,

Your problem is probably partly my fault. The 'sigmoid' activation function uses a formula that I haven't worked out how to differentiate yet, so there is no corresponding sigmoid_slope function to return the slope needed for the error calculation. I should really have taken that activation function out - apologies for the oversight.

I would suggest you use the tanh activation function instead. I'll correct the module & documentation for the next release.
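
For anyone following along, tanh is the easy case because its slope can be computed directly from the function itself. A quick plain-Perl sketch (illustration only - not the module's internal code):

    use strict;
    use warnings;
    use POSIX qw(tanh);

    # Backprop needs both the activation function and its slope (derivative).
    # For tanh the slope is simply 1 - tanh(x)^2.
    sub tanh_slope {
        my $value = shift;
        return 1 - tanh($value)**2;
    }

    print tanh(0.5), "\n";        # activation value
    print tanh_slope(0.5), "\n";  # slope at the same point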

Could some kind monk tell me the 1st order derivative of this function:

(1+exp(-$value))**-1

so I can correct the code?

You've also got several layers defined. While there's no theoretical reason why you shouldn't (and I wrote the code with that in mind), it's more usual to use 3 layers and to adjust the number of nodes in the hidden layer to reflect what you need the network to learn - see the sketch below.
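
Something along these lines is the shape I had in mind - a rough sketch from memory of the NNFlex interface (the parameter values and the XOR-style data are just for illustration, so check the POD before copying):

    use strict;
    use warnings;
    use AI::NNFlex::Backprop;
    use AI::NNFlex::Dataset;

    # 3 layer net: input, one hidden layer, output; tanh throughout.
    my $network = AI::NNFlex::Backprop->new(
        learningrate => 0.1,
        momentum     => 0.6,
        bias         => 1,
    );

    $network->add_layer(nodes => 2, activationfunction => 'tanh');
    $network->add_layer(nodes => 4, activationfunction => 'tanh');  # hidden - tune node count
    $network->add_layer(nodes => 1, activationfunction => 'tanh');
    $network->init();

    # Input pattern followed by target pattern, repeated
    my $dataset = AI::NNFlex::Dataset->new([
        [0, 0], [0],
        [0, 1], [1],
        [1, 0], [1],
        [1, 1], [0],
    ]);

    my ($error, $epoch) = (10, 0);
    while ($error > 0.01 && $epoch++ < 5000) {
        $error = $dataset->learn($network);
    }
    print "Stopped after $epoch epochs, error $error\n";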

Update: Oops, didn't spot the question at the bottom. Theoretically there's no reason why you shouldn't have analogue values learned by the network, although again it's unusual, and you'll lose a bit of precision on the output. I must admit I've never tried implementing an analogue net with AI::NNFlex::Backprop though, so I can't guarantee it will work.

While analogue nets are possible, it's an unusual approach and takes a good deal of thinking about. Backprop nets are what my tutor likes to call a 'universal approximator' - they learn an approximation of the mapping rather than reproducing it exactly. Given the precision and size of your data set, my feeling is that trying to teach a backprop net this kind of data in this form is likely to fail - the output values will always be too approximate, so the error slope will never have a true 'solution'.

The fact that the module didn't fail when unable to find a slope function suggests that you are using 0.2. This bug is fixed in 0.21, which is a lot faster as well, so you might want to get that version from CPAN.
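
If you're not sure which version you've got, a one liner will tell you (assuming a normal install):

    perl -MAI::NNFlex -e 'print $AI::NNFlex::VERSION, "\n"'

and an install AI::NNFlex from the CPAN shell will pull down 0.21.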

g0n, backpropagated monk