PerlMonks  

Transformative functions...?

by edwyr (Sexton)
on Feb 13, 2017 at 19:39 UTC ( [id://1181919] )

edwyr has asked for the wisdom of the Perl Monks concerning the following question:

I'm working on a pet project, feeding lots of data into a neural net. My results (how fast the net trains) might be (positively!) affected by using a transformative function. What I'm trying to do is "separate" the values so that they are easier for the neural network to discriminate while training. In my uneducated attempts this looks like a "kernel" method (forcing values into higher dimensions for better discrimination). One of the common functions I found is tanh; the Wolfram site has a definition of tanh in terms of e (exp), so I wrote my own tanh for experimentation. Currently I'm not using tanh; I'm just squaring the calculated values before they are fed to the neural network. What other functions might work well as transformative functions? TIA
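
To make that concrete, the squaring step is essentially just this (a simplified sketch with placeholder inputs; the real values come from my net's calculations):

    use strict;
    use warnings;

    my @values  = ( -2, -0.5, 0, 0.5, 2 );       # placeholder inputs
    my @squared = map { $_ ** 2 } @values;       # what currently gets fed to the net
    print "@squared\n";                          # prints: 4 0.25 0 0.25 4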

Re: Transformative functions...?
by BrowserUk (Patriarch) on Feb 13, 2017 at 23:34 UTC
    I'm just squaring the calculated values before the values are fed to the neural network.

    Looking at your values (posted elsewhere), it seems to me that what you are doing has exactly the opposite effect from what I think would be desirable.

    I.e. your values are quite widely spread at the two extremes, but all clumped together in the middle. Squaring them will make the big gaps at each end even bigger (not to mention mapping the negative values to positives, thus conflating them), but will leave the gaps in the middle barely changed:

    Values:  -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4
             -0.3 -0.2 -0.1 0 0.1 0.111111 0.12 0.13 0.16 0.19 0.2 0.21 0.22
             0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.00001 1.001 1.1 1.2 1.3 1.8 1.9
             2.000001 2.001 2.1 2.11 2.2 2.3 2.4 3.1 4 5 6 7 8 9 10
             1.16873493792849e-006 1088.99977561893

    Squared: 100 81 64 49 36 25 16 9 4 1 0.81 0.64 0.49 0.36 0.25 0.16 0.09
             0.04 0.01 0 0.01 0.012345654321 0.0144 0.0169 0.0256 0.0361 0.04
             0.0441 0.0484 0.09 0.16 0.25 0.36 0.49 0.64 0.81 1 1.0000200001
             1.002001 1.21 1.44 1.69 3.24 3.61 4.000004000001 4.004001 4.41
             4.4521 4.84 5.29 5.76 9.61 16 25 36 49 64 81 100
             1.36594135513471e-012 1185920.51129808

    My suggestion (assuming that you are trying to even out the gaps in the distribution to make discrimination easier) would be to take the square root of the (absolute) values and then multiply by the sign of the original value and a 'spreading constant'. E.g.

    sub xform {
        my $v   = shift;
        my $sgn = $v <=> 0;             # -1, 0 or 1: remember the sign
        $v      = $v * $sgn;            # absolute value
        return $sgn * sqrt( $v ) * 10;  # re-apply the sign, spread by a constant of 10
    }
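
    Applied over the whole list, that is something like this (just a sketch; @values stands in for your posted numbers):

    my @spread = map { xform( $_ ) } @values;
    print join( ' ', @spread ), "\n";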

    The result of applying that to your posted values is that the overall range of the distribution is reduced, but the values are more evenly distributed within that range:

    -31.6227766016838 -30 -28.2842712474619 -26.4575131106459
    -24.4948974278318 -22.3606797749979 -20 -17.3205080756888
    -14.142135623731 -10 -9.48683298050514 -8.94427190999916
    -8.36660026534076 -7.74596669241483 -7.07106781186548
    -6.32455532033676 -5.47722557505166 -4.47213595499958
    -3.16227766016838 0 3.16227766016838 3.33333166666625
    3.46410161513775 3.60555127546399 4 4.35889894354067
    4.47213595499958 4.58257569495584 4.69041575982343
    5.47722557505166 6.32455532033676 7.07106781186548
    7.74596669241483 8.36660026534076 8.94427190999916
    9.48683298050514 10 10.000049999875 10.0049987506246
    10.4880884817015 10.9544511501033 11.4017542509914
    13.4164078649987 13.7840487520902 14.1421391592644
    14.1456707158056 14.4913767461894 14.525839046334
    14.8323969741913 15.1657508881031 15.4919333848297
    17.606816861659 20 22.3606797749979 24.4948974278318
    26.4575131106459 28.2842712474619 30 31.6227766016838
    0.0108108044933228 329.999966002866

    Without knowing what your values represent or the algorithm(s) used by your neural net, I would expect this to make it easier for your NN to discriminate between the bulk of the values, which sit in the middle of your distribution.

    In the same vein, you might go further and use the cube root and multiply by 100:

    sub xform2 {
        my $v   = shift;
        my $sgn = $v <=> 0;                           # -1, 0 or 1: remember the sign
        $v      = $v * $sgn;                          # absolute value
        my $root3 = exp( log( $v || 1e-100 ) / 3 );   # cube root via exp/log; 1e-100 avoids log(0)
        return $sgn * $root3 * 100;                   # re-apply the sign, spread by a constant of 100
    }

    Giving:

    -215.443469003188 -208.00838230519 -200 -191.293118277239
    -181.712059283214 -170.99759466767 -158.74010519682
    -144.224957030741 -125.992104989487 -100 -96.548938460563
    -92.8317766722556 -88.7904001742601 -84.3432665301749
    -79.37005259841 -73.6806299728077 -66.943295008217
    -58.4803547642573 -46.4158883361278 0 46.4158883361278
    48.074969651913 49.3242414866094 50.6579701910089
    54.2883523318981 57.4889707894483 58.4803547642573
    59.4392195276313 60.3681073679769 66.943295008217
    73.6806299728077 79.37005259841 84.3432665301749
    88.7904001742601 92.8317766722556 96.548938460563 100
    100.000333332222 100.033322228391 103.228011545637
    106.265856918261 109.139288306111 121.644039911468
    123.856232963017 125.992125988168 126.013100174843
    128.057916498749 128.260861238441 130.059144685139
    132.000612179591 133.886590016434 145.809973582671
    158.74010519682 170.99759466767 181.712059283214
    191.293118277239 200 208.00838230519 215.443469003188
    1.05334832497567 1028.82757714918
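
    Both subs are instances of the same signed-power idea; a generalized sketch (not tested against your data; the exponent and scale are just knobs to tune):

    # signed power transform: spread small magnitudes apart, compress
    # large ones, while preserving the sign of the input
    sub xform_pow {
        my( $v, $power, $scale ) = @_;   # e.g. $power = 1/2 or 1/3
        my $sgn = $v <=> 0;
        return $sgn * ( abs( $v ) ** $power ) * $scale;
    }

    # xform_pow( $_, 1/2, 10  ) reproduces xform  above
    # xform_pow( $_, 1/3, 100 ) reproduces xform2 above (up to floating-point rounding)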

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Oh, thanks. I think the idea of a cube root times 100 is very interesting! :) (Tone doesn't come through text. That's sincere, not sarcasm.)
Re: Transformative functions...?
by BillKSmith (Monsignor) on Feb 13, 2017 at 22:38 UTC
Re: Transformative functions...?
by LanX (Saint) on Feb 13, 2017 at 20:17 UTC
    That's a very interesting question!

    I think discussing it would be even easier if you showed us the connection to Perl (probably a module you use, or some code) and provided some links to definitions and the underlying theory of "neural networks" and "transformative functions" in this context. :)

    Cheers Rolf
    (addicted to the Perl Programming Language and ☆☆☆☆ :)
    Je suis Charlie!

      The connection to Perl is simply that I'm writing my code in Perl. Here's the example code I'm playing with for transformative functions:
#!/usr/bin/perl
use strict;
use warnings;

# sample inputs, including two extreme outliers at the end
my @values = (
    -10, -9, -8, -7, -6, -5, -4, -3, -2, -1,
    -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0,
    0.1, 0.111111, 0.12, 0.13, 0.16, 0.19, 0.2, 0.21, 0.22, 0.3,
    0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.00001, 1.001, 1.1,
    1.2, 1.3, 1.8, 1.9, 2.000001, 2.001, 2.1, 2.11, 2.2, 2.3, 2.4,
    3.1, 4, 5, 6, 7, 8, 9, 10,
    1.16873493792849e-06, 1088.99977561893,
);

# globals used by the formats below
my $wval;
my $wexp;
my $wtanh;
my $wfisher;
my $we2;

# http://mathworld.wolfram.com/HyperbolicTangent.html
sub tanh($) {
    my $in = shift;
    my $a  = exp($in);
    my $b  = exp(-$in);
    return ($a - $b) / ($a + $b);
}

# https://support.office.com/en-us/article/FISHER-function-D656523C-5076-4F95-B87B-7741BF236C69
sub fisher($) {
    my $in = shift;
    return 0 if $in == 1;
    my $a = (1 + $in) / (1 - $in);
    return 0 if $a <= 0;
    return 0.5 * log($a);
}

# experimental variant of the Fisher formula, with exp() in place of log()
sub e2($) {
    my $in = shift;
    return 0 if $in == 1;
    return 0.5 * exp((1 + $in) / (1 - $in));
}

# apply every transform to each value and print one formatted row per value
map {
    $wval    = $_;
    $wexp    = exp($_);
    $wtanh   = &tanh($_);
    $wfisher = &fisher($_);
    $we2     = &e2($_);
    write;
} @values;

format STDOUT_TOP =
value     exponent(e)          tanh              fisher          e2
--------- -------------------- ----------------- --------------- ---------------
.

format STDOUT =
@###.#### @######.############ @###.############ @###.########## @###.##########
$wval, $wexp, $wtanh, $wfisher, $we2
.
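
      For a quick cross-check of the hand-rolled tanh, the Math::Trig core module also provides a tanh; a minimal standalone sketch (kept separate from the script above, since that script defines its own tanh):

#!/usr/bin/perl
use strict;
use warnings;
use Math::Trig qw(tanh);   # core module; exports tanh among others

# tanh squashes any real input into the open interval (-1, 1),
# which is one reason it is a popular transform for neural-net inputs
printf "%12.6g -> %+.6f\n", $_, tanh($_)
    for -10, -1, -0.1, 0, 0.1, 1, 10;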
      Mike
Re: Transformative functions...?
by Cow1337killr (Monk) on Feb 14, 2017 at 21:40 UTC
      Just plugging this in for you, Cow1337killr: a Perl interface to MXNet was recently released to CPAN; check out AI::MXNet.
