
Re: Karatsuba and Toom-cook multiplication

by choroba (Chancellor)
on Feb 11, 2014 at 22:58 UTC ( #1074531 )

in reply to Karatsuba and Toom-cook multiplication

Cool! I implemented Karatsuba in a programming competition years ago, too. Any plans to add Schönhage–Strassen?
لսႽ† ᥲᥒ⚪⟊Ⴙᘓᖇ Ꮅᘓᖇ⎱ Ⴙᥲ𝇋ƙᘓᖇ

Re^2: Karatsuba and Toom-cook multiplication
by ohcamacj (Beadle) on Feb 12, 2014 at 00:43 UTC

    No . . . this was for a pragmatic reason. The recent recursive function challenge lacked any Perl solutions as of a week or two ago. I thought, "WTF? Surely that would only be about 130 bytes if I used Math::BigInt," which is installed on the golf server.

    The problem was that it took about 7 or 8 seconds to execute using Math::BigInt, and the execution timeout was only about 3 seconds. Math::BigInt::GMP would have solved it almost instantly, but it wasn't installed on the golf server, and I lacked permission to install modules (or the patience to beg for it to be installed).

    My first attempt was actually to use unpack() to print the solution from binary digits, but I found that there's a 20 kB size limit on solutions, which prevents using unpack() to print out a 53,000-digit number.

    (Execution times might be misremembered).
    Karatsuba cut the time down from 7 or 8 seconds to about 6.5 seconds. A change whose value I initially didn't understand, from

        $blog10 = int($mindigits / 2);

    to

        my $low  = $mindigits - 1;
        my $high = int($maxdigits / 2);
        my $blog10 = $low <= $high ? $low : $high;

    cut the execution time from about 6.5 seconds to about 4.5 seconds.
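    For readers following along, here is a minimal, un-golfed sketch of the Karatsuba recursion with that split-point choice. It uses Math::BigInt operands for clarity; the golfed version avoided Math::BigInt entirely, and names like kmul are illustrative, not from the original.

```perl
use strict;
use warnings;
use Math::BigInt;

# Un-golfed sketch of Karatsuba with the split-point choice quoted
# above. Operands are assumed non-negative Math::BigInt values.
sub kmul {
    my ($x, $y) = @_;
    my ($mindigits, $maxdigits) =
        sort { $a <=> $b } (length("$x"), length("$y"));
    return $x * $y if $mindigits <= 8;    # small enough: multiply directly

    my $low    = $mindigits - 1;
    my $high   = int($maxdigits / 2);
    my $blog10 = $low <= $high ? $low : $high;    # digits in the low halves
    my $base   = Math::BigInt->new(10)->bpow($blog10);

    my ($x1, $x0) = ($x / $base, $x % $base);     # BigInt '/' is integer div
    my ($y1, $y0) = ($y / $base, $y % $base);

    my $z2 = kmul($x1, $y1);
    my $z0 = kmul($x0, $y0);
    my $z1 = kmul($x1 + $x0, $y1 + $y0) - $z2 - $z0;   # saves one multiply

    return $z2 * $base * $base + $z1 * $base + $z0;
}
```

    The three recursive calls replace the four multiplications of the schoolbook split, which is where Karatsuba's advantage comes from.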

    A lot of benchmarking, profiling, and false leads later, I finally had a version (more complex and less readable than above) which finished in 3.3 seconds or so.

    That was still just barely over the execution timeout.

    In it, carrying is done manually. 999999999 * 999999999 is just under 10**18, and a 64-bit unsigned integer holds up to about 1.8 * 10**19, so such a product can be added to itself 18 times before it begins to lose precision. To be safe, I did a carry every 16 additions.
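    The delayed-carry idea can be sketched as follows. This is not the golfed code: it uses base-10**7 limbs so that every intermediate stays below 2**53 and remains exact even if Perl falls back to doubles, whereas the post used nine-digit chunks with 64-bit headroom. mul_limbs, propagate, and limbs_to_str are hypothetical names.

```perl
use strict;
use warnings;
use constant BASE => 10_000_000;   # 7-digit limbs

# Numbers are array refs of base-10**7 limbs, least significant
# first. Each limb product is below 10**14, so the accumulator can
# absorb many of them; carries are propagated only every
# $carry_lim rows instead of after every addition.
my $carry_lim = 16;

sub mul_limbs {
    my ($a, $b) = @_;
    my @acc = (0) x (@$a + @$b);
    my $pending = 0;
    for my $i (0 .. $#$a) {
        $acc[$i + $_] += $a->[$i] * $b->[$_] for 0 .. $#$b;
        if (++$pending == $carry_lim) {    # propagate before overflow
            propagate(\@acc);
            $pending = 0;
        }
    }
    propagate(\@acc);
    pop @acc while @acc > 1 && !$acc[-1];  # trim leading zero limbs
    return \@acc;
}

sub propagate {
    my ($acc) = @_;
    my $carry = 0;
    for my $d (@$acc) {                    # $d aliases each limb
        $d += $carry;
        $carry = int($d / BASE);
        $d %= BASE;
    }
}

sub limbs_to_str {                         # limbs back to a decimal string
    my ($l) = @_;
    my $s = $l->[-1];
    $s .= sprintf '%07d', $l->[$_] for reverse 0 .. $#$l - 1;
    return $s;
}
```

    Raising $carry_lim means fewer passes over the accumulator, which is exactly the knob the tuning described below turns.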

    But how often does 999999999 * 999999999 really occur in the input data? Isn't something like 333333333 * 333333333 statistically a lot more likely?

    So, I wrote a script to assist in performing a binary search for the maximum length of time I could postpone carrying, within each multiplication of the input. It printed out stuff like

        Sample id 29, 38863124 ... 04402533 (2503 digits)
        With carry_lim at 50, result 38863124 ... 04402533 (2503 digits) Succeeded.
        With carry_lim at 75, result 38863124 ... 04402533 (2503 digits) Failed.
    At which point, I'd tweak the parameters and rerun.

    Then, copying the results (carry_lim set just 1 below failure, for elements 29 - 37) back to the first script gave a substantial performance improvement, from 3.3 seconds to about 2.8 seconds.
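    The search the helper script assisted with is an ordinary binary search for the largest carry_lim that still succeeds, assuming successes form a prefix of the range. A generic sketch, with max_passing and the predicate as illustrative names rather than anything from the original scripts:

```perl
use strict;
use warnings;

# Find the largest value in [$lo, $hi] for which $check succeeds,
# given that every value below a passing value also passes.
sub max_passing {
    my ($lo, $hi, $check) = @_;
    while ($lo < $hi) {
        my $mid = int(($lo + $hi + 1) / 2);    # round up to make progress
        if ($check->($mid)) { $lo = $mid }     # $mid works: search higher
        else                { $hi = $mid - 1 } # $mid fails: search lower
    }
    return $lo;
}
```

    In the post's setting, $check would rerun one sample's multiplication with carry_lim set to $mid and compare the result against a known-good answer; the value returned is then exactly "just 1 below failure".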

    Toom-Cook took so long to comprehend, write, profile, and benchmark that it wasn't ready for use before the challenge deadline had passed.
