Confirming what we already knew

by AssFace (Pilgrim)
on Mar 05, 2003 at 02:44 UTC

I have been programming a genetic analysis tool to run on stock data.
Perl has served me well: it is easy to write quickly, and for 90% of what I do it is plenty fast. Whether the code executes in 1 second or 10 seconds doesn't really matter to me if I'm only going to run it 1 to 5 times a day.
But this particular code does quite a lot of things, so it took some time to get through it all. I need to execute it at least once a day, so it needs to finish all of its work in under 20 hours or so (leaving 4 hours to run the many other programs that work on the data it returns).
The Perl code, depending on what machine I was on, took at best ~196 seconds to run one dataset (one stock ticker). I need to run over 2000 stocks through it every day, so that was going to take well over a day on a single machine. I could easily get around that with a cluster, but I still wanted something a little faster.

So I rewrote it all in C; here are the stats I saw after doing so:

(keep in mind that these are on two different machines, and also that the Perl code writes out five short lines of text to a file at the very end of each stock analysis)

The Perl version was run on a P4 2G with half a gig of RAM and an IDE HD, running Win2K and ActiveState Perl 5.6.1. The C version was compiled and run on an Athlon 1G with half a gig of RAM and an IDE HD, running FreeBSD 4.6-STABLE, using gcc version 2.95.4.

On the P4 2G, the Perl code ran through one ticker in ~196 seconds.
The C code with no optimizations, running on the Athlon 1G, ran in ~2.8 seconds. Compiling with -O2 or -O3 brought it down to ~1.4 seconds.

So on a slower machine, it runs about 140 times faster.

The Perl file is around 48K in size, with 959 lines (100-200 of which are just comments). The C file is 768 lines (50-100 of which are comments), and the compiled binary is 11K with no optimizations and 9K with optimizations.

I can now run all the stocks on one machine in under an hour (it will be run on an Athlon XP 2.1G with 256M of RAM) - so that will allow me to run many more variations of the tests on the machine (I will likely still be using a cluster) as well as neural net analysis, among other things.


I don't think the writing to a file at the end of the Perl code, which is missing from the C code, accounts for the 190+ second difference. At most it might add one more second to the C version, and even that is questionable.
I'm curious how Java would compare, since it would be much easier to write with the String object to work with. Java 1.3 is faster, but 1.4 has regexes built in.
I also still haven't tested it out under Linux with the newer gcc 3.2 or the Intel compiler (I'm not sure how well - if at all - icc will help on an Athlon XP).


The code itself is pretty small and doesn't take much in terms of resources.

It reads in rows of data (trading days) for a stock, then iterates over it all thousands of times, evolving how it analyzes the data, and finally outputs the results (the C version to stdout, the Perl version to a file).
The job will likely end up being driven by Perl, which picks which ticker to run and passes it off to the C code. I don't think the speed of a Perl for loop will slow things down too much.


So the point of all that: Perl is excellent for development, is easy to write quickly, and ports over to C fairly easily. It is plenty fast for the bulk of what you want to do, but for more intense things you will of course want C.

Replies are listed 'Best First'.
Re: Confirming what we already knew
by toma (Vicar) on Mar 05, 2003 at 07:08 UTC
    You are correct that C is much faster than perl at number crunching. In this case, perl is a screwdriver and the problem is a nail. Perl could compete if you use something like PDL to handle the math in matrix form.
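    (A minimal sketch of that idea, assuming the closing prices live in a plain array @rddt as in the original post - the 12-day moving-average condition becomes whole-array operations instead of a Perl loop:)

        use PDL;

        my $close = pdl(@rddt);       # closing prices (assumed layout)

        # running sum; cumusumover is PDL's cumulative sum
        my $cs = $close->cumusumover;

        # 12-day moving average for every day at once (window ending at
        # each day; shift the slices by one to exclude the current day,
        # as the original ccond() does)
        my $ma12 = ($cs->slice('12:-1') - $cs->slice('0:-13')) / 12;

        # distance of each close from its trailing average, one vector op
        my $signal = $close->slice('12:-1') - $ma12;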

    Otherwise, you can write your own Inline::C or XS code to speed up the hot spots. I think Inline::C could work very well for your application: you get the speed of C with perl's ease of development. An electric nail-gun!
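    (A sketch of what that might look like - the function name and scoring rule here are made up for illustration, not taken from the original code:)

        use Inline C => <<'END_C';
        /* count the days in a window whose return from the base day
           meets a target fraction - illustrative only */
        int count_hits(SV *prices_ref, int start, int len, double target) {
            AV *prices = (AV *)SvRV(prices_ref);
            double base = SvNV(*av_fetch(prices, start, 0));
            int i, hits = 0;
            for (i = start; i < start + len; i++) {
                double p = SvNV(*av_fetch(prices, i, 0));
                if ((p - base) / base >= target) hits++;
            }
            return hits;
        }
        END_C

        # called from perl like any other sub
        my $hits = count_hits(\@rddt, 64, 41, 0.03);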

    For analyzing stocks I like R, especially as a post-processor for the data. It has CRAN, which takes after CPAN.

    It should work perfectly the first time! - toma

      interesting - I wasn't familiar with R or S prior to this. I'll have to read up on that - from my two-second glance at the page, it looks like it is good at graphically representing data?

      I'll look into the Inline::C stuff in the future, but for now I'm just going to use Perl to grab a directory, read in all of the filenames there, and then loop over those, feeding one at a time into ForkManager, which will then spawn off my C program. This works nicely on clustered systems, but will be fine on a single machine as well.
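      (A minimal sketch of that driver, assuming the compiled C program is called ./analyze and takes a data-file path - the names here are placeholders:)

          use Parallel::ForkManager;

          my $pm = Parallel::ForkManager->new(4);   # max concurrent children

          opendir my $dh, 'data' or die "Can't open data dir: $!";
          my @files = grep { -f "data/$_" } readdir $dh;
          closedir $dh;

          for my $file (@files) {
              $pm->start and next;                  # parent queues the next file
              system('./analyze', "data/$file") == 0
                  or warn "analyze failed on $file: $?";
              $pm->finish;                          # child exits
          }
          $pm->wait_all_children;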

      That way I can write the controlling and analysis code in Perl and that suits my needs just fine.
      For future projects I will definitely look to Inline::C if I run into a performance question again.
        R is really good at creating statistical models from your data. It works interactively and as a programming language. It can create decent graphs, which is helpful for interactive model-building.

        Typical models in R include linear, generalized linear, generalized additive, local regression, tree-based, and nonlinear. A good book on this is Statistical Models in S by Chambers and Hastie.

        I have used it for stock analysis and for web log file analysis. It is not particularly fast, so you would benefit from performing data reduction on your dataset before handing it off to R.

        It should work perfectly the first time! - toma

Re: Confirming what we already knew
by revdiablo (Prior) on Mar 05, 2003 at 03:07 UTC

    While you do have a valid point about using the right tool for the job, your comparison is not very useful in and of itself. Without seeing the code in question, as mentioned in the reply above, it's hard to tell whether the differences you are seeing are based on problems and inefficiencies with the language itself, or problems and inefficiencies with your code. And though I would like to give you the benefit of the doubt here, I would be shocked if there weren't a few changes to the Perl program that would result in a substantial increase in performance. That is not to say your effort was wasted, just that this may not necessarily be a valid comparison.

      very true - as is the case with any code that is ported around I suppose.

      I can say that the code looks nearly identical between the two - where it differs is in the built-in functions.
      In Perl, if I want to split a string into an array, I call split - in C I call strsep. The way I do it in Perl, I then go directly to the spot (two spots, actually) that I want in the array and use them. In C, I iterate over the strsep calls. So it gets the same end result, but different things are going on - so yes, the two will differ in how efficiently they do it.
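      (As an aside, split takes an optional LIMIT, and a list slice pulls both wanted fields in one statement - the field positions below are invented for illustration:)

          # grab fields 0 and 3 without splitting the whole record
          my ($date, $close) = (split /,/, $line, 5)[0, 3];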

      Perl allows me to write it all extremely quickly because I don't have to constantly worry about type checking like I do in C - but that is also part of what slows Perl down.
      I'm iterating over 1000+ trading days, and that iteration is done over 3000 times. Aside from the counters that are incremented in loops, all the math is simple floating point math - all things that I think C is just better suited for. Perl is very good at many things, especially dealing with strings. For this, once the data is loaded in from a file, the only other time it deals with a string is when it is keeping track of its ideal algorithm.

      but yes, you are right that the Perl code might have opportunities for optimization - but the same goes for the C code. I know Perl much better than I know C, so I would imagine someone who knows C very well could make it even faster. But whether I can run through 2000 stocks in 20 minutes or an hour on a single machine doesn't matter much to me, since I will be clustering them and will get it well below that anyway.

      Perl does what it is great at - dealing with strings - so I use it to get all of the data and to update the data once I have analyzed it.
      Since I work with it frequently, I find it is great for getting a rough draft of a program done so that it is functional; from there I can see what speed increases I need, if any. If I don't need any, leave it in Perl. If I need some but really want the ease of string manipulation, look to Java (though that currently isn't an option in clustering - it should be within a year or two). And if that still isn't fast enough, C.
      Obviously it varies with the task at hand.
Re: Confirming what we already knew
by Elian (Parson) on Mar 05, 2003 at 15:16 UTC
    This doesn't surprise me in the least. Seeing a speedup of a factor of 200 or so isn't unusual when turning an algorithm from perl to C. Like it or not, perl has a fair number of inefficiencies built into it, and some of the things it requires to happen, like operator overloading and proper tie behaviour, have unavoidable overhead.

    C's a lot more limited, and as such can make more assumptions at compile time, which can generate better code, and that's just fine. (Plus, of course, there's been far more work on any C compiler's optimizer than on perl's optimizer, which can make a pretty darned big difference)

    Really does look like you did the appropriate thing--used perl for a fast knock-together project, and moved to a faster language when it turned out to be necessary. FWIW, I don't think you'd have seen much, if any, speed win from going with Java. The cost of moving to Java from perl would've been about the same as moving to C, and C's definitely much faster than Java...

      Java has actually made a lot of progress and is pretty impressive considering how easy it is to write.

      I would guess that the stuff I am trying to do would be around 5-10 seconds on my laptop - still slow compared to the C of course.

      Java has actually slowed down in the 1.4.x releases - but if you have the IBM JDK and the 1.3.x version, it can catch you off guard how fast it is.

      The main issue I have with it is not the speed nor overhead (none of the things I do take up all that much RAM), but it is that Java currently can't be used in clustering - at least not in the ways that I'm familiar with. There are people working on that right now, so perhaps soon enough that will be an option.

      I personally would choose C over Java in terms of speed, but I would choose Java over C++ - certainly not something I would have imagined myself saying in 1995, and probably not even two years ago. But lately it has gotten pretty fast... (and then slowed down a bit :) - depending on which JDK "brand" and which release)
        While Java doesn't inherently suck too badly, at least no more than most other languages, I stand by my statement--it would've been as much work to move from perl to Java as from perl to C, and his win would've been much less. Dunno about C++, as I find that when I write C++ code, the only difference for me between C and C++ is the comment style, so it's not like I'm taking advantage of the language... :)

        The design of the JVM (not Java the language, mind, but the JVM itself) puts some inherent limits on what can be done to run fast without extraordinary work, but this isn't really the place for that particular discussion.

Re: Confirming what we already knew
by Anonymous Monk on Mar 05, 2003 at 02:57 UTC

    I don't suppose you would/could sanitise the two programs to remove proprietary information and post them for us to view/compare?

      Part of me wants to do that just for the sake of discussion and open source, etc etc. But the other part of me is paranoid about it :)

      The general idea of both sets of code is pretty much exactly the same - that is what is so nice about Perl - it translates nearly exactly over to C when you are writing it.

      The differences are in split in Perl and strsep in C (although I think strsep might be a BSD thing rather than ANSI - that said, Linux has it too) - which leads to very slight differences in how I get the info out.
      In Perl I split, then grab the known positions I want from the array it returns. In C I have to iterate on calls to strsep up to the spot I want, and it returns a string (or rather a pointer to a string).

      In C I pre-increment (++i) and in Perl I post-increment (i++) when they are on lines by themselves, since I read that C is slightly more efficient with the former.

      For the most part though, it is exactly the same, only differing where Perl and C don't have matching built in functionality.

      Again, in Perl I can do:
      $strWhatever = $strA . ',' . $strB . ',' . $strC;
      (that is probably a great example of the sort of inefficiency another response on this thread alluded to - I would guess the above is like Java, where it is better to append to a StringBuffer instead of creating many intermediate strings and letting the runtime deal with it all)
      In C I have to strcat the series together.
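      (For what it's worth, the Perl side of that is usually written with join, which assembles the result in a single pass:)

          $strWhatever = join ',', $strA, $strB, $strC;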


      The code in general, in ugly not-even-pseudo-code (I'll try to note in the respective comments where Perl and C are treated differently - perhaps shedding light on what is causing the Perl slowdown... aside from Perl being an interpreted language and all):


      if(fileLength >= 1200 rows)
          arrClosingDayData = grabTicker(TickerName)

      #in Perl this loads the file right into an array, and later splits individual entries to grab certain fields of the csv, assigning them to a different array:
      #open(TICKERDATA, $tickerName) or die "Can't open TICKERDATA: $!\n";
      #my @allTickerData = <TICKERDATA>;
      #close(TICKERDATA) or die "Can't close TICKERDATA: $!\n";

      /*in C it is one string, which is iterated along newlines; we then strsep each line on the ",", grab the fields we want, and populate an array of known size with those float values*/

      Then we generate a trading algorithm to test, to see how well it performs with certain variables.

      Then we iterate N times (there is a range that shows the most benefit, and it is over 3K times - the amount is the same in both Perl and C):

      for(0..N)
          foreach(arrClosingDayData)
              #test how the algorithm performs on a range of data around this
              #date. it is just simple floating point math. counters are
              #incremented to keep track of when this trading method is right
              #and when it is wrong.


      When it is done with the trading series, the performance score of that algorithm is stored, and then (assuming we aren't at the end of the outer loop) a random number of variables in that algorithm are evolved in various ways.

      That ends one cycle of the outer loop - then it goes back in, testing out the new version of the trading algorithm - if it performs better, then it becomes the new value to shoot for, etc etc.

      When that outer loop is finished, we are done, and the stats are output - Perl to a file, C to stdout.

      The data that is output to files is later scanned by other perl scripts that feed it into other things - but that is not important here.

      The only real differences in the programming are the built-in functions of Perl/C and how the file is read in initially. Taking all of the other code out of the Perl version and timing only the read-in gives "TotalTime: 0 wallclock secs ( 0.16 usr + 0.00 sys = 0.16 CPU)" on a Mobile Athlon 1G laptop.
      So I don't think that is the hold-up.
        that is what is so nice about Perl - it translates nearly exactly over to C when you are writing it

        Perhaps that's part of your problem. I don't mean that in a nasty way - I know that when I started writing Perl, my code looked a lot like C too. The secret to writing Perl that runs fast is to use Perl's built-in functions to do the work for you, since they are written in optimised C (albeit optimised for the general case). The more lines of Perl code you write, the more time Perl has to spend branching and switching between ops. Whereas if you can pare your code down so that the work gets done by map, foreach, tr, regexes etc., the execution time should drop.

        It really raises a red flag for me that your Perl implementation ended up longer than the equivalent thing written in C. These days when I have to write C I get frustrated at how many lines of code I have to execute to do anything useful.
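        As a concrete example - a sketch, not tested against your data - the read-and-reorder loop from your posted code (split, grab field 3, unshift) can collapse to a single statement whose work happens inside perl's C-level ops:

            # field 3 of each CSV row, flipped from newest-first to oldest-first;
            # the LIMIT of 5 assumes the wanted column is among the first five
            my @rddt = reverse map { (split /,/, $_, 5)[3] } @allTickerData;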

        My best guess, based on your brief description, is that when you split your float data into the array, the C version performs the ascii-to-binary conversion once, the array iterated over is a float * or double *, and all subsequent processing of these numbers is performed on them in their binary form.

        In the Perl version, the numbers are maintained in the array in their ascii form, and each time any calculation with or comparison of them is done, perl has to perform the conversion to binary (though this step can be skipped if the scalar was used for numeric purposes previously, I think, as perl keeps a copy of the binary representation once a scalar has been used for math), do the math, and then convert back to the ascii representation and store.

        The upshot is that every calculation between two floats (e.g. num1 *= num2;) in the C version comes down to something like:

            load reg a, num1    ; move 8 bytes from memory (probably cache);
                                ; maybe 2 or 4 clock cycles barring stalls, which, given that
                                ; the C array is probably contiguous memory, probably happens
                                ; every 128K values after the array is initially addressed,
                                ; depending on the size of the L1/L2 caches and what else the
                                ; surrounding code is doing
            load reg b, num2    ; ditto
            fmul reg a, reg b   ; a floating point processor instruction;
                                ; depending on the processor could be 1 to 10 or maybe 20
                                ; clock cycles
            store reg a, num1   ; 8 bytes stored; another 2/4 clock cycles

        maybe 30/40 clock cycles at most, and usually much less.

        Whereas for Perl, the equivalent processor instructions involve

        1. locating the base SV,
        2. indexing that to find the XVPLV,
        3. locating, and possibly performing ascii-to-binary conversion on, the index var,
        4. then using the value to calculate the offset into the storage pointed at by the XVPLV to
        5. locate the SV pointing to the float element.
        6. Loading the base of that SV,
        7. checking to see if the NOK flag is set.
        8. If it is, load the previous binary value of this scalar.
        9. If not, chase the XPVNV to the ascii representation.
        10. Read the string byte by byte in a loop and perform the math to convert it to binary form.
        11. Repeat from step 1 for the second variable.
        12. Finally we have the two floats in binary form, but in (temporary?) storage, not registers.
        13. Perform the actual math using essentially the same steps as outlined above for the C version.
        14. Now perform the reverse of the first 10 steps, twice, remembering we must always do the binary-to-ascii conversion on store, as the next use of the variable may be as a string (e.g. print), but that we also store both binary reps (which may save considerable time if the next use is numeric).
        15. Do lots of flag checking/setting and other housekeeping.

        I make no claims for the above being complete, correct or accurate in any way whatsoever, but it gives a feel for what's involved.

        As you can see, the process of doing arithmetic on two floats held in an array in perl is considerably more involved than in C, and takes on the order of 100s if not low 1000s of clock cycles, as opposed to 10s in C. That's the price you pay for Perl's infinitely simpler and more flexible type polymorphism, DWIMery and transparent housekeeping.
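        (If you want to watch the caching from steps 7-8 happen, Devel::Peek's Dump shows an SV's flags directly - a minimal sketch; the exact flag names printed vary a little by perl version:)

            use Devel::Peek;

            my $x = "3.14";    # a number that arrived as a string, as from a file
            Dump($x);          # FLAGS = (POK,pPOK): string value only
            my $y = $x + 0;    # numeric use forces the ascii-to-binary conversion
            Dump($x);          # FLAGS now include NOK/pNOK: the double is cached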

        It makes me wonder if Perl 6 couldn't reverse the magic used by Perl 5 as far as numbers are concerned. That is to say, once a value has been used as a number, couldn't the ascii representation of that var be flagged as 'dirty' and not maintained until such time as an attempt is made to use its contents in a non-numeric capacity? I.e. as well as the IOK and NOK flags, have an AOK flag, allowing the ascii representation not to be updated until necessary.

        That said, in the above sequence it's the pointer chasing and flag waving required by perl's infinitely flexible array representation that makes the biggest difference between it and the C float-array process, so it probably wouldn't make a whole heap of difference.

        The upshot is that if you want to do math-intensive programming, use a language that lends itself to that, or find a module that allows you to hand over the storage and processing of numeric data to a library written in such a language.

        It would be nice to think that our favorite language could be extended in its upcoming 'new' form to allow us to give clues, when we declare vars, as to their most prevalent usage, and that it could then transparently (and without too much overhead) allocate, store and manipulate them in ways more efficient for that usage, whilst retaining the DWIMery and flexibility of the standard Perl data representations. (You have no idea how hard it was to write that sentence without mentioning types :). I'm sort of thinking about the perl 6 attributes here.


        Examine what is said, not who speaks.
        1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
        2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible
        3) Any sufficiently advanced technology is indistinguishable from magic.
        Arthur C. Clarke.
Re: Confirming what we already knew
by AssFace (Pilgrim) on Mar 05, 2003 at 21:10 UTC
    Okay - the code length difference is better explained: there were two other functions in there that were used for manipulating files after this finished executing - the C version doesn't have those.
    So, to be fair, this is the code with all the extra stuff stripped out, and no real comments.
    I also removed the output to a file at the end. It has benchmark code in there.
    I renamed things, but it should still get across the idea of what is being done.
    use Benchmark;

    $tg   = 3000;
    $fip  = 0.03;
    $lfv  = 41;
    $ccc  = 20.00;
    $pond = 100;
    @rddt = ();
    $mbvr = 0;

    sub analyzeThis {
        my ($tickerName) = @_;
        open(TICKERDATA, $tickerName) or die "Can't open TICKERDATA: $!\n";
        my @allTickerData = <TICKERDATA>;
        close(TICKERDATA) or die "Can't close TICKERDATA: $!\n";
        shift @allTickerData; #strip out the first row of descriptive text
        @rddt = ();
        #this also reorders the data from what was newest-to-oldest in the
        #array to now being oldest-to-newest (better for our needs)
        for(my $i = 0; $i < scalar(@allTickerData); $i++){
            my @tempArray = split(',' , $allTickerData[$i]);
            unshift @rddt , $tempArray[3];
        }
        if(scalar(@rddt) > 1200){
            my $loopCount = scalar(@rddt) - 1200;
            for(my $i = 0; $i < $loopCount; $i++){
                shift @rddt;
            }
        }

        ####################################################################
        #up to here is only run once and takes less than 1 wallclock second#
        ####################################################################

        $bahh = "";
        $bvr  = 0;
        my $cond = int rand 6;
        my $oper = int rand 3;
        my $pon  = int rand 2;
        my $amt  = rand 20;
        $amt = sprintf("%0.2f", $amt);
        if($pon == 0){ $amt = $amt * -1; }
        my $ifPos = 0; #currently will stay in this "always on" state for now
        $bahh = $cond . "," . $oper . "," . $amt . "," . $ifPos;

        for(my $mg = 0; $mg < $tg; $mg++){
            my $shortLoopCount = (int rand 5) + 1; #loop at least one time, max 5 times
            for(my $aa = 0; $aa < $shortLoopCount; $aa++){
                my $modMe = int rand 7;
                if($modMe == 0){
                    $cond = int rand 6;
                }
                elsif($modMe == 1){
                    my $oper = int rand 3;
                }
                elsif($modMe == 2){
                    $pon = int rand 2;
                    $amt = rand 20;
                    $amt = sprintf("%0.2f", $amt);
                    if($pon == 0){ $amt = $amt * -1; }
                }
                elsif($modMe == 3){
                    $ifPos = 0; #switched this to "always on"
                }
                elsif($modMe == 4){
                    my @tempArray = split(',',$bahh);
                    my $tempcond  = int rand 6;
                    $cond = int ($tempArray[0]/2 + $tempcond/2);
                }
                elsif($modMe == 5){
                    my @tempArray = split(',',$bahh);
                    my $tempoper  = int rand 3;
                    $oper = int ($tempArray[1]/2 + $tempoper/2);
                }
                elsif($modMe == 6){
                    my @tempArray = split(',',$bahh);
                    my $tempamt   = rand 20;
                    my $tempPON   = rand 2;
                    $amt = $tempArray[2]/2 + $tempamt/2;
                    $amt = sprintf("%0.2f", $amt);
                    if($tempPON == 0){ $amt = $amt * -1; }
                }
            }
            my $car = $cond . "," . $oper . "," . $amt . "," . $ifPos;
            my $csc = 0;
            my $cpp = 0;
            my $cnp = 0;
            my $tp  = 0;
            for(my $i = 64; $i < (scalar(@rddt) - (2 * $pond)); $i++){
                $cv = ccond($cond, $i);
                my $tv = mvsrt($cv, $oper, $amt);
                $checkValue = 0;
                if($tv == 0){
                    if($ifPos == 0){
                        for(my $ii = $i; $ii < ($i + $lfv + 1); $ii++){
                            $checkValue = (($rddt[$ii]) - ($rddt[$i]))/$rddt[$i];
                            if($checkValue >= $fip){
                                $cpp++;
                                last;
                            }
                        }
                    }
                    else{
                        if($rddt[$i + $lfv] < $rddt[$i]){ $cnp++; }
                    }
                }
                else{
                    my $otherPos = 0;
                    if($ifPos == 0){ $otherPos = 1; }
                    if($otherPos == 0){
                        for(my $ii = $i; $ii < $i + $lfv + 1; $ii++){
                            $checkValue = (($rddt[$ii]) - ($rddt[$i]))/$rddt[$i];
                            if($checkValue >= $fip){
                                $cpp++;
                                last;
                            }
                        }
                    }
                    else{
                        if($rddt[$i + $lfv] < $rddt[$i]){ $cnp++; }
                    }
                }
                $tp++;
            }
            $csc = ($cpp + $cnp)/$tp;
            if($csc > $bvr){
                $bvr  = $csc;
                $bahh = $car;
            }
        }
        $mbvr = $bvr;
        return $bahh;
    }

    sub mvsrt{
        my (@params) = @_;
        my $returnValue = 0;
        if($params[1] == 0){
            if($params[0] < $params[2]){ $returnValue = 0; }
            else{ $returnValue = 1; }
        }
        elsif($params[1] == 1){
            if($params[0] > $params[2]){ $returnValue = 0; }
            else{ $returnValue = 1; }
        }
        elsif($params[1] == 2){
            if($params[0] == $params[2]){ $returnValue = 0; }
            else{ $returnValue = 1; }
        }
        return $returnValue;
    }

    sub ccond{
        my (@params) = @_;
        my $returnValue = 0;
        if($params[0] == 0){
            my $tt = 0;
            my $average = 0;
            for(my $i = $params[1] - 12; $i < $params[1]; $i++){
                $tt = $tt + $rddt[$i];
            }
            $average = $tt / 12;
            $returnValue = ($rddt[$params[1]] - $average);
        }
        elsif($params[0] == 1){
            my $tt = 0;
            my $average = 0;
            for(my $i = $params[1] - 50; $i < $params[1]; $i++){
                $tt = $tt + $rddt[$i];
            }
            $average = $tt / 50;
            $returnValue = ($rddt[$params[1]] - $average);
        }
        elsif($params[0] == 2){
            my $absMin = $rddt[$params[1]-5];
            for(my $i = $params[1] - 5; $i < $params[1]; $i++){
                if($rddt[$i] < $absMin){ $absMin = $rddt[$i]; }
            }
            $returnValue = ($rddt[$params[1]] - $absMin);
        }
        elsif($params[0] == 3){
            my $absMin = $rddt[$params[1]-63];
            for(my $i = $params[1] - 63; $i < $params[1]; $i++){
                if($rddt[$i] < $absMin){ $absMin = $rddt[$i]; }
            }
            $returnValue = $rddt[$params[1]] - $absMin;
        }
        elsif($params[0] == 4){
            my $absMax = 0;
            for(my $i = $params[1] - 5; $i < $params[1]; $i++){
                if($rddt[$i] > $absMax){ $absMax = $rddt[$i]; }
            }
            $returnValue = $rddt[$params[1]] - $absMax;
        }
        elsif($params[0] == 5){
            my $absMax = 0;
            for(my $i = $params[1] - 50; $i < $params[1]; $i++){
                if($rddt[$i] > $absMax){ $absMax = $rddt[$i]; }
            }
            $returnValue = $rddt[$params[1]] - $absMax;
        }
        #no err check
        $returnValue = sprintf("%0.2f", $returnValue);
        return $returnValue;
    }

    #################
    #main()
    #################
    opendir(DATA_DIR,"data");
    my @tickers = grep { $_ ne "." and $_ ne ".." and $_ ne "returns" } readdir DATA_DIR;
    closedir(DATA_DIR);
    foreach(@tickers){
        #check that the file exists and has >= 1200 rows of data before the
        #ticker is passed into the analysis routine
        my $lines = 0;
        open(FILE, "data/$_") or die "Can't open $_: $!";
        while (sysread FILE, $buffer, 4096) {
            $lines += ($buffer =~ tr/\n//);
        }
        close FILE;
        #got the line count, now do the check
        if($lines >= 1200){
            my $ttTime0 = new Benchmark;
            my $bestA = analyzeThis("data/$_");
            my $ttTime1 = new Benchmark;
            my $ttDifference = timediff($ttTime1, $ttTime0);
            print "\n ttTime: " . timestr($ttDifference) . "\n";
            print '************************' , "\n";
        }
    }
      While I don't know how much of the difference between the Perl and C code this makes up, after a cursory scan I do see a number of inefficiencies in the Perl code. The first thing that jumps out at me is all the my declarations inside the loops. If you are going to be declaring these variables so many times, it's more efficient to do so outside the loops. Here's a quick benchmark:
      Benchmark::cmpthese(5000, {
          'outside' => sub {
              my $x; my $y; my @z; my $i;
              for ($i = 0; $i < 1000; $i++) { $x = $i; $y = $x; @z = ($x, $y) }
          },
          'inside' => sub {
              for (my $i = 0; $i < 1000; $i++) { my $x = $i; my $y = $x; my @z = ($x, $y) }
          }
      });

                Rate  inside outside
      inside   120/s      --    -13%
      outside  138/s     15%      --
      So in this example, declaring the my variables outside of the loop gives a 15% speed-up. Another problem I found was calculating the loop-limiting condition inside the for loop, e.g.:
      for(my $i = 64; $i < (scalar(@rddt) - (2 * $pond)); $i++){
      Since it doesn't look like the size of @rddt or the value of $pond is changing, you should do that calculation outside of the loop. Here are the benchmarks:
      @rddt = (1) x 1000;
      $pond = 32;
      Benchmark::cmpthese(500, {
          'inside' => sub {
              for ( my $i = 64 ; $i < ( scalar(@rddt) - ( 2 * $pond ) ) ; $i++ ) {
                  $x++;
              }
          },
          'outside' => sub {
              $limit = scalar(@rddt) - ( 2 * $pond );
              for ( my $i = 64 ; $i < $limit ; $i++ ) {
                  $x++;
              }
          },
      });

                Rate  inside outside
      inside   273/s      --    -39%
      outside  450/s     65%      --
      A 65% speed up here. There are probably lots of other optimizations that you could do in this perl code. Those two were the most obvious.
        *smacks forehead*
        I got sloppy with the "my"s for sure. Ugh. Normally I'm fairly careful about such things - but I think I slipped up here for sure.
        I went through and took out all of the "my" declarations and did put them outside of loops.

        The result of that benchmarked at 268 seconds - *but* this is on my laptop that normally does it in 305 seconds (Athlon M 1G on WinXP with Active State Perl and half a gig of RAM). So a good speed up so far.

        Then I changed the calculation to sit just before its for loop instead of up at the top.
        That ended up one second slower than the above code.
        Moving it up above the outermost loop instead made it go from 268 to 286.
        Then just putting a plain number in there, with no calculation at all, brings it down to 264 seconds.
        That doesn't seem like the improvement I would expect to see after looking at your benchmark... but it is still an improvement - largely from that stupid "my" thing that I screwed up on.
      This is the sort of code I would expect to be faster in C. Lots of numerical comparisons, and not much else. However, a couple of things jump out at me. One is your mvsrt() routine. I think you could remove it entirely by changing this line:
      my $tv = mvsrt($cv, $oper, $amt);
      to this:
      my $tv = ( ($cv <=> $amt) == ($oper - 1) );
      (Well, that changes the actual operation each value of $oper performs, but you get the idea.)
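      (If keeping the exact original comparisons matters, a dispatch table is another way to drop the if/elsif chain without changing behaviour - a sketch:)

          my @mvsrt = (
              sub { $_[0] <  $_[1] ? 0 : 1 },   # $oper == 0
              sub { $_[0] >  $_[1] ? 0 : 1 },   # $oper == 1
              sub { $_[0] == $_[1] ? 0 : 1 },   # $oper == 2
          );
          my $tv = $mvsrt[$oper]->($cv, $amt);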

      Also, you have lots of C-style for loops which could be rewritten to use foreach. For example, change this:

      my $absMax = 0;
      for(my $i = $params[1] - 5; $i < $params[1]; $i++){
          if($rddt[$i] > $absMax){
              $absMax = $rddt[$i];
          }
      }
      $returnValue = $rddt[$params[1]] - $absMax;
      to this:
      my $absMax = 0;
      # loop over an array slice (ending at $params[1] - 1 to match the
      # original loop bounds)
      foreach my $rddt_value (@rddt[($params[1] - 5) .. ($params[1] - 1)]) {
          if($rddt_value > $absMax){
              $absMax = $rddt_value;
          }
      }
      $returnValue = $rddt[$params[1]] - $absMax;
      Foreach loops do tend to run quite a bit faster, and you have many of these.
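      (A quick sketch of how you might measure just that difference - the array contents here are made up:)

          use Benchmark qw(cmpthese);

          my @rddt = map { 20 + rand 10 } 1 .. 1000;
          my $sum;

          cmpthese(-3, {
              'c_style' => sub {
                  $sum = 0;
                  for (my $i = 0; $i < @rddt; $i++) { $sum += $rddt[$i] }
              },
              'foreach' => sub {
                  $sum = 0;
                  foreach my $v (@rddt) { $sum += $v }
              },
          });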
        I read another suggestion and have taken out the "my" declarations within the loops, and I moved the limit declaration outside of the for loops.

        That brought me down from 305 to 264 seconds per ticker. (Note that I am on a different machine than before - this is a laptop with an Athlon M 1G processor, half a gig of RAM, WinXP, and ActiveState Perl.)

        So I tried your suggestion of using my $tv = ( ($cv <=> $amt) == ($oper - 1) ); instead of the sub call.
        That, with the previous changes mentioned above, resulted in a new time of 289 seconds... so, slower (and from what I can tell it corrupts the algorithm decision, so I'm going to scrap that one).

        So back to the way it was prior, and then replacing the for loops with foreach like you suggest gives me a new time of 237 seconds.

        So in the end I had a drop of nearly 70 seconds from what this code did prior to the optimizations. Over 2000 stocks that would save me over a day and a half of processing... but it is still not a huge difference (compared to what I saw in C, that is).

        Had correcting my mistakes brought the speed down to 20-30 seconds per stock, I would have been very impressed - but for now I still think I will use my method of coding it in Perl (perhaps sloppily) and then seeing from there what speed improvements are needed, if any, for it to be useful.

        UPDATE:
        Now that I'm back on the P4 2G, I ran the updated code there, and it is now at 179 secs - previously 196.
        I guess slight variations in the speed changes come from the random loop variations each time it is run. Also, I'm not sure which version of ActiveState Perl is on my laptop compared to this machine.
        For this code I've noticed that the Athlon tends to improve more readily than the Intel - why that is, I don't know - perhaps cache sizes? No clue.

      Tons of numerics and lots of array lookups - the kind of job that lends itself well to C. That said, I see quite a bit of room for improvement in your Perl.

      There are loads of pointless temporary variables and intermediate assignments. Why do @params = @_? Just use @_ directly; there's nothing special about it. List::Util is also likely to hugely speed up parts of your job. Your if/elsif chains are not helping either. The ccond() function, for example, should be written along these lines, using the aforementioned module:

      use List::Util qw(sum min);

      my @ccond = (
          sub { $rddt[$_[0]] - sum(@rddt[$_[0]-12 .. $_[0]-1]) / 12; },
          sub { $rddt[$_[0]] - sum(@rddt[$_[0]-50 .. $_[0]-1]) / 50; },
          sub { $rddt[$_[0]] - min(@rddt[$_[0]-5  .. $_[0]-1]); },
          # ...
      );
      and the call becomes
      $cv = sprintf "%0.2f", $ccond[$cond]->($i);

      That way, rather than rippling through the entire if/elsif cascade every time, the correct code block is selected in constant time. An analogous change applies to the other function.

      Obviously, this approach will be much harder to translate into C. As you can see, properly Perlish code would also have been drastically shorter than your offering.

      Will those practices let Perl beat the C version? Not likely. However, I'm fairly confident that given a capable Perl programmer, resorting to C will only be required very rarely. (And note that the min and sum functions from List::Util I used here are written in C. So in a way, you have outsourced your C rewriting to CPAN authors - not a bad deal IMO.)

      Makeshifts last the longest.

      Now that we have your code, is there any chance we can get some sample data? :) I'd really like to play around with this, just for fun more than anything. Not trying to be pushy, just curious.

      PS: just a few lines of example data will suffice. I'd just like to see what form the data is in; then I can synthesize some on my own. It wouldn't exactly be representative of your data, but it'd be something to play with.

        in the end it is all stock data - so the @rddt array is populated with dollar amounts that range the way stock prices do: say, 20.00 to 30.00 over some arbitrary range - 1200 trading days in this case.
Re: Confirming what we already knew
by Anonymous Monk on Mar 06, 2003 at 12:33 UTC
    it needs to finish all of its work in under 20 hours or so

    You definitely chose right in the end. I have a one-hour rule for Perl: if it takes longer than an hour to run, I rewrite it in another language. How someone can wait 20+ hours for their code to run is beyond me. Mind you, hardware's pretty cheap these days, so if that can solve the problem, I'm all for it.

      I prefer time limit rules to be flexible based on the input data. I had a program that had to walk a directory structure, un-gzip each of the 50M+ files, parse the headers for information, and find them in a database. All remotely. Over NFS.

      I'm pretty sure nothing could've managed that one in only an hour... (I, for one, was happy with the 5 day run time it had)

        Situations like that are why I'm glad Sun still exists ;-)

      This does analysis on the closing data of stocks and needs to finish before the next close to be useful - 20 hours was the extreme cutoff; 10 hours is much more favorable, and anything below that is great. (Also, the number of stocks we are looking at changes - it is currently below 2000, closer to 1000, but over time it will grow past 2000 as more data is collected. Faster hardware over time will help with that, but I still wanted to plan around having to do about 2000 of them.)
      But like I said, once it gets down to the difference between 30 mins and an hour, it doesn't matter much to me.

      I have a cluster of nodes that currently price out at about $350 each - I could build them even cheaper, but I use silent components to reduce noise when working near them, and those also tend to use less power in order to be quieter.
      With the cluster it is feasible to take "slow" code and spread it out over several machines (as long as the task at hand lends itself to that) and get it done much faster.
      But it is certainly nice to have it run quickly on a single machine, so the cluster isn't tied up for that amount of time and a single node can cruise through it while the other nodes work on other things.
