Re^9: bit by overhead

by BrowserUk (Patriarch)
on Jan 07, 2011 at 16:40 UTC [id://881120]


in reply to Re^8: bit by overhead
in thread bit by overhead

You are correct; that is a bug in the benchmark. I'd code it this way:

sub from_cache2 {
    my $ticker = shift;
    return [ map unpack( "A10FFFFL", $_ ), @{ $cache2{$ticker} } ];
}

Unfortunately, it takes the edge off the speedup :(

C:\test>880868-2
        Rate  orig  mod1
orig  8.01/s    --  -36%
mod1  12.4/s   55%    --

Still worth having, but less so. If you are the OP of this thread, you should seriously consider the ideas in the last paragraph of my previous post.
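
For reference, the table above is the kind of chart Benchmark::cmpthese prints. The following is only a minimal sketch of how such a comparison might be set up (it is not the 880868-2 script itself; the sample data and the two storage layouts are assumed from the code in this thread):

use strict;
use warnings;
use Benchmark qw( cmpthese );

# Hypothetical data: 251 rows of [ 10-char key, four floats, one unsigned long ]
my @rows = map {
    [ sprintf( 'K%09d', $_ ), rand, rand, rand, rand, int rand 1e6 ]
} 1 .. 251;

# "orig"-style storage: all rows packed into one scalar per ticker.
# "mod1"-style storage: one packed scalar per row.
my $blob   = pack '(A10FFFFL)*', map @$_, @rows;
my @packed = map pack( 'A10FFFFL', @$_ ), @rows;

cmpthese( -3, {
    orig => sub { my @flat = unpack '(A10FFFFL)*', $blob },
    mod1 => sub { my @flat = map unpack( 'A10FFFFL', $_ ), @packed },
} );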


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^10: bit by overhead
by Anonymous Monk on Jan 07, 2011 at 17:39 UTC
    I have indeed tried your suggestion:
    sub to_cache {
        my( $ticker, $data ) = @_;
        $data_cache{$ticker} = pack "(A10FFFFL)*", map @$_, @$data;
    }

    sub from_cache {
        my $ticker = shift;
        return [
            map [ unpack( "A10FFFFL", $_ ) ],
                unpack '(A[A10FFFFL])*', $data_cache{$ticker}
        ];
    }
    But it doesn't seem to be working. Is this code assuming that the packed rows are stored together as a single scalar?

      That code comes from Re^9: bit by overhead and is identified as being slower than this:

      my %cache2;

      sub to_cache2 {
          my( $ticker, $data ) = @_;
          $cache2{$ticker} = [ map pack( "A10FFFFL", @$_ ), @$data ];
      }

      sub from_cache2 {
          my $ticker = shift;
          return [ map unpack( "A10FFFFL", $_ ), @{ $cache2{$ticker} } ];
      }

      which you should use instead.
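
      For illustration, a hypothetical round trip through to_cache2()/from_cache2() as defined above (the column layout of a 10-character date plus prices and a volume is only a guess from the "A10FFFFL" template):

      to_cache2( 'MSFT', [
          [ '2011-01-06', 27.71, 28.06, 27.59, 28.00, 1_234_567 ],
          [ '2011-01-07', 28.01, 28.11, 27.72, 27.93, 2_345_678 ],
      ] );

      my $flat = from_cache2( 'MSFT' );

      # Note that from_cache2() returns ONE flat array ref (six values per
      # row, rows concatenated), not an array of row array refs.
      print "@{ $flat }[ 0 .. 5 ]\n";    # the first row's six fields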

      The suggestions I was referring to are:

      Do you use all 251 sets of 6 values every time you retrieve the data from the cache?

      The gist of where I'm going with this is that, if you don't use them all each time, you might be better off with a two-level cache, so that you unpack less data each time (see the sketch after these suggestions).

      Or, if you do use all the values for each ticker each time, could you not cache the results of whatever you do with them, rather than the raw data itself?
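
      One way to read the two-level-cache suggestion is to keep the packed rows per ticker and unpack a row only the first time it is asked for. This is only a rough sketch, not code from the thread; %packed_cache, %row_cache and row_from_cache() are hypothetical names, and the //= operator needs Perl 5.10 or later:

      my( %packed_cache, %row_cache );

      sub to_cache_lazy {
          my( $ticker, $data ) = @_;
          $packed_cache{$ticker} = [ map pack( 'A10FFFFL', @$_ ), @$data ];
          $row_cache{$ticker}    = [];          # filled lazily, row by row
      }

      sub row_from_cache {
          my( $ticker, $i ) = @_;
          $row_cache{$ticker}[$i]
              //= [ unpack 'A10FFFFL', $packed_cache{$ticker}[$i] ];
          return $row_cache{$ticker}[$i];
      }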


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
