PerlMonks  

Re^2: Too Many IDs

by ForgotPasswordAgain (Priest)
on Jan 09, 2020 at 13:36 UTC


in reply to Re: Too Many IDs
in thread Too Many IDs

"Depends" is about as good an answer as I could think of, too. (Whenever someone uses a comparative like "better", I usually insist on "better, for what?" if I'm going to spend time answering, and start thinking "is this an X-Y question?", etc.) I would say that I don't usually use selectall_hashref like that. Often I'd go with selectall_arrayref($sql, {Slice => {}}, @bind) so you can iterate over the hashrefs directly to build the aggregations (hashes). And end up, like you say, splitting things up and doing subqueries. Depends...
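To sketch what I mean (column names id and sn are made up here, and I'm faking the result set rather than connecting to a database — with DBI the $rows below would come from selectall_arrayref with Slice => {}):

```perl
use strict;
use warnings;

# In real code this would be:
#   my $rows = $dbh->selectall_arrayref($sql, { Slice => {} }, @bind);
# Faked here for illustration; the column names are assumptions.
my $rows = [
    { id => 1, sn => 'A100', name => 'foo' },
    { id => 2, sn => 'B200', name => 'bar' },
];

# One pass over the hashrefs builds both lookup hashes,
# and both hashes share the same row hashrefs (no copying).
my (%by_id, %by_sn);
for my $row (@$rows) {
    $by_id{ $row->{id} } = $row;
    $by_sn{ $row->{sn} } = $row;
}

print $by_sn{'B200'}{name}, "\n";   # bar
```

Since both hashes point at the same hashrefs, you get two lookup keys for roughly the memory cost of one result set.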

Replies are listed 'Best First'.
Re^3: Too Many IDs
by The_Dj (Beadle) on Jan 09, 2020 at 13:53 UTC

    Yep.

    I will traverse all the data

    The data lives on another server

    Pulling it all at once is just faster*

    I actually have both selectall_hashref and selectall_arrayref($sql, {Slice => {}}) in my code, each where it is best (I believe)

    * I should probably benchmark that
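For the footnote: the core Benchmark module makes that easy. A minimal sketch, assuming the question is only about the Perl-side aggregation (a real benchmark should include the database round-trip too; the data here is synthetic):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Synthetic rows standing in for a DBI result set.
my @rows = map { { id => $_, sn => "SN$_", val => $_ * 2 } } 1 .. 10_000;

cmpthese(50, {
    # build both lookup hashes in a single pass, sharing the hashrefs
    one_pass => sub {
        my (%by_id, %by_sn);
        for my $r (@rows) {
            $by_id{ $r->{id} } = $r;
            $by_sn{ $r->{sn} } = $r;
        }
    },
    # two separate passes, as two separate queries would imply
    two_pass => sub {
        my %by_id = map { $_->{id} => $_ } @rows;
        my %by_sn = map { $_->{sn} => $_ } @rows;
    },
});
```

cmpthese prints a rate table comparing the two, which is usually more convincing than guessing.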

      Have you considered just doing the work on the DB server, via a stored procedure or some such, and only returning the data needed to render user output (regardless of format)? Since you've said both of these keys are unique, you presumably already enforce that in the DB, which means they're indexed anyway, and then you don't really need to worry about how many lookup keys you have.

      And, out of left field, comes Tanktalus...

        Yeah, this. Databases are actually amazingly fast at this type of operation.

        Alex / talexb / Toronto

        Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.

        Sadly, not possible

        The task is to update the database based on fresh results from a few thousand third-party API calls

        I suppose it's not impossible to do that as a stored procedure, but I couldn't put that load on the DB server anyway

        Also, my SQL isn't as strong as my Perl ;-)

      So "best" means fastest?

      We don't know how big your table is, or whether memory is an issue.

      In programming you can almost always trade memory for time!

      ... like pulling n big chunks of the table in sliding windows.

      For instance, I suppose (see your footnote) that two SQL queries, one for ID and one for SN, would be faster, but your map solution occupies less memory (the second-level hashrefs are reused scalars).

      So .... It really depends...
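A sliding-window pull could look like this. A minimal sketch using an in-memory SQLite database and made-up table/column names (the real code would use the remote server's handle); keyset pagination on the primary key avoids the cost of large OFFSETs:

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite stands in for the remote server; table and
# column names (device, id, sn) are assumptions for illustration.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE device (id INTEGER PRIMARY KEY, sn TEXT)');
$dbh->do('INSERT INTO device (id, sn) VALUES (?, ?)', undef, $_, "SN$_")
    for 1 .. 10;

# Keyset pagination: fetch fixed-size windows so memory stays
# bounded no matter how big the table gets.
my $chunk   = 4;
my $last_id = 0;
my %by_sn;
while (1) {
    my $rows = $dbh->selectall_arrayref(
        'SELECT id, sn FROM device WHERE id > ? ORDER BY id LIMIT ?',
        { Slice => {} }, $last_id, $chunk,
    );
    last unless @$rows;
    $by_sn{ $_->{sn} } = $_ for @$rows;
    $last_id = $rows->[-1]{id};
}

print scalar(keys %by_sn), " rows aggregated\n";   # 10 rows aggregated
```

Each window is one indexed range scan, so the per-chunk cost stays flat even late in the table.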

      > * I should probably benchmark that

      That's a bingo! ;)

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery
      Football
      Perl is like chess, only without the dice
