http://www.perlmonks.org?node_id=469353


in reply to Fast seeking in a large array of hashes to generate a report.

It seems to me that what's missing here is a comparison of the relative speed of direct database queries versus the "hash-in-memory" method. Since you seem most worried about speed, it seems reasonable to just code up the database-query version and see whether it is faster. I'm guessing it will be, presuming you have indexes on all your lookup columns and use appropriate DBI calls (sticking to array refs, fetchall_arrayref, and so on). But there really isn't any way to know unless you try, is there? Perl is a wonderful tool, but so are modern relational databases. Using both to their fullest is the goal.
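
For example, here's a minimal sketch of the kind of DBI usage I mean. The database name, table, and column names ("reports", "records", "customer_id") are just placeholders for your own schema; adjust the DSN and credentials to taste.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Connect to a hypothetical MySQL database; RaiseError saves us from
    # checking every call by hand.
    my $dbh = DBI->connect( 'dbi:mysql:database=reports', 'user', 'password',
        { RaiseError => 1, AutoCommit => 1 } );

    # The WHERE column should be indexed so the database can seek directly
    # to the matching rows instead of scanning the whole table.
    my $sth = $dbh->prepare(
        'SELECT id, amount, created FROM records WHERE customer_id = ?'
    );
    $sth->execute(42);

    # fetchall_arrayref pulls all rows back in one go as array refs, which
    # is generally faster than fetching row-by-row into hashes.
    my $rows = $sth->fetchall_arrayref;
    for my $row (@$rows) {
        my ( $id, $amount, $created ) = @$row;
        # ... build the report from the fetched columns ...
    }

    $dbh->disconnect;

Then benchmark that against your in-memory approach on realistic data and let the numbers decide.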

Also, note that you may not have to fetch all records in every case. If some queries only generate summary statistics, the database can usually compute those quickly as well, so you never have to retrieve the underlying records at all.
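
As a sketch of that idea, again using the made-up "records" table from the example above, you can push the aggregation into the database with GROUP BY so only a small result set comes back:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:database=reports', 'user', 'password',
        { RaiseError => 1 } );

    # The database does the counting and summing; Perl only sees one row
    # per customer instead of every record.
    my $sums = $dbh->selectall_arrayref(
        'SELECT customer_id, COUNT(*), SUM(amount)
           FROM records
          GROUP BY customer_id'
    );

    for my $row (@$sums) {
        my ( $customer_id, $count, $total ) = @$row;
        printf "%s: %d records, total %.2f\n", $customer_id, $count, $total;
    }

    $dbh->disconnect;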

Sean


Replies are listed 'Best First'.
Re^2: Fast seeking in a large array of hashes to generate a report.
by jbrugger (Parson) on Jun 23, 2005 at 11:05 UTC
    Yes, I know how to use DBI and a database.
    As stated before, the problem I run into when I use SQL is that I end up with a monstrous query full of inner joins, left and right outer joins, IF statements (not fast), and hundreds of WHERE clauses.
    The database is big and the structure complex; the problem as described here is only a simple representation of the real situation.

    "We all agree on the necessity of compromise. We just can't agree on when it's necessary to compromise." - Larry Wall.
      Sorry. Despite your explanation above, I didn't really grasp the complexity of the problem. Thanks for clarifying....

      Sean