When faced with a similar requirement, I shrewdly anticipated that it was the tip of a larger iceberg: the client would very soon be asking for the results of other queries against the same data. So, I designed the process from the get-go to use an SQLite database ... which, of course, is just a disk file. (There is no “server.”) It is very easy, even with command-line tools (wrapped in a bash script, say), to import a CSV and make a table from it, and even gigantic amounts of data can be processed efficiently. (SQLite is, frankly, an awesome piece of work ...) DBD::SQLite is particularly handy because it bundles a copy of SQLite as part of the package, so there is nothing else to install.
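Here is a minimal Perl sketch of that import step, assuming a hypothetical data.csv whose first row is a header, a database file data.db, and a table name of records. (From a bash script, the sqlite3 shell's .mode csv and .import commands do the same job.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Text::CSV;

    # Hypothetical file and table names, for illustration only.
    my $csv_file = 'data.csv';
    my $db_file  = 'data.db';

    my $dbh = DBI->connect("dbi:SQLite:dbname=$db_file", '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<:encoding(utf8)', $csv_file
        or die "cannot open $csv_file: $!";

    # Build the table from the CSV header row (all columns as TEXT).
    my $header = $csv->getline($fh);
    my $cols   = join ', ', map { qq{"$_" TEXT} } @$header;
    $dbh->do(qq{CREATE TABLE IF NOT EXISTS records ($cols)});

    # Load every row inside one transaction (see the 'gotcha' below).
    my $placeholders = join ', ', ('?') x @$header;
    my $sth = $dbh->prepare(qq{INSERT INTO records VALUES ($placeholders)});
    $dbh->begin_work;
    while (my $row = $csv->getline($fh)) {
        $sth->execute(@$row);
    }
    $dbh->commit;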
And then you just solve the problem with queries: anything and everything else that they dream up next can be answered quickly, in the same way. (You can also remind them that they can access the data and run their own queries against it from any spreadsheet or reporting tool that can talk to SQLite, via ODBC for instance ...)
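As a sketch of that “just write queries” step, assuming the records table above with hypothetical category and amount columns, a group-and-sort report (the very thing the original question asked for) is a single SELECT:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                           { RaiseError => 1 });

    # Group, aggregate, and sort entirely in SQL; Perl only formats output.
    my $rows = $dbh->selectall_arrayref(q{
        SELECT category, COUNT(*) AS n, SUM(amount) AS total
        FROM   records
        GROUP  BY category
        ORDER  BY total DESC
    });
    printf "%-20s %6d %12.2f\n", @$_ for @$rows;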
The only “gotcha” to be aware of with SQLite, particularly when you are updating information but also when you are reading it, is that you should do everything inside a “transaction.” If you do not, every statement becomes its own implicit transaction, each one synced to disk before the next begins, and everything runs very slowly.
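A minimal sketch of the difference, reusing the hypothetical records table: wrapping a batch of writes in one explicit transaction replaces ten thousand per-statement syncs with a single commit.

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });
    my $sth = $dbh->prepare(
        'INSERT INTO records (category, amount) VALUES (?, ?)');

    # Without begin_work/commit, each execute() is an implicit
    # transaction with its own sync to disk.
    $dbh->begin_work;
    $sth->execute('cat' . $_ % 10, rand 100) for 1 .. 10_000;
    $dbh->commit;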
In reply to Re: perl group by and sort from a csv input file
by sundialsvc4