DBI vs Bulk Loading
by jimbus (Friar)
on Sep 14, 2005 at 21:02 UTC
jimbus has asked for the wisdom of the Perl Monks concerning the following question:
Here's something I posted at forums.mysql.com. It's more of a MySQL thing, but I am scripting it in Perl, and you guys are infinitely more helpful than they are and a lot cooler, too. :)
I'm processing log files in Perl, using the timestamp as the primary key. The files are segregated by another field, so records for a given timestamp can be spread over more than one file. Originally I used DBI to query on the timestamp: if it didn't exist, I inserted it; if it did, I summed the data and updated the record. That worked, but it was too slow.
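In the real script the lookup, insert, and update above would be three prepared DBI statements against MySQL; a minimal sketch of the same "insert if new, sum if seen" decision logic, using a plain hash keyed by timestamp in place of the table (the column name `hits` is a hypothetical stand-in):

```perl
use strict;
use warnings;

# %table stands in for the MySQL table; the key is the timestamp PK.
my %table;

sub upsert {
    my ($ts, $hits) = @_;
    if (exists $table{$ts}) {
        $table{$ts} += $hits;   # duplicate key: sum into the existing record
    } else {
        $table{$ts} = $hits;    # new key: plain insert
    }
}

# Two files can carry the same timestamp; the sums still come out right.
upsert('2005-09-14 21:00', 3);
upsert('2005-09-14 21:00', 4);
upsert('2005-09-14 21:05', 1);

print "$table{'2005-09-14 21:00'}\n";
```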
So I googled a bit on MySQL tuning and performance and found that the fastest way to insert is to write the digested data to a CSV file and bulk load it. What I need help with is how to duplicate my PK-violation logic with this method.
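One server-side way to keep both the bulk-load speed and the sum-on-duplicate logic (assuming MySQL 4.1 or later, and hypothetical table and column names `staging`/`log_summary(ts, hits)`) is to LOAD DATA into an empty staging table with no unique keys, then merge it into the real table with INSERT ... ON DUPLICATE KEY UPDATE:

```sql
-- staging and log_summary both have columns (ts, hits);
-- ts is the primary key on log_summary only, so the load never errors.
LOAD DATA INFILE '/tmp/digest.csv'
INTO TABLE staging
FIELDS TERMINATED BY ',';

INSERT INTO log_summary (ts, hits)
SELECT ts, hits FROM staging
ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);

TRUNCATE TABLE staging;
```

This keeps the duplicate handling in one SQL statement instead of a second pass in Perl; `VALUES(hits)` refers to the value the row would have inserted, so colliding timestamps get summed just as the original query/update loop did.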
One thought I had was to write the file, run the bulk load, have it place the error-raising records in another file (I believe it will do that), and then use my Perl logic to process that second, significantly smaller file. But that seems like a hack, so I thought I would post and ask if anyone has a more elegant solution.
Never moon a werewolf!