advice needed for processing largish data in multiple files
by jimbus (Friar) on Aug 18, 2006 at 21:09 UTC
jimbus has asked for the wisdom of the Perl Monks concerning the following question:
I'm looking for advice on how to merge files when the data in the secondary files is row oriented, not column oriented like the first. In the first file the phone number is the key and the additional info is in columns on the same line, so it's real simple to dump into a DB; there are about 1.5 million records. The next three files each have basically two columns: the phone number and a legal value drawn from a list of about 25 values across the three files. The phone number and legal value together form the key. In the end I need a single table keyed on phone number, with all the info and options as columns.
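For the first, column-oriented file, a minimal sketch of the load step might look like this. The tab-separated layout, the sample field names, and the inline sample data are all assumptions, not the real feed format:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: parse the column-oriented master file into a hash
# keyed on phone number. Layout (tab-separated, phone first) is assumed.
my %record;
while ( my $line = <DATA> ) {
    chomp $line;
    my ( $phone, @info ) = split /\t/, $line;
    $record{$phone} = \@info;    # phone number is the key
}

# Each phone now maps to its info columns, ready for a DB insert
# or for printing back out as a bulk-load file.
print "$_ => @{ $record{$_} }\n" for sort keys %record;

__DATA__
5551230001	Smith	planA
5551230002	Jones	planB
```

With 1.5 million rows this would hold everything in memory; if that's a concern, the same loop could instead print each row straight to a file for a bulk load rather than keeping the hash.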
My thought was that the first file could be parsed and inserted into MySQL pretty simply, or maybe even by using awk to print out the columns I need and bulk-loading the result into the DB. For the other files I was thinking I'd read each line and build a hash of hashes (HoH) with the phone number and legal value as the two keys, then update the DB, but I'm concerned about speed and memory usage with that many records.
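The HoH idea above could be sketched like this: stream the two-column option files a line at a time, record a flag per (phone, option) pair, then pivot each phone's options into 0/1 columns for the final wide table. The file layout, the option names, and the three-option list are placeholders for the real ~25 legal values:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch of the HoH approach. Only the flags live in memory,
# not the master-file columns, which keeps the footprint to roughly
# one small inner hash per phone number.
my @options = qw(optA optB optC);    # stand-ins for the ~25 legal values

my %flag;                            # $flag{$phone}{$option} = 1
while ( my $line = <DATA> ) {
    chomp $line;
    my ( $phone, $opt ) = split /\t/, $line;
    $flag{$phone}{$opt} = 1;
}

# One output row per phone: the phone number followed by a 0/1 column
# per option, suitable for bulk-loading into the final wide table.
for my $phone ( sort keys %flag ) {
    my @row = ( $phone, map { $flag{$phone}{$_} ? 1 : 0 } @options );
    print join( "\t", @row ), "\n";
}

__DATA__
5551230001	optA
5551230001	optC
5551230002	optB
```

Writing the pivoted rows to a file and bulk-loading them in one shot may well be faster than issuing 1.5 million individual UPDATE statements, though that's a judgment call that depends on the database setup.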
Your input on this is appreciated.
--Jimbus aka Jim Babcock
Wireless Data Engineer and Geek Wannabe