PerlMonks

Re: Processing files column-wise
by BrowserUk (Patriarch) on Feb 22, 2012 at 22:49 UTC [id://955637]
|
"I've also looked at Tie::Handle::CSV and Text::CSV, but these modules seem only to process a file line-wise, not column-wise, which, considering the size of my files, would be quite inefficient and complex (once the column header is read, that is all the information necessary to determine where to copy the entire column)."

There is no mechanism for reading a column from a file without reading the file line by line. That is just the way files work. But line-by-line processing of files is perfectly efficient, provided that you do not re-process each line for each column. That means placing all the fields from the first line into their respective files before reading and processing the second line.

This makes a lot of assumptions about the formatting of your ids and data files, but it may serve to illustrate the technique even if you need to use one of the bastardised CSV format processors.

Update: reversing the %ids hash -- i.e. using the filenos as the keys and pushing the field numbers onto an array as the value -- would save having to grep the hash four times for every record. This is untested beyond basic syntax checking:
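A minimal sketch of that one-pass routing technique. The file names, the whitespace-separated layout, and the inline sample data are hypothetical, and in-memory buffers stand in for the real per-column output files:

```perl
use strict;
use warnings;

# Hypothetical inline sample data; real code would open the ids and
# data files from disk instead.
my $idsData  = "A 1\nB 2\nC 1\n";        # column header => output fileno
my $fileData = "A B C\n1 2 3\n4 5 6\n";  # whitespace-separated, with header

# Map each header name to its field index from the data file's header line.
open my $DATA, '<', \$fileData or die $!;
chomp( my $hdr = <$DATA> );
my @hdrs = split ' ', $hdr;
my %col;
$col{ $hdrs[ $_ ] } = $_ for 0 .. $#hdrs;

# Per the update above: key %ids by output fileno, with an array of
# field indexes as the value, so no grep of the hash per record.
my %ids;
open my $IDS, '<', \$idsData or die $!;
while( <$IDS> ) {
    my( $header, $fileno ) = split;
    push @{ $ids{ $fileno } }, $col{ $header };
}

# One buffer per fileno; every field of a line is routed before the
# next line is read, so the data file is read exactly once.
my %out;
while( <$DATA> ) {
    chomp;
    my @fields = split;
    $out{ $_ } .= join( ' ', @fields[ @{ $ids{ $_ } } ] ) . "\n"
        for keys %ids;
}

print "col$_.txt would contain:\n$out{ $_ }" for sort keys %ids;
```

With the sample data, columns A and C are routed to fileno 1 and column B to fileno 2, each file receiving its fields line by line.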
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In Section: Seekers of Perl Wisdom