http://www.perlmonks.org?node_id=61453

smferris has asked for the wisdom of the Perl Monks concerning the following question:

Me again. (Can you tell I just really found this site? I knew it existed, but I hadn't really browsed it before.)

I'm parsing a fixed-width flat file to load the data into different destinations: possibly back to a file, possibly into a database.

I figured unpack would be faster and cleaner, and it is, as long as you don't assign the output of unpack to an array. E.g.:

open(FH,"large.file") or die $!; while($row=<FH>){ unpack("a9 a40 a15 a15 a15 a2 a9 a9 a9",$row); }

This runs in about 20 seconds on 2.1 million rows. Modify the code as follows (adding the array assignment):

open(FH,"large.file") or die $!; while($row=<FH>){ @data=unpack("a9 a40 a15 a15 a15 a2 a9 a9 a9",$row); }

Now the code runs in over a minute. I have to "transform" the individual elements in @data, so I can't simply drop the assignment. Is there any way to make this faster? I'm assuming the memory for @data is being reallocated on every iteration. Is that true?
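
For concreteness, here is one variant I've been meaning to time (a sketch only, not benchmarked, and the field names are made up): assigning unpack's results to a list of lexical scalars instead of reassigning @data each pass.

    use strict;
    use warnings;

    open(my $fh, '<', 'large.file') or die $!;
    while (my $row = <$fh>) {
        # Same template as above; list assignment to scalars skips
        # maintaining the @data array on each iteration. Field names
        # are hypothetical placeholders.
        my ($id, $name, $f1, $f2, $f3, $st, $n1, $n2, $n3) =
            unpack('a9 a40 a15 a15 a15 a2 a9 a9 a9', $row);
        # ...transform the individual fields here...
    }
    close $fh;

Whether that actually wins anything over the array version is exactly what I'd like to understand.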

As always, all help is greatly appreciated.

Shawn M Ferris
Oracle DBA