|Problems? Is your data what you think it is?|
I understand that not assigning the data back to an array isn't useful in practice. But unpack is still parsing the row, correct? My point was that parsing 2.1 million rows is fast; it's storing the results that slows things down considerably.
I think what's taking the time is the deletion and re-creation of the memory structure on each iteration of the loop. That seems unnecessary to me, since in this case every iteration produces a list of identical size.
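Here's roughly the comparison I'm describing. The ('A8' x 10) template and the 80-byte record are just stand-ins for my real format, but the shape of the result should be the same:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $template = 'A8' x 10;   # stand-in for the real record layout
    my $record   = join '', map { sprintf '%-8s', "f$_" } 1 .. 10;

    cmpthese( -3, {
        # parse only: list context forced, results thrown away
        parse_only  => sub { () = unpack $template, $record },
        # parse, then copy the fields into a fresh lexical array
        parse_store => sub { my @fields = unpack $template, $record },
    });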
Given the above, I was hoping:
a) that the memory used by the unpack itself could be reused, rather than having to copy it into a Perl structure each time (something like the slice-assignment trick sketched below).
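The closest thing I've found is assigning through an array slice instead of to the array itself. My understanding (not verified against the perl source) is that a plain @fields = unpack(...) frees and recreates the element scalars on every pass, while a slice assignment writes into the existing ones, so the allocation happens only once:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $template = 'A8' x 10;   # hypothetical layout: ten 8-char fields
    my @fields   = ('') x 10;   # allocate the ten scalars once, up front

    while (my $record = <DATA>) {
        # the slice writes into the existing scalars rather than
        # clearing @fields and building ten fresh ones per row
        @fields[ 0 .. 9 ] = unpack $template, $record;
        print "field 0 = $fields[0]\n";
    }

    __DATA__
    aaaaaaaabbbbbbbbccccccccddddddddeeeeeeeeffffffffgggggggghhhhhhhhiiiiiiiijjjjjjjj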
Of course, I'm not a seasoned programmer, and maybe this entire thread is just a waste of everyone's time, in which case I apologize. 8)
I just think that if unpack has to build its own internal list anyway (it has to, or how would it know what to send back?), then assigning that list to a Perl data type shouldn't take at least six times as long. Of course, I don't really know what Perl actually does behind the scenes to store data in memory.
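One way to see what that storage actually costs is Devel::Size from CPAN, which reports the real memory footprint of a Perl structure, headers and all:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Devel::Size qw(total_size);   # CPAN module, not core

    my $record = 'x' x 80;
    my @fields = unpack 'A8' x 10, $record;

    # each 8-byte field becomes a full Perl scalar with its own
    # header, so the stored array dwarfs the raw 80-byte record
    printf "raw record: %d bytes, stored array: %d bytes\n",
        length($record), total_size(\@fields);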