http://www.perlmonks.org?node_id=412401

punch_card_don has asked for the wisdom of the Perl Monks concerning the following question:

Malkovitchian Monks, I am populating a MySQL database programmatically: I analyze a giant text file and insert each extracted value into one of many tables.
pseudo code:

    open FILE
    while ($line = <FILE>) {
        analyze $line to extract ($table, $value) pairs
        foreach ($table, $value) pair {
            $sql = "INSERT INTO " . $table . " VALUES(" . $value . ")";
            prepare;
            execute();
        }
    }
This works, but is very slow. I can't get around the repeated prepares because the table changes with each iteration and there is way too much data (~1Gb) to hold it all in table-keyed hashes ready for a table-wise insertion at the end.
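For concreteness, the real loop looks roughly like this; the connection details and file name are made up, and extract_pairs() is a stand-in for my analysis code, which returns [$table, $value] pairs:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Stand-in connection details and file name
    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                           { RaiseError => 1 });

    open my $fh, '<', 'giant_file.txt' or die "open: $!";
    while (my $line = <$fh>) {
        # extract_pairs() is a stand-in for the analysis; it returns a list
        # of [$table, $value] array refs found on this line
        for my $pair (extract_pairs($line)) {
            my ($table, $value) = @$pair;
            # a fresh prepare for every single row -- this is the slow part
            my $sth = $dbh->prepare("INSERT INTO $table VALUES (?)");
            $sth->execute($value);
        }
    }
    close $fh;
    $dbh->disconnect;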

One option I'm considering is doing it in blocks, storing for a while, then purging:

    open FILE
    $index = 0;
    while ($line = <FILE>) {
        analyze $line to extract ($table, $value) pairs
        push each found $value onto $values_hash{$table},
            so $values_hash{$table} = a list of $values
        $index++;
        when $index reaches, say, 10,000 or 100,000 {
            foreach $key (keys %values_hash) {
                $sql = "INSERT INTO " . $key . " VALUES(?)";
                prepare;
                foreach $value in $values_hash{$key} {
                    execute($value);
                }
            }
            clear %values_hash ready to start storing again
        }
    }
I figure this should help, but I'm not sure how much I will actually gain by it.
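A sketch of that blocked version, under the same assumptions as above (DBI, the hypothetical extract_pairs(), single-column tables), with AutoCommit turned off so each block goes in as one transaction:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $BATCH = 10_000;    # flush threshold; tune to taste

    # Stand-in connection details; AutoCommit off so each flush is one transaction
    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    my %values_for;    # table name => array ref of values waiting to go in
    my $pending = 0;   # how many values are currently buffered

    sub flush_batches {
        for my $table (keys %values_for) {
            # prepare_cached hands back the same handle for the same SQL,
            # so each table's INSERT is prepared only once per connection
            my $sth = $dbh->prepare_cached("INSERT INTO $table VALUES (?)");
            $sth->execute($_) for @{ $values_for{$table} };
        }
        $dbh->commit;
        %values_for = ();
        $pending    = 0;
    }

    open my $fh, '<', 'giant_file.txt' or die "open: $!";
    while (my $line = <$fh>) {
        # extract_pairs() again stands in for the analysis code
        for my $pair (extract_pairs($line)) {
            my ($table, $value) = @$pair;
            push @{ $values_for{$table} }, $value;
            $pending++;
        }
        flush_batches() if $pending >= $BATCH;
    }
    flush_batches() if $pending;    # don't forget the final partial block
    close $fh;
    $dbh->disconnect;

Because the statement handles come from prepare_cached, each table is prepared once per connection rather than once per row, which is where I'm hoping most of the gain comes from.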

Are there any other slick speed tricks? I vaguely remember something to do with pre-generating CSV files and then importing....
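If I remember right, the idea was something like the following: stream each table's values into its own tab-delimited file, then bulk-load each file with LOAD DATA LOCAL INFILE. The file names and connection details are again made up, extract_pairs() is the same stand-in, and both server and client have to allow LOCAL INFILE:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my %out;    # table name => file handle for that table's dump file

    # First pass: write each table's values to its own tab-delimited file.
    open my $fh, '<', 'giant_file.txt' or die "open: $!";
    while (my $line = <$fh>) {
        for my $pair (extract_pairs($line)) {
            my ($table, $value) = @$pair;
            unless ($out{$table}) {
                open $out{$table}, '>', "$table.tsv" or die "open $table.tsv: $!";
            }
            print { $out{$table} } "$value\n";
        }
    }
    close $fh;
    close $_ for values %out;

    # Second pass: bulk-load each file. mysql_local_infile=1 asks DBD::mysql
    # to allow LOAD DATA LOCAL; the server must permit it as well.
    my $dbh = DBI->connect('dbi:mysql:database=mydb;mysql_local_infile=1',
                           'user', 'pass', { RaiseError => 1 });
    for my $table (keys %out) {
        $dbh->do("LOAD DATA LOCAL INFILE '$table.tsv' INTO TABLE $table");
    }
    $dbh->disconnect;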