Take hammer. See nail. Hit nail on the head.
Physical latencies – seek time, basically – are really the only thing that matters here. Although the operating system’s buffer pools will mitigate the seeking somewhat, you still want to hold it to a minimum ... taking care that I/O doesn’t sneak up on you from behind in the form of virtual-memory paging.
I’m personally not sure that threads would help here ... although I do yield to your greater expertise on this matter ... but I think that simply buffering a few hundred thousand extracted records, e.g. in an array of hashrefs, just might hit pay dirt. Instead of writing each record out immediately, push it onto an array (of hashrefs). When the array reaches some agreed-upon and easily-tweakable threshold, pause to shift them all off and write them to the output file, then continue reading. (The read/write head moves from over-here to over-there, stays there a while to write the stuff out, then moves back – one long sequential burst instead of thousands of tiny seeks.) Set the threshold to some point where you can be fairly sure that there will be enough physical RAM available to hold everything without paging. I suspect that you will be astonished at what just-this does for the program.
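To make the idea concrete, here is a minimal sketch of that buffer-then-flush loop. The record source (`@fake_input`), the field names (`id`, `value`), the output filename, and the threshold value are all placeholders of my own invention – substitute your real extraction code and tune the threshold to your RAM:

```perl
use strict;
use warnings;

my $THRESHOLD = 100_000;   # tune so the buffer fits comfortably in physical RAM
my @buffer;                # the array of hashrefs

# Hypothetical record source -- replace with your real extraction loop.
my @fake_input = map { { id => $_, value => $_ * 2 } } 1 .. 250_000;

open my $out, '>', 'extracted.dat' or die "open: $!";

for my $rec (@fake_input) {
    push @buffer, $rec;                        # cheap: RAM only, no I/O yet
    if ( @buffer >= $THRESHOLD ) {
        # One sustained burst of writes, then back to reading.
        print {$out} "$_->{id}\t$_->{value}\n" for @buffer;
        @buffer = ();                          # reset and keep going
    }
}
print {$out} "$_->{id}\t$_->{value}\n" for @buffer;   # flush the remainder
close $out or die "close: $!";
```

The point is that the write head visits the output file only a handful of times per run instead of once per record, and each visit writes sequentially.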