Limbic~Region asked:
Leaving Java aside, is there a more run-time efficient way than my second suggestion in Perl?
Probably. (But in order to answer your question with confidence, I would need to know more about the OS and filesystem that you are using, the input size, and the distribution of files that must be opened for writing. Lacking that information, here is my best guess.)

Assuming sufficiently small input size, we can load the entire input into RAM and build an optimal write plan before attempting further I/O. The plan's goal would be to minimize disk seek time, which is likely the dominant run-time factor under our control. An optimal strategy would probably be to open one file at a time, write all of its lines, close it, and then move on to the next file. If input size is larger than RAM, the speediest approach would then be to divide the input into RAM-sized partitions and process each according to its own optimal write plan.
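A minimal sketch of that load-then-plan approach might look like the following. The subroutine name `write_planned` and the record shape (a `[ $filename, $text ]` pair per input line) are assumptions for illustration; the point is that all lines destined for one file are written in a single sequential pass before the next file is opened.

```perl
use strict;
use warnings;

# Build and execute a write plan: bucket every input line by its
# destination file first, then write each file in one sequential
# pass -- open, write everything for it, close, move on.
# Assumes records of the hypothetical form [ $filename, $text ].
sub write_planned {
    my @records = @_;

    my %plan;    # filename => lines destined for that file
    push @{ $plan{ $_->[0] } }, $_->[1] for @records;

    for my $file (sort keys %plan) {
        open my $fh, '>>', $file or die "cannot open $file: $!";
        print {$fh} "$_\n" for @{ $plan{$file} };
        close $fh or die "cannot close $file: $!";
    }
}
```

For input larger than RAM, the same subroutine could be applied to each RAM-sized partition in turn.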

Caching the output filehandles (as in your second implementation) probably will not be competitive. Even if you can hold all of the output files open simultaneously, a write pattern that jumps among files seemingly at random will probably kill you with seek time. Your OS will do its best to combine writes and reduce head movement with elevator (and better) algorithms, but you'll still pay a heavy price. You'll do much better if you can keep the disk head writing instead of seeking.
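For reference, the caching pattern in question is roughly the following (the name `cached_write` is mine). Note that while it saves repeated opens, writes still land in input order, hopping among files:

```perl
use strict;
use warnings;

# Filehandle caching: keep every output handle open in a hash and
# write each line as it arrives. Opens are amortized, but the write
# pattern jumps among files, so the disk head spends its time seeking.
my %fh_cache;

sub cached_write {
    my ($file, $text) = @_;
    unless (exists $fh_cache{$file}) {
        open my $fh, '>>', $file or die "cannot open $file: $!";
        $fh_cache{$file} = $fh;
    }
    print { $fh_cache{$file} } "$text\n";
}
```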

If it turns out that the number of distinct files to be created is nearly the same as the number of input lines, no strategy is likely to improve performance significantly over the naive strategy of opening and closing files as you walk line by line through the input.
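That naive strategy is the simplest of the three, something like this hypothetical `naive_write` (one open/close pair per line):

```perl
use strict;
use warnings;

# The naive strategy: open the file, append one line, close it,
# and repeat for every input line. When nearly every line goes to
# a distinct file, no other strategy saves much over this.
sub naive_write {
    my ($file, $text) = @_;
    open my $fh, '>>', $file or die "cannot open $file: $!";
    print {$fh} "$text\n";
    close $fh or die "cannot close $file: $!";
}
```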

One more thing. If the input that Mr. Java tested your program against was millions of lines long, does that imply that your code may have been creating thousands of files? If so, you might want to determine whether the filesystem you were using has a hashed or tree-based directory implementation. If not, your run time may have been dominated by filesystem overhead. Many filesystems (e.g., ext2/3) bog down once you start getting more than a hundred or so entries in a directory.
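If the filesystem does turn out to lack hashed or tree-based directories, one common workaround is to spread the output files across hashed subdirectories so no single directory grows too large. A sketch, with the helper name `hashed_path` and the two-character bucket scheme being my own choices:

```perl
use strict;
use warnings;
use File::Path qw(mkpath);
use Digest::MD5 qw(md5_hex);

# Map a filename to a path inside one of 256 hashed subdirectories,
# keeping each directory's entry count small on filesystems whose
# directory lookups degrade past a few hundred entries.
sub hashed_path {
    my ($dir, $name) = @_;
    my $bucket = substr(md5_hex($name), 0, 2);    # 256 buckets
    mkpath("$dir/$bucket");
    return "$dir/$bucket/$name";
}
```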


In reply to Re: Performance Trap - Opening/Closing Files Inside a Loop by tmoertel
in thread Performance Trap - Opening/Closing Files Inside a Loop by Limbic~Region
