I would try to use the system's sort(1) first, as it's already optimized for this kind of task. 500 million 32-bit integers come to about 2 GB even as a straight C-style array with none of Perl's overhead, so they would barely fit in your memory, and it's probably a good idea to do a disk-based mergesort. Split the file into half a dozen or so parts, run sort on them individually, and then use sort once more with -m to merge the results.
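An untested sketch of that, driving GNU split and sort from Perl; the file names and chunk count are invented for illustration, and the input is assumed to be one integer per line:

    use strict;
    use warnings;

    my $input  = 'ints.txt';
    my $chunks = 6;

    # Split into $chunks line-based pieces: chunk00, chunk01, ...
    system('split', '-n', "l/$chunks", '-d', $input, 'chunk') == 0
        or die "split failed: $?";

    my @parts = glob 'chunk??';

    # Sort each piece numerically on its own; each fits in memory.
    for my $part (@parts) {
        system('sort', '-n', '-o', "$part.sorted", $part) == 0
            or die "sort $part failed: $?";
    }

    # -m merges the pre-sorted pieces without re-sorting them.
    system('sort', '-n', '-m', '-o', 'ints.sorted',
           map "$_.sorted", @parts) == 0
        or die "merge failed: $?";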
If you really need to do it in Perl, you could have a look at File::Sort, but I have no idea how well it works.
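If its docs are to be believed, the interface mimics sort(1) switches, so a numeric sort might look something like this (untested; file names made up):

    use File::Sort qw(fsort);

    # n => 1 should correspond to sort -n (numeric comparison).
    fsort({
        I => 'ints.txt',     # input file
        o => 'ints.sorted',  # output file
        n => 1,
    });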
I assume that by citing merge sort, you mean the native sort(1) -- it will certainly have a smaller footprint than a homebrew solution. If you can't do it all in memory, have you considered using the divide-and-conquer algorithm that underlies merge sort: break the file into chunks, then interleave the final merging rounds with explicit file reads and writes? You might also consider loading the data into an actual database. See Sorting data that don't fit in memory.
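The merge phase of that divide-and-conquer could look something like this in pure Perl (untested sketch; assumes the chunk files are already sorted with one integer per line, and that there are few enough of them that a linear scan beats a heap):

    use strict;
    use warnings;

    my @files = glob 'chunk*.sorted';
    my (@fh, @head);

    # Open every pre-sorted chunk and read its first line.
    for my $i (0 .. $#files) {
        open $fh[$i], '<', $files[$i] or die "$files[$i]: $!";
        $head[$i] = readline $fh[$i];
    }

    open my $out, '>', 'ints.sorted' or die $!;
    while (grep defined, @head) {
        # Pick the stream whose current line holds the smallest integer.
        my $min;
        for my $i (0 .. $#head) {
            next unless defined $head[$i];
            $min = $i if !defined $min or $head[$i] < $head[$min];
        }
        print $out $head[$min];
        $head[$min] = readline $fh[$min];   # advance that stream
    }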
First ask yourself "How would I do this without a computer?" Then have the computer do it the same way.
You need to use a sorting algorithm that is "in place". In other words, one that doesn't need the entire list AND temporary copies of it in memory at the same time. Assuming you can get the members into an array in memory at all (can you even load the data without starting to sort?), you should be able to use any algorithm that is described as in place.
http://en.wikipedia.org/wiki/Sorting_algorithm should give you some good ideas
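Heapsort, for example, is one of the in-place algorithms listed there; a toy-sized sketch of it in Perl (reorders the array with O(1) extra memory, unlike Perl's built-in sort, which returns a copy):

    use strict;
    use warnings;

    sub sift_down {
        my ($a, $root, $end) = @_;
        while ((my $child = 2 * $root + 1) <= $end) {
            # Of the two children, follow the larger one.
            $child++ if $child < $end and $a->[$child] < $a->[$child + 1];
            last if $a->[$root] >= $a->[$child];
            @$a[$root, $child] = @$a[$child, $root];
            $root = $child;
        }
    }

    sub heapsort_in_place {
        my ($a) = @_;
        my $end = $#$a;
        # Build a max-heap, then repeatedly swap the max to the tail.
        sift_down($a, $_, $end) for reverse 0 .. int($end / 2);
        for my $i (reverse 1 .. $end) {
            @$a[0, $i] = @$a[$i, 0];
            sift_down($a, 0, $i - 1);
        }
    }

    my @a = (5, 3, 9, 1, 4);
    heapsort_in_place(\@a);
    print "@a\n";   # 1 3 4 5 9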
This is why databases were created. Create an SQLite database and then you can sort any way you want.
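A minimal sketch with DBI and DBD::SQLite (table and file names invented, untested):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=ints.db', '', '',
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do('CREATE TABLE IF NOT EXISTS ints (n INTEGER)');

    # Load the integers; a single transaction keeps the inserts fast.
    open my $in, '<', 'ints.txt' or die $!;
    my $ins = $dbh->prepare('INSERT INTO ints (n) VALUES (?)');
    while (my $n = <$in>) {
        chomp $n;
        $ins->execute($n);
    }
    $dbh->commit;

    # Let SQLite do the sorting; it spills to disk if the data won't fit.
    my $sel = $dbh->prepare('SELECT n FROM ints ORDER BY n');
    $sel->execute;
    while (my ($n) = $sel->fetchrow_array) {
        print "$n\n";
    }
    $dbh->disconnect;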
Agree with BrowserUK ... 500 million integers is a lot to index, and if you aren’t searching for anything, it’s pure overhead to get “sorted answers” that way. But a good external sorting package would have no particular difficulty.
Ideally, you would arrange the whole data-processing flow that includes this file so that everything is put into a known sort sequence early, and things are done in such a way as to keep it that way from one step to the next. So you might have a 500 million record master file which is simply "known to be" sorted, and you manipulate that file in ways that both require and preserve that ordering. This avoids searching, it avoids repetitive sorting, and it avoids indexes and their overhead. At the same time, though, you do not want to schlep a bunch of data through disk reads and disk writes if you are not actually doing anything with most of it.
Obviously, RAM is the fastest resource, and it avoids I/O entirely ... provided that virtual-memory swapping is not going on, which can be a killer. Your strategy depends entirely on your situation, and sometimes you can get a lot of mileage simply by chopping a large file into smaller chunks so that each one does fit in the RAM you have without swapping.
The key here is ... "without swapping." If you are doing high-volume processing "in memory" to avoid I/O, but push the limit so that you start to swap, then not only is I/O going on, it is I/O of a particularly murderous kind.