PerlMonks  

"Out of memory" problem

by abdullah.yildiz (Novice)
on Nov 30, 2012 at 21:28 UTC
abdullah.yildiz has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I am trying to sort 500,000,000 numbers which are stored in a file. The file is approximately 5 GB in size. Initially, I read the numbers into an array and then perform a merge sort. However, there is not enough memory for the comparisons, and after some time the program dies with an "Out of memory" error. Could you help me with this problem? Thank you.

Specifications:
OS: Debian 6
RAM: 2 GB
Swap: 2.6 GB

Re: "Out of memory" problem
by Ransom (Beadle) on Nov 30, 2012 at 21:45 UTC

    You need to use a sorting algorithm that is "in place". In other words, you won't need to hold the entire list AND temporary lists in memory at the same time. Assuming you've got the numbers in an array loaded in memory (can you run the program without starting to sort?), you should be able to use any algorithm that is specified as in place.

    http://en.wikipedia.org/wiki/Sorting_algorithm should give you some good ideas
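    A minimal sketch of one such in-place algorithm, heapsort, in Perl (the sub names are my own; this reorders the array with O(1) extra memory, unlike a top-down merge sort that allocates temporary lists):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Restore the max-heap property for the subtree rooted at $i,
# considering only elements up to index $end.
sub sift_down {
    my ($a, $i, $end) = @_;
    while ((my $child = 2 * $i + 1) <= $end) {
        $child++ if $child < $end && $a->[$child] < $a->[$child + 1];
        last if $a->[$i] >= $a->[$child];
        @$a[$i, $child] = @$a[$child, $i];    # swap parent and larger child
        $i = $child;
    }
}

sub heapsort_in_place {
    my ($a) = @_;
    my $end = $#$a;
    # Build a max-heap, then repeatedly move the max to the back.
    sift_down($a, $_, $end) for reverse 0 .. int($end / 2);
    for my $i (reverse 1 .. $end) {
        @$a[0, $i] = @$a[$i, 0];
        sift_down($a, 0, $i - 1);
    }
}

my @nums = (42, 7, 19, 3, 25, 7, 1);
heapsort_in_place(\@nums);
print "@nums\n";    # 1 3 7 7 19 25 42
```

    Note that this only removes the temporary lists; the 500M-element array itself still has to fit in memory, which is the harder constraint here.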

Re: "Out of memory" problem
by mbethke (Hermit) on Nov 30, 2012 at 21:49 UTC

    I would try and use the system's sort(1) first as it's already optimized for this kind of stuff. 500M 32-bit integers would barely fit your memory if you put them in a straight C-style array with none of Perl's overhead, so it's probably a good idea to do a disk-based mergesort. Split the file into half a dozen or so parts, run sort on them individually and then use sort once more with -m to merge the results.

    If you really need to do it in Perl, you could have a look at File::Sort but I have no idea whether it works well.
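    The split/sort/merge steps above might look roughly like this when driven from Perl, assuming GNU coreutils' split(1) and sort(1) are on the PATH and the input holds one number per line (the file names and the tiny demo chunk size are invented; for the real 5 GB file you'd skip the demo-file step and split into pieces of something like 100_000_000 lines):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $input       = 'numbers.txt';    # hypothetical input file name
my $chunk_lines = 2;                # demo value; far larger in practice

# Demo stand-in for the real input: one number per line.
open my $fh, '>', $input or die "$input: $!";
print {$fh} "$_\n" for 42, 7, 19, 3, 25;
close $fh;

unlink glob 'part_*';    # clear stale pieces from earlier runs

# 1. Split the input into pieces small enough to sort in memory.
system('split', '-l', $chunk_lines, $input, 'part_') == 0
    or die "split failed: $?";

# 2. Sort each piece individually; -n gives numeric order.
for my $part (glob 'part_*') {
    system('sort', '-n', '-o', "$part.sorted", $part) == 0
        or die "sort $part failed: $?";
}

# 3. Merge the already-sorted pieces without re-sorting (-m).
system('sort', '-n', '-m', '-o', 'numbers.sorted', glob 'part_*.sorted') == 0
    or die "merge failed: $?";
```

    GNU sort can also do the chunking itself (it spills to temporary files and caps its buffer via -S), so on many systems a single `sort -n` may already cope with a file larger than RAM.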

Re: "Out of memory" problem
by kennethk (Monsignor) on Nov 30, 2012 at 21:49 UTC
    I assume that by citing merge sort, you are using the native sort -- it will certainly have a smaller footprint than a homebrew solution. If you can't do it all in memory, have you considered using the divide-and-conquer algorithm that underlies merge sort, starting by breaking the file into chunks and interleaving the final rounds with explicit file reads/writes? As well, you might consider loading the data into an actual database. See Sorting data that don't fit in memory.

    First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.
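    A pure-Perl sketch of that divide-and-conquer idea, under my own names and an illustrative chunk size: sort chunks that fit in memory with the native sort, spill each as a sorted run to a temporary file, then do a streaming k-way merge with explicit file reads/writes (a simple linear scan over the run heads is fine for a handful of runs):

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub external_sort {
    my ($infile, $outfile, $chunk_size) = @_;

    # Phase 1: read chunks, sort each in memory, write sorted runs.
    open my $in, '<', $infile or die "$infile: $!";
    my (@runs, @chunk);
    my $flush = sub {
        return unless @chunk;
        my $run = 'run_' . @runs . '.tmp';
        open my $out, '>', $run or die "$run: $!";
        print {$out} "$_\n" for sort { $a <=> $b } @chunk;
        close $out;
        push @runs, $run;
        @chunk = ();
    };
    while (my $line = <$in>) {
        chomp $line;
        push @chunk, $line;
        $flush->() if @chunk >= $chunk_size;
    }
    $flush->();
    close $in;

    # Phase 2: k-way merge -- repeatedly emit the smallest head value.
    my @fhs = map { open my $fh, '<', $_ or die "$_: $!"; $fh } @runs;
    my @heads = map { my $l = <$_>; defined $l ? 0 + $l : undef } @fhs;
    open my $out, '>', $outfile or die "$outfile: $!";
    while (1) {
        my $min;
        for my $i (0 .. $#heads) {
            next unless defined $heads[$i];
            $min = $i if !defined $min || $heads[$i] < $heads[$min];
        }
        last unless defined $min;
        print {$out} "$heads[$min]\n";
        my $next = readline $fhs[$min];
        $heads[$min] = defined $next ? 0 + $next : undef;
    }
    close $out;
    unlink @runs;
}
```

    Called as, say, external_sort('numbers.txt', 'sorted.txt', 10_000_000), only one chunk plus one line per run is ever held in memory at a time.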

Re: "Out of memory" problem
by rpnoble419 (Pilgrim) on Nov 30, 2012 at 22:04 UTC
    This is why they created databases. Create an SQLite database and then you can sort any way you want.
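    A minimal sketch of that approach with DBI and DBD::SQLite (both from CPAN, not in core; the file, database, and table names are invented, and a tiny demo file stands in for the real 5 GB input):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;    # requires DBD::SQLite

# Demo stand-in for the real input: one number per line.
open my $fh, '>', 'numbers.txt' or die $!;
print {$fh} "$_\n" for 42, 7, 19, 3, 25;
close $fh;

unlink 'numbers.db';    # start fresh for the demo
my $dbh = DBI->connect('dbi:SQLite:dbname=numbers.db', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE nums (n INTEGER)');

# Load the file; one big transaction is far faster than autocommit.
my $ins = $dbh->prepare('INSERT INTO nums (n) VALUES (?)');
open my $in, '<', 'numbers.txt' or die $!;
while (my $line = <$in>) {
    chomp $line;
    $ins->execute($line);
}
close $in;
$dbh->commit;

# Stream the rows back in sorted order; SQLite does the sorting on
# disk, so Perl never holds the whole data set in memory.
my $sth = $dbh->prepare('SELECT n FROM nums ORDER BY n');
$sth->execute;
open my $out, '>', 'sorted.txt' or die $!;
while (my ($n) = $sth->fetchrow_array) {
    print {$out} "$n\n";
}
close $out;
$dbh->disconnect;
```

    The one-time load is the expensive part; once the data is in the table, repeated sorted queries (or an index on the column) come cheaply.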

      That's my 'Dumb Answer of the Month' winner!



      Agree with BrowserUK ... 500 million integers is a lot to index, and if you aren't searching for anything, it's pure overhead to get "sorted answers" that way. But a good external sorting package would have no particular difficulty.

      Ideally, you would arrange the whole data-processing flow which includes this file so that everything gets put into a known sort sequence early, and things are done in such a way as to keep it that way from one step to the next. So you might have a 500-million-record master file which is simply "known to be" sorted, and you manipulate that file in ways that require it to be that way and which keep it that way. This avoids searching, and it avoids repetitive sorting. It also avoids indexes and their overhead. At the same time, though, you do not want to schlep a bunch of data through disk reads and disk writes if you are not actually doing anything with most of it.

      Obviously, RAM is the fastest resource and it avoids I/O entirely ... provided that virtual-memory swapping is not going on, which can be a killer. Your strategy entirely depends on your situation, and sometimes you can get a lot of mileage simply by chopping a large file into smaller chunks so that each one does fit in the RAM that you have without swapping.

      The key here is ... "without swapping." If you are doing high-volume processing "in memory" to avoid I/O, but push the limit so that you start to swap, not only is "I/O going on," but it can be of a particularly murderous kind.

Node Type: perlquestion [id://1006521]
Approved by mbethke