in reply to Handling HUGE amounts of data
Someone has suggested packing the data, which is probably a good suggestion if I could figure out how to do it.
Perhaps if I explain the flow.
'popfileb' creates a 2-D array (@aod) with just 'a', 'x', and 'd' as the values. The values are assigned element by element, row by row, based on both the original input data and the data already written to the previous line. So, if there is a 'd' at $aod[4][5], then $aod[4][6] should also be a 'd'; but if $aod[4][5] is an 'a', then $aod[4][6] has a (more or less) random chance of being assigned a 'd'.
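A minimal sketch of that row rule, ignoring the previous-line dependence for brevity (the names, the probability parameter, and the structure are all mine, not the real popfileb code):

```perl
use strict;
use warnings;

# Sketch of the rule described above: within a row, a 'd' forces the
# next element to be 'd'; an 'a' gives the next element a random
# chance ($d_chance) of being assigned 'd'.
sub build_row {
    my ( $width, $d_chance ) = @_;
    my @row = ('a') x $width;
    for my $col ( 0 .. $width - 2 ) {
        if ( $row[$col] eq 'd' ) {
            $row[ $col + 1 ] = 'd';    # a 'd' propagates to the right
        }
        elsif ( $row[$col] eq 'a' && rand() < $d_chance ) {
            $row[ $col + 1 ] = 'd';    # random chance of becoming 'd'
        }
    }
    return \@row;
}
```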
'model1' and 'model2' (only one is ever called per run) create a second 2-D array (@aob). The value of each element in @aob likewise depends on the values in the line above and on the corresponding element in @aod, and then has a random number added to it.
The values from @aod and @aob are then combined in 'write_to_output': during the print-to-file phase, every 'a' in @aod is replaced with the corresponding value from @aob.
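That combining step might look something like this (a sketch only — the real write_to_output isn't shown here, and combine_row is a name I made up):

```perl
use strict;
use warnings;

# Sketch of the combine-and-print step described above: every 'a' in
# the @aod row is replaced by the corresponding @aob value; 'x' and
# 'd' cells pass through unchanged.
sub combine_row {
    my ( $aod_row, $aob_row ) = @_;    # array refs for the same row
    return join '',
        map { $aod_row->[$_] eq 'a' ? $aob_row->[$_] : $aod_row->[$_] }
        0 .. $#{$aod_row};
}

# e.g. printing one output line per row:
# print {$fh} combine_row( $aod[$r], $aob[$r] ), "\n";
```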
So, how do I pack and unpack @aod one line (or one element) at a time? Again, I'm sure there's a simple way I'm not seeing, but I've never used pack/unpack before; I've never needed to.
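For what it's worth, here is my guess at one approach (not code from the thread): since every cell is a single character, each row can live in one Perl string instead of an array of one-character scalars, with substr reading or writing individual cells. That costs roughly one byte per cell instead of a full scalar per cell; pack/unpack works on a per-row basis too, though for single characters it adds little over a plain string.

```perl
use strict;
use warnings;

# Sketch: one string per row replaces a whole array of one-character
# scalars (~1 byte per cell instead of tens of bytes per scalar).
my $width = 8400;
my @aod;                                   # one string per row

$aod[0] = 'a' x $width;                    # row 0: all 'a'
substr( $aod[0], 5, 1 ) = 'd';             # write "column 5"
my $cell = substr( $aod[0], 5, 1 );        # read it back

# pack/unpack one row at a time, if pack is really wanted:
my $packed = pack 'A*', $aod[0];           # row string -> packed scalar
my @cells  = unpack '(A1)*', $packed;      # back to a list of cells
```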
Updated: As an experiment, I tried to generate and print out the full 17000 x 8400 @aod - it ran out of memory at line 2216 (74,016 kb).
At least I now know where the bottleneck is.
Re^2: Handling HUGE amounts of data
by BrowserUk (Patriarch) on Jan 31, 2011 at 01:20 UTC
by Dandello (Monk) on Jan 31, 2011 at 05:51 UTC
by BrowserUk (Patriarch) on Jan 31, 2011 at 07:04 UTC
by Dandello (Monk) on Jan 31, 2011 at 07:40 UTC
by Dandello (Monk) on Jan 31, 2011 at 19:45 UTC
by BrowserUk (Patriarch) on Jan 31, 2011 at 20:07 UTC
by Dandello (Monk) on Jan 31, 2011 at 20:49 UTC
Re^2: Handling HUGE amounts of data
by ELISHEVA (Prior) on Jan 30, 2011 at 21:52 UTC