Re: Handling HUGE amounts of data
by Dandello (Scribe) on Jan 30, 2011 at 20:53 UTC
Someone has suggested packing the data - which is probably a good suggestion if I could figure out how.
Perhaps if I explain the flow.
'popfileb' creates a 2-d array (@aod) with just 'a', 'x', and 'd' as the values. The values are assigned element by element, row by row, based on both the original input data and the row already written above. So if the element in the previous row is a 'd', the current element should also be a 'd', but if the element above is an 'a', the current element has a (more or less) random chance of being assigned a 'd'.
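That row-by-row rule can be sketched roughly like this (the subroutine name and the 0.1 probability are illustrative, not from the actual popfileb code):

```perl
use strict;
use warnings;

# Build one new row from the previous row: a 'd' above forces a 'd',
# an 'a' above has a random chance (made-up probability here) of
# becoming a 'd', and anything else is carried down unchanged.
sub next_row {
    my ($prev) = @_;
    my @row;
    for my $x (0 .. $#$prev) {
        if ($prev->[$x] eq 'd') {
            push @row, 'd';
        }
        elsif ($prev->[$x] eq 'a' && rand() < 0.1) {
            push @row, 'd';
        }
        else {
            push @row, $prev->[$x];
        }
    }
    return \@row;
}

my $row = next_row(['a', 'd', 'x', 'a']);
```

Only the previous row is consulted, which matters later: it means rows can be generated one at a time without keeping the whole grid around.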
'model1' and 'model2' (only one is ever called per run) create a second 2-d array (@aob). The value of each element in @aob also depends on the values in the row above, as well as on the corresponding element in @aod, and then has a random number added to it.
The values from @aod and @aob are then combined in 'write_to_output', so during the printing-to-file phase all 'a's in @aod are replaced with the corresponding values from @aob.
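The combining step might look something like this per row (subroutine name is hypothetical; the replacement rule is as described above):

```perl
use strict;
use warnings;

# Merge one row of @aod with the matching row of @aob:
# every 'a' is replaced by the model value, everything else passes through.
sub combine_row {
    my ($aod_row, $aob_row) = @_;
    return [
        map { $aod_row->[$_] eq 'a' ? $aob_row->[$_] : $aod_row->[$_] }
            0 .. $#$aod_row
    ];
}

my $out = combine_row(['a', 'd', 'x'], [5, 6, 7]);   # [5, 'd', 'x']
```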
So, how do I pack and unpack @aod one line (or one element) at a time? Again, I'm sure there's a simple way I'm not seeing, but I've never used pack/unpack before. I've never needed to.
Updated: As an experiment, I tried to generate and print out the full 17000 x 8400 @aod; it ran out of memory at line 2216 (74,016 KB).
At least I now know where the bottleneck is.
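For scale: 17000 x 8400 is about 143 million cells, and in an array of arrays every cell is a full Perl scalar plus per-row array overhead, so running out of memory partway through is no surprise. Stored as one string per row, the same grid is roughly one byte per cell. A sketch of string-backed rows, assuming the same single-character values:

```perl
use strict;
use warnings;

my $cols = 8400;

# Each row is a single string of 'a'/'x'/'d' characters: 1 byte per cell.
my $row = 'a' x $cols;

# Read cell $x without unpacking the whole row:
my $cell = substr $row, 5, 1;        # 'a'

# Write cell $x in place, using substr as an lvalue:
substr($row, 5, 1) = 'd';

# Since each row depends only on the row above, rows can be generated,
# combined, and printed one at a time, keeping only two rows in memory.
```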