in reply to Reduce CPU utilization time in reading file using perl
Using Tie::File on such a huge file -- or on any file over a few (single-digit) megabytes -- is stupid. It will use huge amounts of CPU and be very slow.
You can build your hash much (much, much) more quickly this way:
open BIGFILE, '<', "testfile.dat" or die "Can't open file: $!\n";
my %hash;
while( <BIGFILE> ) {
    chomp;
    my( $type, $No, $date ) = split( /\|/ );
    $hash{ $No . $date } = $type . "@" . $No . "@" . $date;
}
close BIGFILE;
## do something with the hash.
It will use far less CPU and memory, and complete in less than half the time.
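To make the resulting hash layout concrete, here is a small self-contained demonstration of the same build loop, using made-up sample records (the field values are hypothetical, not from the OP's data):

```perl
use strict;
use warnings;

my %hash;
# Stand-in for reading lines from the big file.
for my $line ( "A|123|20130928\n", "B|456|20130929\n" ) {
    chomp( my $copy = $line );
    my( $type, $No, $date ) = split /\|/, $copy;
    $hash{ $No . $date } = $type . '@' . $No . '@' . $date;
}

print "$_ => $hash{$_}\n" for sort keys %hash;
# Prints:
# 12320130928 => A@123@20130928
# 45620130929 => B@456@20130929
```

Note that the key is simply the number and date concatenated, so duplicate (No, date) pairs collapse to one entry -- which is why duplicates reduce the memory footprint.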
However, it is really doubtful that you will be able to build a hash from a file of that size without running out of memory unless:
- there are huge numbers of duplicate records in that file;
- you have a machine with huge amounts of memory; or
- you have a huge swap partition (preferably sited on an SSD).
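If none of those apply, one further option -- my suggestion, not from the post above -- is to tie the hash to an on-disk DBM file, trading lookup speed for memory. The sketch below uses SDBM_File (in core Perl, unlike DB_File) and in-memory sample lines standing in for the big file; the filenames are hypothetical:

```perl
use strict;
use warnings;
use Fcntl;                         # for O_RDWR, O_CREAT
use SDBM_File;
use File::Temp qw(tempdir);

# Tie %hash to a DBM file on disk instead of holding it all in RAM.
my $dir = tempdir( CLEANUP => 1 );
tie my %hash, 'SDBM_File', "$dir/records", O_RDWR|O_CREAT, 0666
    or die "Cannot tie DBM file: $!\n";

# Stand-in for: while( <BIGFILE> ) { chomp; ... }
for my $line ( "A|123|20130928", "B|456|20130929" ) {
    my( $type, $No, $date ) = split /\|/, $line;
    $hash{ $No . $date } = $type . '@' . $No . '@' . $date;
}

# untie %hash when finished to flush and release the DBM file.
```

Bear in mind that SDBM limits each key+value pair to roughly 1 KB; for larger records you would need DB_File or a real database, and either way every store becomes a disk operation, so it will be slower than an in-memory hash.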
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: Reduce CPU utilization time in reading file using perl
by madtoperl (Hermit) on Sep 28, 2013 at 13:18 UTC
by BrowserUk (Patriarch) on Sep 28, 2013 at 13:31 UTC
by madtoperl (Hermit) on Sep 30, 2013 at 06:39 UTC
by BrowserUk (Patriarch) on Sep 30, 2013 at 08:42 UTC
In Section: Seekers of Perl Wisdom