I was trying to save memory by not having to load Config.pm. But because of your remark, I decided to do a Benchmark:
Benchmark: timing 1000 iterations of open, use...
open: 38 wallclock secs ( 2.62 usr 0.00 sys + 16.14 cusr 12.96 csys = 31.72 CPU) @ 381.68/s (n=1000)
use: 14 wallclock secs (13.15 usr + 0.00 sys = 13.15 CPU) @ 76.05/s (n=1000)
This surprised me a lot! The fork() approach with open() seems to be 5 times as fast as loading Config.pm!
Alas, I think I stumbled upon a bug / feature / problem of Benchmark: apparently, only the parent's "usr" CPU is taken into account when calculating the number of runs per second, and that "usr" CPU is of course a lot lower with fork/open than it is with use, since most of the fork/open work is done in the children (the cusr/csys columns).
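As a sanity check, the printed rates can be reproduced from the parent-process CPU columns alone (numbers taken from the run above), which supports the idea that the children's cusr/csys time is ignored:

```perl
use strict;
use warnings;

# Benchmark's rate is n / parent CPU, using the numbers reported above.
my $open_rate = 1000 / 2.62;    # usr only; the 16.14 cusr + 12.96 csys
                                # of the forked children are not counted
my $use_rate  = 1000 / 13.15;   # usr + sys of the parent itself
printf "open: %.2f/s  use: %.2f/s\n", $open_rate, $use_rate;
# prints: open: 381.68/s  use: 76.05/s
```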
Still, the fork/open approach only takes about 2.5 times as much total CPU as loading Config.pm. I wonder if that is a testament to the efficiency of fork(), or to the slowness of Config.pm. ;-)
The code:
use Benchmark;

timethese( 1000, {
    # fork a child perl and read one config value from its -V output
    open => sub {
        open my $handle, $^X . ' -V:ccflags |' or die "cannot fork: $!";
        my $ccflags = <$handle>;
        close $handle;                 # also reaps the child
        delete $INC{'Config.pm'};      # no-op here; kept for symmetry
    },
    # load Config.pm afresh on every iteration
    use => sub {
        require Config;
        Config->import;
        my $ccflags = $Config{ccflags};
        delete $INC{'Config.pm'};      # force a real reload next time
    },
} );
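For completeness, here is how the one-liner's output would be unwrapped in practice. This is only a sketch: config_value is a hypothetical helper, and it assumes perl -V:<name> prints a single line of the form name='value';

```perl
use strict;
use warnings;

# Hypothetical helper: fetch one configuration value by shelling out to
# "perl -V:<name>" instead of loading Config.pm, then strip the
#   name='value';
# wrapper from the line the child prints.
sub config_value {
    my ($name) = @_;
    open my $fh, "$^X -V:$name |" or die "cannot fork: $!";
    my $line = <$fh>;
    close $fh;    # also reaps the child
    my ($value) = $line =~ /^\Q$name\E='(.*)';/;
    return $value;
}

print "osname=", config_value('osname'), "\n";
```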
Liz