PerlMonks |
Re^5: Use of do() to run lots of perl scripts by jcb (Parson)
on Mar 04, 2021 at 02:54 UTC
> So using a RAM-disk could have the best cost-benefit ratio.

While our questioner has not told us what operating system they are using, modern systems tend to cache frequently used files in RAM. Linux kernels in particular will effectively keep frequently accessed files in RAM, available memory permitting: file contents are held in the page cache, and directory lookups in the dentry cache (the "dcache"), both of which Linux has had for decades.

> I have my doubts that refactoring 500 scripts is an option.

The work could be done incrementally. The Pareto principle suggests that 20% of the scripts are probably used 80% of the time, with a long tail into the noise. Our questioner also mentioned that they are not all Perl scripts, so presumably it is already known that the largest contributors to server load are Perl scripts; otherwise the entire question is pointless.

> Precompiling them all into the master process would make them vulnerable to global effects in the BEGIN-phase.

ByteLoader does not work like that. You use B::Bytecode to compile each script in advance, load ByteLoader itself in the master process, and use ByteLoader in each child to load and run the precompiled script. (In fact, the precompiled script can include use ByteLoader; itself, so that a fork followed by do "script.plc"; will cause ByteLoader to be invoked correctly.) Global effects in the BEGIN-phase could be an issue when refactoring the scripts into modules, but addressing those issues would be part of that refactoring. :-)
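The precompile-then-do() flow can be sketched as follows. This is illustrative only: on perls where B::Bytecode is still available (it left the core distribution in 5.10), each script would first be compiled with something like perl -MO=Bytecode,-H,-oscript.plc script.pl. The sketch below substitutes a plain worker script written on the fly (the name worker_demo.pl is invented for the demo), since the same do() call runs either form.

```perl
#!/usr/bin/perl
# Sketch only: a minimal master process that fork()s and runs a script
# with do(). The worker script and its name are invented for this demo;
# in the scenario above it would be one of the existing 500 scripts, or
# a precompiled .plc beginning with "use ByteLoader;" -- either form is
# run by the same do() call.
use strict;
use warnings;

# Create a tiny stand-in worker script.
my $script = 'worker_demo.pl';
open my $fh, '>', $script or die "open $script: $!";
print $fh "print \"hello from worker\\n\";\n1;\n";
close $fh;

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # Child: do() compiles and runs the file in this process, reusing
    # every module the master had already loaded before forking.
    # The "./" prefix keeps do() from searching @INC.
    do "./$script" or die "could not run $script: " . ($@ || $!);
    exit 0;
}
waitpid $pid, 0;
my $status = $? >> 8;
print "child exit status: $status\n";
unlink $script;
```

The point of the pattern is that everything loaded in the master before the fork is shared copy-on-write with each child, so per-script startup cost shrinks to compiling only the script itself.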
In Section: Seekers of Perl Wisdom