Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
by NeilF (Sexton) on Jan 01, 2006 at 00:14 UTC
NeilF has asked for the wisdom of the Perl Monks concerning the following question:
I've been doing some analysis of some of my Perl scripts... Take this simple example, which reads and then writes a 1 MB file (about 3,000 lines):
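Something along these lines (a minimal sketch of the test; the file names are just placeholders):

    use strict;
    use warnings;

    # Read all ~3000 lines of the 1 MB file into memory...
    open(my $in, '<', 'test.dat') or die "Can't read test.dat: $!";
    my @lines = <$in>;
    close($in);

    # ...then write them straight back out again.
    open(my $out, '>', 'test.out') or die "Can't write test.out: $!";
    print $out @lines;
    close($out);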
When I watch this running (XP Pro SP2) using File Monitor (from Sysinternals) I can see that the reads generate a new I/O operation for every 4K. Worse still, when writing, it generates an I/O operation for every 1K... In total this simple operation generates over 1,500 I/O operations (in File Monitor).
Using a standard open and print results in the same number of I/O operations (in File Monitor). I've also tried the same thing running Perl under Cygwin with File Monitor watching, and the results are the same: 4K chunks are read, and 1K chunks are written.
Is there any way around this? Am I monitoring it correctly? You can see from my example that I've tried using the more exotic calls to try to stop this "buffering"...
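By the "more exotic calls" I mean the sys* family, which the docs say bypass PerlIO buffering entirely. A sketch of that approach (the buffer size and file names are just illustrative):

    use strict;
    use warnings;
    use Fcntl qw(O_RDONLY O_WRONLY O_CREAT O_TRUNC);

    # sysread/syswrite map to single read()/write() calls of the size
    # you ask for, so the OS should see one big transfer, not 4K chunks.
    sysopen(my $in, 'test.dat', O_RDONLY) or die "Can't read test.dat: $!";
    my ($data, $buf) = ('', '');
    $data .= $buf while sysread($in, $buf, 256 * 1024);   # 256K per read()
    close($in);

    sysopen(my $out, 'test.out', O_WRONLY | O_CREAT | O_TRUNC)
        or die "Can't write test.out: $!";
    syswrite($out, $data) or die "Write failed: $!";
    close($out);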
Here's a link to some example output from File Monitor showing the 4K chunks being read in: www.hvweb.co.uk/fawcettn/filemon4.gif (Remember to maximise the image size.)
UPDATE: It seems that when I test under Cygwin, binmode makes NO improvement at all... So which is right? I want to improve the software on my ISP's Unix machine, but I get contradictory results from ActivePerl and Cygwin on my XP system... What can I do :(
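For clarity, the binmode call I'm testing with is just the plain form (a sketch; file name as before):

    open(my $fh, '<', 'test.dat') or die "Can't read test.dat: $!";
    binmode($fh);   # raw byte stream: no CRLF translation on Windows
    my @lines = <$fh>;
    close($fh);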