PerlMonks  

printout of large files. loop vs. flush at once

by ISAI student (Scribe)
on Apr 18, 2013 at 16:21 UTC ( [id://1029384] )

ISAI student has asked for the wisdom of the Perl Monks concerning the following question:

Hello all. We are seeing cases of slowdown when printing out files (our fileserver could be better).

I was wondering if there is any real difference between printing line by line vs. flushing out one big array or multi-line string at once. Right now, the scripts print line by line. You can assume that the system's memory can handle it. Or, in code:

...
print FH $f;
...
print FH $b;
...
vs.:
...
$printout .= $f;
...
$printout .= $b;
...
print FH $printout;
Thanks.
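
Spelled out as a self-contained sketch (lexical filehandle and three-arg open; $f, $b and out.txt are placeholders standing in for the real data and destination):

use strict;
use warnings;

my ($f, $b) = ("first chunk\n", "second chunk\n");   # placeholder data

# Variant 1: print each piece as it is produced.
open my $fh, '>', 'out.txt' or die "open: $!";
print {$fh} $f;
print {$fh} $b;
close $fh or die "close: $!";

# Variant 2: accumulate everything in a string, then print once.
open $fh, '>', 'out.txt' or die "open: $!";
my $printout = '';
$printout .= $f;
$printout .= $b;
print {$fh} $printout;
close $fh or die "close: $!";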

Re: printout of large files. loop vs. flush at once
by BrowserUk (Patriarch) on Apr 18, 2013 at 17:19 UTC
    I was wondering if there is any real difference between printing line by line vs. flushing out 1 big array, or multiline string.

    Accumulating the lines in memory will be considerably slower.

    Each time you concatenate a new line:

    1. A new chunk of memory the size of the existing accumulation + the new line will need to be allocated.

      As the accumulation gets larger, this will more and more frequently involve going out to the OS to get the process's virtual allocation extended.

    2. Then all the data from the existing accumulation + the new line will be copied to the new allocation.
    3. Then the existing allocation will be freed

    I.e., by the time you've accumulated a 1000-line file, the first line will have been copied 1000 times, the second 999 times...
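
    To put a rough number on that worst case (assuming a fresh allocation and full copy on every append, as the list above describes): for N lines of L bytes each, about L * N * (N + 1) / 2 bytes get copied in total. A quick back-of-envelope in Perl, with made-up sizes:

    # Worst-case total bytes copied while accumulating N lines of L bytes,
    # if every append reallocates and copies the whole string so far.
    my ($n_lines, $line_len) = (1000, 80);
    my $copied = $line_len * $n_lines * ($n_lines + 1) / 2;
    printf "~%.0f MB copied to build an %d KB string\n",
        $copied / 1e6, $n_lines * $line_len / 1e3;   # ~40 MB for an 80 KB string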

    In addition, when the entire thing has been accumulated and you come to write it to the filesystem, the file cache will likely need to make a request to the OS to allocate sufficient cache memory for the whole thing; which in turn might force the OS to swap out (parts of) other running processes to accommodate it.

    When writing line by line, the data will first be written to cache memory in pages and only queued for flushing to disk when a page is full; meanwhile, another page from the existing cache pool will be used to buffer subsequent output allowing your process to continue full speed whilst the flush to disk happens in parallel.
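
    A minimal sketch of that default behaviour (IO::Handle is core; autoflush is off for file handles unless you turn it on, and the filename is a placeholder):

    use strict;
    use warnings;
    use IO::Handle;

    open my $fh, '>', 'out.txt' or die "open: $!";
    $fh->autoflush(0);    # the default for files: writes are buffered

    # Each print lands in the PerlIO buffer; the buffer is handed to the
    # OS only when it fills, so the disk (or network) write overlaps with
    # the program producing the next lines.
    print {$fh} "line $_\n" for 1 .. 100_000;

    close $fh or die "close: $!";   # close flushes whatever remains buffered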

    We are seeing cases of slowdown when printing out files

    The first question you'll need to answer before you'll get sensible answers is: how are you detecting and measuring this slowdown?

    With that information it might be possible to work out what is causing it.

    It is doubtful if there is anything that you can do in your Perl code to prevent or alleviate this problem; but with more information, there might be some useful advice forthcoming.

    E.g. What OS/version? What filesystem? How much data is being written? Are you running one instance of your program at a time, or many concurrently? What else is using the file server? How fast is the network between you and the fileserver?


Re: printout of large files. loop vs. flush at once
by davido (Cardinal) on Apr 18, 2013 at 17:17 UTC

    If you want to slurp the entire file, don't read it line by line and then build up a giant $printout string. That will result (internally) in lots of copying and reallocating of larger memory chunks as the string grows. ...Perl's pretty smart, but you could almost look at that technique as an optimized version of Schlemiel the Painter's Algorithm. As you append, Perl has to allocate larger chunks of memory and move the entire string to the new, larger location.

    Better to just slurp it in one step using one of the following:

    my $string = do {
        open my $fh, '<', 'filename' or die $!;
        local $/ = undef;
        <$fh>;
    };

    ...or...

    use File::Slurp;
    my $text = read_file('filename');

    ...or...

    use File::Slurp;
    my @lines = read_file('filename');

    Any of the above ought to be relatively efficient, though the File::Slurp methods are easier on the eyes. But if reading the file line by line (your existing approach) is too slow, slurping is only going to yield incremental savings, at best. Either way, you've got to touch the entire file. Line-by-line approaches tend to work out well, and scale well.

    You're probably already aware of this, but if the file size is large enough to swamp physical memory, slurping is not a good approach.


    Dave

Re: printout of large files. loop vs. flush at once
by mbethke (Hermit) on Apr 18, 2013 at 17:06 UTC
    Perl I/O is buffered unless you say otherwise, so what actually hits the disk (or rather the OS, which may do its own buffering) are usually blocks of ~4k anyway. On local disks, 4k writes aren't usually very slow, but as you're talking about a file server, I suppose there's some network between your application and the disk. Network file systems, with their much higher latencies, tend to show bigger differences between writing small and big blocks, so batching could be worth it. The only way to find out is to benchmark: just slurp a big text file into an array, then time how long each way takes, before you change all the printing in a non-trivial program.
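
    One way to run exactly that benchmark, using the core Benchmark and File::Temp modules (the line count and contents are made up; point the output at the real file server rather than a temp file to measure what actually matters):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use File::Temp qw(tempfile);

    # Made-up test data: 100_000 short lines already in memory.
    my @lines = map { "line $_ of some test data\n" } 1 .. 100_000;

    cmpthese(-3, {
        line_by_line => sub {
            my ($fh) = tempfile(UNLINK => 1);
            print {$fh} $_ for @lines;
            close $fh or die "close: $!";
        },
        accumulate => sub {
            my ($fh) = tempfile(UNLINK => 1);
            my $printout = '';
            $printout .= $_ for @lines;
            print {$fh} $printout;
            close $fh or die "close: $!";
        },
    });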
Re: printout of large files. loop vs. flush at once
by RichardK (Parson) on Apr 18, 2013 at 17:20 UTC

    Whichever OS you're on should buffer any writes, so my first guess would be not enough memory somewhere.
