Cody Pendant has asked for the wisdom of the Perl Monks concerning the following question:
I want to process a text file in human-compatible chunks of a certain size or below — as in, not "sysread" for chunks of 2048 bytes, but something which won't end up dividing words or sentences.
By which I mean I want to go paragraph by paragraph, accumulating lines until I reach a chunk of a certain size, so I've got something like this:

    open( X, 'x.txt' );
    my ( $chunk, $line );
    while (<X>) {
        $line = $_;
        if ( ( length($chunk) + length($line) ) > 2048 ) {
            # if the next line would take us over the set size
            doSomethingWithChunk($chunk);
            $chunk = $line;
        }
        else {
            $chunk .= $line;    # append line and keep going
        }
    }
    doSomethingWithChunk($chunk);    # process whatever's left in $chunk at the end

which is fine, but what I'd really like to do is have some kind of sub which would return those chunks one at a time until the file ran out.
Something like how Algorithm::Permute does this:
    my $p = new Algorithm::Permute( ['a'..'d'] );
    while ( @res = $p->next ) {
        # do something with @res
    }
Is there a module or a recognised way to do this? I'm blanking on the way to do it.
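For what it's worth, the usual idiom for a `next`-style interface like Algorithm::Permute's is a closure that captures the filehandle and hands back one chunk per call, returning undef at end of file. This is only a sketch, not a module recommendation; `make_chunker` and the demo text are my own names, not from the thread:

```perl
use strict;
use warnings;

# make_chunker($fh, $max): returns a sub that, on each call, reads lines
# from $fh until adding the next line would push the chunk past $max
# bytes, then returns the accumulated chunk. Returns undef once the
# file is exhausted. An oversized single line is still returned whole
# rather than split mid-word.
sub make_chunker {
    my ( $fh, $max ) = @_;
    my $pending;    # line carried over from the previous call
    return sub {
        my $chunk = defined $pending ? $pending : '';
        $pending = undef;
        while ( my $line = <$fh> ) {
            if ( length($chunk) + length($line) > $max && length $chunk ) {
                $pending = $line;    # save for the next chunk
                return $chunk;
            }
            $chunk .= $line;
        }
        return length $chunk ? $chunk : undef;    # EOF
    };
}

# Demo on an in-memory "file" so the sketch is self-contained;
# in real use you'd open 'x.txt' instead.
my $text = "First paragraph line.\nSecond line.\nThird line.\n";
open my $fh, '<', \$text or die $!;
my $next_chunk = make_chunker( $fh, 25 );
while ( defined( my $chunk = $next_chunk->() ) ) {
    print length($chunk), " bytes\n";
}
```

The closure keeps the carried-over line in `$pending` between calls, which is what lets it present the same `while ( my $chunk = $iter->() )` shape as the Algorithm::Permute loop above.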
($_='kkvvttuu bbooppuuiiffss qqffssmm iibbddllffss')
=~y~b-v~a-z~s; print
Replies are listed 'Best First'.
Re: Handling A File In Human-Compatible Chunks
by BrowserUk (Patriarch) on Jun 27, 2005 at 11:59 UTC

Re: Handling A File In Human-Compatible Chunks
by holli (Abbot) on Jun 27, 2005 at 12:23 UTC
  by Cody Pendant (Prior) on Jun 27, 2005 at 13:35 UTC
  by holli (Abbot) on Jun 27, 2005 at 13:43 UTC
  by Cody Pendant (Prior) on Jun 29, 2005 at 03:41 UTC

Re: Handling A File In Human-Compatible Chunks
by tlm (Prior) on Jun 27, 2005 at 11:58 UTC

Re: Handling A File In Human-Compatible Chunks
by broquaint (Abbot) on Jun 27, 2005 at 12:10 UTC