http://www.perlmonks.org?node_id=479199

My quick & dirty getfile subroutine, and a one-liner too, courtesy of merlyn (with some modifications) ... not sure which would be faster.

Added one after reading Best Practices. I don't know how fast it is, or whether it would be faster than the others. I haven't even syntax checked it; I just wanted to get this down during a quick stop.

Minor fix to the last sample

sub getfile {
    my $filename = shift;
    open my $FH, '<', $filename or die "Unable to open $filename: $!";
    return wantarray ? map /(.*)/, <$FH> : do { local $/; <$FH> };
}
sub getfile {
    local *ARGV;
    @ARGV = shift;
    wantarray ? map /(.*)/, <> : do { local $/; <> };
}
sub getfile {
    my $filename = shift;
    open my $FH, '<', $filename or die "Unable to open $filename: $!";
    sysread $FH, my $contents, -s $FH;
    return wantarray ? split m{$/}, $contents : $contents;
}
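For context, a quick usage sketch (untested, with a made-up file name) showing how the wantarray switch behaves:

# List context: one element per line (the map /(.*)/ trick drops the newlines).
my @lines = getfile( 'some_file.txt' );

# Scalar context: the whole file slurped into a single string.
my $contents = getfile( 'some_file.txt' );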

Re: getfile( $filename )
by merlyn (Sage) on Jul 29, 2005 at 02:10 UTC
      Yeah, I've run across that little ditty before, most recently as a yak shaving on my IPC::Open3 journey, but it doesn't handle wantarray. It just occurred to me that it doesn't chomp the lines either ... hmmm ...
      return wantarray
          ? do { my @lines = <$FH>; chomp @lines; @lines }
          : do { local $/; <$FH> };
      I wish there were a way to chomp <$FH>.
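      One spelling that seems to come close (just a sketch, untested): chomp can act directly on the list assignment, so the lines come back already chomped.

      # read and chomp the lines in a single step
      chomp( my @lines = <$FH> );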
      Harley J Pig
Re: getfile( $filename )
by xdg (Monsignor) on Jul 29, 2005 at 11:22 UTC

    cf. File::Slurp

    use File::Slurp;
    my @lines = read_file( $filename );

    Also see the article in that distribution on file slurping and efficiency. It suggests reading the file in one shot and then splitting on newlines instead of reading line-by-line.
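    A quick sketch of that approach with read_file (untested):

    # Slurp the file in one shot, then break it into lines afterwards.
    my $text  = read_file( $filename );
    my @lines = split /\n/, $text;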

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      I do a lot of work for clients who either don't have the ability (or the capability) to install from CPAN or just plain don't trust it (I have no idea why) and refuse to use CPAN modules. I have, on occasion, filed off the serial numbers and used a CPAN module in order to get something done quickly, but I *really* prefer not to do that.

      Also, if the file is too big to be read entirely into memory then reading the entire file in one shot isn't a good idea.

      Harley J Pig
        Also, if the file is too big to be read entirely into memory then reading the entire file in one shot isn't a good idea.
        Then when you file off the serial numbers, you can add an optional parameter for chunk size, and convert the function to an iterator. Make a module version and put it on CPAN for everyone else (unless there's already one there?)
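        Something along those lines, maybe (just a sketch; the sub name, default chunk size, and file name are made up, and it's untested):

        # Returns a closure that hands back the next chunk of the file,
        # so the whole thing never has to sit in memory at once.
        sub getfile_iter {
            my ( $filename, $chunk_size ) = @_;
            $chunk_size ||= 65536;    # bytes per read, chosen arbitrarily
            open my $FH, '<', $filename or die "Unable to open $filename: $!";
            return sub {
                my $bytes = read $FH, my $chunk, $chunk_size;
                return unless $bytes;    # 0 at end of file, undef on error
                return $chunk;
            };
        }

        # Walk a big file a chunk at a time.
        my $next = getfile_iter( 'some_big_file.log' );
        while ( defined( my $chunk = $next->() ) ) {
            # ... process $chunk ...
        }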

        -QM
        --
        Quantum Mechanics: The dreams stuff is made of