I've used the practice you outline to process human-readable list reports into nice, machine-treatable reports with great success, except without the slurping part:
package Parser::ReportFoo;
use strict;
sub new {
    ...
    $self->columns([qw(date customer amount)]);
};
my @handlers = (
    # Line containing uninteresting information
    [qr/^Line containing uninteresting information/ => \&discard],
    [qr/^------------/ => \&discard],
    [qr/^(\d\d-[A-Z]{3}-\d\d) (..........) (\d+\.\d+)/ => \&capture_transaction],
    [qr/^\s*$/ => \&flush],
    [qr/^ \) (..........)\s+$/ => \&capture_name2],
);
sub discard {};
sub flush {
    my ($self) = @_;
    my @row = map { $self->$_ } (@{ $self->columns });
    print join("\t", @row), "\n";
};
sub capture_transaction {
    my ($self, $date, $customer, $amount) = @_;
    $self->date($date);
    $self->customer($customer);
    $self->amount($amount);
};
sub capture_name2 {
    my ($self, $customer) = @_;
    $self->customer($self->customer . " " . $customer);
    $self->flush();
};
sub parse {
    my ($self, $file) = @_;
    open my $fh, "<", $file or die "$file: $!";
    while (defined(my $line = <$fh>)) {
        # First, check which regex matches
        my $handled;
        for (@handlers) {
            my ($re, $code) = @$_;
            if (my @match = ($line =~ /$re/)) {
                $code->($self, @match);
                $handled = 1;
                last;
            };
        };
        warn "Unhandled line >>$line<<" unless $handled;
    };
};
The separation of flush() and the capture_foo handlers is because I sometimes have reports where one logical row spans several lines, or where the transaction date is noted at the top of the "page", so some information has to persist before a whole row can be printed to the results.
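For illustration, a made-up report fragment of that shape might look like this (the customer name here wraps onto a continuation line, so the transaction line only fills date/customer/amount and the continuation handler appends the rest before the row is flushed):

```
Line containing uninteresting information
------------
01-JAN-07 Acme Corp  1234.50
 ) Holdings

02-JAN-07 Foo GmbH     99.00
```

With columns (date, customer, amount), the first logical row would come out as a single tab-separated line, e.g. "01-JAN-07<TAB>Acme Corp  Holdings<TAB>1234.50", even though it spanned two input lines.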
I use tab-separated files with a file extension of .xls, which is the most Excel-friendly thing you can get without involving Win32::OLE or Spreadsheet::WriteExcel.
Update: monarch spotted a typo/error in sub parse; I was using @_ where it should have been @$_.