in reply to Re: DBD::CSV limitation versus aging/unmaintained modules
in thread DBD::CSV limitation versus aging/unmaintained modules

"Try taking the CSV file and cutting it in halves"

Gah! One could probably find the problem with much, much less effort if one weren't pathologically opposed to the use of debuggers. q-:

But seriously, in this case I'd build a hash of the record IDs that DBI (via DBD::CSV) returns, then read the file with the method that "works" and report the records where DBI starts/stops seeing them:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $tdbh = DBI->connect( "DBI:CSV:" )
        or die "Cannot connect: " . $DBI::errstr;
    my $sth = $tdbh->prepare( "select * from ofphl" );
    $sth->execute();

    # Record every ID that DBI/DBD::CSV actually returns
    my %dbi;
    while( my $rec = $sth->fetchrow_hashref() ) {
        $dbi{ $rec->{id_field_name} }++;
    }

    # Now walk the raw file and report where DBI starts/stops seeing records
    open FILE, "<", "file.csv"
        or die "Can't read file.csv: $!\n";
    $| = 1;
    my $has = 1;
    while( <FILE> ) {
        my $id = ( split /,/ )[0];    # assuming the ID is the first field
        if( !$has != !$dbi{$id} ) {
            print "DBI ", ( $has ? "stopped" : "started" ),
                " at record $id.\n";
            $has = !$has;
        }
    }
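In case the `!$has != !$dbi{$id}` test looks odd: negating both sides first collapses each to a plain boolean, so the inequality acts as an exclusive-or and fires exactly when the file and DBI disagree about a record. A standalone sketch:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # !$x != !$y is a boolean XOR: each side is normalized to true/false,
    # so 2 and 3 compare as "same" even though 2 != 3 numerically.
    for my $pair ( [0, 0], [0, 1], [1, 0], [2, 3] ) {
        my( $x, $y ) = @$pair;
        print "$x,$y: ", ( !$x != !$y ? "differ" : "same" ), "\n";
    }
    # prints: same, differ, differ, same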

Note that you might need to concatenate more than one field if there isn't a unique ID field.
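For instance, a sketch of a composite key (the assumption that the first two columns together identify a record is purely for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch: join two fields into one composite key. "\0" is just a
    # separator unlikely to show up inside the data itself.
    sub composite_key {
        my( $line ) = @_;
        chomp $line;
        my @fields = split /,/, $line;
        return join "\0", @fields[0, 1];
    }

    # You'd then use composite_key() (or its equivalent on the DBI side)
    # in both loops in place of the single-field ID.
    print composite_key("Smith,John,2023-01-01"), "\n";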

                - tye