in reply to Re: DBD::CSV limitation versus aging/unmaintained modules
in thread DBD::CSV limitation versus aging/unmaintained modules
Try taking the CSV file and cutting it in halves
Gah! One could probably find the problem with much, much less effort if one weren't pathologically opposed to the use of debuggers. q-:
But seriously, in this case I'd build a hash of record IDs returned by DBD::CSV, then use the method that "works" and report the records where DBD::CSV starts/stops seeing records:
    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $tdbh = DBI->connect("DBI:CSV:")
        or die "Cannot connect: " . $DBI::errstr;
    my $sth = $tdbh->prepare("select * from ofphl");
    $sth->execute();

    # Record every ID that DBD::CSV manages to return.
    my %dbi;
    while( my $rec = $sth->fetchrow_hashref() ) {
        $dbi{$rec->{id_field_name}}++;
    }

    # Now walk the raw file and report where DBI's view diverges.
    open FILE, "< file.csv"
        or die "Can't read file.csv: $!\n";
    my $has = 1;
    $| = 1;
    while( <FILE> ) {
        my $id = ( split /,/ )[0];   # Assuming ID is first field
        if( !$has != !$dbi{$id} ) {
            print "DBI ", ( $has ? "stopped" : "started" ),
                " at record $id.\n";
            $has = !$has;
        }
    }
Note that you might need to concatenate more than one field if there isn't a unique ID field.
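For the no-unique-column case, a minimal sketch of such a composite key (the field positions and the make_key name are illustrative, not from the original code) might look like:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical: no single field is unique, so join the first and
    # third fields with a separator that cannot appear in the data
    # (here the ASCII unit separator, \x1F).
    sub make_key {
        my ($line) = @_;
        chomp $line;
        my @f = split /,/, $line;
        return join "\x1F", @f[0, 2];
    }

    print make_key("alice,NY,2004"), "\n";

The same make_key() would then be applied both when filling %dbi and when reading the raw file, so the two sides compare identical keys.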
- tye
Replies are listed 'Best First'.
Re: Re: DBD::CSV limitation versus aging/unmaintained modules (lazy)
by tilly (Archbishop) on Jan 15, 2004 at 21:59 UTC
Re: Re: DBD::CSV limitation versus aging/unmaintained modules (lazy)
by Eyck (Priest) on Jan 16, 2004 at 10:53 UTC
by tilly (Archbishop) on Jan 16, 2004 at 15:27 UTC
by Eyck (Priest) on Jan 16, 2004 at 16:11 UTC
by tilly (Archbishop) on Jan 19, 2004 at 05:43 UTC
by Eyck (Priest) on Jan 19, 2004 at 08:29 UTC
In Section: Meditations