Hello Monks

I think I've hit a wall with the DBD::CSV module; it seems to ignore most of the rows in my file.

It all started as a success story with DBD::CSV - I had some data in CSV format, and to access it I used a loop and split:

while (<FILE>) { my ($one,$two,$three..etc)=split(/,/); }

This works... but some/most of the entries are now enclosed in "", so instead of 1 I've got "1". So I go and check whether someone has solved the problem... and I recall using DBD::CSV a long time ago...

And it's magical: it works, the code is shorter, and it works correctly... all rejoice, and I go on writing an app built on top of DBD::CSV... until I hit some strange problems. It looks like some selects return an empty result set when they shouldn't... So I try:

#!/usr/bin/perl
use DBI;

my $tdbh = DBI->connect("DBI:CSV:")
    or die "Cannot connect: " . $DBI::errstr;
my $sth = $tdbh->prepare("select * from ofphl");
$sth->execute();
my $i;
while ($sth->fetch()) { $i++; }
print ": $i\n";
and it says: 3494 but
wc -l ofphl
says 164375. Something is obviously wrong here... I looked at row 3494 and a few rows around it, and they look fine. So after a while I returned to my old code, added
map { s/^"(.*)"$/$1/; $_ } split(/,/)
and went on with coding...

I'm not sure that's how one's supposed to use map though...
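For what it's worth, here's a minimal sketch of the same idea with map's usual block syntax, on a made-up sample line (not from the post). Anchoring the regex with ^ and $ strips only the surrounding quotes, and copying into a lexical avoids clobbering split's aliased results - though this still breaks on fields with embedded commas or escaped quotes:

```perl
# Made-up sample line; the real file's fields are assumed similar.
my $line   = '"1","two","three"';

# Strip only the surrounding double quotes from each field.
my @fields = map { my $f = $_; $f =~ s/^"(.*)"$/$1/; $f } split /,/, $line;
# @fields is now ('1', 'two', 'three')
```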

Anyhoo... DBD::CSV looks like it's been dead for quite some time now... the mailing lists mentioned in the documentation have moved, etc.

Update: I received a reply from the author; the mailing-list users now sit at ( I could've found it myself with Google's help, but it's still nice when documentation points to existing data sources ).

Text::CSV_XS seems like it solves my problem ( I just needed a convenient way of reading a CSV file, not a way to work on it ).
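For reference, a minimal sketch of what that looks like: let the module undo the quoting instead of split(), and the field values come back with the surrounding "" already removed. The sample line is made up, not from my file:

```perl
use Text::CSV_XS;

# binary => 1 lets the parser accept arbitrary bytes in quoted fields.
my $csv = Text::CSV_XS->new({ binary => 1 });
$csv->parse('"1","two","three"')
    or die "parse failed on: " . $csv->error_input;
my @fields = $csv->fields;    # ('1', 'two', 'three')
```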

Replies are listed 'Best First'.
Re: DBD::CSV limitation versus aging/unmaintained modules
by tilly (Archbishop) on Jan 15, 2004 at 16:13 UTC
    Don't rule this a bug in DBD::CSV until you do more investigation. It is quite possible that there is a single mis-matched " in the file, and DBD::CSV is interpreting that oddly, and you lose lots of lines. Try taking the CSV file and cutting it in halves, see if each half is handled normally, and then do a binary search to find the data confusing DBD::CSV. After you find that, then decide whether it is doing the wrong thing, and if it is, then you can now send a useful bug report.

    If there is a data problem, running through the file with Text::xSV might spot it very quickly since it doesn't try to DWIM around errors and gives fairly detailed error reporting.

    As for your code, you do not handle commas, returns or double-quotes properly in a quoted field. Doing it right is surprisingly tricky. I would suggest using a module. Here are some options. Text::CSV is simple to use, but doesn't handle embedded returns properly. Text::CSV_XS is much faster and does handle embedded returns, but it has to be compiled. Text::xSV also handles embedded returns, but is much slower. Unfortunately no two of these have compatible APIs.

      Try taking the CSV file and cutting it in halves

      Gah! One could probably find the problem with much, much less effort if one weren't pathologically opposed to the use of debuggers. q-:

      But seriously, in this case I'd build a hash of record IDs returned by Text::CSV and then use the method that "works" and report the records where Text::CSV starts/stops seeing records:

      #!/usr/bin/perl
      use DBI;

      my $tdbh = DBI->connect("DBI:CSV:")
          or die "Cannot connect: " . $DBI::errstr;
      my $sth = $tdbh->prepare("select * from ofphl");
      $sth->execute();

      my %dbi;
      while ( my $rec = $sth->fetchrow_hashref() ) {
          $dbi{ $rec->{id_field_name} }++;
      }

      open FILE, "< file.csv" or die "Can't read file.csv: $!\n";
      my $has = 1;
      $| = 1;
      while (<FILE>) {
          my $id = ( split /,/ )[0];    # Assuming ID is the first field
          if ( !$has != !$dbi{$id} ) {
              print "DBI ", ( $has ? "stopped" : "started" ), " at record $id.\n";
              $has = !$has;
          }
      }

      Note that you might need to concatenate more than one field if there isn't a unique ID field.

                      - tye
        For those who don't know what tye is joking about, see Are debuggers good?.

        On the debugging suggestion, you're right that that is a faster approach. It wasn't the one that immediately came to mind for me, but that's life.

        Of course I still suspect that running through the file with Text::xSV, once, will find your error pretty fast if there is an error in the file.

        Hmm, I thought I'd already found where the problem is - record 3964; this is the one after which DBD::CSV stops noticing any more records.

        I just can't find what exactly DBD::CSV finds wrong about that record/line, and more importantly, why it doesn't emit any kind of warning when it hits those 'corrupted' lines.

      Unfortunately no two of these have compatible APIs.

      Been a while since I used any of these (although for speed I used CSV_XS and for flexibility I use your xSV), but I seem to recall that CSV_XS and CSV are interface compatible. What am I forgetting?


        First they ignore you, then they laugh at you, then they fight you, then you win.
        -- Gandhi
 My memory was bad. With Text::CSV you pass the line to parse(), while with Text::CSV_XS you pass the filehandle to getline(). However, looking at it I notice that Text::CSV_XS also offers the parse() interface. So I was wrong; they are compatible after all.

        However naive code using the parse() interface will break on embedded newlines. If you want to use Text::CSV, then I'd strongly recommend writing your own getline() function, and then using that.
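        A hedged sketch of such a getline(): accumulate physical lines until the double quotes balance, then hand the whole logical record to parse(). It assumes a Text::CSV whose parse() tolerates the embedded newline once the record is complete (the binary option on current versions), and it ignores escaped "" pairs for brevity:

```perl
use Text::CSV;

# Read one logical CSV record from $fh, even if a quoted field
# contains an embedded newline. Returns an arrayref of fields,
# or undef at EOF (or on a genuine parse error).
sub csv_getline {
    my ($csv, $fh) = @_;
    my $buf = '';
    while (defined(my $line = <$fh>)) {
        $buf .= $line;
        my $quotes = () = $buf =~ /"/g;
        next if $quotes % 2;               # still inside a quoted field
        chomp(my $record = $buf);
        return $csv->parse($record) ? [ $csv->fields ] : undef;
    }
    return undef;
}
```

        Usage would be e.g. my $row = csv_getline($csv, $fh); in place of a naive line-at-a-time parse() loop.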

Re: DBD::CSV limitation versus aging/unmaintained modules
by thraxil (Prior) on Jan 15, 2004 at 16:14 UTC