http://www.perlmonks.org?node_id=321815


in reply to Re: Re: DBD::CSV limitation versus aging/unmaintained modules (lazy)
in thread DBD::CSV limitation versus aging/unmaintained modules

OK, thanks tilly. After reading your reply I finally saw what is wrong with that line: it contains something like this:

,"Description description "hi world" rest of description",
And, not surprisingly, strict CSV parsers fail to parse that.
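For illustration, here is a minimal sketch of what a strict parser does with such a line, and one documented workaround in Text::CSV_XS (`allow_loose_quotes` with `escape_char` set to undef). Whether the loose mode actually recovers this particular line depends on the module version and the exact data, so treat it as an experiment, not a fix:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV_XS;

# A line in the spirit of the broken one above: unescaped quotes
# inside a quoted field.
my $line = ',"Description description "hi world" rest of description",';

# Strict parser: expected to reject the line.
my $strict = Text::CSV_XS->new({ binary => 1 });
unless ($strict->parse($line)) {
    warn "strict parse failed: ", scalar $strict->error_diag, "\n";
}

# Loose parser: per the Text::CSV_XS docs, allow_loose_quotes together
# with an escape_char different from quote_char (here: none at all)
# makes the parser more tolerant of stray quote characters.
my $loose = Text::CSV_XS->new({
    binary             => 1,
    allow_loose_quotes => 1,
    escape_char        => undef,
});
if ($loose->parse($line)) {
    my @fields = $loose->fields;
    print "loose parse got ", scalar(@fields), " fields\n";
}
else {
    warn "even loose parse failed: ", scalar $loose->error_diag, "\n";
}
```

If even the loose mode chokes, the only remaining options are pre-cleaning the data or, as tilly says below, getting the producer to emit correct CSV.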

While it's easy to say that such a file is badly formatted, it was emitted from a large Oracle-based system and there's nothing I can do about it (not that I would pursue such a noble cause now that I have solved the problem on my side).


Re: Re: Re: Re: DBD::CSV limitation versus aging/unmaintained modules (lazy)
by tilly (Archbishop) on Jan 19, 2004 at 05:43 UTC
    But you have not actually solved the problem from your side. You have just hidden it: you have guaranteed that if any field anywhere has a comma in it, you will silently get wrong results.

    I would suggest that your code at least put in some highly visible check, for instance for an unexpected number of fields. And escalate the formatting issue a level or two, because if their output doesn't correctly format CSV, then at some point there is nothing you can do to work around the breakage.
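    A minimal sketch of such a field-count check. The expected field count here is a made-up placeholder (the original thread never states the schema), and the naive `split /,/` is only meaningful as a cross-check precisely because legitimate fields are not supposed to contain commas:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumption: the export is supposed to have a fixed number of
# columns per record. Adjust to the real schema.
my $EXPECTED_FIELDS = 12;

while (my $line = <STDIN>) {
    chomp $line;
    # Naive split; the -1 keeps trailing empty fields.
    # This is only a sanity check, not a real CSV parse.
    my @fields = split /,/, $line, -1;
    if (@fields != $EXPECTED_FIELDS) {
        warn "line $.: expected $EXPECTED_FIELDS fields, got ",
             scalar(@fields),
             " -- possible embedded comma or stray quote\n";
    }
}
```

    A check like this turns a silent wrong result into a loud warning, which is the point of tilly's suggestion.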

      Ok, thanks.

      But even in the case of a comma in one of the fields, I end up with one broken line, not with the whole datafile being ignored.