http://www.perlmonks.org?node_id=1007949


in reply to Best Way To Parse Concordance DAT File Using Modern Perl?

I gather that the "CRLF" pairs that serve to terminate records are not enclosed in any kind of quotes, whereas data fields that include "CRLF" as content must be quoted (using the U+00FE string delimiter). If that's not true, then parsing the input would be pretty tough.

Apart from that, I'm not sure I understand what you're saying about the BOM (U+FEFF)... What in particular needs to be done to "handle it properly"? (In UTF-8 data, it's sufficient to just ignore/delete it without further ado, or perhaps include it at the beginning of one's output, if one expects that a downstream process will be looking for it.)

Anyway, I'd go with the suggestion in the first reply.

Re^2: Best Way To Parse Concordance DAT File Using Modern Perl?
by Jim (Curate) on Dec 09, 2012 at 18:43 UTC

    Thanks for your reply.

    I gather that the "CRLF" pairs that serve to terminate records are not enclosed in any kind of quotes, whereas data fields that include "CRLF" as content must be quoted (using the U+00FE string delimiter).

    Yes. Concordance DAT records are ordinary, well-formed CSV records. The <CR><LF> pairs that serve to terminate the records are outside any quoted string. Literal occurrences of <CR>, <LF> and <CR><LF> pairs are inside quoted strings.

    The only thing special about the CSV records in Concordance DAT files is the peculiar metacharacters.
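    Those peculiar metacharacters can simply be handed to Text::CSV_XS. A minimal sketch, assuming the conventional Concordance export settings of 0x14 as the field separator and 0xFE (þ) as the text qualifier; the sample record here is invented, not from the original file:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# Assumed Concordance metacharacters: 0x14 field separator, 0xFE quote.
# These are the conventional export settings, not mandated by the format.
my $csv = Text::CSV_XS->new( {
    sep_char    => "\x{14}",
    quote_char  => "\x{fe}",
    escape_char => "\x{fe}",
    binary      => 1,
    auto_diag   => 1,
} );

# An invented sample record built with those metacharacters
my $record = "\x{fe}Field One\x{fe}\x{14}\x{fe}Field 2\x{fe}\x{14}3\x{14}4";

$csv->parse($record) or die scalar $csv->error_diag;
my @fields = $csv->fields;    # ( 'Field One', 'Field 2', 3, 4 )
```

    Setting escape_char to the same character as quote_char mirrors the default CSV convention, where a quote inside a quoted field is escaped by doubling it.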

    Apart from that, I'm not sure I understand what you're saying about the BOM (U+FEFF)... What in particular needs to be done to "handle it properly"? (In UTF-8 data, it's sufficient to just ignore/delete it without further ado, or perhaps include it at the beginning of one's output, if one expects that a downstream process will be looking for it.)

    It must be handled as specified in the Unicode Standard. Upon reading the UTF-8 data stream, it must be treated as a special character and not as part of the text. In the specific case of a CSV file, it must not be wrongly treated as a non-delimited string that is the leftmost field in the first record.

    UPDATE:  This Perl script…

    use Encode qw( decode_utf8 );
    use Text::CSV_XS;

    # UTF-8 bytes: the three-byte encoded BOM, then a quoted CSV record
    my $csv_bytes  = qq/\xef\xbb\xbf"Field One","Field 2",3,4,"Field 5"\r\n/;
    my $csv_record = decode_utf8($csv_bytes);

    my $csv = Text::CSV_XS->new( { auto_diag => 1 } );
    $csv->parse($csv_record);

    …fails with this error message:

    # CSV_XS ERROR: 2034 - EIF - Loose unescaped quote @ pos 4

    Jim

      Thanks for the info. If the problem demonstrated by your snippet there is the one that is causing Text::CSV to fail on the real data of interest, then it would seem that you have to impose "proper handling" of the BOM yourself. Delete it from the input before passing the data to Text::CSV.

      Since U+FEFF is (a) unlikely to be present anywhere other than the beginning of the input file, and (b) interpreted as a "zero width no-break space" if it were to be found anywhere else in the file (i.e., it lacks any linguistic semantics whatsoever), it should suffice to just delete it. Here's a version of your snippet that doesn't produce an error:

      use Encode qw( decode_utf8 );
      use Text::CSV_XS;

      # UTF-8 bytes: the three-byte encoded BOM, then a quoted CSV record
      my $csv_bytes  = qq/\xef\xbb\xbf"Field One","Field 2",3,4,"Field 5"\r\n/;
      my $csv_record = decode_utf8($csv_bytes);
      $csv_record =~ tr/\x{feff}//d;  ## Add this, and there's no error.

      my $csv = Text::CSV_XS->new( { auto_diag => 1 } );
      $csv->parse($csv_record);
      I suppose it's kind of sad that you need to do that for Text::CSV to work, but at least it works.

        Thanks, graff!

        The problem with having to handle the BOM oneself is that, though this approach works with Text::CSV_XS->parse(), it doesn't work with Text::CSV_XS->getline().

        Suppose we have this multi-line CSV record. There's a literal newline in field five.

        my $csv_record = qq{\N{BYTE ORDER MARK}"Field One","Field 2",3,4,"Field Five"
        };

        How would one parse this record using Text::CSV_XS?
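        One way is to give getline() a filehandle from which the BOM has already been consumed. A sketch, assuming an in-memory filehandle over invented sample data (with the literal newline inside the quoted fifth field), not the real Concordance input:

```perl
use strict;
use warnings;
use charnames ':full';
use Encode qw( encode_utf8 );
use Text::CSV_XS;

# Invented sample: a BOM, then one CSV record whose fifth field
# contains a literal newline inside its quotes.
my $bytes = encode_utf8(
    qq{\N{BYTE ORDER MARK}"Field One","Field 2",3,4,"Field\nFive"\n}
);

open my $fh, '<:encoding(UTF-8)', \$bytes or die $!;

# Consume a leading BOM; rewind if the first character is anything else.
my $first = getc $fh;
seek $fh, 0, 0 unless defined $first && $first eq "\N{BYTE ORDER MARK}";

my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );
my $row = $csv->getline($fh);    # embedded newline handled by getline()
close $fh;
```

        With binary enabled, getline() keeps reading until the quoted field (and thus the record) is complete, so the embedded newline survives as data. If your Text::CSV_XS is recent enough, its header() method documents a detect_bom option that can deal with the BOM for you; check the documentation of the version you have.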

        (See the companion thread titled Peculiar Reference To U+00FE In Text::CSV_XS Documentation for more information about this topic.)

        Jim