PerlMonks
Re^2: Best Way To Parse Concordance DAT File Using Modern Perl?

by Jim (Curate)
on Dec 09, 2012 at 18:43 UTC


in reply to Re: Best Way To Parse Concordance DAT File Using Modern Perl?
in thread Best Way To Parse Concordance DAT File Using Modern Perl?

Thanks for your reply.

I gather that the "CRLF" pairs that serve to terminate records are not enclosed in any kind of quotes, whereas data fields that include "CRLF" as content must be quoted (using the U+00FE string delimiter).

Yes. Concordance DAT records are ordinary, well-formed CSV records. The <CR><LF> pairs that serve to terminate the records are outside any quoted string. Literal occurrences of <CR>, <LF> and <CR><LF> pairs are inside quoted strings.

The only thing special about the CSV records in Concordance DAT files is the peculiar metacharacters.
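As a hypothetical sketch (not from this thread), those peculiar metacharacters can be handed straight to Text::CSV_XS. This assumes the usual Concordance conventions of U+0014 as the field separator and U+00FE (thorn) as the quote character:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# Assumed Concordance delimiters: U+0014 separates fields,
# U+00FE (thorn) quotes and escapes them.
my $csv = Text::CSV_XS->new({
    sep_char    => "\x{14}",
    quote_char  => "\x{FE}",
    escape_char => "\x{FE}",
    binary      => 1,
    auto_diag   => 1,
});

# A one-record example in that dialect.
my $record = qq{\x{FE}Field One\x{FE}\x{14}\x{FE}Field 2\x{FE}\x{14}3};

$csv->parse($record) or die "parse failed";
my @fields = $csv->fields;
print join('|', @fields), "\n";    # Field One|Field 2|3
```

Everything else (quoting rules, embedded newlines inside quoted fields) then behaves as ordinary CSV.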

Apart from that, I'm not sure I understand what you're saying about the BOM (U+FEFF)... What in particular needs to be done to "handle it properly"? (In UTF-8 data, it's sufficient to just ignore/delete it without further ado, or perhaps include it at the beginning of one's output, if one expects that a downstream process will be looking for it.)

It must be handled as specified in the Unicode Standard. Upon reading the UTF-8 data stream, it must be treated as a special character and not as part of the text. In the specific case of a CSV file, it must not be wrongly treated as a non-delimited string that is the leftmost field in the first record.

UPDATE:  This Perl script…

use Encode qw( decode_utf8 );
use Text::CSV_XS;

my $csv_bytes  = qq/\x{feff}"Field One","Field 2",3,4,"Field 5"\r\n/;
my $csv_record = decode_utf8($csv_bytes);

my $csv = Text::CSV_XS->new( { auto_diag => 1 } );

$csv->parse($csv_record);

…fails with this error message:

# CSV_XS ERROR: 2034 - EIF - Loose unescaped quote @ pos 4

Jim


Re^3: Best Way To Parse Concordance DAT File Using Modern Perl?
by graff (Chancellor) on Dec 10, 2012 at 09:48 UTC
    Thanks for the info. If the problem demonstrated by your snippet there is the one that is causing Text::CSV to fail on the real data of interest, then it would seem that you have to impose "proper handling" of the BOM yourself. Delete it from the input before passing the data to Text::CSV.

    Since U+FEFF is (a) unlikely to be present anywhere else besides the beginning of the input file and (b) interpreted as a "zero width non-breaking space" if it were to be found anywhere else in the file (i.e., it lacks any linguistic semantics whatsoever), it should suffice to just delete it - here's a version of your snippet that doesn't produce an error:

    use Encode qw( decode_utf8 );
    use Text::CSV_XS;

    my $csv_bytes  = qq/\x{feff}"Field One","Field 2",3,4,"Field 5"\r\n/;
    my $csv_record = decode_utf8($csv_bytes);

    $csv_record =~ tr/\x{feff}//d;    ## Add this, and there's no error.

    my $csv = Text::CSV_XS->new( { auto_diag => 1 } );

    $csv->parse($csv_record);
    I suppose it's kind of sad that you need to do that for Text::CSV to work, but at least it works.

      Thanks, graff!

      The problem with having to handle the BOM oneself is that, though it works with Text::CSV_XS->parse(), it doesn't work with Text::CSV_XS->getline().

      Suppose we have this multi-line CSV record. There's a literal newline in field five.

      my $csv_record = qq{\N{BYTE ORDER MARK}"Field One","Field 2",3,4,"Field
Five"};

      How would one parse this record using Text::CSV_XS?

      (See the companion thread titled Peculiar Reference To U+00FE In Text::CSV_XS Documentation for more information about this topic.)

      Jim

        Ah. What a pisser. I wonder if you could make Text::CSV_XS work by reading from STDIN... If so, you would just filter out all the BOM characters before feeding the data to your script:
        perl -CS -pe 'tr/\x{feff}//d' < source_file.dat | your_csv_parser ...
        Either that, or else redirect the output of that one-liner to create a cleansed version of the DAT file that has all the BOMs stripped out, and use that "bastardized" version of the data as input to the parser. (I assume that getting the data parsed is more important than preserving its obtuse fixation with BOM characters.)
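The strip-then-parse approach above can also be sketched entirely in Perl: delete a leading BOM from the data, then hand getline() an in-memory filehandle so multi-line quoted fields parse correctly. This is an illustrative sketch, not code from the thread:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# A record with a BOM up front and a literal newline inside field five.
my $data = qq{\x{FEFF}"Field One","Field 2",3,4,"Field\nFive"\r\n};

$data =~ s/\A\x{FEFF}//;    # drop the BOM before parsing

# getline() needs a filehandle; an in-memory one works for a string.
open my $fh, '<', \$data or die "open: $!";

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
my $row = $csv->getline($fh);

print scalar(@$row), " fields\n";          # 5 fields
```

With binary => 1, getline() keeps reading past the embedded newline until the quoted field closes, so field five comes back as "Field\nFive" intact.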
