http://www.perlmonks.org?node_id=1007942

Jim has asked for the wisdom of the Perl Monks concerning the following question:

A Concordance DAT file is simply a CSV text file that uses the following metacharacters:

- þ (U+00FE) as the quote character (string delimiter), instead of the quotation mark
- the DC4 control character (U+0014) as the field separator, instead of the comma

What's the best way to parse a Concordance DAT file using Modern Perl?

Assume the following:

- The file is encoded in UTF-8 and begins with a byte order mark (BOM, U+FEFF).
- The <CR><LF> pairs that terminate records are outside any quoted string.
- Quoted fields may contain literal <CR>, <LF>, and <CR><LF> characters.

Thanks!

Jim

Re: Best Way To Parse Concordance DAT File Using Modern Perl?
by graff (Chancellor) on Dec 09, 2012 at 03:53 UTC
    I gather that the "CRLF" pairs that serve to terminate records are not enclosed in any kind of quotes, whereas data fields that include "CRLF" as content must be quoted (using the U+00FE string delimiter). If that's not true, then parsing the input would be pretty tough.

    Apart from that, I'm not sure I understand what you're saying about the BOM (U+FEFF)... What in particular needs to be done to "handle it properly"? (In UTF-8 data, it's sufficient to just ignore/delete it without further ado, or perhaps include it at the beginning of one's output, if one expects that a downstream process will be looking for it.)

    Anyway, I'd go with the suggestion in the first reply.

      Thanks for your reply.

      I gather that the "CRLF" pairs that serve to terminate records are not enclosed in any kind of quotes, whereas data fields that include "CRLF" as content must be quoted (using the U+00FE string delimiter).

      Yes. Concordance DAT records are ordinary, well-formed CSV records. The <CR><LF> pairs that serve to terminate the records are outside any quoted string. Literal occurrences of <CR>, <LF> and <CR><LF> pairs are inside quoted strings.

      The only thing special about the CSV records in Concordance DAT files is the peculiar metacharacters.
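      For illustration, a parser configured for these metacharacters might look like this (a sketch, assuming the conventional Concordance delimiters; the filename is illustrative):

      use Text::CSV_XS;

      # Sketch: DC4 (U+0014) as the field separator and thorn (U+00FE)
      # as the quote character, with binary => 1 so that quoted fields
      # may contain literal <CR> and <LF> characters.
      my $csv = Text::CSV_XS->new({
          sep_char    => "\x{14}",
          quote_char  => "\x{fe}",
          escape_char => "\x{fe}",
          binary      => 1,
          auto_diag   => 1,
      });

      open my $fh, '<:encoding(UTF-8)', 'load_file.dat' or die "Can't open: $!";
      while ( my $row = $csv->getline($fh) ) {
          # $row is an array reference of decoded field values.
          # (The BOM problem described below still has to be dealt
          # with before the first record is parsed.)
      }
      close $fh;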

      Apart from that, I'm not sure I understand what you're saying about the BOM (U+FEFF)... What in particular needs to be done to "handle it properly"? (In UTF-8 data, it's sufficient to just ignore/delete it without further ado, or perhaps include it at the beginning of one's output, if one expects that a downstream process will be looking for it.)

      It must be handled as specified in the Unicode Standard. Upon reading the UTF-8 data stream, it must be treated as a special character and not as part of the text. In the specific case of a CSV file, it must not be wrongly treated as a non-delimited string that is the leftmost field in the first record.
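      In other words, something along these lines (a sketch, reading from STDIN for illustration) has to happen before the first record ever reaches the CSV parser:

      use Encode qw( decode_utf8 );

      # Slurp the raw bytes, decode them, then drop the BOM if, and
      # only if, it is the very first character of the stream. U+FEFF
      # anywhere else would be ordinary content, not a BOM.
      my $raw_bytes = do { local $/; <STDIN> };
      my $text      = decode_utf8($raw_bytes);
      $text =~ s/\A\x{FEFF}//;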

      UPDATE:  This Perl script…

      use Encode qw( decode_utf8 );
      use Text::CSV_XS;

      my $csv_bytes  = qq/\x{feff}"Field One","Field 2",3,4,"Field 5"\r\n/;
      my $csv_record = decode_utf8($csv_bytes);
      my $csv        = Text::CSV_XS->new( { auto_diag => 1 } );

      $csv->parse($csv_record);

      …fails with this error message:

      # CSV_XS ERROR: 2034 - EIF - Loose unescaped quote @ pos 4

      Jim

        Thanks for the info. If the problem demonstrated by your snippet there is the one that is causing Text::CSV to fail on the real data of interest, then it would seem that you have to impose "proper handling" of the BOM yourself. Delete it from the input before passing the data to Text::CSV.

        Since U+FEFF is (a) unlikely to be present anywhere else besides the beginning of the input file and (b) interpreted as a "zero width non-breaking space" if it were to be found anywhere else in the file (i.e., it lacks any linguistic semantics whatsoever), it should suffice to just delete it - here's a version of your snippet that doesn't produce an error:

        use Encode qw( decode_utf8 );
        use Text::CSV_XS;

        my $csv_bytes  = qq/\x{feff}"Field One","Field 2",3,4,"Field 5"\r\n/;
        my $csv_record = decode_utf8($csv_bytes);

        $csv_record =~ tr/\x{feff}//d;    ## Add this, and there's no error.

        my $csv = Text::CSV_XS->new( { auto_diag => 1 } );
        $csv->parse($csv_record);
        I suppose it's kind of sad that you need to do that for Text::CSV to work, but at least it works.
Re: Best Way To Parse Concordance DAT File Using Modern Perl?
by 2teez (Vicar) on Dec 09, 2012 at 03:02 UTC

    In my considered opinion, one can use either of these modules to parse a CSV file: Text::CSV or Text::CSV_XS.
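    For example, a minimal sketch (the field values here are only illustrative):

    use Text::CSV;

    # Parse one CSV line and print its fields.
    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    $csv->parse(qq{"Field One","Field 2",3});
    print "$_\n" for $csv->fields;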

    If you tell me, I'll forget.
    If you show me, I'll remember.
    If you involve me, I'll understand.
    --- Author unknown to me
Re: Best Way To Parse Concordance DAT File Using Modern Perl?
by space_monk (Chaplain) on Dec 09, 2012 at 06:48 UTC
    Further to what 2teez said, simply use Text::CSV or one of its close relatives to parse the file. You can change the field and end-of-line separators simply by specifying them when you instantiate the object, e.g. in the case of Text::CSV:
    $csv = Text::CSV->new({
        quote_char          => '"',
        escape_char         => '"',
        sep_char            => ',',
        eol                 => $\,
        always_quote        => 0,
        quote_space         => 1,
        quote_null          => 1,
        binary              => 0,
        keep_meta_info      => 0,
        allow_loose_quotes  => 0,
        allow_loose_escapes => 0,
        allow_whitespace    => 0,
        blank_is_undef      => 0,
        empty_is_undef      => 0,
        verbatim            => 0,
        auto_diag           => 0,
    });
    It also has support for encoding format etc.
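    For instance, a sketch of reading a UTF-8 file record by record (the filename is only illustrative):

    use Text::CSV;

    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });

    # The PerlIO encoding layer means getline() sees decoded
    # characters rather than raw bytes.
    open my $fh, '<:encoding(UTF-8)', 'example.csv' or die "Can't open: $!";
    while ( my $row = $csv->getline($fh) ) {
        print "$row->[0]\n";    # First field of each record.
    }
    close $fh;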
    A Monk aims to give answers to those who have none, and to learn from those who know more.

      Unfortunately, Text::CSV_XS doesn't work. It can't parse Concordance DAT files like the one I described.

      Is there another way?

      Jim

Re: Best Way To Parse Concordance DAT File Using Modern Perl?
by space_monk (Chaplain) on Dec 10, 2012 at 14:46 UTC

    If it's a UTF-8 file, isn't it meant to have a 3-byte BOM? Your BOM indicates that it's a UTF-16 file, not UTF-8.

    Anyway, the earlier thread UTF-8 text files with Byte Order Mark discussed this, and the comments in that node may be helpful.

    See the module File::BOM, which was mentioned there as a means of opening files that may contain a BOM.
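    A sketch of how that might look (the filename is illustrative):

    use File::BOM qw( open_bom );

    # open_bom() detects the BOM, sets the matching encoding layer
    # on the handle, and consumes the BOM so it never appears in the
    # data. The third argument is the default layer to apply if no
    # BOM is found.
    my $encoding = open_bom( my $fh, 'load_file.dat', ':encoding(UTF-8)' );
    while ( my $line = <$fh> ) {
        print $line;
    }
    close $fh;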

    A Monk aims to give answers to those who have none, and to learn from those who know more.
      If it's a UTF-8 file, isn't it meant to have a 3-byte BOM? Your BOM indicates that it's a UTF-16 file, not UTF-8.

      It is a Unicode BOM encoded in three bytes in the UTF-8 character encoding scheme. But it's just one character (one Unicode code point), represented in Perl as \x{FEFF} or \N{BYTE ORDER MARK}. In a decoded, abstract Unicode string, distinctions between various encodings (serializations) of the string don't exist.
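      A quick way to see the distinction:

      use Encode qw( encode_utf8 );

      my $bom = "\x{FEFF}";    # one code point: BYTE ORDER MARK
      print length($bom), "\n";                  # prints 1 (one character)
      print length( encode_utf8($bom) ), "\n";   # prints 3 (three UTF-8 bytes)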

      Jim

        I realize it is probably impossible because the file contains evidence and attorney work product, but can you isolate and anonymize a few exemplar records that cause the CSV or CSV_XS modules to fail, and post them in a properly formatted file somewhere?