http://www.perlmonks.org?node_id=1008141


in reply to Re^5: Peculiar Reference To U+00FE In Text::CSV_XS Documentation
in thread Peculiar Reference To U+00FE In Text::CSV_XS Documentation

Thank you, Tux, for your reply.

If otoh 0xAE is just a placeholder for embedded newlines, that is easy to do (see below).

Yes, U+00AE (®, REGISTERED SIGN, 0xC2 0xAE in UTF-8) is used as a placeholder for literal newlines in quoted strings. The CSV records in Concordance DAT files are ordinary ones with standard EOL characters:  <CR><LF> pairs.
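For illustration, here is what a single record in a DAT file looks like on the wire (the control character is shown as \x{14}, the EOL as <CR><LF>; the field values are made up):

    þDOC000001þ\x{14}þSmith, Johnþ\x{14}þFirst line®Second lineþ<CR><LF>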

Another point of care is that Text::CSV_XS does not deal with BOMs, so you'll need File::BOM or other means to deal with that.

This would be a nice feature to add to Text::CSV_XS:  proper handling of Unicode byte order marks in UTF-8, UTF-16 and UTF-32 CSV files.
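In the meantime, File::BOM's open_bom() handles this at open time. A minimal sketch (the file name is made up; open_bom() consumes any leading BOM, pushes the matching :encoding() layer onto the handle, falls back to the layer given as the third argument, and dies on failure):

    use File::BOM qw( open_bom );

    # Returns the encoding named by the BOM, if one was found.
    my $encoding = open_bom(my $fh, 'loadfile.dat', ':encoding(UTF-8)');

    while (my $record = <$fh>) {
        # $record is already decoded and the BOM is already consumed...
    }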

Note that the encoded U+00FE is 0xC3 0xBE, which is two bytes, and two bytes cannot be used as a sep_char in Text::CSV_XS, which parses the data as bytes, so the stream has to be properly decoded before parsing.

This settles it. It's not the answer I'd hoped for, but I'm glad to know now with certainty that Text::CSV_XS cannot parse a UTF-8 Concordance DAT file. I'll stop trying hopelessly to make it work. ;-)

How difficult would it be to enhance Text::CSV_XS to handle metacharacters in Unicode CSV files that are outside the Basic Latin block (i.e., not ASCII characters)? The Concordance DAT file is a de facto standard format for data interchange in the litigation support and e-discovery industry. As I've explained, the only thing special about it is the unusual and unfortunate characters it uses for metacharacters:  U+0014, which is a control code; U+00FE, which is a word-constituent character; and U+00AE, which is a common character in ordinary text.
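For what it's worth, the closest workaround I can imagine short of such an enhancement (a sketch only; it assumes the conventional ASCII metacharacters never occur inside field data) is to decode the stream and remap the Concordance metacharacters to conventional ones before Text::CSV_XS sees each line:

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1 });   # default sep ',' and quote '"'

    while (my $line = <$fh>) {                  # $fh already decoded from UTF-8
        $line =~ tr/\x{00FE}\x{0014}/",/;       # U+00FE -> ", U+0014 -> ,
        $csv->parse($line) or die $csv->error_diag;
        my @fields = $csv->fields;
        # ...
    }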

Jim

Re^7: Peculiar Reference To U+00FE In Text::CSV_XS Documentation
by Tux (Canon) on Dec 10, 2012 at 18:02 UTC
    1. Newlines

      If U+00AE is just a placeholder for newlines *inside* fields, my proposed solution works fine.

    2. BOM

      I have toyed with BOM handling quite a few times already, but came to the same conclusion every time: the advantage is not worth the performance penalty, which is huge.

      Text::CSV_XS is written for sheer speed, and having to check for a BOM at every record start (yes, that is what it eventually comes down to if one wants to support streams) is not worth it. It is relatively easy to

      • Do BOM handling before Text::CSV_XS starts parsing
      • Write a wrapper or a super-class that does BOM handling (a rough sketch of the wrapper idea follows this list)
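
      For instance, a rough sketch of the wrapper idea, assuming an already-decoded handle and records that never span lines (getline would be needed once fields can contain embedded newlines):

        use Text::CSV_XS;

        # Strip a possible U+FEFF from the first line only, then let
        # Text::CSV_XS do the real work.
        sub getline_bom {
            my ($csv, $fh) = @_;
            defined (my $line = <$fh>) or return;
            $line =~ s/^\x{FEFF}// if $. == 1;   # only line 1 can carry a BOM
            $csv->parse($line) or die $csv->error_diag;
            return [ $csv->fields ];
        }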

    3. Non-ASCII characters for sep/quote/escape

      Any of these will imply a speed penalty, even if I were to allow it and implement it. That is because the parser is a state machine, which means the internal structure would have to change to both allow multi-byte characters and handle them (a first check at the start of each one, then a read-ahead to see if the next byte is part of the "character", and so on). I already allow this for eol, up to 8 characters, which was a pain in the ass to do safely. I'm not saying it is impossible, but I'm not sure it is worth the development time.
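
      To illustrate what that read-ahead amounts to, a sketch in Perl (illustration only; the real parser is C operating on bytes):

        # Is there a (possibly multi-byte) separator at byte position $pos?
        sub at_sep {
            my ($bytes, $pos, $sep) = @_;   # e.g. $sep = "\xC3\xBE" for U+00FE
            return 0 if substr($bytes, $pos, 1) ne substr($sep, 0, 1);  # first check
            return substr($bytes, $pos, length $sep) eq $sep;           # read-ahead
        }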

      You can still use Text::CSV_XS if you are sure that there are no U+0014 characters inside fields, but I bet you cannot be sure of that (binary fields tend to hold exactly what causes trouble).


    Enjoy, Have FUN! H.Merijn

      Thank you again, Tux, for your thoughtful reply.

      The newline placeholder convention is unique to the Concordance DAT file and doesn't fall within the scope of ordinary CSV parsing. In hindsight, I shouldn't have mentioned it here. You're right:  it's trivial to convert REGISTERED SIGN characters to newlines after the CSV records are parsed.
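
      In code, the post-parse fix-up is a one-liner (with @fields holding one parsed record):

        s/\x{00AE}/\n/g for @fields;   # REGISTERED SIGN -> literal newline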

      Imagine a fully Unicode-based finite state machine that only operates on Unicode code points (better) or Unicode extended grapheme clusters (best). It would tokenize only true Unicode strings, notionally like this in Perl:

      for my $grapheme ($csv_stream =~ m/(\X)/g) { ... }

      This probably isn't easily done in C, is it?

      You can still use Text::CSV_XS if you are sure that there are no U+0014 characters inside fields, but I bet you cannot be sure of that (binary fields tend to hold exactly what causes trouble).

      In the particular case of the Concordance DAT records I'm working with right now, I'm simply using split. The CSV records are being generated by our own software, so I know they will always be well-formed, they'll never have literal CR or LF characters in them, and every string is enclosed in the "quote" character U+00FE. I expect it will be a decade or two before I'm unlucky enough to encounter a CSV record in a Concordance DAT file that the following Perl code won't handle correctly enough:

      use utf8;
      use charnames qw( :full );
      use open qw( :encoding(UTF-8) :std );
      use English qw( -no_match_vars );
      
      # ...
      
          if ($INPUT_LINE_NUMBER == 1) {
              $record =~ s/^\N{BYTE ORDER MARK}//; # Remove Unicode BOM...
      
              # ...
      
              $record =~ s/^/\N{BYTE ORDER MARK}/; # Restore Unicode BOM...
          }
      
      # ...
      
      # Split one Concordance DAT record into its list of field values.
      sub parse {
          my $record = shift;

          chomp $record;

          $record =~ s/^þ//;   # Strip the enclosing "quote"
          $record =~ s/þ$//;   #   characters (U+00FE)

          return split m/þ\x{0014}þ/, $record, -1;   # -1 keeps trailing empty fields
      }
      
      # Join a list of field values into one Concordance DAT record.
      sub combine {
          my $record = join "þ\x{0014}þ", @{ $_[0] };

          $record =~ s/^/þ/;      # Add the enclosing "quote" characters
          $record =~ s/$/þ\n/;    #   and the record terminator

          return $record;
      }
      
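      For completeness, a round trip through these two helpers looks like this (the file handles are made up):

      while (my $record = <$in>) {
          my @fields = parse($record);     # strip the þ quotes, split on þ\x{14}þ
          # ... work on @fields here ...
          print {$out} combine(\@fields);  # reassemble and write the record
      }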

      Thanks again.

      Jim