http://www.perlmonks.org?node_id=1008144


in reply to Re^6: Peculiar Reference To U+00FE In Text::CSV_XS Documentation
in thread Peculiar Reference To U+00FE In Text::CSV_XS Documentation

  1. Newlines

    If U+00AE is just a placeholder for newlines *inside* fields, my proposed solution works fine.

  2. BOM

    I have been playing with the idea of BOM handling quite a few times already, but I came to the same conclusion every time: the advantage is not worth the performance penalty, which is huge.

    Text::CSV_XS is written for sheer speed, and having to check for a BOM at every record start (yes, that is what it eventually comes down to if one wants to support streams) is not worth it. It is relatively easy to do either of the following instead (see the sketch after the list):

    • Do BOM handling before Text::CSV_XS starts parsing
    • Write a wrapper or a super-class that does BOM handling
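
    For example, a front end that strips the BOM once, before the parser sees any data, could look like this. This is a minimal sketch only: the file name, constructor options, and loop body are placeholders, and parse () is used line by line, so it assumes no embedded newlines inside fields.

    use strict;
    use warnings;
    use Text::CSV_XS;

    open my $fh, "<:encoding(UTF-8)", "data.csv" or die "data.csv: $!";

    my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });

    while (my $line = <$fh>) {
        $line =~ s/^\x{FEFF}// if $. == 1;  # drop a leading BOM once, up front
        $csv->parse ($line) or last;        # parse () does not span newlines
        my @fields = $csv->fields;
        # ... process @fields ...
    }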

  3. Non-ASCII characters for sep/quote/escape

    Any of these will imply a speed penalty, even if I were to allow and implement it. That is because the parser is a state machine, so the internal structure would have to change to both allow multi-byte characters and handle them (first a check on the start of each of them, then a read-ahead to see whether the next byte is part of the "character", and so on). I already allow this for eol, up to 8 characters, which was a pain in the ass to do safely. I'm not saying it is impossible, but I'm not sure it is worth the development time.

    You can still use Text::CSV_XS if you are sure that there are no U+0014 characters inside fields, but I bet you cannot be sure (binary fields tend to hold exactly what causes trouble).
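
    For the Concordance layout specifically, note that U+0014 is a plain C0 control and U+00FE still fits in a single byte, so both can at least be handed to the constructor. Whether that behaves the way you want on decoded Unicode input is exactly the documentation caveat this thread is about, so treat the following as a sketch to test against your version of the module, not as a recommendation:

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new ({
        binary     => 1,
        sep_char   => "\x{14}",   # the Concordance field separator (DC4)
        quote_char => "\x{FE}",   # þ, the Concordance "quote" character
    }) or die Text::CSV_XS->error_diag ();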


Enjoy, Have FUN! H.Merijn

Re^8: Peculiar Reference To U+00FE In Text::CSV_XS Documentation
by Jim (Curate) on Dec 10, 2012 at 21:24 UTC

    Thank you again, Tux, for your thoughtful reply.

    The newline placeholder convention is unique to the Concordance DAT file format and doesn't fall within the scope of ordinary CSV parsing. In hindsight, I shouldn't have mentioned it here. You're right: it's trivial to convert REGISTERED SIGN characters to newlines after the CSV records are parsed.

    Imagine a fully Unicode-based finite state machine that only operates on Unicode code points (better) or Unicode extended grapheme clusters (best). It would tokenize only true Unicode strings, notionally like this in Perl:

    for my $grapheme ($csv_stream =~ m/(\X)/g) { ... }
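
    Fleshed out a little, a purely illustrative (and hypothetical; this is not anything Text::CSV_XS does) grapheme-level field splitter with one-grapheme separator and quote characters might look like this; escapes and error handling are omitted:

    use utf8;
    use strict;
    use warnings;

    sub parse_graphemes {
        my ($line, $sep, $quote) = @_;
        my @fields;
        my $field    = '';
        my $in_quote = 0;
        for my $g ($line =~ m/(\X)/g) {    # every decision is per \X, never per byte
            if ($g eq $quote) {
                $in_quote = !$in_quote;    # toggle quoted state
            }
            elsif ($g eq $sep && !$in_quote) {
                push @fields, $field;      # an unquoted separator ends a field
                $field = '';
            }
            else {
                $field .= $g;
            }
        }
        return (@fields, $field);          # the last field has no trailing separator
    }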

    This probably isn't easily done in C, is it?

    You wrote: "You can still use Text::CSV_XS if you are sure that there are no U+0014 characters inside fields, but I bet you cannot be (binary fields tend to hold exactly what causes trouble)."

    In the particular case of the Concordance DAT records I'm working with right now, I'm simply using split. The CSV records are being generated by our own software, so I know they will always be well-formed, they'll never have literal CR or LF characters in them, and every string is enclosed in the "quote" character U+00FE. I expect it will be a decade or two before I'm unlucky enough to encounter a CSV record in a Concordance DAT file that the following Perl code won't handle correctly enough:

    use utf8;
    use charnames qw( :full );
    use open qw( :encoding(UTF-8) :std );
    use English qw( -no_match_vars );
    
    # ...
    
        if ($INPUT_LINE_NUMBER == 1) {
            $record =~ s/^\N{BYTE ORDER MARK}//; # Remove Unicode BOM...
    
            # ...
    
            $record =~ s/^/\N{BYTE ORDER MARK}/; # Restore Unicode BOM...
        }
    
    # ...
    
    sub parse {
        my $record = shift;
    
        chomp $record;
    
        $record =~ s/^þ//;    # strip the leading þ "quote"
        $record =~ s/þ$//;    # ... and the trailing one
    
        return split m/þ\x{0014}þ/, $record, -1;    # -1 keeps trailing empty fields
    }
    
    sub combine {
        my $record = join "þ\x{0014}þ", @{ $_[0] };
    
        $record =~ s/^/þ/;      # prepend the opening þ
        $record =~ s/$/þ\n/;    # append the closing þ and a newline
    
        return $record;
    }
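
    Used together, the two subroutines round-trip a record (the sample values here are hypothetical):

    my @fields = parse ("þAþ\x{0014}þBþ\n");    # yields ('A', 'B')
    my $line   = combine (\@fields);            # back to "þAþ\x{0014}þBþ\n"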
    

    Thanks again.

    Jim