Thank you again, Tux, for your thoughtful reply.
The newline placeholder convention is unique to the Concordance DAT file and doesn't fall within the scope of ordinary CSV parsing. In hindsight, I shouldn't have mentioned it here. You're right: it's trivial to convert REGISTERED SIGN characters to newlines after the CSV records are parsed.
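For what it's worth, the post-parse conversion really is a one-liner; a minimal sketch (the field value here is an invented example of mine):

```perl
use utf8;
use charnames qw( :full );

# Hypothetical parsed field containing Concordance's newline placeholder,
# U+00AE REGISTERED SIGN.
my $field = "line one\N{REGISTERED SIGN}line two";

# Convert each placeholder back into a real newline, after parsing.
$field =~ s/\N{REGISTERED SIGN}/\n/g;
```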
Imagine a fully Unicode-based finite state machine that only operates on Unicode code points (better) or Unicode extended grapheme clusters (best). It would tokenize only true Unicode strings, notionally like this in Perl:
for my $grapheme ($csv_stream =~ m/(\X)/g) {
    ...
}
This probably isn't easily done in C, is it?
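For a concrete sense of what \X buys you over per-code-point matching, here is a small self-contained sketch (the sample string is my own): an "e" followed by a combining acute accent is two code points but one extended grapheme cluster, i.e. one user-perceived character.

```perl
use utf8;
use charnames qw( :full );

# "e" + U+0301 COMBINING ACUTE ACCENT: two code points, one grapheme.
my $string = "e\N{COMBINING ACUTE ACCENT}";

my @code_points = $string =~ m/(.)/gs;   # "." matches one code point at a time
my @graphemes   = $string =~ m/(\X)/g;   # "\X" matches one grapheme cluster

# scalar @code_points == 2, scalar @graphemes == 1
```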
You can still use Text::CSV_XS if you are sure that there are no U+0014 characters inside fields, but I bet you cannot be sure (binary fields tend to hold exactly what causes trouble).
In the particular case of the Concordance DAT records I'm working with right now, I'm simply using split. The CSV records are being generated by our own software, so I know they will always be well-formed, they'll never have literal CR or LF characters in them, and every string is enclosed in the "quote" character U+00FE. I expect it will be a decade or two before I'm unlucky enough to encounter a CSV record in a Concordance DAT file that the following Perl code won't handle correctly enough:
use utf8;
use charnames qw( :full );
use open qw( :encoding(UTF-8) :std );
use English qw( -no_match_vars );
# ...
if ($INPUT_LINE_NUMBER == 1) {
    $record =~ s/^\N{BYTE ORDER MARK}//; # Remove Unicode BOM...
    # ...
    $record =~ s/^/\N{BYTE ORDER MARK}/; # Restore Unicode BOM...
}
# ...
sub parse {
    my $record = shift;
    chomp $record;
    $record =~ s/^þ//;
    $record =~ s/þ$//;
    # A LIMIT of -1 keeps trailing empty fields, which split would otherwise drop.
    return split m/þ\x{0014}þ/, $record, -1;
}
sub combine {
    my $record = join "þ\x{0014}þ", @{ $_[0] };
    $record =~ s/^/þ/;
    $record =~ s/$/þ\n/;
    return $record;
}
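Under those same well-formedness assumptions, parse and combine round-trip cleanly; a self-contained sanity check (field values invented; I pass a LIMIT of -1 to split so trailing empty fields survive the trip):

```perl
use utf8;

sub parse {
    my $record = shift;
    chomp $record;
    $record =~ s/^þ//;
    $record =~ s/þ$//;
    return split m/þ\x{0014}þ/, $record, -1;
}

sub combine {
    my $record = join "þ\x{0014}þ", @{ $_[0] };
    $record =~ s/^/þ/;
    $record =~ s/$/þ\n/;
    return $record;
}

# Invented example fields, including a trailing empty one.
my @fields = ('DOC000001', 'Smith, Jane', '');

my @round_trip = parse(combine(\@fields));
# @round_trip matches @fields, trailing empty field included
```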
Thanks again.
Jim