I'm trying to figure out how to filter out (or blank) bad UTF-8 characters.
This particular snippet works perfectly to prevent croakage on bad UTF-8 (i.e., to identify bad UTF-8 input before further processing):
use Encode qw(is_utf8);
# true if $str has the UTF8 flag set but is not well-formed UTF-8
print "bad UTF8\n"
    if is_utf8($str) and not is_utf8($str, 1);
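For comparison, here is an equivalent check that works on the raw octets instead of the internal UTF8 flag (a sketch, assuming $str holds undecoded bytes; decode() with FB_CROAK dies on the first malformed sequence, so I trap it with eval):

use Encode qw(decode FB_CROAK);
# Work on a copy, since a true CHECK value may modify its input.
my $copy = $str;
my $ok = eval { decode('UTF-8', $copy, FB_CROAK); 1 };
print "bad UTF8\n" unless $ok;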
Then I may want to salvage what I can from $str (i.e., filter out the bad UTF-8 characters) by running it through iconv, which I read somewhere *may* remove bad UTF-8 characters:
iconv -c --from-code=UTF-8 --to-code=UTF-8
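Calling that from Perl might look like the following sketch with IPC::Open2 (note that open2 can deadlock on large inputs, since everything is written before anything is read back):

use IPC::Open2 qw(open2);
# -c makes GNU iconv silently drop sequences it cannot convert.
my $pid = open2(my $out, my $in,
                'iconv', '-c', '-f', 'UTF-8', '-t', 'UTF-8');
print {$in} $str;
close $in;
my $clean = do { local $/; <$out> };   # slurp the filtered result
waitpid $pid, 0;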
However, that involves a slow shell call, so I tried Text::Iconv:
use Text::Iconv;
my $conv = Text::Iconv->new("utf8", "utf8");
$str = $conv->convert($str);
But that does not filter out the bad UTF-8 characters.
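One variation I haven't verified: GNU iconv accepts an //IGNORE suffix on the target encoding to drop unconvertible sequences, and Text::Iconv seems to hand its arguments straight to iconv_open(), so this *might* work where the iconv underneath is GNU:

use Text::Iconv;
# //IGNORE is a GNU extension; other iconv implementations may
# reject the encoding name entirely.
my $conv = Text::Iconv->new("UTF-8", "UTF-8//IGNORE");
my $clean = $conv->convert($str);
$clean = $str unless defined $clean;   # convert() returns undef on error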
Is there a fast/efficient CPAN module or technique to strip out any bad UTF-8 characters?
If not a selective filter, then as a last resort, is there a brute-force method of simply removing *all* non-ASCII UTF-8 characters?
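The best pure-Perl sketch I've come up with so far for both routes (assuming $str holds raw octets) is below; decode() with FB_DEFAULT substitutes U+FFFD for each malformed sequence, which can then be deleted, and the last line strips every non-ASCII character outright:

use Encode qw(decode FB_DEFAULT);
# Selective: malformed sequences become U+FFFD, then get deleted.
my $clean = decode('UTF-8', $str, FB_DEFAULT);
$clean =~ s/\x{FFFD}//g;
# Brute force: strip every non-ASCII character from the raw bytes.
(my $ascii = $str) =~ s/[^\x00-\x7F]//g;

Is there anything faster or cleaner than these?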