Re^2: Safely removing Unicode zero-width spaces and other non-printing characters

by mldvx4 (Friar)
on Dec 04, 2019 at 09:30 UTC ( [id://11109651] )


in reply to Re: Safely removing Unicode zero-width spaces and other non-printing characters
in thread Safely removing Unicode zero-width spaces and other non-printing characters

The source of the data is a large number of RSS feeds, which point to an even larger number of individual web pages. The latter are what get harvested and processed with a few scripts. So normalizing the data at the source is not an option, since so few webmasters even publish e-mail addresses, let alone fix their sites.

Maybe there is a CPAN module or simple method to forcibly convert the incoming data (or outgoing data) to UTF? Just declaring it UTF-8 fails too: binmode(STDOUT, ":encoding(utf8)"); Is there a way to find out whether it should be labeled UTF-16 instead? If so, how do I force that mode?

$ apt-cache policy perl | head -n 3
perl:
  Installed: 5.28.1-6
  Candidate: 5.28.1-6

Re^3: Safely removing Unicode zero-width spaces and other non-printing characters
by haj (Vicar) on Dec 04, 2019 at 10:37 UTC

    It is input decoding which matters here. There is no way to convert incoming data to UTF without handling the original encoding of each individual input. The issue with harvesting from different sites is that the encoding of these sites can be 1) different and 2) just broken for a few of the sites.

    Your code snippet s/\x{00A0}/ /gm; works only if all input has been properly decoded into Perl's "character" semantics (I avoid calling it UTF-something because that is misleading), protected by the error handling of the Encode module.

    Of course, you need to encode your output, too. binmode(STDOUT, ":encoding(utf8)"); converts Perl's characters into a valid UTF-8 stream.
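
    For illustration, here is a minimal sketch of that round trip, assuming the raw bytes come from a file and are expected to be UTF-8; the fallback to Latin-1 is my own choice, not something prescribed in this thread:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Encode qw(decode FB_CROAK LEAVE_SRC);

        binmode(STDOUT, ':encoding(UTF-8)');        # encode Perl characters on the way out

        # Read one harvested page as raw octets (the file name is only illustrative).
        my $file = shift // 'page.html';
        open my $fh, '<:raw', $file or die "$file: $!";
        my $octets = do { local $/; <$fh> };

        # Decode into Perl's character semantics; die inside eval on malformed UTF-8.
        my $text = eval { decode('UTF-8', $octets, FB_CROAK | LEAVE_SRC) };
        unless (defined $text) {
            warn "not valid UTF-8, falling back to Latin-1: $@";
            $text = decode('ISO-8859-1', $octets);  # Latin-1 decoding never fails
        }

        $text =~ s/\x{00A0}/ /g;                    # now this matches characters, not bytes
        print $text;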

Re^3: Safely removing Unicode zero-width spaces and other non-printing characters
by haukex (Archbishop) on Dec 04, 2019 at 19:21 UTC
    The source of the data is a large number of RSS feeds, which point to an even larger number of individual web pages.

    Well, RSS is XML, and XML files should specify the encoding in the XML declaration, and XML parsers such as XML::LibXML do respect that declaration. However, it's possible that the XML declaration is missing or incorrect. In cases like that, one thing you might try is Encode::Guess, keeping in mind that it's just a guess. Or, if you're getting these feeds from web servers, you might look at the response headers for a hint.
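
    As a rough sketch of the guessing route (the extra suspect below is my own assumption, and guess_encoding refuses to decide when several suspects match):

        use strict;
        use warnings;
        use Encode::Guess;

        # Read the raw response body (file name only for illustration).
        open my $fh, '<:raw', ($ARGV[0] // 'feed.xml') or die $!;
        my $octets = do { local $/; <$fh> };

        # ASCII, UTF-8 and BOM-marked UTF-16/32 are tried by default; cp1252 is my
        # own extra suspect, and BOM-less UTF-16LE/UTF-16BE could be added as well.
        my $enc = guess_encoding($octets, 'cp1252');
        if (ref $enc) {
            my $text = $enc->decode($octets);       # decoded text, ready for cleanup
            print 'guessed ', $enc->name, "\n";
        }
        else {
            warn "could not guess encoding: $enc\n";  # e.g. ambiguous between suspects
        }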

      Yes, the RSS reads fine of course.

      The problem is with the pages which the RSS points to. HTML and XHTML are a hot mess. Even when a respectable CMS is used, the authors can still paste in something weird. It is looking like I may have to treat each site individually, and making individual filters might not be worth the effort. However, I am hoping for an automated way to normalize incoming text.

        I am hoping for an automated way to normalize incoming text.

        Well, my suggestions for guessing encoding still apply, plus looking at the meta tags in the HTML might help (with the same caveat that it might be wrong). But again, for specific help with the specific issue that you wrote about in the root node, you'll have to show us some debug output.
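
        For completeness, a crude sketch of the meta-tag route: just a regex heuristic over the raw bytes rather than a real HTML parse, with a pattern and fallbacks that are my own approximation:

            use strict;
            use warnings;
            use Encode qw(decode FB_CROAK LEAVE_SRC);

            # Heuristic only: look for <meta charset="..."> or the older http-equiv
            # Content-Type form in the raw bytes. A real head parse would be more robust.
            sub sniff_charset {
                my ($octets) = @_;
                return lc $1 if $octets =~ /<meta[^>]+charset\s*=\s*["']?\s*([-\w]+)/i;
                return undef;
            }

            my $octets  = qq{<head><meta charset="windows-1252"></head>caf\xe9};  # demo bytes
            my $charset = sniff_charset($octets) // 'UTF-8';       # assumed default
            my $text    = eval { decode($charset, $octets, FB_CROAK | LEAVE_SRC) }
                       // decode('ISO-8859-1', $octets);            # last-resort fallback
            print "decoded as $charset\n";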
