Best way to parse/evaluate HTML page contents for apparent image size
by Anonymous Monk on Dec 14, 2012 at 00:49 UTC

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
What libraries do the Monks suggest for determining the size at which a graphical web browser would actually render an image? I'm aware of Selenium via WWW::Selenium, but that's the only option I know of. Are there better, pure-Perl ways to do this, or alternative libraries?
I'm trying to extract images from HTML pages to build a "zeitgeist" of what's happening on a subset of very specific web pages that are not built by me. For the most part, it's working well enough just using the CPAN module HTML::TokeParser and determining image sizes either from the actual image dimensions or from the HTML attributes.
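A minimal sketch of that approach, assuming HTML::TokeParser is installed and the page markup is already in `$html` (the sample markup here is made up for illustration; falling back to Image::Size for intrinsic dimensions would require fetching the image file first):

```perl
use strict;
use warnings;
use HTML::TokeParser;

# Sample page content; in practice this would come from an HTTP fetch.
my $html = '<p><img src="pic.jpg" width="640" height="480"><img src="nosize.jpg"></p>';

my $p = HTML::TokeParser->new(\$html);
while (my $tag = $p->get_tag('img')) {
    my $attr = $tag->[1];    # hashref of the tag's attributes
    my ($w, $h) = ($attr->{width}, $attr->{height});
    # If both attributes are missing, one could fetch the image and
    # use Image::Size's imgsize() on the local file to get the
    # intrinsic dimensions instead.
    print "$attr->{src}: ", $w // '?', ' x ', $h // '?', "\n";
}
```

This reports "640 x 480" for the first image and "? x ?" for the second, which is exactly the gap described below.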
However, my approach suffers from a couple of drawbacks: webmasters who use huge images and scale them down in the browser, and webmasters who don't specify the width/height attributes, or provide only one of them, making it hard to know what the rendered size actually is.
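The single-attribute case is at least partly recoverable without a browser: when only one of width/height is given, browsers scale the other dimension proportionally from the image's intrinsic size. A small helper (the name `rendered_size` is mine, not from any module) could fill in the missing value once the intrinsic dimensions are known:

```perl
use strict;
use warnings;

# Given the intrinsic (file) dimensions and whatever width/height
# attributes the page supplies, return the size a browser would render,
# preserving the aspect ratio when one attribute is missing.
sub rendered_size {
    my ($nat_w, $nat_h, $attr_w, $attr_h) = @_;
    return ($attr_w, $attr_h) if defined $attr_w && defined $attr_h;
    return ($attr_w, int($attr_w * $nat_h / $nat_w + 0.5)) if defined $attr_w;
    return (int($attr_h * $nat_w / $nat_h + 0.5), $attr_h) if defined $attr_h;
    return ($nat_w, $nat_h);    # no attributes: intrinsic size wins
}

# An 800x600 image with only width="400" renders at 400x300.
my ($w, $h) = rendered_size(800, 600, 400, undef);
print "$w x $h\n";    # 400 x 300
```

This still doesn't cover CSS-driven scaling, which is where a real rendering engine (Selenium or similar) remains the only complete answer.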
If I can avoid calling an external graphical browser and stay pure Perl, that would be ideal. Failing that, I could consider WWW::Selenium, but I'm hoping the broader wisdom of the PerlMonks can point me to any recommended alternatives.