 
PerlMonks  

Re^3: how to quickly parse 50000 html documents? (Updated: 50,000 pages in 3 minutes!)

by ww (Bishop)
on Nov 26, 2010 at 04:49 UTC ( #873769 )


in reply to Re^2: how to quickly parse 50000 html documents? (Updated: 50,000 pages in 3 minutes!)
in thread how to quickly parse 50000 html documents?

appaling (sic), you say?

Well, the nested tables are awkward, and the use of various outdated or deprecated tags is unfortunate; the missing attribute quotes and the like can certainly be labeled "mistakes." But "appalling" is a pretty strong word. Perhaps "dated" or similar would be better.

...so bad as to be practically of no use.

Even harsher (and, IMO, excessive), particularly since nothing we know about the html supports any inference that the OP bears responsibility for it.

There is, however, a valuable nugget that saves your post from a quick downvote -- the notion that future changes could break a regex solution. OTOH, any solution we can readily offer today would also be broken were the html converted to 100% compliant xml.


Re^4: how to quickly parse 50000 html documents? (Updated: 50,000 pages in 3 minutes!)
by aquarium (Curate) on Nov 28, 2010 at 23:18 UTC
    I take the criticism for using strong words.
    The html as provided is reminiscent of '80s websites: it contains no data-structure elements, merely look/feel elements and attributes.
    Hence, since the tags are superficial in terms of the data, a scrape would be easier and less likely to break if it were based on the html converted to plain text, with the regexes anchored on the well-defined terms followed by a colon. Basing the regexes instead on the largely irrelevant (look/feel) html is, I think, not an effective design.
    I'm not criticising the html just for the sake of being critical; I believe in basing programs on the best available variant of the input data. If one has no control over the website, then making a best attempt at getting non-breaking data is better (imho) than just scraping the worst and hoping for the best. So rather than merely criticising, it's a matter of basing decisions on likely factors: deciding to use the text form of the data instead of anchoring on outdated or likely-not-well-formed html. That's all.
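    The approach described above (scrape the text rendering, anchor on "Term: value" pairs rather than on presentational tags) can be sketched roughly as follows. This is only an illustration: the sample html fragment and the field names are invented, not the OP's actual pages, and the naive tag-stripping regex is a stand-in for a proper text rendering (e.g. HTML::Parser, HTML::FormatText, or lynx -dump), which a real run should use instead.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical fragment standing in for one of the OP's 50,000 pages;
# presentational markup only, no structural elements -- as described above.
my $html = <<'HTML';
<table><tr><td><b>Name:</b> Alice</td></tr>
<tr><td><font size=2>Phone: 555-0199</font></td></tr></table>
HTML

# Crude tag strip for illustration only. It ignores comments, scripts,
# and '>' inside attribute values; use a real parser or renderer in practice.
(my $text = $html) =~ s/<[^>]*>/ /g;

# Anchor on the well-defined "Term:" labels in the text, not on the markup.
my %record;
while ($text =~ /([A-Za-z][A-Za-z ]*?):\s*([^\n]+)/g) {
    my ($key, $val) = ($1, $2);
    $val =~ s/\s+\z//;    # trim trailing whitespace left by the tag strip
    $record{$key} = $val;
}

print "$_ => $record{$_}\n" for sort keys %record;
```

    The point of the design is that the labels ("Name:", "Phone:") come from the page's data, which is stable, while the tags come from its presentation, which is not; if the site later switches from font tags to CSS, the text rendering and these regexes survive unchanged.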
    the hardest line to type correctly is: stty erase ^H
