in reply to
Re^3: how to quickly parse 50000 html documents? (Updated: 50,000 pages in 3 minutes!)
in thread how to quickly parse 50000 html documents?
I take the criticism for using strong words.
The HTML as provided is reminiscent of 80's-era websites: it contains no data-structure markup at all, merely look/feel elements and attributes.
Hence, since the tags are superficial in terms of data, the scrape is easier to write and less likely to break if it is based on the HTML converted to text, with the regexes anchored on the well-defined terms followed by a colon. Basing the regexes instead on largely irrelevant (look/feel) HTML is, as I see it, not an effective design.
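A minimal sketch of that approach, using a hypothetical page fragment (the field labels "Name" and "Phone" are invented for illustration, and the crude tag strip stands in for a proper HTML-to-text converter such as HTML::FormatText):

```perl
use strict;
use warnings;

# Hypothetical fragment standing in for one of the 50,000 pages:
# only presentational markup, but stable "Label:" terms in the text.
my $html = '<font size="2"><b>Name:</b> Widget Co.<br><b>Phone:</b> 555-1234</font>';

# Crude tag strip for illustration only; a real run would render the
# page to text with a proper converter (e.g. HTML::FormatText).
(my $text = $html) =~ s/<[^>]+>/ /g;

# Anchor on the well-defined "Term:" labels, not on the look/feel tags.
my %record;
while ( $text =~ /(\w+):\s*([^:]+?)\s*(?=\w+:|$)/g ) {
    $record{$1} = $2;
}

print "$_ => $record{$_}\n" for sort keys %record;
```

The point of the design: if the site's presentational markup changes, the tag strip still reduces the page to the same text, and the "Term:" anchors keep matching.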
I'm not criticising the HTML just for the sake of being critical; I believe in basing programs on the best variant of the input data. If one has no control over the website, then making a best attempt at non-breaking extraction is better (imho) than scraping the worst form of the data and hoping for the best. So rather than mere criticism, this is a decision based on the likely factors: use the text form of the data instead of anchoring on outdated, likely-not-well-formed HTML. That's all.
the hardest line to type correctly is: stty erase ^H