Re: how to quickly parse 50000 html documents? (Updated: 50,000 pages in 3 minutes!)
by BrowserUk (Pope)
on Nov 25, 2010 at 22:43 UTC
For counterpoint, I ran your snippet through the following one liner and got pretty much what you need:
Your description says each page consists of two same-sized sets of these values, so just discard the first half. You don't want the dollar signs, commas or percentages, so post-process to remove them.
People will tell you that this is fragile and will break if the page is changed. But any solution will break if the pages change; given how simple this is, it will probably be quicker to fix than any solution that relies upon fuzzy parsing of a whole heap of stuff you have no interest in whatsoever.
Just as I don't bother reading the stories on the newspaper my fish&chips comes wrapped in before eating, I don't bother parsing a bunch of HTML I've no interest in. I.e. don't parse; simply extract.
Given your description of the size of the files, it will certainly be a whole heap faster.
Update: I revise my estimate to just over 3 minutes, based upon running this code over 1,000 copies of a mocked-up file containing 10 copies of your snippet (5 as the reference; 5 as the wanted) in 4 seconds:
Even if the page layout changes, the 27 hrs 57 minutes you save each time you need to do this should cover the 5 minutes it will take to re-write it :)
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.