If, as it sounds from your post, you are seeking to code your algorithm to be entirely independent of the pages, and the content, that you intend to parse, you are on a hiding to nothing.
To demonstrate the difficulty, first construct a short set of data that you might hope to be able to extract. Say:
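(The original worked example is not preserved here; purely for illustration, a hypothetical product record with a dozen fields might look like the following. Every field name and value is invented.)

```python
# A hypothetical product record -- all fields invented for illustration;
# the post's original example data is not shown.
record = {
    "name": "SuperWidget 3000",
    "brand": "Acme",
    "sku": "SW-3000",
    "price": "19.99",
    "currency": "GBP",
    "stock": "in stock",
    "rating": "4.5",
    "review_count": "212",
    "category": "Widgets",
    "weight": "250g",
    "colour": "red",
    "url": "https://example.com/sw3000",
}
assert len(record) == 12  # twelve pieces of data to recover from any page
```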
Now consider all the hundreds of different ways you could wrap that up in HTML in order to display it.
Then consider the effects of adding in images; filler; adverts; links to customer reviews; pagination controls; 'you might also like to consider' and 'other customers also bought' lists; and all the other irrelevances and annoyances that you routinely encounter on websites.
You end up trying to work out how to extract those same 12 pieces of data from hundreds of thousands of different formats, before even considering the possibilities of different languages or deliberate obfuscation to prevent scraping. You could spend months attempting to write such a generic parser, only to be foiled when they revamp their websites.
It would be much better to tailor simple front-end scrapers to each of your specific target pages, and only go generic once you have extracted the data you require. That way, when one page format changes, or a new page format needs to be handled, you are only faced with modifying (or writing anew) a small front-end script, rather than trying to adapt your entire parser to the new format without breaking the existing ones.
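A minimal sketch of that approach, using only the Python standard library (the page layouts and field names below are invented for illustration): each tiny per-site front-end knows its own page format, and all of them emit the same generic record for the rest of the pipeline to consume.

```python
from html.parser import HTMLParser

# Two hypothetical target pages presenting the same product data
# in completely different markup.
SITE_A_HTML = '<div><h1 class="title">Widget</h1><span class="price">9.99</span></div>'
SITE_B_HTML = '<table><tr><td id="name">Widget</td><td id="cost">9.99</td></tr></table>'

class AttrTextParser(HTMLParser):
    """Collect the text of any element whose (attribute, value) pair
    appears in a per-site field map, storing it under a generic field name."""
    def __init__(self, field_map):
        super().__init__()
        self.field_map = field_map   # {(attr, value): generic_field_name}
        self.current = None          # field we are inside, if any
        self.record = {}
    def handle_starttag(self, tag, attrs):
        for pair in attrs:
            if pair in self.field_map:
                self.current = self.field_map[pair]
    def handle_data(self, data):
        if self.current and data.strip():
            self.record[self.current] = data.strip()
            self.current = None

def scrape_site_a(html):
    # Site A marks fields with class attributes.
    p = AttrTextParser({("class", "title"): "name", ("class", "price"): "price"})
    p.feed(html)
    return p.record

def scrape_site_b(html):
    # Site B uses a table with id attributes: a second, equally small front-end.
    p = AttrTextParser({("id", "name"): "name", ("id", "cost"): "price"})
    p.feed(html)
    return p.record

# Both front-ends yield the same generic record for downstream processing.
print(scrape_site_a(SITE_A_HTML))
print(scrape_site_b(SITE_B_HTML))
```

Adding a new site, or coping with a redesign, then means touching only one small field map, not the shared downstream code.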