in reply to Scraping HTML: orthodoxy and reality
So I brought HTML::TreeBuilder to bear on the task. It wasn't quite so easy. Finding a reliable spot in the tree from which to direct my search was no simple matter: the HTML contains deeply nested tables, with a high degree of repetition for each kit and cartridge. The various pieces of information are scattered across different elements, and collecting and collating them made for some pretty ugly code.
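For the curious, the shape of that code looked roughly like this. It's only a sketch (the tag names and sample markup are stand-ins, not the real page, which I don't have in front of me), but it shows the `look_down` approach to anchoring a search in the tree:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TreeBuilder;

# Hypothetical sketch: the sample markup below stands in for the real
# page, whose structure is much more deeply nested and repetitive.
my $tree = HTML::TreeBuilder->new_from_content( do { local $/; <DATA> } );

# look_down can anchor on any tag/attribute combination; here we simply
# walk every table cell and print its trimmed text.
for my $cell ( $tree->look_down( _tag => 'td' ) ) {
    my $text = $cell->as_trimmed_text;
    print "$text\n" if length $text;
}

$tree->delete;    # break the tree's circular references

__DATA__
<table><tr><td>Kit A</td><td>Cartridge 1</td></tr></table>
```

With tables nested several levels deep, the real version needed chains of `look_down` calls and positional indexing into siblings, which is where the ugliness crept in.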
From a logical standpoint, HTML::TableExtract seems to be a perfect choice for this. It might not be a perfect choice for efficiency, for which you seem to have a requirement (though I'm not sure what total run time you were shooting for...how often is this tool going to be run?).
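To illustrate what I mean, here's a minimal sketch of the HTML::TableExtract approach. The column headings and sample markup are assumptions for illustration; you'd substitute the real headings from the kit/cartridge tables:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TableExtract;

# Hypothetical headers -- replace with the actual column headings.
# Matching on headers lets TableExtract skip the layout tables entirely.
my $te = HTML::TableExtract->new( headers => [ 'Kit', 'Cartridge' ] );
$te->parse( do { local $/; <DATA> } );

for my $table ( $te->tables ) {
    for my $row ( $table->rows ) {
        print join( "\t", map { defined $_ ? $_ : '' } @$row ), "\n";
    }
}

__DATA__
<table>
  <tr><th>Kit</th><th>Cartridge</th></tr>
  <tr><td>Kit A</td><td>Cartridge 1</td></tr>
</table>
```

The nice part is that header matching sidesteps the whole "find a reliable anchor in a deeply nested tree" problem, since TableExtract only reports tables whose headers match.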
Could you give an example HTML page and some numbers, such as how many of them you expect to handle and how often? For purposes of discussion, let's say your parallel fetch more or less delivers all of the pages simultaneously.
(Despite my bias, I am not automatically anti-regexp parsing, and I see both sides of that particular scuffle.)