PerlMonks  

Re^3: extracting data from HTML

by bitingduck (Friar)
on Jun 04, 2012 at 04:24 UTC ( #974220 )


in reply to Re^2: extracting data from HTML
in thread extracting data from HTML

but of course my test website had to come back with an error

One tip for developing scrapers: it's both convenient for you and polite to the site you're scraping to save a local copy that you can hammer at all you want without bothering their server. If you're scraping a lot of pages and doing a lot of tweaking on your code, you risk really hammering someone's server. Once your extractor works, you can put the Mechanize calls to the site back in; they're probably not the hard part.
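A minimal sketch of that idea in Perl. The `cache_path` naming scheme and the `cache/` directory are my own invention for illustration, not anything from the thread:

```perl
use strict;
use warnings;

# Map a URL to a safe local cache filename (the naming scheme is just an example).
sub cache_path {
    my ($url) = @_;
    (my $name = $url) =~ s{[^A-Za-z0-9._-]+}{_}g;
    return "cache/$name.html";
}

# Return the page from disk if we already have it; otherwise fetch it once
# with WWW::Mechanize and save it for next time.
sub fetch_cached {
    my ($url) = @_;
    my $path = cache_path($url);
    if (-e $path) {
        open my $fh, '<:raw', $path or die "read $path: $!";
        local $/;
        return <$fh>;
    }
    require WWW::Mechanize;    # loaded lazily; only needed on a cache miss
    my $mech = WWW::Mechanize->new;
    $mech->get($url);
    mkdir 'cache' unless -d 'cache';
    open my $fh, '>:raw', $path or die "write $path: $!";
    print {$fh} $mech->content;
    return $mech->content;
}
```

While you're iterating on the extraction code, every run after the first hits only the disk.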

In the example I gave upthread, it would have been ok for me to hammer the site, but I ended up cloning it with wget and running it locally.

Update: You might also want to see if the site you're scraping has an API that hands you structured data. I recently had to pull down the links for about 140 books from the Apple site, and they have a nice API that lets you search by ISBN. Amazon also tends to have an API for a lot of things. Other sites often do as well if you dig through the fine print at the bottom of the page.
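When an API hands you JSON, the extraction step collapses to a decode with the core JSON::PP module. The response below is hypothetical; the field names are made up, not Apple's or Amazon's actual schema:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# Hypothetical API response for an ISBN lookup (invented fields, for illustration).
my $json = '{"isbn":"9780596000270","title":"Programming Perl","url":"http://example.com/book/123"}';

my $book = decode_json($json);    # a plain hashref -- no HTML parsing needed
print "$book->{title} => $book->{url}\n";
```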


Re^4: extracting data from HTML
by Jurassic Monk (Acolyte) on Jun 04, 2012 at 18:01 UTC

    This whole business of extracting data from an HTML source is about populating a web shop. My plan was to harvest as much data as needed, do some 'magick' with it, and then use RPC::XML to update the Magento database. I guess none of the websites will be friendly about giving me access to their source, mainly because they don't own their data: they license a web shop, and the data is provided by another party.

    is this theft? - don't answer

    Not every HTML source has the same underlying database, and some websites do provide additional, meaningful data that the 'big player' does not have. So yes, I will write a different scraper for each website, and even for different product types.
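One common way to organize per-site (and per-product-type) scrapers is a dispatch table mapping each site to its extractor. The site names and the one-line extractors below are placeholders, not real sites:

```perl
use strict;
use warnings;

# One extractor coderef per site (placeholder sites and trivial regex extractors;
# real ones would use a proper HTML parser).
my %extractor_for = (
    'shop-a.example' => sub { my ($html) = @_; ($html =~ m{<h1>([^<]+)</h1>})[0] },
    'shop-b.example' => sub { my ($html) = @_; ($html =~ m{class="title">([^<]+)<})[0] },
);

sub extract_title {
    my ($site, $html) = @_;
    my $extract = $extractor_for{$site}
        or die "no scraper written for $site yet";
    return $extract->($html);
}
```

Adding support for a new site then means adding one entry to the table rather than touching the main pipeline.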

    The whole process will roughly be something like the following:

    1. Enter a product ID
    2. Get the HTML and save a cached copy
    3. Process the data on disk and create source.xml
    4. Do something meaningful with the sources
    5. Ask the user for confirmation or missing information where needed
    6. Do something meaningful with the sources and save productinfo.xml
    7. Take the productinfo.xml and turn it into a magento.xml with XSLT
    8. Feed the Magento database
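Step 7 might use a stylesheet along these lines. The element names on both sides are invented for illustration; the real ones depend on the source.xml layout and the Magento import schema:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Hypothetical mapping from a general product list to a Magento-ish layout -->
  <xsl:template match="/products">
    <magento>
      <xsl:for-each select="product">
        <item>
          <sku><xsl:value-of select="id"/></sku>
          <name><xsl:value-of select="title"/></name>
          <price><xsl:value-of select="price"/></price>
        </item>
      </xsl:for-each>
    </magento>
  </xsl:template>
</xsl:stylesheet>
```

You can run a stylesheet like this from Perl with XML::LibXSLT, or from the command line with xsltproc, and keep one stylesheet per source data model.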

    I did a nice job with dirty programming, but the moment I encountered the cp1252 rubbish in my (supposedly) iso-8859-1 data, I gave up and decided to start from scratch: using proper XML modules and no longer relying on XML::Simple. I also discovered XSLT, which will help me process the different sources and translate them from one (general) data model to the Magento model.
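The cp1252-in-latin-1 problem is common enough that Perl's core Encode module deals with it directly: bytes like 0x93/0x94 are C1 control characters in iso-8859-1 but curly quotes in cp1252, so when "latin-1" data contains them, decoding as cp1252 is usually the right repair. A small demonstration:

```perl
use strict;
use warnings;
use Encode qw(decode);

# 0x93/0x94 are curly quotes in cp1252; iso-8859-1 maps them to C1 controls.
my $bytes = "\x93dirty data\x94";

my $wrong = decode('iso-8859-1', $bytes);   # first char becomes U+0093 (control)
my $right = decode('cp1252',     $bytes);   # first char becomes U+201C (left quote)

printf "latin-1 gives U+%04X, cp1252 gives U+%04X\n",
    ord(substr $wrong, 0, 1), ord(substr $right, 0, 1);
```

A pragmatic rule: if a "latin-1" document uses any bytes in the 0x80-0x9F range, treat it as cp1252, since real iso-8859-1 text almost never contains those control characters.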
