note
Anonymous Monk
I have done a couple of web sites like this. I did not use WWW::Mechanize, but instead LWP::Simple or LWP::UserAgent together with HTML::TokeParser and HTML::LinkExtractor; no one showed me how to do this, I just did some reading and tried it.

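A minimal sketch of that combination: fetch a page, then walk it with HTML::TokeParser. Here the HTML is an inline sample so the snippet stands alone; in practice you would fetch it first with LWP::Simple's get().

```perl
use strict;
use warnings;
use HTML::TokeParser;

# In real use: use LWP::Simple qw(get); my $html = get($url);
my $html = '<html><head><title>Great Books</title></head>'
         . '<body><a href="greatBooks.php?item=1">Item 1</a></body></html>';

my $p = HTML::TokeParser->new(\$html);
$p->get_tag('title');                         # skip ahead to <title>
my $title = $p->get_trimmed_text('/title');   # text up to </title>
print "Title: $title\n";                      # prints "Title: Great Books"
```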
I also have no problem breaking the job up into steps: get some data in one format, use my trusty text editor with grep-style regular expressions to reshape the data lines, then go back to a perl script to put the result somewhere.

In several cases, I looked at the URL that fetches one page of data, e.g. www.academicsFlow.edu/greatBooks.php?item=1, then made a list of values to substitute into that URL and ran a perl script that looped with values 1 through 99 inserted into it, catching each result.

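That loop might look like the sketch below (the URL is the example from above, and the filename pattern is just one I made up); LWP::Simple's getstore() saves each response straight to disk.

```perl
use strict;
use warnings;
use LWP::Simple qw(getstore is_success);

# Substitute item=1..99 into the URL and save each page to its own file.
for my $i (1 .. 99) {
    my $url  = "http://www.academicsFlow.edu/greatBooks.php?item=$i";
    my $file = sprintf "page%03d.html", $i;     # page001.html, page002.html, ...
    my $rc   = getstore($url, $file);
    warn "item $i failed (HTTP $rc)\n" unless is_success($rc);
    sleep 1;                                    # be polite to the server
}
```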
If there are nested links I want to follow, I might write a loop that uses the simple mode of HTML::LinkExtractor, iterating from one HREF to the next and saving the results in sequentially named text files.

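A rough sketch of that HREF loop, using an inline HTML sample so it runs on its own; the fetch-and-save step is left as a comment since it needs a live site.

```perl
use strict;
use warnings;
use HTML::LinkExtractor;

my $html = '<a href="a.html">A</a> <a href="b.html">B</a>';

my $lx = HTML::LinkExtractor->new();
$lx->parse(\$html);

# Walk each <a href=...> and pick a sequential filename for its target.
my $n = 0;
for my $link (@{ $lx->links }) {
    next unless $link->{tag} eq 'a' && defined $link->{href};
    $n++;
    my $file = sprintf "link%03d.txt", $n;
    print "$link->{href} -> $file\n";
    # In practice: LWP::Simple::getstore($link->{href}, $file);
}
```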
Then I run through those text files: open each one and use a simple state-machine loop with a bunch of hard-coded knowledge of the page format to find and extract the data fields. Not generalized, not pretty, but it works for the task. I write out the values in, say, a tab-delimited format I have defined.

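A toy version of such a state machine: the class names and page layout here are invented, hard-coded knowledge of one imaginary page, which is exactly the kind of ugly-but-effective approach described above.

```perl
use strict;
use warnings;

# Hard-coded assumption: each record is a "title" cell followed by an
# "author" cell, one per line (stand-ins for lines read from a saved file).
my @lines = (
    '<td class="title">Moby Dick</td>',
    '<td class="author">Melville</td>',
    '<td class="title">Walden</td>',
    '<td class="author">Thoreau</td>',
);

my $state = 'want_title';
my ($title, @records);
for my $line (@lines) {
    if ($state eq 'want_title' && $line =~ m{class="title">([^<]+)<}) {
        $title = $1;
        $state = 'want_author';
    }
    elsif ($state eq 'want_author' && $line =~ m{class="author">([^<]+)<}) {
        push @records, "$title\t$1";     # one tab-delimited record
        $state = 'want_title';
    }
}
print "$_\n" for @records;
```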
Lastly, I read the tab-delimited file into whatever target DB, like Outlook I suppose. Though I have not gone there :-)

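Reading the tab-delimited file back is just a split on tab per line (the filename books.tsv is made up); from there an insert into whatever DB you like is straightforward.

```perl
use strict;
use warnings;

open my $fh, '<', 'books.tsv' or die "books.tsv: $!";
while (my $row = <$fh>) {
    chomp $row;
    my ($title, $author) = split /\t/, $row;   # fields in the order written
    print "title=$title author=$author\n";
    # ...hand $title/$author to DBI, a CSV importer, etc.
}
close $fh;
```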
hth