Parsing Gutenberg Catalog Index
by hacker (Priest) on Aug 30, 2004 at 05:04 UTC

hacker has asked for the wisdom of the Perl Monks concerning the following question:
I've been thrown a curveball in a community project I signed myself up for: a linguist contacted me with the intent of rolling through all of Project Gutenberg, importing every ebook found, and doing some analysis of the contents of the texts for his paper.
He found me because I suggested a project to convert all of Project Gutenberg to Plucker format in a professional, scalable, automated fashion. The combined talents of a "Professional Screen Scraper" and a linguist would be ideal here, which is how the two of us hooked up on this project.
But we ran into a snag: there is ZERO consistency in the Project Gutenberg etexts, after editing by hundreds of volunteers, each with their own ideas. Project Gutenberg's Distributed Proofreaders is a good first step, but it isn't quite there yet.
How does this relate to Perl? Well, Perl would be our engine: it will roll through each book, unzip it, catalog it, store it in MySQL, and output it into a format we can grok with other tools.
Our first step is to take the Gutenberg Master Index and parse it for the title, filename, document number, and so on, to populate the initial tables in the database. From there, we'll query the db and fetch each ebook in succession to perform our tests, analysis, import, and conversion of the documents.
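To give a sense of the direction we're headed, the initial table might look something like the following. This is only a sketch; every column name here is our own invention, nothing Project Gutenberg defines:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical first-pass schema for the catalog table. All column names
# are placeholders we made up for this sketch.
my $create_books = <<'SQL';
CREATE TABLE books (
    etext_no  INT UNSIGNED NOT NULL PRIMARY KEY,  -- 5-digit document number
    title     VARCHAR(255) NOT NULL,              -- parsed from the index
    filename  VARCHAR(255),                       -- zip name, once known
    fetched   TINYINT NOT NULL DEFAULT 0          -- has the ebook been pulled?
)
SQL

# With DBI and DBD::mysql installed, creating it would be roughly:
#   my $dbh = DBI->connect('dbi:mysql:gutenberg', $user, $pass);
#   $dbh->do($create_books);
print $create_books;
```

The `fetched` flag is just one idea for tracking which ebooks the second pass has already pulled down and converted.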
But we're stuck on the index. The only thing that seems to remain constant in there is the document number, a 5-digit number in the right-most column. It always appears at the end of the very first line of text describing a new book entry.
What is the best way to approach parsing this? I don't even know where to begin scanning this file for the right textual members in a way that doesn't overlap a previous or following entry during our import.
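For context, here is the rough shape of what I've been toying with: treat any line ending in a 5-digit number as the start of a new entry, and attach everything else to the most recent entry. The sample text and field names below are completely made up, and I'm sure the real index has cases this misses:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: split the index into entries keyed on the 5-digit
# document number that ends the first line of each entry.
sub parse_index {
    my ($text) = @_;
    my @entries;
    for my $line (split /\n/, $text) {
        if ($line =~ /^(.*\S)\s+(\d{5})\s*$/) {
            # First line of a new entry: title text, then the etext number
            push @entries, { title => $1, etext_no => $2, extra => [] };
        }
        elsif (@entries && $line =~ /\S/) {
            # Continuation line: attach it to the most recent entry
            push @{ $entries[-1]{extra} }, $line;
        }
    }
    return @entries;
}

# Fake sample in the general shape of an index, just for illustration:
my $sample = <<'END';
The Example Book, by Jane Author                         13579
 [Subtitle: A Made-Up Entry]
Another Title, by John Writer                            24680
END

my @books = parse_index($sample);
printf "%s => %s\n", $_->{etext_no}, $_->{title} for @books;
```

Whether anchoring on the 5-digit number alone is robust enough, I honestly don't know; that's a big part of what I'm asking.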
Pseudo-examples or real examples we can use as a starting point would be great. Thanks again, everyone.