in reply to Re: CLucene module for perl
in thread Simple Text Indexing

Several points from some similar work I have been dabbling in, on and off, for some time now...

If your file format allows comments or anchors (HTML etc.), I've found indexing is easiest as a two-pass process: the first pass sets up appropriate markers at reasonable intervals, the second pulls out your wordlist, ideally as a hash of words pointing to lists of markers or tags. Alternatively, a process which records paragraph numbers, line numbers, or simply file offsets usable by a seek may suffice.
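Here's a minimal sketch of the offset variant -- the filename and the word regex are mine, purely illustrative. The offset noted just before each paragraph-mode read is the marker, and every word in that paragraph gets the offset pushed onto its list:

    use strict;
    use warnings;

    # Note where each paragraph starts, then index every word in it
    # against that offset. 'corpus.txt' and the regex are illustrative.
    open my $fh, '<', 'corpus.txt' or die "Can't open corpus.txt: $!";

    local $/ = '';                 # paragraph mode
    my %index;                     # word => [ offset, offset, ... ]

    while (1) {
        my $marker = tell $fh;     # marker for the paragraph about to be read
        my $para   = <$fh>;
        last unless defined $para;
        my %seen;                  # record each word once per paragraph
        for my $word (map { lc } $para =~ /([A-Za-z']+)/g) {
            push @{ $index{$word} }, $marker unless $seen{$word}++;
        }
    }
    close $fh;

A seek to any stored offset followed by another paragraph-mode read pulls the context back out for display.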

You will need a more extensive stop-list for large bodies of text -- in fact, for really large ones you need to develop your own, suited to the text concerned. Some frequency analysis may assist here. Also see perlindex, which uses the __DATA__ area as a store for a longer list.
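Something along these lines is enough to get candidate stop words out of a corpus -- the filename, the seed list and the cutoff of 50 are all placeholders, and the final list still wants a human eye over it:

    use strict;
    use warnings;

    # Seed the stop-list from __DATA__ (perlindex keeps its longer list
    # the same way), then count the remaining words so the most frequent
    # candidates can be vetted by hand.
    my %stop = map { chomp; $_ => 1 } <DATA>;

    my %freq;
    open my $fh, '<', 'corpus.txt' or die "Can't open corpus.txt: $!";
    while (<$fh>) {
        for my $word (map { lc } /([A-Za-z']+)/g) {
            $freq{$word}++ unless $stop{$word};
        }
    }
    close $fh;

    my @candidates = sort { $freq{$b} <=> $freq{$a} } keys %freq;
    splice @candidates, 50 if @candidates > 50;
    printf "%-15s %6d\n", $_, $freq{$_} for @candidates;

    __DATA__
    the
    and
    of
    to
    a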

My preferred technique with a corpus of plain text is actually to convert it (using perl, naturally) into HTML, inserting copious anchors for indexed points. This means I can view segments in a browser for context checking.

(I assume you can always convert back, recording, say, paragraph numbers, if you need the plain text back.)
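The conversion itself is only a few lines. In this sketch (filenames illustrative, HTML::Entities doing the escaping) every paragraph gets a numbered anchor, and the paragraph number doubles as the record you'd keep for converting back:

    use strict;
    use warnings;
    use HTML::Entities;

    # Each paragraph becomes a <p> with a numbered anchor, so the index
    # can point at "#p42" and a browser shows the segment in context.
    open my $in,  '<', 'corpus.txt'  or die "Can't read corpus.txt: $!";
    open my $out, '>', 'corpus.html' or die "Can't write corpus.html: $!";

    print $out "<html><body>\n";

    local $/ = '';                 # paragraph mode
    my $n = 0;
    while (my $para = <$in>) {
        chomp $para;
        $n++;
        print $out qq{<p><a name="p$n"></a>},
                   encode_entities($para), "</p>\n";
    }

    print $out "</body></html>\n";
    close $out;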

Frankly, for me the above is the easy bit. The hard bit is establishing context for an index marker, and correctly adding synonyms to the index for terms not otherwise present in the text. That's why the HTML conversion and browser viewing works best for me. There's still no substitute for human judgement on the context indexing question...

WordNet modules may be the answer to the synonym problem here. That's the bit I'm looking at now.
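WordNet::QueryData from CPAN looks like a reasonable starting point. The sketch below assumes a local WordNet installation at the path shown (adjust to suit), and naively takes the first noun sense of each term -- sense selection being exactly the judgement call a human still needs to make:

    use strict;
    use warnings;
    use WordNet::QueryData;

    # Dictionary path is an assumption; taking sense #n#1 is deliberately
    # naive, and terms without a noun sense will just come back empty.
    my $wn = WordNet::QueryData->new("/usr/local/WordNet-3.0/dict/");

    for my $term (qw(index corpus anchor)) {    # illustrative terms
        my @syns = $wn->querySense("$term#n#1", "syns");
        s/#.*$// for @syns;                     # strip "#pos#sense" tags
        print "$term: @syns\n";
    }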