
Re: Help on building a search engine (design / algorithms / parsing / tokenising / database design)

by inman (Curate) on Jun 07, 2004 at 11:28 UTC [id://361945]

in reply to Help on building a search engine (design / algorithms / parsing / tokenising / database design)

The problem that you are trying to solve is traditional word-based search, which is probably better suited to a search engine than to a SQL database. A typical search engine will let you index content such that an efficient word index is created, in addition to a relational-style table that stores the metadata for each document.

The search engine approach involves indexing the files as a preparation step, followed by searching. The indexing part requires that the files are read (spidered) and filtered to extract content and metadata. Once the index is prepared, the search engine's query interface can be used to get results.
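The two-phase approach can be sketched in a few lines of Perl. This is a minimal illustration, not a production indexer: the document contents are inlined here, whereas a real spider would read and filter files from disk.

```perl
use strict;
use warnings;

# Toy corpus; a real indexer would spider files and extract text.
my %docs = (
    'a.txt' => 'the quick brown fox',
    'b.txt' => 'the lazy brown dog',
);

# Indexing phase: build an inverted index mapping each word to the
# set of documents that contain it.
my %index;
while ( my ( $doc, $text ) = each %docs ) {
    for my $word ( split /\W+/, lc $text ) {
        next unless length $word;
        $index{$word}{$doc} = 1;
    }
}

# Search phase: a query is now a hash lookup, not a file scan.
sub search {
    my ($word) = @_;
    return sort keys %{ $index{ lc $word } || {} };
}

my @hits = search('brown');    # both documents contain 'brown'
print "@hits\n";               # prints "a.txt b.txt"
```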

Links to a couple of sites:


Re^2: Help on building a search engine (design / algorithms / parsing / tokenising / database design)
by bobtfish (Scribe) on Jun 07, 2004 at 12:35 UTC
    Yes. That's what my code does.

    It builds an inverted index of all the content I want to search and then queries that.

It's searching this index that I need to optimise.

Thanks for the links; however, I can't find anything helpful and low-level enough that doesn't only cover the problems I've already solved. (TBH, I can't find anything with actual code / algorithms that isn't a back-of-cigarette-packet style demonstration. My code can already do complex and/or/not searches with arbitrary nesting using () in the search and any number of search terms.)
      Try Perlfect Search. A full search engine implemented in Perl, so you get the source code and everything!

      If you have the space, one very effective way of speeding up the searching of your inverted index is to index it!

      Once you have created your inverted index, you then create a second index from the first. This indexes pairs of words. The keys are pairs of words from your primary index. The values are the pages that contain the pairings. This vastly reduces the number of pages associated with each key. The cost is the huge number of keys.
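A minimal sketch of that secondary pair index, assuming a primary inverted index of word => set-of-pages already exists (the `%primary` contents and page names here are hypothetical):

```perl
use strict;
use warnings;

# Hypothetical primary inverted index: word => { page => 1, ... }
my %primary = (
    'index'  => { p1 => 1, p3 => 1 },
    'perl'   => { p1 => 1, p2 => 1, p3 => 1 },
    'search' => { p2 => 1, p3 => 1 },
);

# Secondary index: for every pair of words, store only the pages
# containing both. Sorting the pair keeps one key per pair.
my %pair;
my @words = sort keys %primary;
for my $i ( 0 .. $#words ) {
    for my $j ( $i + 1 .. $#words ) {
        my ( $w1, $w2 ) = @words[ $i, $j ];
        my @both = grep { $primary{$w2}{$_} } keys %{ $primary{$w1} };
        $pair{"$w1 $w2"} = { map { $_ => 1 } @both } if @both;
    }
}

# An AND query on two words is now a single lookup instead of an
# intersection computed at query time.
my @hits = sort keys %{ $pair{'perl search'} || {} };
print "@hits\n";    # prints "p2 p3"
```

Note the trade-off described above: the pair loop is quadratic in the vocabulary size, which is exactly the "huge number of keys" cost.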

      A partial solution is to only pair unusual (low hit count) words with common (high hit count) words, once you have excluded all the really common words ('a', 'the', 'it' etc.).

      If the search doesn't include any uncommon words, the secondary index doesn't help, but you find that out very quickly, and there is no alternative but to go through all the hits.

      If the search consists of only uncommon words, then the results from the primary index will be minimal anyway.

      But when the search includes one or more common and one or more uncommon words, intersecting the huge list from the common word with the small list from the uncommon word at runtime is expensive. Pre-processing these can substantially reduce the runtime costs.
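When no precomputed pair entry exists, the usual fallback is to walk the smaller posting list and probe the larger one by hash lookup, so the runtime cost is proportional to the uncommon word's list rather than the common word's. A sketch, with hypothetical posting lists:

```perl
use strict;
use warnings;

# Posting lists as hash sets: a huge one for a common word and a
# small one for an uncommon word (contents are made up here).
my %common   = map { "p$_" => 1 } 1 .. 1000;     # many pages
my %uncommon = ( p7 => 1, p42 => 1, p2000 => 1 ); # few pages

# Intersect by iterating the small set and probing the large one;
# we never touch most of the 1000 entries in %common.
my @both = sort grep { $common{$_} } keys %uncommon;
print "@both\n";    # prints "p42 p7"
```

This is still O(size of the small list) per query, which is what the pair index amortises away by paying the cost once at index time.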

      It's fairly easy to set up but requires a substantial amount of (pre-)processing power to maintain.

      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
