http://www.perlmonks.org?node_id=1011632

elef has asked for the wisdom of the Perl Monks concerning the following question:

Dear fellow monks, I'm looking for some general guidance on how feasible a project idea of mine is.
What the software would need to do is: do lookups in a hell of a lot of text data and display the hits in a GUI. Fuzzy matching is not essential, but I would need string (word) search, exact term search, "all of these words in any order or position" search, "includes this string but doesn't include this string" search and similar things. There are ~15 million records, each record containing a couple of hundred UTF-8 characters in 4 or 5 fields. The data takes up about 8GB in a tab-separated text file. This wouldn't be a web service; the software would need to be able to run on any random Windows computer.
So, how difficult would this be for a relative novice to write? I have a reasonable handle on perl itself, and I could write the GUI in Tk without too much trouble, but I don't know how involved the database stuff would be. I have exactly zero experience with databases. What sort of performance can I expect from whatever database engine I would end up using? How much time would it take to import 8GB of text into a database format and how much space would it take up? Most importantly, how much time would a lookup take on a run-of-the-mill laptop? Could the whole app be packaged up into a reasonably-sized .exe file with PAR::Packer? (excluding the actual data, of course)
Thanks for any help. Also, if there already is an open source lookup tool out there that I could adapt for the purpose, please let me know. Perhaps even LibreOffice Base could be used?

Re: Writing a database lookup tool
by tobyink (Canon) on Jan 04, 2013 at 14:06 UTC

    Unless you absolutely need SQL support, take a look at Apache Lucene.

    perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
      That sounds a lot like what I need. It's in Java though, which I am completely ignorant about. The perl port ( http://search.cpan.org/~tmtm/Plucene-1.25/lib/Plucene.pm ) "is not, in and of itself, a functional search engine - you are expected to subclass and tie all the pieces together to suit your own needs". Does the same apply to the java version? I.e. it's not a full-on graphical lookup application, more like a toolkit for making your own, right?
      If so, what does it offer that all the other database modules on CPAN don't provide?

        Apache provides Solr which is a web service wrapping Lucene. This can be queried using Apache::Solr or WebService::Solr.
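
        For a concrete feel of the Perl side, a rough sketch of querying a local Solr instance with WebService::Solr might look like this (the URL is Solr's default, and the 'id'/'text' field names are made up for illustration):

            use strict;
            use warnings;
            use WebService::Solr;

            # Assumes a Solr instance is already running locally with the
            # records indexed into it; 'text' is a hypothetical field name.
            my $solr = WebService::Solr->new('http://localhost:8983/solr');

            # Records containing "foo" but not "bar"; return the first 20 hits.
            my $response = $solr->search('text:foo -text:bar', { rows => 20 });

            for my $doc ($response->docs) {
                print $doc->value_for('id'), "\n";
            }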

        What does it provide? Speed. Pure, uncut speed.

        perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Re: Writing a database lookup tool
by marto (Cardinal) on Jan 04, 2013 at 14:16 UTC

    The answer to a lot of your questions is: it depends. How fast something runs on a machine of unknown spec depends on many factors, including how you've written the app, where the database will live (a remote server over a slow network?), the state of the machine, and so on.

    "What sort of performance can I expect from whatever database engine I would end up using?"

    Many modern databases offer fancy text searching capabilities 'out of the box'. The performance depends to a great extent on your database of choice, how you configure it, how you index the data and how you query the data.

    "How much time would it take to import 8GB of text into a database format and how much space would it take up?"

    Importing the data into a database shouldn't take too long, and it's a one-time thing. As for size, it depends on the database and its data compression.
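
    Purely as an illustration (assuming, say, SQLite via DBD::SQLite, and made-up file, table and column names), a bulk import of the tab separated file could look like the sketch below. Wrapping the inserts in transactions rather than committing row by row makes a huge difference to import speed.

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
            { RaiseError => 1, sqlite_unicode => 1 });

        # Hypothetical schema: four text fields per record.
        $dbh->do('CREATE TABLE IF NOT EXISTS records (f1 TEXT, f2 TEXT, f3 TEXT, f4 TEXT)');

        open my $fh, '<:encoding(UTF-8)', 'records.tsv' or die "records.tsv: $!";

        my $sth = $dbh->prepare('INSERT INTO records (f1, f2, f3, f4) VALUES (?, ?, ?, ?)');

        $dbh->begin_work;
        my $count = 0;
        while (my $line = <$fh>) {
            chomp $line;
            my @fields = split /\t/, $line, 4;   # assumes four tab separated fields
            $sth->execute(@fields);
            if (++$count % 100_000 == 0) {
                $dbh->commit;                    # commit in batches, then start a new transaction
                $dbh->begin_work;
            }
        }
        $dbh->commit;
        close $fh;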

    "Could the whole app be packaged up into a reasonably-sized .exe file with PAR::Packer?"

    Depending on your definition of reasonable, yes.

    If your goal is to package an application to allow users to remotely query a database, consider the alternatives, for example a web based search tool running on the same server as the database. Consider also that other open source products already exist for text searching, for example Solr (note, it's not Perl) and the Perl module Solr.

    Update: fixed typo.

      The data would be offline, i.e. on the user's computer.
      I realize that speed depends on the specs and the implementation, but it should be possible to give a ballpark estimate of some sort. I.e. let's assume there are 15 million records with 100 characters each (in the field that we're searching). I look up a 10-character string. There are 1000 hits. How much time would it take for those 1000 hits to be found if the database design and implementation is not particularly well optimized? 0.01 second? 1 second? 5 seconds?

      Regarding file size, sure, it depends, but again, I'm looking for a ballpark. If the source data is 8GB of UTF-8 text, what are we looking at? More than 8GB, or less (due to some internal compression the DB format might use)? Could one throw away the original text files after importing?

      Re: Solr, it has a lot of the features I would want (optimized for text search, regex and sounds-like filters, hit highlighting), but it looks like it's designed to run on a server, not offline.

        Forget your quaint "offline" concept.

        The laptop is the server (and the client).

        perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'

        Please clearly mark updates to your posts. I suggest you actually take the time to try this out for yourself; the learning experience will be worthwhile and you'll soon realise how vague your questions actually are, and how essentially meaningless it would be to give you the result of a query running on X million records within my tuned environment on a database platform you'll never use.

        Your concept of server and offline is flawed. Your laptop would be the server in that it would host the database, webserver, Solr instance or whatever.

        Update: consider someone asking a piano maker "I have no experience in building a piano, no knowledge of wood and little knowledge of metal work. Roughly how long will it take me to build a piano by hand?"

        Update: fixed typo.

Re: Writing a database lookup tool
by NetWallah (Canon) on Jan 04, 2013 at 15:18 UTC
    As a practical starting point, I suggest you install SQLite and play with a subset of your data, for example via the SQLite Manager Firefox add-on. Oh yes .. perl has the DBI and DBD::SQLite modules to interface with an SQLite database.
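
    For a flavour of what the DBI side looks like once the data is in (the database file, table and column names below are invented, not taken from your data):

        use strict;
        use warnings;
        use DBI;

        # Open (or create) the SQLite database file; no server process is needed.
        my $dbh = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
            { RaiseError => 1, sqlite_unicode => 1 });

        # Simple substring search; 'source_text' is a hypothetical column.
        my $rows = $dbh->selectall_arrayref(
            'SELECT rowid, source_text FROM records WHERE source_text LIKE ?',
            undef, '%searchterm%',
        );

        printf "%d hits\n", scalar @$rows;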

                 "By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest."           -Confucius

      I seem to recall some limits to database size for SQLite; weird things happening if the file size exceeded 4GB (note: not a platform file size limit) seems to ring a bell. However, this was some time ago, so it may no longer be a problem.

        Yes - there is a file size limit documented at the Sqlite site.

        It is approximately 140 Terabytes.

                     "By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest."           -Confucius

      Thanks, NetWallah.
      I've heard about the Firefox SQLite Manager add-on before and took a mental note to try it sometime. I might try that. It looks like it won't be able to handle anything like my full dataset, or even a tenth of it, but I could use it to play with databases.
      Do you have any input on the pros and cons of SQLite+DBI (or any other perl module) versus Solr? For instance, how about portability? Can either be packed into an exe with PAR::Packer? Would every user need to install SQLite or other software separately?
        I don't think you will be limited by Sqlite's database size capabilities - see my reply to marto above.

        FYI - current versions include FTS (Full text search) capabilities.
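
        For example (an untested sketch; the table and column names are made up, and this assumes a DBD::SQLite built with FTS4 support), a full-text index and an "all of these words" query might look like:

            use strict;
            use warnings;
            use DBI;

            my $dbh = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
                { RaiseError => 1, sqlite_unicode => 1 });

            # One-time step: build a full-text index over the searchable field.
            $dbh->do('CREATE VIRTUAL TABLE records_fts USING fts4(source_text)');
            $dbh->do('INSERT INTO records_fts (source_text) SELECT source_text FROM records');

            # MATCH treats multiple terms as an implicit AND, in any order or position.
            my $hits = $dbh->selectall_arrayref(
                'SELECT source_text FROM records_fts WHERE source_text MATCH ?',
                undef, 'foo bar',
            );
            print scalar(@$hits), " matching records\n";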

        Sqlite and perl run on every platform I have heard of.

        I'm not familiar with Solr - it certainly looks interesting - and may map better to your problem domain. However, I got the impression you wanted a stand-alone (not web-server based) solution. Sqlite may be better for that option. Yes - you will need to install each stand-alone instance, but installation is trivial - just copy the file in. Also - it is possible for multiple sqlite clients to access a single shared sqlite database file.

                     "By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest."           -Confucius

        "Do you have any input on the pros and cons of sqlite+DBI (or any other perl module) versus Solr"

        Apache Solr isn't comparable to SQLite/DBI. It's not an RDBMS; why are you trying to make comparisons where there are none to be made?

        "Can either be packed into an exe with par packer?"

        DBD::SQLite can be packaged with pp. If you'd actually read what Solr is, you wouldn't consider asking if it could be packaged. It simply doesn't make sense to do so.
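
        For example (script name made up), something along the lines of

            pp -o lookup.exe lookup.pl

        should bundle the script together with its module dependencies, DBD::SQLite included, into a single executable; modules pp cannot detect automatically can be added with -M.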

        "Would every user need to install sqlite or other software separately?"

        No.

Re: Writing a database lookup tool
by Anonymous Monk on Jan 05, 2013 at 09:25 UTC

    The data takes up about 8GB in a tab separated text file.

    It might take about 8.2GB-10GB in sqlite -- not bad at all :)

      sqlite3 -- don't mess with sqlite2

        Why not spend some time familiarizing yourself with Catalyst or Dancer, and learn some MySQL? Using DBI with MySQL, it turns out, isn't all that much more work than messing with SQLite...
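
        The DBI code itself is essentially the same either way; mostly it's the connect string (and the driver, DBD::mysql instead of DBD::SQLite) that changes. A rough sketch with made-up database names and credentials:

            use strict;
            use warnings;
            use DBI;

            # SQLite: a single local file, no server, no credentials.
            my $lite  = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
                { RaiseError => 1 });

            # MySQL: same DBI calls, different DSN plus a user/password.
            my $mysql = DBI->connect('dbi:mysql:database=records;host=localhost',
                'someuser', 'somepass', { RaiseError => 1 });

            # From here on the code is identical for either handle.
            my ($n) = $lite->selectrow_array('SELECT count(*) FROM records');
            print "$n records\n";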