The data would be offline, i.e. on the user's computer.
I realize that speed depends on the specs and the implementation, but it should be possible to give a ballpark estimate of some sort. I.e. let's assume a there are 15 million records with a 100 characters in each (in the field that we're searching). I look up a 10-character string. There are 1000 hits. How much time would it take for those 1000 hits to be found if the database design and implementation is not particulary well optimized? 0.01 second? 1 second? 5 seconds? Regarding file size, sure, it depends, but again, I'm looking for a ballpark. If the source data is 8GB of UTF-8 text, what are we looking at? More than the 8GB or less (due to some internal compression the DB format might use). Could one throw away the original text files after importing? Re: Solr, it has a lot of the features I would want (optimized for text search, regex and sounds-like filters, hit highligting), but it looks like it's designed to run on a server, not offline. In reply to Re^2: Writing a database lookup tool
In reply to Re^2: Writing a database lookup tool by elef