in reply to Question about properly laying out a database
With BerkeleyDB, you've got exactly one index per file.
You could fake extra indexes, but you'll have to come up
with "one to many" relationships - you may find "chevy"
in the "brands" index, but it can't refer to any single
other record - it has to refer to multiple other records -
which is fine with an RDBMS, but a pain with .db files.
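One way to fake that secondary index is to store a space-separated list of primary keys under each brand, and split the list back apart on lookup. A minimal sketch of the idea - the hash names, record keys, and data are all made up for illustration, and in real use %cars and %by_brand would each be tied to a .db file with DB_File rather than being plain in-memory hashes:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# hypothetical primary file and fake "brands" index
# (tie each to its own .db file with DB_File in real use)
my (%cars, %by_brand);

sub add_car {
    my ($key, $brand, $data) = @_;
    $cars{$key} = $data;
    # one brand -> many records, so append to the list
    # instead of overwriting the single value slot
    $by_brand{$brand} = defined $by_brand{$brand}
        ? "$by_brand{$brand} $key"
        : $key;
}

add_car('c1', 'chevy', 'Impala 1967');
add_car('c2', 'chevy', 'Corvette 1984');
add_car('f1', 'ford',  'Mustang 1969');

# look up every record for one brand via the fake index
for my $key (split ' ', ($by_brand{chevy} || '')) {
    print "$key: $cars{$key}\n";
}
```

You pay for it on update (every insert and delete has to maintain the list by hand), which is exactly the bookkeeping an RDBMS does for you.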
If you make a linear search through the .db file every time,
performance should be OK for no more than a hit a second,
but it will be slower than a linear search through a
flat file. With a hashed file, you access the records
randomly even when you do a "linear" search, but with a flat
file, a linear search really is linear. I wouldn't have any
reservations at all about doing...
EACHREC: foreach my $rec (keys %tiedhash) {
    # @searchkeys holds one pattern per criterion;
    # skip any record that fails to match them all
    foreach my $criteria (@searchkeys) {
        next EACHREC unless $rec =~ m/$criteria/;
    }
    print qq{ $rec $tiedhash{$rec}\n};
}
or something thereabouts...