
Re: Performance quandary

by dws (Chancellor)
on Feb 26, 2002 at 08:31 UTC ( #147507=note )

in reply to Performance quandary

Here's a thought, based on incomplete evidence, on guessing at the code you aren't showing, and on having read some DBM source a long time ago...

When dealing with a hash tied to a DBM, exists $hash{$key} does disk I/O. The larger the underlying database grows, the greater the number of page reads needed to check that the key exists. Getting the corresponding value requires additional page reads.
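To make the point concrete, here is a minimal sketch of a DBM-tied hash. SDBM_File is used only because it ships with Perl; the same behavior applies to DB_File and BerkeleyDB, which the original code uses. The filename is made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use SDBM_File;
use Fcntl;    # for O_RDWR, O_CREAT

# Tie %cache to an on-disk DBM file (path is an example only).
tie my %cache, 'SDBM_File', '/tmp/demo_cache', O_RDWR | O_CREAT, 0644
    or die "tie failed: $!";

$cache{'somekey'} = 'value';                  # written through to disk pages
print "found\n" if exists $cache{'somekey'};  # a disk lookup, not a RAM test

untie %cache;
```

Because the hash is tied, that innocuous-looking `exists` dispatches to the DBM's EXISTS method and reads database pages; on an untied hash it would be a pure in-memory operation.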

Since a single script is creating the file, and you don't need to worry about concurrent access while the file is being created, it might make sense to short-circuit the existence testing by adding an in-memory hash of valid keys.
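The idea might look like the following sketch (all names here are hypothetical, not from the original code): a plain hash %seen shadows the keys already written, so the common "have I stored this yet?" test never touches the tied DBM.

```perl
use strict;
use warnings;

my %seen;   # in-memory set of keys known to be in the database

# Store $value under $key only if we haven't stored it before.
# Returns 1 if a write happened, 0 if the key was short-circuited.
sub store_once {
    my ($db, $key, $value) = @_;
    return 0 if $seen{$key}++;   # memory-only existence test, no disk I/O
    $db->{$key} = $value;        # only genuinely new keys reach the DBM
    return 1;
}

# Usage: $db would be the tied hash; a plain hash stands in here.
my %fake_db;
store_once(\%fake_db, 'abc', 1);   # new key: writes, returns 1
store_once(\%fake_db, 'abc', 2);   # already seen: skipped, returns 0
```

The trade-off is memory: one entry in %seen per key. For a few hundred thousand short keys that is modest, and it replaces a disk-backed `exists` with a RAM lookup on the hot path.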

You might also consider using md5_base64(), which returns a 22-byte string instead of the 32-byte string returned by md5_hex(). Given the number of records you're dealing with, that will save you space and time.
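Both functions come from Digest::MD5; the difference is purely the encoding of the same 128-bit digest (the example URL below is made up):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex md5_base64);

my $url = "http://example.com/some/object";   # example key only

printf "hex:    %s (%d chars)\n", md5_hex($url),    length md5_hex($url);
printf "base64: %s (%d chars)\n", md5_base64($url), length md5_base64($url);

# md5_hex() always yields 32 characters; md5_base64() yields 22
# (base64 padding is stripped), saving 10 bytes per stored key.
```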

Replies are listed 'Best First'.
Re: Re: Performance quandary
by SwellJoe (Scribe) on Feb 26, 2002 at 13:33 UTC
    Thanks for your comments dws,

    I suspect you're probably right about an in-memory hash to speed lookups of existing keys. I will try this, in addition to the database structure changes suggested by tilly. I'm still spending some think time on how exactly to structure the db to minimize lookups while still keeping each object tiny and very simple to parse--everything I come up with seems to lead to more seeks/fetches than I already have, but the objects do get smaller. An in-memory hash of existing keys would probably remove a lot of extraneous seeks.

    As for the MD5, you haven't read all of my posts! The MD5 we generate is a key requirement of the program, not an option or just another way to generate a unique key. We have to match the Squid key for a given object--there is no way around that one. Besides, generating the MD5 is a minuscule part of the CPU time being used. Since I haven't posted any profiles lately, I'll do so here (as good a place as any):

    Total Elapsed Time = 36923.77 Seconds
      User+System Time = 35132.22 Seconds
    Exclusive Times
    %Time ExclSec CumulS #Calls sec/call Csec/c Name
     40.2   14127 47435. 543237   0.0260 0.0873 main::add_entry
     23.1   8134. 8133.2 543237   0.0150 0.0150 BerkeleyDB::Common::db_put
     21.2   7461. 7460.7 277763   0.0269 0.0269 BerkeleyDB::Common::db_get
     14.5   5121. 5123.0 109679   0.0047 0.0047 File::QuickLog::print
     0.56   198.3 35089. 265678   0.0007 0.1321 main::process_file
     0.18   62.28 105480  10282   0.0061 10.258 main::recurse_dir
     0.11   37.37 36.272 279524   0.0001 0.0001 main::find_parent
     0.08   27.52 25.366 543427   0.0001 0.0000 Digest::MD5::md5_hex
     0.01   2.889 2.849   10302   0.0003 0.0003 File::QuickLog::_datetime
     0.01   2.169 2.129   10300   0.0002 0.0002 IO::File::open
     0.00   0.180 0.817       5   0.0360 0.1634 main::BEGIN
     0.00   0.150 0.150       1   0.1500 0.1500 main::get_cache_dirs
     0.00   0.150 0.478       1   0.1498 0.4783 IO::import
     0.00   0.100 0.100      64   0.0016 0.0016 Exporter::import
     0.00   0.060 0.100       6   0.0100 0.0166 IO::Socket::BEGIN
    This is a full build of a quarter-million-object cache (the real world will have 1-3 million objects, but on a much faster machine with far more memory). MD5 generation is 25 seconds out of 36923.77 seconds... I'm not worried about MD5 time. ;-)
