Re: Daily Counters
by BrowserUk (Pope) on May 06, 2013 at 16:55 UTC
I had a similar requirement a few years ago, and back then the fastest mechanism available to me that provided shared access and fast lookup was to use the file system.
For the sake of discussion, assume that your userids consist of mixed-case ANSI alphanumerics -- i.e. an alphabet of 62 characters. If you have 10 million users and use the first 3 characters of their names as an index into a first level of subdirectories, that gives 62^3 = 238,328 first-level directories, so you'll have on average ~42 users in each second-level subdirectory -- so lookup is fast.
The directory structure looks like this:
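A sketch of the layout just described (usernames hypothetical): one first-level directory per 3-character prefix, one directory per user beneath it, and a single file per user whose *name* is the current count:

```
counters/
    Bro/
        BrowserUk/
            12345          <- file name IS the count
    and/
        anders/
            17
        andye/
            203
    ...
```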
And the process of lookup/increment is:
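A minimal shell sketch of that process, on the assumption above that the count is stored as the name of a single file in the user's directory (the userid and the `counters/` root here are hypothetical):

```shell
#!/bin/sh
# Sketch only: lookup/increment for one user, assuming the layout
# counters/<first-3-chars>/<userid>/<count>.
user="BrowserUk"                              # hypothetical userid
dir="counters/$(printf '%.3s' "$user")/$user"
mkdir -p "$dir"

# First visit: seed the counter file with a count of 0.
[ -n "$(ls "$dir")" ] || : > "$dir/0"

count=$(ls "$dir")                            # the file's name is the count
mv "$dir/$count" "$dir/$((count + 1))"        # atomic rename == increment
echo "$user: $((count + 1))"
```

If two processes race, only one rename succeeds; the loser simply re-lists the directory and retries -- which is the point of relying on rename's atomicity rather than locks.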
If your data is to persist, you are going to have to do at least one directory lookup to find the DB file -- and usually more than one -- so the directory lookup is effectively free. And as rename is atomic, the shared-data problems are taken care of without the need for time-costly locking and polling.
The more characters in the alphabet available for your userids, the better spread your directory structure and the faster the lookups. The only real restriction is that the alphabet must be compatible with your file system's naming conventions, which isn't usually a problem.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.