PerlMonks
Re: Google like scalability using perl?
by toma (Vicar) on Oct 11, 2009 at 18:21 UTC (#800559)
Another way to do this is to run Apache and mod_perl on different machines and split the large hash between them. The hash and the code stay in memory with mod_perl. You might also try memcached, as suggested above; the combination of memcached and mod_perl performs better than I expected.

For a large batch job farmed out to lots of nodes, the trick is to use something like Amazon's Simple Queue Service, which keeps the work from overlapping. I don't know of a perl module that implements this, so if your code has this feature it would make a great CPAN module by itself.
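One piece of splitting a hash between machines is deciding which machine owns which key. A minimal sketch of that routing step, using only core Perl; the host names are hypothetical and the actual network fetch from the chosen node is left out:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Hypothetical mod_perl backends, each holding one shard of the big hash
my @nodes = ('node1:8080', 'node2:8080', 'node3:8080');

# Deterministically map a key to the node that owns its shard
sub node_for_key {
    my ($key) = @_;
    my $n = unpack('N', md5($key));   # first 32 bits of the MD5 as an integer
    return $nodes[ $n % @nodes ];
}

print node_for_key('some_key'), "\n";
```

Every front end that uses the same node list routes a given key to the same backend, so each shard only ever lives on one machine. Note that simple modulo hashing reshuffles most keys when @nodes changes; consistent hashing avoids that, and memcached clients such as Cache::Memcached handle this distribution for you.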
It should work perfectly the first time! - toma
In section: Meditations