Applying the brakes
by Ryszard (Priest) on May 08, 2008 at 11:10 UTC
Ryszard has asked for the wisdom of the Perl Monks concerning the following question:
I have a (legitimate) need to send up to around 1.5 million emails twice a month (notification of a billing cycle). I ultimately want the solution to scale to a bit more than double the initial spec, to cover future requirements.
I will be getting a flat file of email addresses, and the idea is that I will connect to an SMTP server for delivery.
In order to scale, I figure parsing the list, chopping it up, and putting it in a database would allow multiple hosts to access the list and each process their part at the same time.
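A rough sketch of that chopping-up step might look like the following. The addresses here are generated stand-ins for the flat file, and the chunk count is an assumption; in practice each chunk would be loaded into its own DB table or partition so each host works only its own slice.

```perl
use strict;
use warnings;

# Stand-in for reading the flat file of addresses (names are made up).
my @addresses = map { "user$_\@example.com" } 1 .. 10;

my $n_chunks = 4;                       # one chunk per worker host (assumed)
my @chunks   = map { [] } 1 .. $n_chunks;

my $i = 0;
for my $addr (@addresses) {
    push @{ $chunks[ $i++ % $n_chunks ] }, $addr;   # round-robin split
}

printf "chunk %d: %d addresses\n", $_, scalar @{ $chunks[$_] }
    for 0 .. $#chunks;
```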
The issue I have is a requirement to throttle the requests on a per-second basis (so as not to choke the mail system).
My thoughts on the matter are to use Net::SMTP and threads, with some kind of counter to track the requests sent, and stop sending once the limit has been reached for that period.
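The counter idea above could be sketched like this in a single process: count sends within the current one-second window and sleep to the next second when the window is exhausted. The limit of 50/s is an assumed figure, and the delivery call is stubbed out; real code would drive Net::SMTP at that point.

```perl
use strict;
use warnings;
use Time::HiRes qw(time sleep);

my $limit      = 50;            # sends allowed per second (assumed figure)
my $sent       = 0;             # sends in the current window
my $window     = int(time);     # which second we are counting
my $total_sent = 0;

sub throttled_send {
    my ($addr) = @_;
    my $now = int(time);
    if ($now != $window) {                       # new second: reset counter
        ($window, $sent) = ($now, 0);
    }
    if ($sent >= $limit) {                       # window full: wait it out
        my $pause = ($window + 1) - time;
        sleep $pause if $pause > 0;              # sleep to the next second
        ($window, $sent) = (int(time), 0);
    }
    $sent++;
    $total_sent++;
    # $smtp->mail($from); $smtp->to($addr); ...  # real delivery goes here
}

throttled_send("user$_\@example.com") for 1 .. 120;
print "sent $total_sent messages\n";
```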
If someone has experience with this, I'd love to hear it.
Update: it seems as tho' I have actually jumped the gun on this. After chatting with the mail admins (I'm an apps guy), it appears the most scalable solution is to dump the messages straight into the mail queue and (in our instance) tune exim to work it out (as moritz has mentioned).
Still, as a point of interest, how could one throttle a socket to N actions/second? I guess a counter, and in the case of using threads, perhaps a shared variable? Are Perl threads safe enough for this?
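The shared-variable version could be sketched with threads and threads::shared: worker threads take "send slots" from a counter that is reset each second, with lock() making the check-and-increment atomic. All figures and names here are assumptions, and the actual Net::SMTP delivery is again only a comment.

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Time::HiRes qw(time sleep);

my $limit = 20;                    # sends allowed per second (assumed)
my $sent  :shared = 0;             # sends in the current window
my $epoch :shared = int(time);     # which second we are counting
my $done  :shared = 0;             # total sends, for the demo

sub take_slot {
    while (1) {
        {
            lock($sent);           # guards $sent, $epoch and $done together
            my $now = int(time);
            if ($now != $epoch) { ($epoch, $sent) = ($now, 0); }
            if ($sent < $limit)  { $sent++; $done++; return; }
        }
        sleep 0.01;                # window exhausted: back off briefly
    }
}

my @workers = map {
    threads->create(sub {
        for (1 .. 10) {
            take_slot();
            # a real worker would push one message through Net::SMTP here
        }
    });
} 1 .. 4;
$_->join for @workers;
print "workers sent $done messages total\n";
```

One caveat worth noting: this only throttles within one process, so with multiple hosts working the list, each host's limit would need to be a share of the mail system's overall budget.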