Exactly, server-side.
Squid is interesting, I think, but I don't just want to proxy connections; I'll need to do some programming as well.
One of the problems I have is that the Perl modules for S3 don't stream data, which means that if my application sends a 2 GB file to S3, the whole 2 GB gets loaded into memory before it is sent. That's bad. So I will store files over 500 MB locally and keep that info in a table.
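Something along these lines is what I have in mind (an untested sketch; the DSN, table name, and schema are all made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical 500 MB cutoff; anything bigger stays on local disk.
    my $THRESHOLD = 500 * 1024 * 1024;

    # Made-up DSN and credentials; substitute your own.
    my $dbh = DBI->connect('dbi:mysql:database=files', 'user', 'pass',
                           { RaiseError => 1 });

    sub record_file {
        my ($path, $size) = @_;
        my $location = $size > $THRESHOLD ? 'local' : 's3';
        # ... copy the file into local storage, or upload it to S3, here ...
        $dbh->do('INSERT INTO file_locations (path, size, location)'
               . ' VALUES (?, ?, ?)', undef, $path, $size, $location);
        return $location;
    }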
But while I'm building tables anyway, I might as well cache the most-used files locally on my server instead of sending users to fetch them from S3 and incurring the bandwidth charge: say, cache up to 100 GB (the size of the local drive) before rotating files out. That raises the question of how to determine which files should and shouldn't be in the cache; one simple rotation scheme is sketched below.
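For the rotation, the simplest thing I can think of is to walk the cache directory and evict the least-recently-accessed files until the total is back under the limit. A rough sketch (untested; assumes atime is usable, which it isn't on noatime mounts):

    use strict;
    use warnings;
    use File::Find;

    my $CACHE_DIR = '/var/cache/s3files';   # assumed location
    my $CACHE_MAX = 100 * 1024**3;          # 100 GB ceiling

    my (@files, $total);
    find(sub {
        return unless -f $_;
        # record full path, size, and last access time
        push @files, [ $File::Find::name, -s _, (stat _)[8] ];
        $total += $files[-1][1];
    }, $CACHE_DIR);

    # Evict least-recently-accessed files until under the limit.
    for my $f (sort { $a->[2] <=> $b->[2] } @files) {
        last if $total <= $CACHE_MAX;
        unlink $f->[0] or warn "could not evict $f->[0]: $!";
        $total -= $f->[1];
    }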
This all seems to me like something somebody must have done before; maybe not with S3, but just storing and caching content. Then again, maybe not...
---
Cache::Cache looks like it has some of the functionality you need, specifically in-memory caching for the smaller files, or caching on the local drive with a size limit. I haven't used it before, so someone else will have to comment on its stability and their experiences with it.
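If disk caching with a size limit is what you're after, Cache::SizeAwareFileCache from that same distribution looks like a fit: as I read the docs, you give it a max_size in bytes and it prunes the cache back down for you. Roughly (untested, and fetch_from_s3 is a stand-in for your own S3 code):

    use strict;
    use warnings;
    use Cache::SizeAwareFileCache;

    my $cache = Cache::SizeAwareFileCache->new({
        namespace  => 's3mirror',
        cache_root => '/var/cache/s3files',  # assumed location
        max_size   => 100 * 1024**3,         # 100 GB, the size of your drive
    });

    my $key  = 'bucket/path/to/object';
    my $data = $cache->get($key);
    unless (defined $data) {
        $data = fetch_from_s3($key);   # stand-in for your S3 fetch
        $cache->set($key, $data);
    }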
---
echo S 1 [ Y V U | perl -ane 'print reverse map { $_ = chr(ord($_)-1) } @F;'
Warning: Any code posted by tuxz0r is untested, unless otherwise stated, and is used at your own risk.