http://www.perlmonks.org?node_id=301094


in reply to Re: Re: Re: "The First Rule of Distributed Objects is..."
in thread Multi tiered web applications in Perl

Sometimes you don't know ahead of time which parts will get used the most. Usually products evolve. And using mod_proxy or the like is a kludge. What happens when you move something more than once?

It's not a kludge. Reverse proxying is widely used, and so is hardware load balancing. IBM will be glad to sell you a commercial reverse proxy server if you prefer, but it's all the same idea. It's also trivial to change how URLs are handled: you can send all requests for /foo/ to a special set of servers with a single mod_rewrite line, and even if you change it a million times, no one on the outside will ever know or have to update a bookmark.
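For example (a rough, untested sketch; the internal hostname and the /foo/ path are made up, and the [P] flag needs mod_proxy loaded), the front-end Apache could route that subtree with something like:

    # httpd.conf on the public-facing reverse proxy
    RewriteEngine On

    # Send everything under /foo/ to the pool that now hosts it.
    # [P] proxies the request internally, so clients never see the move.
    RewriteRule ^/foo/(.*)$ http://foo-pool.internal/foo/$1 [P,L]

    # Rewrite any redirects the backend issues so they point back at the proxy.
    ProxyPassReverse /foo/ http://foo-pool.internal/foo/

Move /foo/ again later and you change one hostname; the public URLs stay the same.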

Things like static content never need to hit database resources.

Of course. That's why I said you should keep your static web content separate from your dynamic requests. But this doesn't have much to do with logical tiers vs. physical tiers.
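As a sketch of that split (the hostname and paths here are invented), a lightweight front-end serves the static files itself and proxies only the dynamic URLs back to the heavy application servers:

    # Front-end server: serve images/CSS/HTML locally,
    # proxy only the dynamic part of the URL space.
    DocumentRoot /var/www/static

    ProxyPass        /app/ http://modperl-backend.internal/app/
    ProxyPassReverse /app/ http://modperl-backend.internal/app/

The static requests never touch the mod_perl processes or the database, regardless of how the tiers behind /app/ are arranged.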

OK, for instance, let's say your login page is really fast, and so is your page after auth. Now let's say your preferences page is REALLY slow. It takes up lots of resources since it gets hit a lot. The logic that is so slow because it gets hit so much can be moved to its own pool. Now you have one set {login, homepage} and another {preferences}, which can live in two different pools.

Okay, what did we gain from that? If these were sharing resources on two machines before and were slow, we will now have one under-used machine and one overloaded machine. The balance is worse than it was.

An in-machine IPC call is an order of magnitude faster than a pooled network connection, but in terms of user experience the difference is so small that you can hardly notice it.

If every request takes a tenth of a second longer than it did, no single user will have a slow experience, but the scalability (in terms of requests that can be handled per second) will suffer in a big way. A process that serves a page in 100ms can handle about ten requests per second; add another 100ms of remote-call overhead and it handles five, so you need roughly twice as many processes or servers to carry the same load.

No, it doesn't. They are called transfer objects: just a basket where you say "I want NN" and it all comes back in one request.

Forcing every communication between objects to be something that can be handled in one call just isn't a good design. Ideal OO design involves many small objects, not a few monolithic ones.
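To make the pattern concrete, here is a minimal Perl sketch (the package name, method, and data are invented for illustration): a transfer-object style call bundles everything one screen needs into a single coarse-grained request.

    package UserService;
    use strict;
    use warnings;

    # Stand-in for whatever actually fetches the data (DBI, RPC, SOAP, ...).
    sub _fetch_user {
        my ($user_id) = @_;
        return {
            name        => 'Example User',
            email       => 'user@example.com',
            preferences => { timezone => 'UTC', theme => 'plain' },
        };
    }

    # "Transfer object" style: one coarse call returns a plain data
    # structure with everything the preferences page needs, so a remote
    # caller pays for only one round trip.
    sub get_preferences_view {
        my ($class, $user_id) = @_;
        my $user = _fetch_user($user_id);
        return {
            name        => $user->{name},
            email       => $user->{email},
            preferences => $user->{preferences},
        };
    }

    1;

It does cut the chatter down to one call per page, but the interface becomes a bag of fields shaped around one screen rather than a set of small, reusable objects, which is exactly the trade-off at issue here.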

Ah... that's the thing: evenly. You don't want everything running evenly. If Slashdot could separate out, say, its front-page logic from its comment logic, then the front page would always be speedy and the comments section would run at its own relative speed. As more people do comment-y stuff, the home page stays quick.

You keep talking about putting separate pages on different machines, but this conversation was originally about tiers, i.e. page generation, business logic, database access objects. Most dynamic pages will need all of these for any given request.

It sounds like you are saying that you want to be able to sacrifice parts of a site and let them have rotten performance as long as you can keep other parts fast. I don't think that's a common goal, and I wouldn't call that scalability (how can you say the site is scaling if part of it is not scaling?), but it can easily be done with mod_rewrite or a load balancer directing the comments requests to some specific servers. (Incidentally, Slashdot caches its front page and serves it as a static page unless you are logged in.)

b bogs down t to the point of "slow". You add another server. Things get "better", but imagine if you tiered it. You have three machines: one that handles s, one that handles b, and one that handles t. The t machine will always run fast. And as more people use b, you add more resources for b. But as b continuously gets more and more people, t NEVER slows down. THAT slowdown is what you want to avoid.

The only way this could actually be an advantage is if you are willing to let b get overloaded and slow, as long as t does not. That is not a common situation at the sites where I've worked.

You don't want to add to the entire pool and have to speed up everything in one fell swoop. It's the same reason you have a 3D video card and a CPU completely separate.

The difference is that those are not interchangeable resources, i.e. splitting your rendering across the two of them doesn't work well since one of them is much better at it than the other is. In the case of identical servers with general resources like CPU and RAM, each one is equally capable of handling any request.

But you can't refute that if T stays simple and fast while B gets more complex, T would be unaffected. :)

I agree, but I think that if you added the necessary resources to keep B running fast (as opposed to just letting it suck more and more), then T would be unaffected in either scenario.

Better be nice to the GF! That's one area where load-balancing is extremely problematic...