|No such thing as a small change|
Example: say you are implementing business logic for an application with multiple frontends (CLI, web, and GUI). This part of your application encounters an error (say, a database connection error). What should it do? Print an HTML error page? Produce a plain-text formatted error message for the CLI?
You said this was modular code, right? Right?
Doesn't sound like it to me. If it was, you'd just call the routine that produces an error message, and let that routine use its Big Switch Statement to decide how exactly to do that. (If the routine that prints an error message encounters an error while doing so, you'd probably consider that fatal and die.)
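A minimal sketch of that routine in Perl, just to make the idea concrete. The `$FRONTEND` variable, the frontend names, and the output formats are all invented for illustration, not anyone's real API:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed global mode switch, set once at application startup.
our $FRONTEND = 'cli';

# One routine holds the Big Switch Statement; callers just pass the
# message and never need to know which frontend is active.
sub report_error {
    my ($msg) = @_;
    if ($FRONTEND eq 'web') {
        return "<html><body><p class=\"error\">$msg</p></body></html>";
    }
    elsif ($FRONTEND eq 'cli') {
        return "error: $msg\n";
    }
    elsif ($FRONTEND eq 'gui') {
        return "DIALOG: $msg";   # stand-in for popping up a dialog box
    }
    else {
        # An error while reporting an error: treat as fatal, as above.
        die "report_error: unknown frontend '$FRONTEND'";
    }
}

print report_error('database connection failed');
```

The caller's code is identical no matter which frontend is running; only the one routine changes behaviour.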
This sounds pretty similar to the exception-throwing way of doing things if all you're doing is printing an error message. The more common case, though, is one where you have to actually do something, such as ask the user for better input. In that case, you call a routine that asks the user for input, and it uses a switch statement to decide which routine to call: the one that puts up a dialog box; the one that prints the question on STDOUT and reads the answer from STDIN; the one that renders a web form and waits for the form-processing script to signal it with an answer (identifying this instance by matching a hidden field token in the form against a list of such tokens and corresponding PSIDs); or the one that posts the question to Usenet and checks periodically for a response. Whichever routine you call, it has its own ideas about how to make as many attempts as necessary until it gets valid data, at which point it returns that answer to the caller.
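The switch here is naturally a dispatch table mapping frontend to asker routine. A sketch, assuming a `cli` frontend and stubbing out the others; the routine names, the `$FRONTEND` variable, and the optional-filehandle trick are all made up for this example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $FRONTEND = 'cli';   # assumed global mode switch, set at startup

# The CLI asker: retries until the validator accepts an answer.
# The optional filehandle (defaulting to STDIN) is only there so the
# sketch can be exercised without a real terminal.
sub ask_via_terminal {
    my ($question, $validate, $in) = @_;
    $in ||= \*STDIN;
    while (1) {
        print "$question ";
        my $answer = <$in>;
        return undef unless defined $answer;   # EOF: give up
        chomp $answer;
        return $answer if $validate->($answer);
        print "Sorry, try again.\n";
    }
}

# Stand-ins for the other frontends' askers.
sub ask_via_dialog { die "GUI asker not implemented in this sketch" }
sub ask_via_form   { die "web asker not implemented in this sketch" }

my %ask_via = (
    cli => \&ask_via_terminal,
    gui => \&ask_via_dialog,
    web => \&ask_via_form,
);

# The caller only ever sees a validated answer; each asker decides for
# itself how to keep retrying until it gets one.
sub ask_user {
    my ($question, $validate) = @_;
    my $asker = $ask_via{$FRONTEND}
        or die "no input routine for frontend '$FRONTEND'";
    return $asker->($question, $validate);
}
```

The point is the same as with the error message: the business logic calls `ask_user` and never knows whether the answer came from a terminal, a dialog box, a web form, or Usenet.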
I'm starting to think maybe we're doing the same thing in opposite ways. You're reducing complexity by moving all knowledge of what's going on into the caller, and I'm reducing complexity by leaving the caller with knowledge only of what it wants to do with the data and moving all knowledge of where the data comes from into the subroutine. It may be a different paradigm.
In some ways, your approach reminds me very much of my brief experiments with the event-oriented paradigm when I took courses in two "fifth-generation" languages in college. As you probably know, event-orientation turns everything around by making user input the caller and the program's logic the callee. I have a bad taste in my mouth for this approach, possibly because the only languages I've used that do things that way are VB and Lingo, both of which I loathe, especially Lingo. Though now that I think of it, it might be possible to do something like it in Perl, sort of, and that might not be so bad. [ponders this]
I'm not sure how to fall off the end of the main code block (which presumably would initialise the objects and stuff) while leaving the various objects in place, however. It might be necessary to use some kind of big loop (which I suppose is probably what VB and Lingo do under the hood anyway). Come to think of it, the Inform standard library works something like that; I never thought of it as event-oriented, but now that I think about it, it really is. I'm sure I could do something like that in Perl (though I could never write anything as unbelievably complex as the Inform standard library; I'm convinced Graham Nelson is a genius).
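For what it's worth, the "big loop" version of this is easy enough to sketch in Perl. Everything here (the handler hash, the event format, the routine names) is invented for illustration, and this is certainly not how VB or Lingo do it under the hood, but it shows how the objects survive the end of the setup code: the dispatcher holds references to them, and "falling off the end" of the main block just means entering the loop.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Registry of event handlers; holding the code references here is what
# keeps them alive after the setup code finishes.
my %handlers;

# Register a handler for a named event.
sub on {
    my ($event, $code) = @_;
    $handlers{$event} = $code;
}

# Look up and invoke the handler for one event, if any.
sub dispatch {
    my ($event, @args) = @_;
    my $h = $handlers{$event} or return;
    $h->(@args);
}

# The big loop. A real program would block on user input here; this
# sketch just replays a canned list of [event, args...] arrayrefs.
sub run {
    my (@events) = @_;
    dispatch(@$_) for @events;
}

# "Setup" phase: register handlers, then hand control to the loop.
on('greet', sub { print "hello, $_[0]\n" });
run(['greet', 'world']);
```

Once control is inside `run`, the program's logic really is the callee: nothing happens until an event names a handler, which is exactly the inversion described above.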
Am I completely out in left field here, or am I beginning to understand?