http://www.perlmonks.org?node_id=496326


in reply to pissed off about functional programming

I really don't understand what you're trying to get at in #4. Sure, you can view a changing variable as a function of time, keeping a list of the changes. But the point of FP is that it makes explicit that you've got that list of changes; you can no longer ask for "the value of x" and get one of the many values; you have to ask for "the value of x at time t," which forces you to change your design, sure, but gives you the benefit that "the value of x at time t" will always be the same, everywhere in the program. It basically brings out into the open what was once a piece of hidden state about x (time).
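To make that concrete, here's a tiny Ruby sketch of my own (the variable names are illustrative, not from the parent post) contrasting "the value of x" with "the value of x at time t":

```ruby
# Mutable style: "the value of x" depends on when you ask.
x = 1
x = x + 2                # x is now 3; the old value is simply gone

# Functional style: the history is explicit, and you ask for x at time t.
xs = [1]                 # xs[t] is "the value of x at time t"
xs = xs + [xs.last + 2]  # builds a new list; nothing is overwritten

xs[0]  # => 1, the same answer everywhere in the program
xs[1]  # => 3
```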

I sense from your comments about integers and Church's thesis that you're pointing out that something different goes on under the hood than what you really see. If that's the case, sure, but that's the *whole point*.

To go back to the simple example, yes, we all know what happens in C when you try to assign 2^32+1 to a 32-bit unsigned integer. That's bad. That's why in many, many languages (I'll use Ruby as an example, since I know it reasonably well) we don't usually deal with what's actually going on inside the box when we manipulate integers. If in Ruby I say "3 + 2" I get "5", and it's probably doing 32-bit integer arithmetic. If I say "2 << 35", the numbers may well be 32-bit integers internally, but certainly the result isn't. Yet everything continues to work; I get the right answer and I can use it in further calculations. It's all kept under the hood. I can reason much more easily about my program because I don't have to worry about integers being bigger or smaller than a certain size.
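For instance (a sketch; Ruby's internal integer representation varies by version, but the observable behavior is the point):

```ruby
small = 3 + 2    # => 5, almost certainly machine-word arithmetic inside
big   = 2 << 35  # => 68719476736, far too big for a 32-bit integer

# The promotion to an arbitrary-precision representation is invisible;
# the result keeps working in further calculations, with no overflow
# for the programmer to reason about.
big * big        # => 4722366482869645213696
```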

That, it seems to me, is the whole point of FP; it's to make it easy to reason about your programs by putting certain constraints on them and hiding certain things you don't want to worry about. This process was started a long time ago by doing things like replacing dynamic with lexical scoping (BTW, hello? Who, besides Perl, uses dynamic scoping nowadays?).

So I really don't see the point of this rant. When you take out the "it's hidden but I'll look at it anyway," stuff, the dynamic scoping straw man (nobody forces it, and neither Haskell nor Scheme nor many others even support it!), and the "this was made explicit but I'll pretend that that wasn't the whole point" stuff, there doesn't seem to be much left here.

Replies are listed 'Best First'.
Re: #4 Confuses Me
by mstone (Deacon) on Oct 01, 2005 at 19:17 UTC

    But the point of FP is that it makes explicit that you've got that list of changes

    No, it doesn't... at least not inescapably so. The list can be a series of evaluation contexts from nested function calls. That list of contexts isn't a first-order programming construct like a variable or a function ref, so you can't step back through it to see your previous values. Nor is the index 't' a first-order construct, since it's represented implicitly by the level of nesting. So what was hidden in an lvalue can remain hidden in FP; it just hides in the call stack rather than in a register.
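    A sketch of that hiding, in Ruby (the function is my own illustration): each recursive call makes a fresh binding, and every "previous value" of the accumulator exists only in a suspended stack frame, with no first-order way to index into the history.

```ruby
# Sum 1..n without ever mutating a variable: each call gets a fresh `acc`,
# and each earlier value of `acc` lives only in a suspended stack frame.
def sum_to(n, acc = 0)
  return acc if n.zero?
  sum_to(n - 1, acc + n)  # "t" is the nesting depth -- and you can't index it
end

sum_to(4)  # => 10
```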

    the dynamic scoping straw man (nobody forces it, and neither Haskell nor Scheme nor many others even support it!)

    No, but they do support closures and/or currying. Both of those create a chain of evaluation contexts that's basically a non-first-order list of previous values with a non-first-order implicit index. They offer all the benefits and disadvantages of regular lvalues and/or dynamic scoping, just wrapped in a different kind of syntactic sugar. And both of them shoot the analytic simplicity of lexical/static scoping right through the head, because once again you have values that can only be determined in the appropriate runtime context, rather than from direct inspection of the code.
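    For example (Ruby again, with a counter closure of my own devising), a closure's captured context can carry state that direct inspection of the call site will never reveal:

```ruby
def make_counter
  count = 0           # survives after make_counter returns, invisibly
  -> { count += 1 }   # each call mutates the captured evaluation context
end

tick = make_counter
tick.call  # => 1
tick.call  # => 2 -- the result depends on runtime history, not on the code
```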

    It's precisely that kind of "we don't have that problem" tunnel vision that set me to ranting in the first place. I want people to be aware of the operational realities of programming instead of comparing checklists of syntactic sugar features without taking the time to think about what those features actually do.

    The reason #4 confuses you is that you're too wrapped up in the superficial features of the immediate example to think about the fundamental issues of programming they represent, and then think about how those issues can pop up in whatever syntactic system you happen to use. That kind of superficiality is unfortunately common in some parts of FP culture, and it has the capacity to really piss me off if you expose me to it long enough.

      Ok: then how is the (at compile time unknown) state of a closure different from the state kept in an object? That you have to look for the value outside the lexical context is part of the intended behaviour, and no one who can write a closure would be surprised or expect a different behaviour. I think your argument is trampling on a straw man a little. Correct me if I'm wrong, but it seems you are claiming FP promises much simpler behaviour, by deliberately misunderstanding FP's design principles. There is a clear difference between constructing closures and mutable variables, even if the behaviour of a function/closure is determined non-(source)-locally at runtime. If somebody exaggerated or misrepresented FP to you, that's not FP's problem, but yours, because you obviously seem to know better.

        Ok: then how is the (at compile time unknown) state of a closure different from the state kept in an object?

        I think we've officially reached the point of confusion. You've just paraphrased part of my own argument back to me as a refutation of what I said.

        In answer to your direct question, the value stored in a closure is topologically bound to the execution tree more tightly than the value stored in an object. You can make an object, pass it to a function, and have it come back with a different value than it held when the function was invoked. You can't do that with a properly functional closure, though, because FP only allows information to flow up the execution tree in the form of return values. With a little thinking, you can also find differences between the ways closure values and object values move from one branch of the execution tree to another.
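        The difference can be sketched in Ruby (the names here are illustrative, not anyone's canonical example):

```ruby
# An object passed down the execution tree can come back holding a new value:
def clobber(obj)
  obj[:x] = 99        # information flows back up outside the return value
end

state = { x: 1 }
clobber(state)
state[:x]             # => 99

# A value captured by a purely functional closure can't be rebound that way;
# it influences the caller only through what the function returns.
make_adder = ->(n) { ->(m) { n + m } }
add5 = make_adder.call(5)   # n is fixed when the closure is created
add5.call(3)                # => 8
```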

        More generally, the closure obeys the FP constraint that all identifiers will remain substitutable within the same frame of reference, and the object doesn't.

        Correct me if I'm wrong, but it seems you are claiming FP promises much simpler behaviour, by deliberately misunderstanding FP's design principles.

        Okay, allow me to correct you: I don't claim that FP expects simpler behavior from closures.

        I did note that closures, currying, and so on break the expected simplicity of lexical scoping, but those claims of simplicity aren't original to me. The idea that you should be able to learn the value of a variable, or predict the behavior of a program, from direct inspection of the code is one of lexical scoping's major selling points. That isn't a comment about FP itself, though.

        I also tried to point out that closures and currying have enough power to simulate dynamic scoping, which makes it a mistake to say that languages like Haskell and Scheme -- which use lexically-scoped variables, but also support things like closures and currying -- are automatically immune to the problems of dynamic scoping. No, you don't get dynamic-scope problems from the lexically-scoped variables, but you can get them from closures and curried functions. The syntactic sugar will look different, but the underlying problem will be the same.
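        Here is one way to roll your own dynamic scope in Ruby using nothing but closures over a mutable cell (the helper names are mine, and this is deliberately a re-creation of the hazard, not idiomatic style):

```ruby
# A binding stack shared by two closures reproduces dynamic scoping:
# `lookup` answers with whatever the innermost active `bind` pushed.
def make_env
  stack  = []
  bind   = lambda { |v, &body| stack.push(v); body.call; stack.pop }
  lookup = lambda { stack.last }
  [bind, lookup]
end

bind, lookup = make_env
report = -> { lookup.call }  # which value? only the call history can say

bind.call(1) { puts report.call }                   # prints 1
bind.call(2) { bind.call(3) { puts report.call } }  # prints 3: the innermost
                                                    # dynamic binding wins
```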

        Of course, you can avoid rolling your own versions of those problems, but to do that, you have to admit they exist. A person who says, "I don't have to worry about dynamic scope issues because my language is statically scoped, '(the dynamic scoping straw man (nobody forces it, and neither Haskell nor Scheme nor many others even support it!))'," is pretty much begging to discover -- the hard way -- that yes, Haskell and Scheme do give you the power to roll your own version of those problems by other means.

        Most people don't realize how few concepts it takes to make a Turing-complete language, or how many different times and ways those concepts appear in a high-level programming language. They don't understand how things that look different -- like variable scoping strategy, closures, and currying -- can be different implementations of the same basic idea. Some, like the AM who made the original post in this thread, write the whole business off as, 'the "it's hidden but I'll look at it anyway," stuff'.

        Those are the people who exaggerate and misrepresent FP, and there are a lot of them. Enough, in fact, that it's hard for someone trying to learn about FP to avoid them. They keep parroting the misinformation back and forth to each other and passing it on to the newcomers, who pick up the mistakes and the zeal based on the time they've spent in the echo chamber, instead of their own direct knowledge of programming.