Perl: the Markov chain saw PerlMonks

### pissed off about functional programming

by mstone (Deacon)
 on Apr 24, 2005 at 11:46 UTC

Okay.. fair warning: I'm venting.

I just had a long and very frustrating conversation with a young programmer who recently discovered functional programming, and thinks it can solve every problem in the world.

Before I go any farther, let me make one thing clear: I do not hate functional programming. On the contrary, I agree with every guru out there who says that you can't become a Real Programmer without learning functional programming. FP, in my never humble opinion, doth rock.

But I have to call bullshit on some of the misleading things that are commonly said about FP, because I've just had my head slammed against them in their most clueless and trivial form.

## Myth #1 - Functional programming is Lambda Calculus

This is true for a specific value of 'true'.. the same one we use for the statement, 'computers do integer math'.

The limits of this truth become obvious when you try to store the value (2^N)+1 in an N-bit integer. The pedantically correct statement is, 'computers do integer math for operations whose values fall within a certain range'. We generally leave off the qualifier for convenience, but its importance can be summed up in three letters: Y2K. Or in terms of more recent events, 'Comair database'.
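The 'integer math within a certain range' qualifier is easy to watch in action. Here's a small sketch (my own illustration, not from the original post) that emulates an N-bit register with `pack`/`unpack`, since Perl's native numbers quietly promote past the boundary:

```perl
use strict;
use warnings;

# Emulate a 32-bit signed register: pack truncates the value to 32 bits,
# unpack reads those bits back as a signed integer.
sub as_int32 { return unpack('l', pack('l', $_[0])) }

my $n = 2**31;
print as_int32($n - 1), "\n";   # 2147483647 -- still in range
print as_int32($n + 1), "\n";   # wraps negative: the register can't hold (2^31)+1
```

The first call round-trips cleanly; the second comes back negative, which is exactly the 'blows chunks outside the defined range' behavior the analogy is about.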

Functional programming approximates lambda calculus the way computers approximate integer math. It works just fine in the range where it's defined to work just fine, and blows chunks everywhere else. This is hardly a fatal limitation, but it does mean we should know where the limits are, and what they mean.

For functional programming, the limits mean you have to be aware of the simultaneity constraints inherent in lambda calculus, and the way they interact with the lazy evaluation techniques that are fundamentally necessary to implement FP in any kind of real-world computer.

Okay.. that's two pieces of vocabulary..

Simultaneity means that we assume a statement in lambda calculus is evaluated all at once. The trivial function:

```
^f(x) ::= x f(x)
```

defines an infinite sequence of whatever you plug in for x (I'm using the caret as a substitute for lambda because I'm not in a mood to mess with non-7-bit ASCII values at the moment). The stepwise expansion looks like this:

```
0   - f(x)
1   - x f(x)
2   - x x f(x)
3   - x x x f(x)
...
```

and so on.

The point is that we have to assume that the 'f()' and 'x' in step three million have the same meaning they did in step one.

At this point, those of you who know something about FP are muttering "referential transparency" under your collective breath. I know. I'll beat up on that in a minute. For now, just suspend your disbelief enough to admit that the constraint does exist, and the aardvark won't get hurt.

The problem with infinite expansions in a real-world computer is that.. well.. they're infinite. As in, "infinite loop" infinite. You can't evaluate every term of an infinite sequence before moving on to the next evaluation unless you're planning to take a really long coffee break while you wait for the answers.

Fortunately, theoretical logic comes to the rescue and tells us that preorder evaluation will always give us the same results as postorder evaluation.

More vocabulary.. need another function for this.. fortunately, it's a simple one:

```
^g(x) ::= x x
```

Now.. when we make the statement:

```
g(f(x))
```

Preorder evaluation says we have to expand f(x) completely before plugging it into g(). But that takes forever, which is.. inconvenient. Postorder evaluation says we can do this:

```
0   - g(f(x))
1   - f(x) f(x)
2   - x f(x) x f(x)
3   - x x f(x) x x f(x)
...
```

which is a heck of a lot nicer.

The technique of lazy evaluation takes the theoretical correctness of postorder evaluation and applies it to the execution of code. The basic idea is that the language interpreter won't execute the code necessary to evaluate expansion step 300 until some part of the program actually calls for that value. This technique amortizes quite nicely, since it saves us, on average, an infinite amount of labor for each function.
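The thunk trick behind lazy evaluation can be sketched in a few lines of Perl (my own sketch, not from the post): represent the infinite expansion as a pair of a head value and a closure that produces the rest, and only force the closure when someone actually asks.

```perl
use strict;
use warnings;

# The expansion of f(x) ::= x f(x) as a lazy stream: one visible term,
# the rest hidden behind a closure that nobody has run yet.
sub f_stream {
    my $x = shift;
    return [ $x, sub { f_stream($x) } ];
}

# Force only the first $n terms; step three million is never computed
# unless someone actually asks for it.
sub take {
    my ($s, $n) = @_;
    return () if $n <= 0;
    my ($head, $tail) = @$s;
    return ($head, take($tail->(), $n - 1));
}

print join(' ', take(f_stream('x'), 5)), "\n";   # x x x x x
```

The program terminates despite the stream being conceptually infinite, because each `take` step forces exactly one more thunk.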

Now.. with those definitions in place, you can see how the relationship between simultaneity and lazy evaluation isn't quite as simple as it appears at first glance. It's theoretically possible that the 'f()' or 'x' might change between the lazy evaluation of step one and the lazy evaluation of step three million. Trying to prevent that is what we programmers call a 'hard' problem.

Lambda calculus doesn't have this problem, the way integer math doesn't have register overflow issues. In lambda calculus, the infinite expansion of every function occurs instantly, and there's no way that any of the functions can possibly change because there's no time for them to change. Lazy evaluation brings time into the picture, and introduces order-of-execution issues. Trying to run functional logic in two or more simultaneous threads can raise serious problems, for instance.

In point of fact, the simultaneity constraints inherent in lambda calculus have been shown to make it unsuitable for issues involving concurrency (like multithreading), and for problems like that, it's better to use pi-calculus.

## Myth #2 - Functional programming is 'different from' imperative programming.

There's some operator overloading in this issue, because 'imperative programming' has two different meanings.

On one hand, 'imperative' means 'a list of operations that produce a sequence of results'. We contrast that with 'declarative programming', which means 'a list of results which are calculated (somehow) each step of the way'. Mathematical proofs and SQL queries are declarative. Most of the things we think of as programming languages are 'imperative' by this meaning, though, including most functional languages. No matter how you slice it, Haskell is not declarative.

On the other hand, 'imperative' is also used as a contrast to 'functional', on the basis of really poorly defined terms. The most common version runs something like, "imperative languages use assignment, functional languages don't".. a concept I'll jump up and down upon later. But I'm getting backlogged, so we'll queue that one up and move on.

Getting back to the current myth, this is another one of those statements that's true for a given value of 'true'. In this case, it's the same value we use for the statement, "red M&Ms are 'different from' green M&Ms."

Church's thesis, upon which we base our whole definition of 'computing', explicitly states that Turing machines, lambda calculus, while programs, Post systems, mu-recursive grammars, blah-de-blah-de-blah, are all equivalent.

For those quick thinkers out there who've noticed that this puts Myth #2 in direct contradiction with Myth #1, KA-CHING! you've just won our grand prize! Please contact one of the operators standing by to tell us how you'd like your sheepdog wrapped.

Yes, the syntactic sugar of functional programming is somewhat different from the syntactic sugar of non-functional programming. Yes, functional programming does encourage a different set of techniques for solving problems. Yes, those techniques encourage different ways of thinking about problems and data.

No, functional programming is not some kind of magic pixie dust you can sprinkle on a computer and banish all problems associated with the business of programming. It's a mindset-altering philosophy and/or worldview, not a completely novel theory of computation. Accept that. Be happy about it. At least friggin' COPE.

## Myth #3 - Functional programming is referentially transparent

This one is just flat-out wrong. It's also the myth that pisses me off the most, because the correct statement looks very similar and says something incredibly important about functional programming.

Referential transparency is a subtle concept, which is founded on two other concepts: substitutability and frames of reference.

More vocabulary..

Substitutability means you can replace a reference (like a variable) with its referent (the value) without breaking anything. It's sort of the opposite of removing magic numbers from your code, and is easier to demonstrate in violation than in action:

```
my $x = 3;
my $y = 3;

if (3 == $x) {                  # substitutability works
    print "$x equals $y.\n";
}

$x++;

if (3 == $y) {                  # substitutability fails
    print "$x equals $y.\n";
}
```

Mutable storage (aka: assigning values to variables) offers an infinite variety of ways to make that kind of mistake, each more heavily obfuscated than the last. Assigning values to global variables in several different functions is a favorite. Embedding values (using $x to calculate some value that gets stored in $z, changing $x, and forgetting to update $z) is another. These problems are so common, and are such a bitch to deal with, that programmers have spent decades searching for ways to avoid them.
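The embedding trap is small enough to sketch (hypothetical variables, for illustration only):

```perl
use strict;
use warnings;

# $z quietly embeds the value of $x, and nothing in the code
# records that dependency.
my $x = 3;
my $z = $x * 10;   # $z is calculated from $x

$x++;              # $x moves on...

print "$x $z\n";   # ...but $z still holds the stale embedded value: 4 30
```

Nothing here is marked as an error by the language; the stale `$z` just sits there waiting for some later conditional to trust it.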

Frames of reference have a nice, clear definition in predicate calculus, but it doesn't carry over to programming. Instead, we have two equally good alternatives. On one hand, it can mean the scope of a variable. On the other hand, it can mean the scope of a value stored in a variable:

```
my $x = 3;      # start frame of reference for variable $x
                # start frame of reference for value $x==3

my $y = 3;      # start frame of reference for variable $y
                # start frame of reference for value $y==3

if (3 == $x) {
    print "$x equals $y.\n";
}

# end frame of reference for value $x==3
$x++;           # start frame of reference for value $x==4

if (3 == $y) {
    print "$x equals $y.\n";
}

# end frame of reference for value $x==4
# end frame of reference for variable $x
# end frame of reference for value $y==3
# end frame of reference for variable $y
```

The notation above shows exactly why substitutability fails in the second conditional. The frame of reference for the value $x==3 ends and a new one begins, but it happens implicitly, which means it's hard to see.

But there's another problem. If you look at that list of 'end' statements at the bottom, you'll notice that the frames of reference for $x and $y are improperly nested. The frame of reference for $x comes into existence before $y, and goes out of existence before $y. Granted, I did that specifically so I could talk about the problem, but you can do all sorts of obscene things to the nesting of value frames of reference with three or more variables.

Stepping back for a second, it's clear that substitutability is only guaranteed to work when all the values are in the same frame of reference as when they started. As soon as any value changes its frame of reference, though, all bets are off.

This is one of the biggest kludge-nightmares associated with mutable storage. Frames of reference for values pop in and out of existence, can be implicitly created or destroyed any time you add/delete/move/change a line of code, and get munged into relationships that defy rational description.

And don't even get me started on what conditionals do to them. It's like Schrödinger's cat on bad acid.

Functional programming changes that by declaring that the frame of reference for a variable and the frame of reference for its value will always be the same. The frame of reference for a value equals the scope of its variable, which means that every block is a well-defined frame of reference and substitutability is always guaranteed to work within a given block.
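In Perl dress, that discipline looks like shadowing instead of mutation (my sketch, not how any particular FP language spells it): a 'new value' for $x means a new block with a new binding.

```perl
use strict;
use warnings;

# One binding per frame: within any single block, $x is always
# substitutable with exactly one value.
my $x = 3;
my $seen_inner;

{
    my $x = $x + 1;     # new frame; the RHS still sees the outer $x (3)
    $seen_inner = $x;   # inside this block, $x is 4 everywhere
}

print "$seen_inner $x\n";   # 4 3 -- the outer frame was never disturbed
```

The block boundaries tell you exactly where each `$x` means what, which is the whole point of equating blocks with frames of reference.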

This is what functional programming really has over mutable storage. But it isn't referential transparency.

Y'see, referential transparency is a property that applies to frames of reference, not to referents and references. To demonstrate this in terms of formal logic, let me define the following symbols:

```
p1  ::= 'the morning star' equals 'the planet venus'
p2  ::= 'the evening star' equals 'the planet venus'
p3  ::= 'the morning star' equals 'the evening star'

(+) ::= an operator that means 'two things which equal
        the same thing, equal each other'

=>  ::= an operator which means 'this rule produces this result'
```

With those, we can generate the following statements:

```
p1 (+) p2 => p3
p1 (+) p3 => p2
p2 (+) p3 => p1
```

which is all well and good. But, if we add the following symbols:

```
j1  ::= john knows 'the morning star' equals 'the planet venus'
j2  ::= john knows 'the evening star' equals 'the planet venus'
j3  ::= john knows 'the morning star' equals 'the evening star'
```

we can not legally generate the following statements:

```
j1 (+) p2 => j3
p1 (+) j2 => j3
j1 (+) p3 => j2
p1 (+) j3 => j2
j2 (+) p3 => j1
p2 (+) j3 => j1
```

The word 'knows' makes the 'j' frame of reference referentially opaque. That means we can't assume that the 'p' statements are automatically true within the 'j' frame of reference.

So what does this have to do with functional programming? Two words: dynamic scoping. Granted, the following is Perl code, but it obeys the constraint that every value remains constant within the scope of its variable:

```
sub outer_1 {
    local ($x) = 1;
    return (inner ($_[0]));
}

sub outer_2 {
    local ($x) = 2;
    return (inner ($_[0]));
}

sub inner {
    my $y = shift;
    return ($x + $y);
}
```

The function inner() is referentially opaque because it relies on a dynamically scoped variable. With a little work, we could replace $x with locally scoped functions and get the same result from code that passes the 'doesn't use assignment' rule with flying colors.
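Here's one way that assignment-free version might look (a sketch using closures; the names are mine, not from the post): each outer builds its own inner(), so 'inner' still means different things in different places, without a single post-initialization assignment.

```perl
use strict;
use warnings;

# Rebuild the example without `local`: each outer closes a fresh inner()
# over its own $x. No mutable storage -- but the same referential opacity,
# because inner's meaning still depends on where it was made.
sub make_inner {
    my $x = shift;
    return sub { my $y = shift; return $x + $y };
}

sub outer_1 { my $inner = make_inner(1); return $inner->($_[0]) }
sub outer_2 { my $inner = make_inner(2); return $inner->($_[0]) }

print outer_1(10), "\n";   # 11
print outer_2(10), "\n";   # 12
```

Every binding is initialized exactly once, yet the symbol `$inner` is not substitutable across the two outers, which is the point being made.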

And that's just the trivial version. There's also the issue of quoting and defining equivalence between different representations of the same value. The following function defines equivalence between numbers and strings:

```
sub equals {
    my ($str, $num) = @_;

    my %lut = (
        'one'   =>  1,
        'two'   =>  2,
        'three' =>  3,
        ...
    );

    if ($num == $lut{ $str }) {
        return 1;
    } else {
        return 0;
    }
}
```

but even though equals('three',3) is true, length('three') does not equal length(3). Once again, we've killed referential transparency in a way that has nothing to do with assigning values to mutable storage.

The fact of the matter is, referential transparency isn't all that desirable a property for a programming language. It imposes such incredibly tight constraints on what you can do with your symbols that you end up being unable to do anything terribly interesting. And propagating the myth that functional programming allows universal substitutability is just plain evil, because it sets people up to have examples like these two blow up in their faces.

Functional programming defines the block as a frame of reference. All frames of reference are immediately visible from trivial inspection of the code. All symbols are substitutable within the same frame of reference. These things are GOOD! Learn them. Live them. Love them. Rejoice in them. Accept them for what they are and don't try to inflate them into something that sounds cooler but really isn't worth having.

## Myth #4 - Functional programming doesn't support variable assignment

It's been said that C lets you shoot yourself in the foot.

It's said that C++ makes it harder, but when you do, you blow your whole leg off.

I'm personally of the opinion that functional programming makes it even harder to shoot yourself in the foot, but when you do, all that's left are a few strands of red goo dangling from the shattered remains of your brain pan.

Statements like the one above are the reason I hold that opinion.

Let's go back to Myth #2, shall we? According to Church's thesis, all programming languages are computationally equivalent. They all do the same things, and only the syntactic sugar is different.

In this case, the syntactic sugar is so different that people can end up using variable assignment without knowing they're doing it, all the while smugly assuming they're free from the evils of imperative programming.

To understand how that can happen, let's take a good hard look at what 'variable assignment' actually means. For the sake of discussion, I'm going to use the term 'lvalue' instead of 'variable', because an lvalue is explicitly something to which you can assign values. Using that term, we can restate this myth as "functional programming doesn't support lvalues."

So.. what's an lvalue? In terms of implementation, it's a register where values are stored, but what is it functionally? How does the rest of the program see it?

Well, what the rest of the program sees is a symbol that gets associated with a series of different values.

So, theoretically, we could collect all the values that are stored in an lvalue during its life in the program, and store them in a list. Instead of saying just plain $x, we could look up the value $x[$t], where $t indicates the number of changes the value goes through before it reaches the value we want.

That fact defines the bridge between functional and imperative programming. We can simulate any lvalue with a list that contains the same sequence of values, and never violate the functional 'doesn't use assignment' rule.

But we can make our simulation even more interesting by realizing that each new value in the list is usually the result of a calculation applied to the previous value. The idea of building a new list by sequentially applying a list of operations to an initial value is a very functional programming idea. The following code uses that idea to implement an integer counter:

```
my $inc  = sub { return ($_[0] + 1) };
my $dec  = sub { return ($_[0] - 1) };
my $zero = sub { return (0) };

sub apply {
    my ($val, $func, @etc) = @_;

    if (@etc) {
        return ($val, apply ($func->($val), @etc));
    } else {
        return ($val, $func->($val));
    }
}

sub counter {
    return apply (0, @_);
}

my @ops = (
    $inc, $inc, $inc, $dec,
    $inc, $dec, $dec, $inc,
    $zero, $inc, $inc, $inc
);

print join (' ', counter (@ops));

----

output == '0 1 2 3 2 3 2 1 2 0 1 2 3'
```

in about as functional a way as you can manage in Perl. A decent functional programmer would have no trouble porting that idea over to their functional language of choice.

Thing is, most people wouldn't think of code like that when they think '++', '--', '=0'.

Now, the list returned by counter() does have the bookkeeping-friendly feature that all the values are there simultaneously, in the same frame of reference. We could change that by dynamically scoping the list @ops, and distributing its contents across a whole series of evaluation contexts.

Even if all the values do exist in the same frame of reference, though, we still run into the same problems that make lvalues such a nuisance. If we tweak apply() so it only returns the final calculated value:

```
sub apply {
    my ($val, $func, @etc) = @_;

    if (@etc) {
        return (apply ($func->($val), @etc));
    } else {
        return ($func->($val));
    }
}
```

this code:

```
my @x = ($inc, $inc, $inc);
my @y = ($inc, $inc, $inc);

if (3 == counter (@x)) {
    printf "%d equals %d.\n", counter (@x), counter (@y);
}

do {
    my @x = (@x, $inc);

    if (3 == counter (@y)) {
        printf "%d equals %d.\n", counter (@x), counter (@y);
    }
};
```

demonstrates the same failure of substitutability that I used above, but in functional style. This version does have the advantage that the change from one frame of reference to another is immediately visible, but you can still screw around with the values in ways that aren't immediately obvious. This is the simple form of the problem. As with the lvalue version above, you can obfuscate it until your head explodes.

And that brings me to the way functional languages politely abuse the principle of lazy evaluation to handle user input. They pull basically the same kind of trick by treating the user input stream as a list where all the values are theoretically defined as soon as the program starts, but we don't have to prove it for any specific item of input until the user actually types it in. We can define the @ops list in terms of the (theoretically) defined user input list, and produce what amounts to an integer counter that responds to user input.
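That 'input stream as a theoretical list' trick can be sketched in Perl too (my illustration; an in-memory string stands in for the user at STDIN): the list of ops conceptually exists at program start, but each element is only forced when the counter asks for it.

```perl
use strict;
use warnings;

# The input stream as a lazy list: each call to the closure forces
# exactly one more element into existence.
sub input_ops {
    my $fh = shift;
    return sub {
        my $line = <$fh>;          # forced one element at a time
        return undef unless defined $line;
        chomp $line;
        return $line;
    };
}

# An integer counter driven by the lazy op stream:
# '+' increments, '-' decrements, '0' resets.
sub run_counter {
    my $next    = shift;
    my $val     = 0;
    my @history = ($val);
    while (defined(my $op = $next->())) {
        $val++   if $op eq '+';
        $val--   if $op eq '-';
        $val = 0 if $op eq '0';
        push @history, $val;
    }
    return @history;
}

# Demo with a canned 'user'; swap in \*STDIN for the real thing.
open my $fh, '<', \"+\n+\n-\n0\n+\n" or die $!;
print join(' ', run_counter(input_ops($fh))), "\n";   # 0 1 2 1 0 1
```

The counter never sees an assignment in the imperative sense, yet it responds to input exactly the way a mutable `$count` would.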

I don't object to the fact that functional programming allows these things to happen. I JUST WANT PEOPLE TO BE AWARE OF IT, AND STOP TREATING FP LIKE SOME KIND OF MAGIC BULLET BECAUSE IT SUPERFICIALLY APPEARS NOT TO DO THINGS THAT IT ACTUALLY DOES QUITE HAPPILY, THANK YOU VERY MUCH.

## Winding down

Okay.. I feel better, now.

So that's my current spin on functional programming. 'Blocks equal frames of reference'? Way cool. Big thumbs up. 'Capacity to hide value storage in places where even the gurus don't go if they don't have to'? Not so cool. Beware of arbitrarily multicontextual dog. As a worldview and way of thinking about programming? Love it. As a body of advocacy? Please, please, puh-leeeease learn enough about it to explain it to the newbies without lying to them in ways that will make their heads go boom some time in the future.

Make their heads go boom now. It's more fun to watch, and it keeps them out of my hair.

Considered by bradcathey - Tone down language, like title: FP, pros and cons
Unconsidered by castaway - Keep/Edit/Delete: 34/11/0

Re: pissed off about functional programming
by cog (Parson) on Apr 24, 2005 at 12:52 UTC
a young programmer who recently discovered functional programming, and thinks it can solve every problem in the world

Back where I studied, the first language of the course is now Haskell. As a result, most students get to think that that is the only way of coding, and keep on doing it that way.

I remember the teacher who taught me Perl complaining to another student: "You're coding in Perl as if it were Haskell!!!"

Which is kind of like what C-programmers do when they start programming Perl, but with a different language.

Hum... I wonder if it also happens with other languages? :-)

It obviously happens with human languages!

And because we call that "speaking with an accent", I use that same term with some Perl coders. "You're speaking Perl with a C accent." "I'm coding Perl with a LISP." (The word "accent" there is optional.)

-- Randal L. Schwartz, Perl hacker
Be sure to read my standard disclaimer if this is a reply.

In your opinion, is there a "Perl accent" that one might write other languages using?

Caution: Contents may have been coded under pressure.
I used to speak perl with a very C accent. But lately (many years down the road), my perl is becoming much purer, and full of more native perl idioms, while my C still looks like C. I think this is a sign that I'm beginning to be a nearly-native speaker, as opposed to some guy who took a Berlitz course just before a trip overseas.
Writing in language A with an accent of language B is a curious phenomenon (to detect and to analyze). I pondered it a while ago - "Native Perlish", an interesting thread evolved.
Re: pissed off about functional programming
by spurperl (Priest) on Apr 24, 2005 at 13:43 UTC
It appears that in the world of programming something comes up once in a few years and is believed by many to be the "silver bullet". OOP, Agile technologies, etc. But generally, a lot of hype is generated for a small merit.

There is no silver bullet, but from each such hype we learn. OOP isn't a silver bullet, but we do gather a few useful techniques and design ideas from it. Agile/XP isn't a silver bullet, but it also has many interesting ideas.

Let's just say that if a new "silver bullet" is on its way now, I sure hope it will be FP - it has marvelous lessons to teach any programmer.

Update: I forgot to bring your attention to the fact that Church's thesis certainly doesn't say that "all computer languages are equivalent". A more or less formal definition would be "any real-world computation can be translated into an equivalent computation involving a Turing machine".

It appears that in the world of programming something comes up once in a few years and is believed by many to be the "silver bullet".

The problem is that we had one. The shift from programming in assembly language to programming with high level languages produced an order-of-magnitude increase in programmer productivity. And since it happened once, there's always the vague hope that it might happen again.

Let's just say that if a new "silver bullet" is on its way now, I sure hope it will be FP - it has marvelous lessons to teach any programmer.

Sorry. We've already been using FP long enough to know that it isn't the next silver bullet. There is a well-defined body of problems which FP handles really well, but there's also a well known body of problems FP doesn't handle so well.

More to the point, FP hasn't lived up to the vision John Backus stated in 1978, of creating a sort of algebra for software.. a world where we could build ever more complicated programs by hooking existing programs together in well-defined ways. There seems to be an upper limit on how much stuff we can integrate before the result becomes too domain specific to be useful for further integration. There's also a problem with each new level of integration leaving us handcuffed to a new form of data representation, which also limits further combination. They're basically the same problems that kept OOP from being the next silver bullet. There may be a way to crack those barriers, but neither FP nor OOP has done it so far.

Doesn't mean they're useless.. far from it. They just aren't the conceptual nuke that will boost us to the next 10x level of productivity.

I forgot to bring your attention to the fact that Church's thesis certainly doesn't say that "all computer languages are equivalent". A more or less formal definition would be "any real-world computation can be translated into an equivalent computation involving a Turing machine".

Okay, I admit to doing a hand-wave with regard to Church's thesis. What actually happened is this:

In 1931, Kurt Gödel defined the theory of mu-recursive functions. It's an elegant theory that explains how to create new functions by combining existing ones. It uses three basic functions: zero(), next(), and project() (selecting a specific item from an N-tuple), and three operations: function composition, primitive recursion, and mu-recursion. The set of functions that can be produced by applying those operations to the base functions is called 'the set of general recursive functions'. In mathematical terms, this was basically a Theory of Everything.

In 1933, Alonzo Church published the rules for lambda calculus. In '36, he published a paper{1} where he speculated, but couldn't prove, that the set of functions generated by lambda calculus was actually the set of general recursive functions. About the same time, Kleene{2} and Post{3} published papers approaching the same problem from different angles, and Alan Turing came out with a paper{4} that proved the Turing machine was computationally equivalent to lambda calculus.. or actually, 'idempotent', meaning you can define either one in terms of the other. That's the standard mathematical way of showing that two systems are fundamentally the same, despite any apparent differences in their crunchy candy shells.

{1} - with the whizzer of a title, _A note on the Entscheidungsproblem_

{2} - _General recursive functions of natural numbers_

{3} - _Finite combinatory process-formulation_

{4} - _On computable numbers with an application to the Entscheidungsproblem_ -- I think they just liked saying that word.

Church is the one who defined what he called 'effective calculabilty' (which we call 'computability' today) and linked lambda calculus to the mu-recursive functions. Turing was the one who actually linked Turing machines to lambda calculus. His paper served as the template for bringing all the pieces together into a unified theory of computation.

So.. it was wrong of me to say that Church's thesis 'explicitly' links lambda calculus to Turing machines. I might've gotten away with it if I'd said 'Church-Turing thesis', though. "The theory of computability says ..." would definitely have worked.

Sorry. We've already been using FP long enough to know that it isn't the next silver bullet. There is a well-defined body of problems which FP handles really well, but there's also a well known body of problems FP doesn't handle so well.

I don't think that's the point :-) I quite agree that FP is no more a silver bullet than any other development style. However, IMHO, FP has yet to have its "silver bullet" moment in the limelight - which means that the really nice things that the FP style can give you still haven't been absorbed by the world at large.

Before the Java "silver bullet" I was always having arguments with people who said automatic memory management was impractical, and how VM based platforms would never scale for real world use. Despite the fact that people had been using them for years - decades even. Post-Java these discussions have become a rarity.

In fact I think that this will be the lasting legacy of Java - it'll be the language that made garbage collection and VMs popular.

I still encounter people who think that useful bits from the FP toolbox, like lambdas, map, currying, etc. are a complete waste of time since they're not in their toolbox and they've never used them in anger. An FP language could do with a bit of the silver bullet treatment just to get useful concepts out there and in people's heads.

Let me get this straight. You're saying that Perl isn't 100x more productive than assembly?
The shift from programming in assembly language to programming with high level languages produced an order-of-magnitude increase in programmer productivity.
...snip...
Sorry. We've already been using FP long enough to know that it isn't the next silver bullet.
Hmm. Are you sure there isn't another 10x productivity boost from functional languages? Or are you just refusing to see it? Take a web browser. Code one up in C. Then do the same thing in O'Caml. There you go.
Re: pissed off about functional programming
by jdporter (Canon) on Apr 25, 2005 at 16:39 UTC
(I'm using the caret as a substitute for lambda because I'm not in a mood to mess with non-7-bit ASCII values at the moment)
What kind of mood do you have to be in to not want to mess with HTML? (λ = &lambda;)

The kind where gritting my teeth any harder would have sent smoking bits of enamel flying across the room. I was not in a frame of mind to check the entity tables at that particular moment. ;-)

Re: pissed off about functional programming
by adrianh (Chancellor) on Apr 26, 2005 at 12:29 UTC
The limits of this truth become obvious when you try to store the value (2^N)+1 in an N-bit integer.

Most of the time I'd like my language to allocate a nice new N+1 bit bigint off the heap so I can worry about my integer overflows in the same place I worry about running out of memory :-)

Trying to prevent that is what we programmers call a 'hard' problem

No! Problems are never "hard" - they're just non-trivial!

(or - even better - they're left as a problem for the reader and/or an unfortunate postdoc assistant :-)

So what does this have to do with functional programming? Two words: dynamic scoping.

I remember many years ago, when such creatures were more common, explaining the difference between dynamic and lexical scoping to a non-programming mathematician.

Complete blankness for some time then she suddenly said "oh - it's like frames of reference". Not having a mathematical background I went "Huh?". She proceeded to scribble stuff on the whiteboard which I failed to understand. However since we both ended up with the right answer when we went through some example code I guess it made sense to her that way :-)

Re: pissed off about functional programming
by aufflick (Deacon) on Apr 27, 2005 at 00:00 UTC
Excellent discussion! This meditation should be required reading straight after you have read Why Functional Programming Matters :)

I also liked mstone's clarification of Church's paper and the relationship to Gödel, Turing and the Entscheidungsproblem. Anyone who likes that kind of thing should be sure to read Engines of Logic by Martin Davis.

Re: pissed off about functional programming
by BrowserUk (Pope) on Apr 24, 2005 at 14:25 UTC

Nice++.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco.
Rule 1 has a caveat! -- Who broke the cabal?
Re: pissed off about functional programming
by skyknight (Hermit) on Apr 25, 2005 at 13:03 UTC
Actually, it turns out that if a functional programming language is Turing complete, it can solve every problem in the world. ;-)
*sigh* I'll bet you're no fun at parties.
Not quite "halting", but in nearby news:
Stevan Little has been playing with push, pop, shift, and unshift on infinite lists. He thinks he has found a bug, although maybe he just hasn't let it run long enough... Larry provided answers as to the correct semantics.

-- Randal L. Schwartz, Perl hacker
Be sure to read my standard disclaimer if this is a reply.

Re: pissed off about functional programming
by crenz (Priest) on Apr 30, 2005 at 11:10 UTC

There are some interesting replies at Lambda the Ultimate... from the perspective of people who have quite a good background in computer languages and functional programming.

#4 Confuses Me
by Anonymous Monk on Sep 30, 2005 at 03:15 UTC
I really don't understand what you're trying to get at in #4. Sure, you can view a changing variable as a function of time, keeping a list of the changes. But the point of FP is that it makes explicit that you've got that list of changes; you can no longer ask for "the value of x" and get one of the many values; you have to ask for "the value of x at time t," which forces you to change your design, sure, but gives you the benefit that "the value of x at time t" will always be the same, everywhere in the program. It's basically brought out into the open what was once a piece of hidden state about x (time).

I sense from your comments about integers and Church's thesis that you're pointing out that something different goes on under the hood than what you really see. If that's the case, sure, but that's the *whole point*.

To go back to the simple example, yes, we all know what happens in C when you try to assign 2^32+1 to a 32-bit unsigned integer. That's bad. That's why in many, many languages (I'll use Ruby as an example, since I know it reasonably well) we don't usually deal with what's actually going on inside the box when we manipulate integers. If in Ruby I say "3 + 2" I get "5", and it's probably doing 32-bit integer arithmetic. If I say "2 << 35", the numbers may well be 32-bit integers internally, but certainly the result isn't. Yet everything continues to work; I get the right answer and I can use it in further calculations. It's all kept under the hood. I can reason much more easily about my program because I don't have to worry about integers being bigger or smaller than a certain size.

That, it seems to me, is the whole point of FP; it's to make it easy to reason about your programs by putting certain constraints on them and hiding certain things you don't want to worry about. This process was started a long time ago by doing things like replacing dynamic with lexical scoping (BTW, hello? Who, besides perl, uses dynamic scoping nowadays?).

So I really don't see the point of this rant. When you take out the "it's hidden but I'll look at it anyway," stuff, the dynamic scoping straw man (nobody forces it, and neither Haskell nor Scheme nor many others even support it!), and the "this was made explicit but I'll pretend that that wasn't the whole point" stuff, there doesn't seem to be much left here.

But the point of FP is that it makes explicit that you've got that list of changes

No it doesn't.. at least not inescapably so. The list can be a series of evaluation contexts from nested function calls. That list of contexts isn't a first-order programming construct like a variable or a function ref, so you can't step back through it to see your previous values. Nor is the index 't' a first-order construct, since it's represented implicitly by the level of nesting. So what was hidden in an lvalue can remain hidden in FP, it just hides in the call stack rather than in a register.
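A minimal sketch of that point, in Python: each recursive call holds one "previous value" of x in its own stack frame, and neither the chain of values nor the index t exists as anything the program can inspect.

```python
# Each recursive call keeps one successive value of x alive in a frame.
# The list of previous values exists, but only implicitly, in the call
# stack; no first-order list or index 't' is available to the program.
def countdown(x, depth=0):
    if x == 0:
        return depth           # nesting depth stands in for 't'
    # the caller's x is hidden state: alive in its frame, not inspectable
    return countdown(x - 1, depth + 1)

print(countdown(5))  # -> 5: five frames each held one value of x
```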

the dynamic scoping straw man (nobody forces it, and neither Haskell nor Scheme nor many others even support it!)

No, but they do support closures and/or currying. Both of those create a chain of evaluation contexts that's basically a non-first-order list of previous values with a non-first-order implicit index. They offer all the benefits and disadvantages of regular lvalues and/or dynamic scoping, just wrapped in a different kind of syntactic sugar. And both of them shoot the analytic simplicity of lexical/static scoping right through the head, because once again you have values that can only be determined in the appropriate runtime context, rather than from direct inspection of the code.
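The closure half of that claim, sketched in Python: the captured environment behaves exactly like hidden mutable state, and the value at any call site depends on the closure's runtime history, not on anything you can read off the code at that site.

```python
# A closure's captured environment is runtime state in disguise:
# what c() returns cannot be determined by inspecting the call site.
def make_counter():
    n = 0                  # lives on in the closure after make_counter returns
    def bump():
        nonlocal n
        n += 1
        return n
    return bump

c = make_counter()
c(); c()                   # history accumulates invisibly
print(c())                 # -> 3: depends on prior calls, not on the code here
```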

It's precisely that kind of "we don't have that problem" tunnel vision that set me to ranting in the first place. I want people to be aware of the operational realities of programming instead of comparing checklists of syntactic sugar features without taking the time to think about what those features actually do.

The reason #4 confuses you is that you're too wrapped up in the superficial features of the immediate example to think about the fundamental issues of programming they represent, and then think about how those issues can pop up in whatever syntactic system you happen to use. That kind of superficiality is unfortunately common in some parts of FP culture, and it has the capacity to really piss me off if you expose me to it long enough.

Ok: then how is the (at compile time unknown) state of a closure different from the state kept in an object? That you have to look for the value outside the lexical context is part of the intended behaviour, and no one who can write a closure would be surprised or expect a different behaviour. I think your argument is trampling on a straw man a little. Correct me if I'm wrong, but it seems you are claiming FP promises a much simpler behaviour, by deliberately misunderstanding FP's design principles. There is a clear difference between constructing closures and mutating variables, even if the behaviour of a function/closure is determined non-(source)-locally at runtime. If somebody exaggerated or misrepresented FP to you, that's not FP's problem, but yours, because you obviously seem to know better.
Re: pissed off about functional programming
by Anonymous Monk on Apr 29, 2005 at 21:05 UTC
So is there a point to this semi-coherent rambling? I kept thinking there was going to be a snippet of Haskell which displayed what the OP thought was a subtle bug. Did I miss it?
Re: pissed off about functional programming
by octopusgrabbus (Novice) on Aug 04, 2014 at 22:52 UTC

The big craze about functional programming reminds me of when Java was first introduced, and we were collectively told in subtle and not so subtle ways that programmers were not smart enough to be responsible for their own memory management (malloc/free) and/or doing so was dangerous. Perhaps some of us were not smart enough to do that.

In the past three years, I've implemented three small projects in Clojure. The thing I liked about programming in Clojure was that I had to think differently about solving a problem. I still have to think differently in Perl, because it isn't C.

Anyway, this post was interesting.

Thanks.

One thing I'd caution you against is taking a young programmer's enthusiasm a bit too seriously. I think there's an aspect to the naiveté that winds up being tremendously helpful in long-term programming language acquisition. You sort of surrender to styles and modes of thought when you're learning new paradigms, and IMO this period winds up being something you get to repeat maybe 3-5 times in your entire career. It's a gift, the way I see it.

Is FP a "Silver bullet"? No, but it's a tremendous one to have in your arsenal. As Alan Kay once put it: "Perspective is worth 80 IQ points."

My only feedback for the OP is that I'm generally more interested in the flavors of FP than I am in FP for its own sake. If you've ever tried to build concurrent programs in C or Java, it's reasonably clear in some FP languages why things are the way they are. For instance, there are some other things worth mentioning w/r/t Clojure regarding referential transparency and the nature of values. Hash array mapped tries, the STM, concurrency primitives, persistent data structures, the ability to ignore things like lock order acquisition, and so on are also probably worth mentioning.
Re: pissed off about functional programming
by zby (Vicar) on Mar 26, 2011 at 22:56 UTC
What I remember from uni is that functional languages don't have state. That is, you have names (i.e. variables) and values - but you don't have memory locations (for the variables). This makes it a distinct computing model from languages that do have the notion of state; maybe calling them 'imperative' is kind of semantic overloading - but it is quite a clear distinction. This distinction does not nullify their Church equivalence (i.e. that they can compute the same class of functions), but it might mean that algorithms written in them have different computational complexity.

As to assignments - same thing. Yes, you can simulate a Turing machine using a functional language; in terms of what they can compute they are equivalent, but they are not equivalent in terms of how you write programs in them. In functional languages you can simulate state with arrays - but it is easier to just use the language as it is. You can dig the same hole with a spoon as with a spade - but this does not make them equivalent, because with the spoon it would take you much more time.
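The spoon-vs-spade point about complexity can be made concrete. A sketch in Python: simulating one in-place array write purely means copying the whole structure, so the same result costs O(n) per "write" instead of O(1) (real FP runtimes soften this with persistent data structures, but the naive simulation looks like this):

```python
# Pure simulation of a[i] = v: build a fresh list, leave the old one alone.
# Same observable result as mutation, but O(n) per write instead of O(1).
def pure_set(xs, i, v):
    return xs[:i] + [v] + xs[i + 1:]

a = [0, 0, 0, 0]
b = pure_set(a, 2, 9)
print(a)  # [0, 0, 0, 0] -- the original is untouched
print(b)  # [0, 0, 9, 0] -- the "updated" array is a new value
```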

Re: pissed off about functional programming
by Roboguy (Initiate) on Aug 13, 2014 at 00:24 UTC

Is this post about functional programming in general or functional programming in Perl? The way it's worded implies that it is about FP in general, but its content suggests that it is specifically about Perl. Many FP languages are not dynamically scoped, for example. Also, there are FP languages where variable reassignment is (at least) heavily restricted (Haskell, for instance). I'm going to use Haskell to discuss a few points here because I am pretty familiar with it and it will be very uncontroversial to call it a functional programming language.

Regarding the 1st myth, I'm not sure that I've heard anyone argue that FP is lambda calculus. Now, I'm not familiar with the term "simultaneity" in the context of lambda calculus and I'm not sure I completely understand your description. I would like to point out, however, that there are many different ways that a lambda calculus expression can be reduced and there is no default reduction strategy if you just refer to "lambda calculus" in general. There are many potential reduction strategies that can be (and are) used. As a result, you can't really make a statement like "This is how reduction is always done in lambda calculus".

For the 2nd myth, I believe you are saying that the two paradigms are the same because they are both Turing equivalent. Again, I don't think I've ever heard anyone argue that this is not the case, and this is not what is meant when someone says FP is different from IP (in my experience, anyway). It would be tough to argue that programming in assembly isn't any different from programming in Perl. This is the sort of difference I hear people talk about (not to imply that IP is like assembly when compared to FP, I just want to illustrate the difference that I believe is often being discussed). The way that you program in FP is generally very different than IP (recursion is rarely used in IP compared to FP, for instance).

Now for the 3rd myth, this particular issue can be a bit heated, but I feel that referential transparency is something that is particularly misrepresented here, so please hear me out (not to say that I believe you are intentionally trying to misrepresent it, but I think that the description you gave has some important inaccuracies). Referential transparency only means that, with respect to scope, a binding can always be replaced by the value it was bound to and vice versa (this is the basis of some very important optimization rules in Haskell compilers, such as stream fusion, that would otherwise have the potential to break working programs and would change their meaning if referential transparency didn't hold). There's nothing really more to it than that.

I feel that dynamic scoping is a red herring here because referential transparency always only holds (when it does hold) with respect to a certain scope. It doesn't have much meaning if you don't take the appropriate scope rules into account. It also doesn't hold in the presence of unrestrained mutation. Also, I don't think that you'll find many people who will claim that Perl is always referentially transparent (the Perl example you gave doesn't have a Haskell counterpart that demonstrates the same non-referential transparency).

There are languages that are referentially transparent. Haskell almost always is, even in the presence of IO. There are very few exceptions to this in Haskell, but they are very frowned upon and there is no major code base that I'm familiar with that takes advantage of this in a visible way (except for some very low-level bindings to C libraries). These few exceptions are necessary to interface with C code and to achieve certain kinds of efficiency goals (FP and efficiency is another interesting topic, but that's for another time).
If you feel that this invalidates my argument, I would also like to point out that you can enable an option in the GHC Haskell compiler to prevent any of these unsafe things from being used. So, at the very least, when this option (Safe Haskell) is enabled, the language is a complete example of this.
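The substitution property Roboguy describes can be shown in any language; here's an illustrative sketch in Python (Haskell enforces this by construction, Python merely permits it for pure functions):

```python
# Referential transparency: a binding can be replaced by its bound
# expression (and vice versa) without changing the program's meaning.
def area(r):
    return 3.14159 * r * r        # pure: same input, same output, no effects

x = area(2.0)
bound = x + x                     # use the binding twice...
substituted = area(2.0) + area(2.0)  # ...or substitute the call in its place
print(bound == substituted)       # -> True: the two forms are interchangeable

# With a side effect, the substitution is no longer meaning-preserving:
log = []
def tracked(r):
    log.append(r)                 # hidden effect breaks the equivalence
    return 3.14159 * r * r

y = tracked(2.0)                  # log has 1 entry
_ = y + y                         # still 1 entry
_ = tracked(2.0) + tracked(2.0)   # now 3 entries: the programs differ
print(len(log))  # -> 3
```

This is exactly why compilers for referentially transparent languages can inline, share, and fuse freely: the two forms above are guaranteed to be the same program.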

The 4th myth has some interesting nuances to it. Let's start by discussing what variable assignment means. The first thing I'm going to do is completely throw out the idea that FP doesn't have any form of assignment. You can (in the vast majority of FP languages) assign a value to a name. I doubt that anyone would argue this and I don't think that this is what you were referring to in this section, but I want to be specific and as unambiguous as possible here. The real issue here is reassignment. Okay, so the next thing we have to talk about is what does it mean for something to get reassigned? I don't think it would be a stretch to say that this means that the value stored at the memory location associated with a given name gets changed to some other value so that, from that point forward, that name refers to the new value until it gets changed again. This is possible in Haskell. There are some extremely important stipulations with this though. This is not possible with the way that names are normally bound in Haskell, you have to go through a special type in order to do this (either ST or IO). Not only that, but the variable can never escape this type and interact with any "pure" code. This is partly how referential transparency is maintained in Haskell.
Let me explain my view of the other part of referential transparency (as it exists in Haskell) by specifically examining the IO type. When you write a sequence of IO actions, it can be viewed as referentially transparent in that this sequence of IO values (that is, IO actions) can be seen as producing a series of instructions which are then interpreted by the computer. This is in contrast to the idea of an IO value itself actually performing an action. It is a subtle distinction and I can see how it could be debated as to whether it is useful to interpret it in this way (you could argue that you can view C, for instance, in the same way and that it is a pointless philosophical difference). The important thing to realize about this, however, is that this second part of Haskell's referential transparency is given more meaning by the first part. The first part is really the important part, the part you can actually see in practice that actually matters in a real sense. A function in Haskell (Safe Haskell, if you are insistent on that distinction) will evaluate to the same value every time it is called with the same argument. Incidentally, this is a very clear, practical difference that you can see between Haskell and imperative languages.
One last thing that I would like to say about this last item is that, as I was hinting at in the start, you can have apparent reassignment that is actually not reassignment in the sense of physically changing the value at the location of an already-assigned variable. You can simulate it with a completely "pure" internal implementation, though this isn't as efficient as actually changing the value. If this technique is implemented in Haskell, you would again notice that the apparent mutation is (in a very natural way) separated entirely from pure code. This separation is more or less the same as the separation that occurs when using ST or IO (which do use "real" mutation internally). Because of this, you can start to get some decent intuition about this separation by thinking of it as working in this way.

Full disclosure: I have worked with functional programming for a while now so I am a bit biased towards it, but let me finish up by saying what I'm not trying to suggest: That FP is somehow fundamentally better than IP or that it is some kind of magical silver bullet. That's just not true and I believe that when people say that kind of thing it actually hurts the credibility of FP pretty badly in addition to just being outright wrong, so I am very against that sort of statement. It is different though, and I hope I've demonstrated that (and some other things) in this post.

PS: If you would like specific Haskell examples of anything that I discussed above, any other Haskell examples you'd like me to provide or any clarification, feel free to ask and I'll do my best to write something up!

Sorry, this is a little more long winded than I hoped it'd be. Thanks for reading!

Re: pissed off about functional programming
by Anonymous Monk on Jul 30, 2010 at 16:35 UTC
A lot of the stuff you are saying rings true in my ears... Especially, for point #4, check out my article "Purely functional structured programming" at the arXiv: arxiv.org/abs/1007.3023
Referential transparency and FP
by Apocalisp (Initiate) on Mar 28, 2011 at 13:12 UTC
Some things here are spot on, but other things are very confused. What you're lacking is a good handle on what functional programming is (rather than what it is not). Functional programming is programming with (pure) functions. That's it. Nothing more or less. Programming is therefore functional to the extent that it is referentially transparent (by definition). If it isn't that, then it's nothing at all. You can wave your hands and talk about "functional style" but it doesn't matter with how much style you violate referential transparency. It's not functional programming, to the extent that you violate it.
Re: pissed off about functional programming
by freegnu (Initiate) on Aug 12, 2014 at 11:13 UTC

Python automatically switches to arbitrary-precision internal storage for numbers bigger than the CPU's registers, as any good object-oriented programming language should.

J lets you specify the precision you want, or full precision, which is probably what you mean.

You may want to introduce yourself to the tacit programming of the functional programming language J at jsoftware.com. Tacit programming infers the variables based on the position of the functions so that the input can be the result of another function or the original input based on positioning. It's a bit more complicated than that but Haskell as the type constricted mess it is right now is a poor representative of what functional programming is. Haskell makes the simple verbose. I'm currently trying to get my head around Gerunds in J as a way of performing all the operations before the branching logic or refactoring the logic flow to find the use case to use Gerunds. I think I have to step up my J to using adverbs and performing my actions on verbs first. Verbs = functions.

At its root, functional programming is y = f(x), where x can be an array or a single item. Some programming languages like R simplify this by allowing the use of mathematical notation and forcing single items to be 1-dimensional arrays of length 1.
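That "x can be an array or a single item" idea is just pointwise lifting; a sketch in Python (R and J do this implicitly, here it's spelled out):

```python
# y = f(x) where x may be a single item or an array:
# lift a scalar function f so it applies pointwise to lists too.
def f(x):
    return x * x

def lift(f):
    def g(x):
        return [f(v) for v in x] if isinstance(x, list) else f(x)
    return g

g = lift(f)
print(g(3))          # -> 9
print(g([1, 2, 3]))  # -> [1, 4, 9]
```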

Node Type: perlmeditation [id://450922]
Approved by Joost
Front-paged by ghenry