MOPT-04 - identification, part 2
by mstone (Deacon) on Jan 07, 2003 at 01:39 UTC ( [id://224813] )
Meditations on Programming Theory
In MOPT-03, I said that the theoretical difference between functions and variables was arbitrary, but useful. This time I'll explain why, and show how functions and variables work together.

functions and variables:

A function has all the same features a variable does: a function is an entity with an abstraction barrier, and that barrier defines a scope. Every function can be bound to an identifier, and every function produces a value. The set of all values a function can produce defines a type. Function notation actually makes those pieces easier to see than variable notation. For a simple function declaration:

    sub function { return ("value"); }
Since the function above always returns the same value, it's more or less equivalent to a variable.. at least to the kind of variable we saw back in MOPT-03. In theory, you should be able to replace any variable with an appropriately trivial function, but Perl doesn't quite support that idea. Theoretically, the code:

    sub func { return "original value.\n" }

    sub print_func {
        print func();
    }

    sub redefine_func {
        sub func { return ("local value.\n"); }
        print_func();
    }

    print_func();
    redefine_func();

could produce the output:

    original value.
    local value.

which would make func() behave like a dynamically-scoped variable. It doesn't, though. The actual output is:

    local value.
    local value.

which shows us that the second definition completely replaces the first. Perl's function bindings are statically scoped (1), not dynamically scoped, so we can't bind the name to a new entity in a different evaluation context. If we want to play with dynamically scoped entities in Perl, we have to stick to local variables. Theoretically, though, there's nothing to stop us from creating a language that binds names to functions dynamically. And in such a language, there would be no difference between a trivial function and a variable.

(1) - BIG HONKIN' CORRECTION: adrianh correctly showed, below, that you can dynamically redefine functions by assigning an anonymous subroutine to a local typeglob:
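    sub redefine_func {
        # localizing the typeglob *func and assigning an anonymous
        # subroutine rebinds func() for the dynamic scope of
        # redefine_func(), then restores the original binding on exit
        local *func = sub { return ("local value.\n") };
        print_func();
    }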
makes a function behave exactly the same way as a local variable. Kudos and thanks, adrianh!

Officially, variables are degenerate functions. The term 'degenerate' indicates the limiting case of an entity, which is equivalent to some simpler entity. We usually find degenerate entities by setting some attribute to zero. A point is a degenerate circle, for instance: a circle with radius zero. In the case of functions and variables, the thing we set to zero is the function's parameter list.

functions, parameters, and formal structures:

A parameter is a variable whose value affects a function's value, but not its formal structure. A formal structure is the set of operations that make a function do whatever it does. Those operations will always boil down to a pattern of substitutions, and we can always build a formal system (see MOPT-02 for more about formal systems) that uses those substitutions as rules. A formal structure isn't a complete formal system, though. It's just a computation that happens within a formal system. Officially, a formal structure describes a family of 'generally similar' computations that can take place in a given formal system, and a function's parameters are axioms on which those computations are performed. Which is great, as long as we know what 'generally similar' means. To explain that, we need more vocabulary.
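Before diving into the vocabulary, here's a minimal sketch (mine, not from the original article) of the split between parameters and formal structure.. both calls below perform the same pattern of substitutions, multiply then add, so they share a single formal structure, and only the axioms passed in as parameters differ:

    # one formal structure: multiply by three, then add one
    sub scale_and_shift {
        my ($x) = @_;
        return (3 * $x + 1);
    }

    print scale_and_shift (2), "\n";     # prints 7
    print scale_and_shift (10), "\n";    # prints 31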
The rules that make up a formal system fall into one of two categories.. equivalence rules and transformation rules:

- An equivalence rule is a substitution that preserves the meaning of a string. In ordinary arithmetic, '1+1' -> '2' is an equivalence rule: both strings mean the same number.
- A transformation rule is a substitution that can change the meaning of a string. 'N' -> 'N+1' is a transformation rule: the result means a different number than the original.
The concepts of equivalence and meaning are equipotent: they have equivalent power. We can define either one in terms of the other, so if we have one, we automatically have both:

- If we know which strings are equivalent, we can define the meaning of a string as whatever it shares with every string equivalent to it.
- If we know what strings mean, we can define two strings as equivalent whenever they mean the same thing.
The process of assigning meanings to symbols is called interpretation. Interpretation is central to programming, and indeed to most of mathematics, and is a subtle, often infuriatingly complex subject. The problem boils down to the fact that different representations can embody different sets of assumptions, and a substitution that makes perfect sense for one representation might be meaningless for another:

    print ( "2 + 2 ",
            (2+2 == 4) ? "is" : "is not",
            " equal to 4.\n" );
    print "- but -\n";
    print ( "'two plus two' ",
            ('two plus two' eq 'four') ? "is" : "is not",
            " equal to 'four'.\n" );

It can be very difficult to build a set of equivalence rules for different representations of the same meaning, and still maintain a coherent sense of overall meaning. A set of narrowly defined equivalences, each of which makes sense in its own way, is almost guaranteed to produce contradictions when taken as a whole.
We use both ideas when programming, but only certain substitutions count as equivalence rules. The others are just transformations where both sides of the rule happen to have similar interpretations.

Formal equivalence is transitive. The transitive property says that if 'a' equals 'b' and 'b' equals 'c', then 'a' equals 'c'. In other words, we can condense any sequence of equivalence rules into a single rule. In programming, we call that process reduction and say that two strings are reducible if:

1. one string can be derived from the other by applying an equivalence rule,
2. the two strings are identical, or
3. there is a third string that the first string reduces to, and that itself reduces to the second string.
The set of all strings we can derive from a single axiom by repeatedly applying a single rule is called a transitive closure, or just a closure. An equivalence rule breaks a language into a set of mutually-exclusive closures called partitions. Any two strings from the same partition are formally equivalent, and no string in a partition is reducible to any string in any other partition.

Those partitions mark the difference between equivalence rules and transformation rules. Only an equivalence rule can partition a language. The closure of a transformation rule contains the entire language.

We use the term 'reduction' because formal equivalence is slightly different from logical equivalence. Logical equivalence is symmetric: if 'a' equals 'b', then 'b' equals 'a'. Substitution rules aren't symmetric, though. They only work in one direction. To make a formal equivalence symmetric, we'd need two rules, each going the opposite direction. Only the second reduction principle, above, is symmetric. The first and the third are asymmetric.

That asymmetry is what makes the whole subject of interpretation so tricky. A pair of strings can be formally equivalent, even if the derivation only works one way. Worse yet, the partitions for two different languages might not line up with each other, even though the interpretations are logically equivalent. We can say that the strings 'two' and 'TWO' are both logically equivalent to the number 2, but those strings might not be formally equivalent to each other.

To get back to functions, with those new terms in our arsenal, we can say that:

- transformation rules are the steps where a computation actually changes a value, and
- equivalence rules are the steps where a computation rearranges a value from one representation to another without changing its meaning.

Two computations are 'generally similar' when they apply the same transformations, and differ only in how the results are rearranged.
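Before moving on, here's a minimal sketch (mine, not from the article) of the closure idea in code. The rule set is a made-up toy: starting from an axiom, the function derives every reachable string by applying the rules breadth-first, which is exactly the transitive closure described above:

    my @rules = ( [ 'a', 'bb' ], [ 'bb', 'c' ] );    # hypothetical toy rules

    sub closure {
        my ($axiom) = @_;
        my %seen  = ( $axiom => 1 );
        my @queue = ($axiom);
        while (@queue) {
            my $string = shift (@queue);
            for my $rule (@rules) {
                my ($from, $to) = @$rule;
                # apply the rule at every position where it matches
                my $pos = -1;
                while (($pos = index ($string, $from, $pos + 1)) >= 0) {
                    my $derived = $string;
                    substr ($derived, $pos, length ($from)) = $to;
                    push (@queue, $derived) unless $seen{$derived}++;
                }
            }
        }
        return (sort (keys %seen));
    }

    print join (' ', closure ('aa')), "\n";
    # prints: aa abb ac bba bbbb bbc bcb ca cbb cc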
In practice, the split between transformation rules and equivalence rules means code tends to gather in chunks where we change something, then rearrange the result. That change, then rearrange sequence is a low-level design practice called an idiom. Idioms represent a layer of logical structure between syntax and functions. We use syntax to build idioms, and we use idioms to build functions. There are no hard-and-fast rules about idiomatic programming, just a handful of low-level structures that pop up often enough to be familiar. Idioms give us a framework from which to hang code, and code tends to be easier to read, write, and use if we use that framework consistently.
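As a concrete example (a hypothetical one, not from the original article), the map-then-sort pattern below follows the change, then rearrange shape.. the map changes each value, and the sort rearranges the results:

    my @words  = qw( Banana apple Cherry );
    my @folded = map { lc } @words;    # change: case-fold each word
    my @sorted = sort @folded;         # rearrange: put the results in order
    print "@sorted\n";                 # prints: apple banana cherry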
A variable, being a degenerate function, takes no parameters. Any necessary axioms are hardwired into the function, so the formal structure produces the same value every time. Parameters share the same strange, half-way existence as values, since both parameters and values can cross an abstraction barrier. Parameters are visible in two different scopes at the same time, and making that happen takes special machinery.

functions and bound variables:

Within the scope of a function, a parameter is just another free variable. It has a name, but that name isn't bound to any entity defined in the same scope. A parameter doesn't receive its binding from the global context, or from the function's evaluation context, either. It gets bound to an entity defined in the statement that calls the function.
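Here's a minimal sketch (mine, not from the article) of that binding at work.. inside greet(), the name $name isn't bound to anything defined in greet()'s own scope, and the calling statement supplies the entity it binds to:

    sub greet {
        my ($name) = @_;     # $name takes its value from the caller
        print "hello, $name\n";
    }

    my $monk = 'vroom';      # hypothetical entity in the calling scope
    greet ($monk);           # this statement establishes the binding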
Parameter passing mechanisms come in three basic flavors: positional, named, and defaulted:

- Positional parameters are matched up by position: the first entity in the calling statement binds to the function's first parameter, the second to the second, and so on.
- Named parameters are matched up by name: the calling statement labels each entity, and the order doesn't matter.
- Defaulted parameters are named parameters with a fallback: any parameter the calling statement doesn't mention gets a default value defined by the function.
Perl happens to use an interesting variation on positional parameters. Every function takes exactly one parameter: a list. The contents of that list are defined in the invocation context. At first glance, that seems like a cheap way to avoid building a 'real' parameter passing mechanism, like C or Java have, but it's actually quite elegant from a theoretical standpoint. A Perl function can also return a list, so 'the list' forms a standard interface between any two functions. When we add the rest of Perl's list-handling tools, that interface makes Perl very good at handling signal processing logic, where each function acts like a filter on an arbitrarily long stream of input:

    sub sieve {
        my ($prime, @list) = @_;
        if (@list) {
            return ($prime, sieve (grep { $_ % $prime } @list));
        }
        else {
            return ($prime);
        }
    }

    print join (' ', sieve (2..50)), "\n";

The code above implements the Sieve of Eratosthenes with signal processing logic. The function sieve() assumes that the first element of the list is a prime, and it filters any multiple of that prime out of the remaining list. The first element of the new list will always be prime (because any composite of smaller primes has already been filtered out), and the function recurs with that list. The result is a list of all the prime numbers in the original list:

    2 3 5 7 11 13 17 19 23 29 31 37 41 43 47

Signal processing is another idiom, and functional programmers use it heavily. Signal processing programs tend to be easy (okay.. easier) to analyze mathematically, so you can prove that the program will behave properly if you're willing to do the work.

Perl also makes it easy to simulate named parameter passing:

    sub use_pseudo_named_parameters {
        my (%args) = @_;
        ...
    }

    use_pseudo_named_parameters (
        'arg1' => 'val1',
        'arg2' => 'val2',
        'arg3' => 'val3',
    );

And defaulted parameter passing:

    sub use_pseudo_defaulted_parameters {
        my (%args) = @_;
        my %params = (
            'param1' => 'default 1',
            'param2' => 'default 2',
            'param3' => 'default 3',
        );

        # copy only the values the caller actually supplied, so any
        # unmentioned parameter keeps its default
        for my $key (keys %params) {
            $params{$key} = $args{$key} if exists $args{$key};
        }
        undef (%args);
        ...
    }

    use_pseudo_defaulted_parameters (
        'param1' => 'invocation value 1',
        'param2' => 'invocation value 2',
    );

So instead of sticking us with a single parameter passing mechanism, Perl makes it reasonably simple to simulate any mechanism we want.

functions and identification:

Now in purely theoretical terms, there's no reason to worry about name bindings at all. Instead of thinking of a function as a machine that converts parameters into values, like a machine in a factory, we can think of a function as a family of entities that all have the same formal structure. Each member of that family would have a different set of values hardwired in as parameters, making them variables according to the terms I mentioned earlier. By that reasoning, the string 'func(1)' would be the name of a specific variable. The '(1)' part would just be a naming convention, not an invocation context that establishes a binding. I used a similar naming convention for hash keys in the code samples, above.

Yes, the 'machine in a factory' version is generally easier to implement in a real computer, but the 'family of variables' version is also possible, and is occasionally useful. I'll explain how in next week's meditation, which will cover lvalues. For now, the point is that there's no theoretical difference between passing arguments to a function and selecting a specific variable from a set.
As a matter of fact, the 'family of variables' model is actually closer to the mathematical definition of a function than the 'machine in a factory' version. And that brings us all the way back to the concept of identification. The ability to select a specific item from a set gives us the power to do anything a function can do. When we get right down to it, functions are just a convenient fiction that makes it easier to describe the kinds of things we can do with identification.
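To make that concrete, here's a minimal sketch (mine, not from the article) of the same mapping expressed both ways. Selecting $square{3} from the family does everything invoking square(3) does:

    sub square { my ($n) = @_; return ($n * $n); }    # the machine in a factory

    my %square = map { $_ => $_ * $_ } (1 .. 10);     # the family of variables

    print square (3), "\n";      # invoke the function: prints 9
    print $square{3}, "\n";      # select a member of the family: prints 9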