
PerlMonks 
Meditations on Programming Theory

This is the first in a series of articles about programming theory. I'm writing them in response to requests other monks have made for such information, and as a personal test to see how much of this stuff I can make accessible to an intelligent and motivated audience. I'll try to post a new article every Monday. Between posts, I welcome questions and discussion. And now, the (hopefully) good stuff...

meditation #1: Human assumptions and information spaces:

When self-taught programmers start to explore programming theory, they tend to find the vocabulary daunting. That makes sense, because it's a dense, layered vocabulary, full of ideas that seem obvious at first glance, but turn out to be surprisingly complicated when you really start thinking about them. And you have to pile a whole bunch of those ideas together before you get anything that even vaguely resembles a high-level language like Perl.

To make matters worse, most books on programming theory jump to the fun parts, like the implications of the pumping lemma on finite automata. They do that because it's a long walk from there back to the kind of fundamental concepts an ordinary person can understand. For this series, I'm going to go all the way back to the basics, starting with one of those deceptively obvious ideas that everyone can agree with at first glance:
Everyone knows that computers don't think, but most people would be hard pressed to tell you what computers actually do. And they'd have an even harder time trying to describe the difference between 'thinking' and whatever it is that computers do. What computers really do is manipulate symbols according to a predefined set of rules. Humans, on the other hand, associate symbols with meanings.

Again, it's easy to nod and agree with that, but even good programmers can get caught by the difference a dozen times a day. Every time we write 'Foo_package' instead of 'FooPackage', or 'if ($string == "foo") {...}' rather than 'if ($string eq "foo") {...}', we cross the boundary between symbols and meanings. The problem is so common that we've built ourselves special tools (i.e.: syntax-coloring text editors) to make sure we're spelling our symbol names properly. That brings us to our second deceptively simple idea:
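Before moving on, it's worth seeing that '==' versus 'eq' slip in action. A minimal sketch (run it with warnings enabled, and Perl itself will flag the numeric comparison):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $string = "foo";

# Numeric comparison: Perl numifies both strings to 0, so this test
# is true for ANY pair of non-numeric strings (and warns about it).
print "== thinks they match\n" if $string == "bar";

# String comparison: compares the symbols themselves.
print "eq thinks they match\n" if $string eq "foo";
```

The first test succeeds for the wrong reason: the machine follows its rules for manipulating symbols to the letter, and it's up to the human to notice that those rules don't match the intended meaning.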
The tricky bit here is that we humans are the ones who assign the meaning. Remember: computers don't think. They have no concept of 'meaning'. They just manipulate symbols according to a predefined set of rules. But in the human frame of reference, we expect to move from one meaning to another, logically connected meaning. The problem is that, even in the human frame of reference, it's hard to pin down the concept of 'meaning'. Philosophers have been fighting over it for centuries. Personally, as a programmer meditating about programming theory, I've found it easiest to assume that the term 'meaning' describes all the logical connections we can make between one idea and another. To put that another way, we can define meaning in terms of human assumptions. We assume that a symbol with a certain meaning will behave a certain way. And that brings us to our first basic concept of programming theory per se:
Programmers define the rules by which computers manipulate symbols. We (and the users) assign meanings to the symbols, and expect the software to give us new symbols whose meanings make sense. That means the programmer's job is to track down all the connections between meanings, then model that graph of connections with a set of rules that manipulate symbols. More poetically, you could say that programmers build bridges between the human frame of reference and the machine frame of reference.

To do that job properly, you need to know several things: you need to know what kinds of assumptions people usually make, you need to know what kind of behavior you get from a given set of rules for manipulating symbols, and most of all, you need to do a whole lot of bookkeeping to make sure you haven't left anything out. That last one is important. Humans inherit a vast, richly-connected set of associations between meanings just by being human. Some of it seems to be built into the very structure of our brains. The rest of it comes from exposing a functioning nervous system to a large and complex world for a few dozen years. Computers, on the other hand, only make the connections we tell them to. That echoing lack of connections between concepts is probably the hardest thing for humans to truly understand. But you have to be aware of it if you want to be a good programmer. I use two catchphrases to remind myself of that idea:
Programming theory teaches humans how to think like computers. It teaches you how to simulate a computing machine in your head, rather than shuffling a bunch of rules together and hoping they do the right thing. (As an aside, I've spent a fair amount of time watching other programmers work, and have read lots of code. When I'm in an especially nasty mood, I'll tell you that most coders don't actually program, they just act as the selection function in a very slow genetic algorithm that evolves a set of rules around a desired set of features.)

Before we start looking at how rule-systems work, though, I want to lay out a set of very common human assumptions. They're so common, in fact, that we call them information spaces. There are seven basic kinds of information spaces, each of which revolves around a certain assumption:
A nominal space defines the concept of identity. The word 'nominal' comes from the Latin word for 'naming'. Nominal spaces let you perform two basic kinds of operation:
Things like interrupt symbols (SIGHUP, SIGQUIT, SIGKILL, etc.), file control flags (O_RDONLY, O_WRONLY, O_RDWR, O_APPEND, etc.), and similar 'valueless' constants form nominal spaces. In C, programmers use enumerations to define nominal spaces. In Perl, we generally use hash keys and ignore the associated values. When you see a snippet of code like:
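For instance (a minimal sketch; the flag names here are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hash whose values we ignore acts as an unordered set of symbols.
# The only meaningful questions are identity ("is this VERBOSE?") and
# membership ("is VERBOSE in the set?").
my %options = map { $_ => 1 } qw( VERBOSE FOLLOW_LINKS );

print "being chatty\n" if exists $options{VERBOSE};

delete $options{FOLLOW_LINKS};    # remove a member
$options{DRY_RUN} = 1;            # add a member
```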
it's a sign that you're working with a nominal space. Mathematically, nominal spaces are unordered sets, and you can perform all the usual set operations (subset, union, intersection, etc.) on them.

An Ordinal space uses the concept of identity, and adds the concept of order. Ordinal spaces let you perform the three-value 'before, after, or same' operation that Perl represents with the spaceship (<=>) operator for numbers, and with cmp for strings. The most common operation you perform on an ordinal space is sorting. Strings, especially file paths and URLs, tend to be treated as ordinal spaces.

An Interval space uses the concepts of identity and order, and adds the concept of distance. Any two adjacent elements in an interval space are the same distance apart, which means you can make meaningful comparisons between ranges in that space. That was a bit heavy on the mathematical jargon, so let's look at a couple of examples: We already know that nominal spaces don't have a concept of distance. It makes no sense to ask whether SIGHUP is closer to SIGQUIT or SIGKILL. The days of the week are an interval space, though, and it does make sense to ask whether Tuesday is closer to Monday or Saturday. Time is by far the most common interval space you're likely to work with, although geometric calculations run a close second if you happen to be working with graphics ("does this shape overlap that shape?", for instance). Perl doesn't have any built-in operators or data types that support interval spaces, and neither do most other commonly used languages. Databases, OTOH, almost always support a set of time operations, and some (like PostgreSQL) also support geometric operations.

A Rational space uses the concepts of identity, order, and distance, and adds the concept of an origin, and by extension, the number zero. Rational spaces support the mathematical operations of addition, subtraction, multiplication, and integer division.
Almost all the numerical work programmers do takes place in a rational space defined by the language.

An Irrational space supports all the concepts a rational space does, but also supports the existence of numbers that can't be expressed as the ratio of two integers, like Pi and the square root of two. Irrational spaces support operations involving fractional exponents (i.e.: taking the roots of numbers), and logarithms. Technically, computers can't handle irrational spaces at all. The decimal expansion of an irrational number is infinitely long, and a computer only has a finite amount of memory. There are also more irrational numbers than rational ones, so trying to represent them symbolically leads to problems, too. In most cases, we approximate irrational numbers with rationals, and agree to ignore the rounding error. The field of numerical mathematics deals with the headaches of trying to keep that rounding error low, for situations where you really have to care about accuracy.

A Complex space is an irrational space that includes a special symbol called the imaginary number, which is defined as the square root of negative one. A complex space lets you perform any mathematical operation at all, including those that seem really weird, like finding the square root of negative seven. Complex numbers are vectors, with a 'real' part and an 'imaginary' part. You'll probably never run into them unless you're doing advanced scientific work, and then you'll probably know enough math to write your own complex number package.

Absolute spaces are a bit strange because they define the concept of a unit by negation. An absolute space is a space where the distances have no units. Again, let's look at an example: For normal programming purposes, the Fahrenheit and Celsius temperature scales are both rational spaces.
They both have items with names (the integers and integer fractions); they both have concepts of greater than, less than, and equal; they both have an origin, and they both have a concept of unit distance between entities. The problem is that the unit distance between Fahrenheit degrees is smaller than the unit distance between Celsius degrees. If we want to convert Fahrenheit to Celsius or vice-versa, we need to multiply by a conversion factor.

Now let's look at probabilities: If I roll a fair six-sided die, I have a 50% chance of rolling a 1, 2, or 3. If I flip a fair coin, I have a 50% chance of getting heads. Those two 'fifty percent' values are equivalent, even though I'm generating them two different ways. Probabilities, and most other statistics, are absolute in the sense that I can compare any two percentages, regardless of how they were obtained. The whole point of descriptive statistics, in fact, is to characterize data in terms of numbers that don't make you worry about conversion factors. Physics, electronics, and most other scientific trades also have formulas that produce numbers without units.

The first step in thinking clearly like a programmer is to know what kind of space you're working with at any given time. The second step is to use tools that are appropriate to the space in question. Programmers who haven't figured out what kind of space they're working with can waste endless time one of two ways:
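The conversion-factor point is easy to make concrete in Perl (a sketch; the sub names are just illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Fahrenheit and Celsius are both rational spaces, but their unit
# distances differ, so moving between them takes a conversion factor
# (9/5) and an origin shift (32).
sub f_to_c { my ($f) = @_; return ($f - 32) * 5 / 9 }
sub c_to_f { my ($c) = @_; return $c * 9 / 5 + 32 }

printf "%.1f\n", c_to_f(100);    # 212.0
printf "%.1f\n", f_to_c(32);     # 0.0

# Probabilities are absolute: these two values compare directly, with
# no conversion factor, despite coming from different processes.
my $p_die  = 3 / 6;    # chance of rolling 1, 2, or 3 on a fair die
my $p_coin = 1 / 2;    # chance of heads on a fair coin
print "equivalent\n" if $p_die == $p_coin;
```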
If you want a nominal space, use hash keys. Don't use integers, and for heaven's sake don't 'add one' to go from one entity to another. Study the file control flags (which do use integers as constants), and see how the boolean AND and OR operations make a byte act like an eight-element unordered set of bits. Then go back to using hash keys for nominal spaces, and rejoice in how much easier everything is. By the same token, don't screw around trying to do basic math with your hash keys. Use numbers. That's what they're for.

Okay... that's enough for one week, in case anyone has made it this far and is still awake. Go meditate on the nature of human assumptions, and how you represent them in your code. I'll be back next week to talk about symbols and the rules that manipulate them.

In reply to MOPT-01 - assumptions and spaces by mstone

