http://www.perlmonks.org?node_id=116807


in reply to Re:{2} Maintainable code is the best code -- principal components
in thread Maintainable code is the best code

I think that principal components analysis is the wrong way to think about this problem.

First of all, the analogy does not really carry. Principal components analysis depends on having some metric for how "similar" two vectors are, one which corresponds to the geometric "dot product". While many real-world situations fit this, and in many more you can fairly harmlessly just make one up, I don't think that code fits this description very well.

But secondly, even if the analogy did carry, the basic problem is different. Principal components analysis is about taking a complex multi-dimensional data set and summarizing most of the information with a small number of numbers. The remaining information is usually considered to be "noise" or otherwise irrelevant. But a program has to remain a full description.
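
To make that concrete, here is a minimal sketch of what PCA actually computes, in plain Perl, with a made-up 2-D data set (nothing from this thread). Nearly all of the variance collapses onto the first component, and the second is what you would throw away as "noise":

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy 2-D data: points lying near the line y = x, plus a little noise.
    my @pts = ([1, 1.1], [2, 1.9], [3, 3.2], [4, 3.8], [5, 5.1]);

    # Coordinate means.
    my ($mx, $my) = (0, 0);
    for my $p (@pts) { $mx += $p->[0]; $my += $p->[1]; }
    $mx /= @pts;
    $my /= @pts;

    # Entries of the 2x2 covariance matrix.
    my ($sxx, $sxy, $syy) = (0, 0, 0);
    for my $p (@pts) {
        my ($dx, $dy) = ($p->[0] - $mx, $p->[1] - $my);
        $sxx += $dx * $dx;
        $sxy += $dx * $dy;
        $syy += $dy * $dy;
    }
    $_ /= @pts - 1 for ($sxx, $sxy, $syy);

    # Eigenvalues of [[sxx, sxy], [sxy, syy]] in closed form.
    my $tr   = $sxx + $syy;
    my $det  = $sxx * $syy - $sxy * $sxy;
    my $disc = sqrt($tr * $tr - 4 * $det);
    my ($l1, $l2) = (($tr + $disc) / 2, ($tr - $disc) / 2);

    printf "first component carries %.1f%% of the variance\n",
        100 * $l1 / ($l1 + $l2);

A program offers no such luxury: you cannot drop the small components and still have a working program.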

Instead, I think a good place to start thinking about this is Larry Wall's comment about Huffman coding in Apocalypse 3. That is an extremely important comment. As I indicated in Re (tilly) 3: Looking backwards to GO forwards, there is a connection between understanding something well and having a concise mental model of it. And source-code is just a perfectly detailed mental model of how the program works, laid down in text.
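
For those who haven't read it: Huffman coding gives the most frequent symbols the shortest codes. A minimal sketch in Perl (the keyword frequencies here are invented purely for illustration) shows the effect - the symbol you reach for most often earns the shortest spelling:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Invented frequencies for a few Perl keywords.
    my %freq = (my => 40, if => 30, while => 10, local => 5, bless => 2);

    # A node is [weight, symbol] for a leaf, or [weight, [left, right]].
    my @nodes = map { [$freq{$_}, $_] } keys %freq;

    # Repeatedly merge the two lightest nodes (a naive re-sort, not a heap).
    while (@nodes > 1) {
        @nodes = sort { $a->[0] <=> $b->[0] } @nodes;
        my ($lo, $hi) = splice(@nodes, 0, 2);
        push @nodes, [$lo->[0] + $hi->[0], [$lo, $hi]];
    }

    # Walk the tree, collecting a 0/1 string for each leaf.
    my %code;
    sub walk {
        my ($node, $bits) = @_;
        return $code{$node->[1]} = $bits unless ref $node->[1];
        walk($node->[1][0], $bits . '0');
        walk($node->[1][1], $bits . '1');
    }
    walk($nodes[0], '');

    printf "%-6s freq %2d  code %s\n", $_, $freq{$_}, $code{$_}
        for sort { $freq{$b} <=> $freq{$a} } keys %freq;

Larry's point is this idea applied to language design: what gets said often should be short to say.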

As observing the results of Perl golf will show you, shortness is not the only consideration for well-laid-out programs. However, it is an important one.

So if laying out a program for conciseness matters, what does that tell us? Well, basic information theory says a lot. In information theory, information is stated in terms of what could be said. The information in a signal is measured by how much it specifies the overall message, that is, how much it cuts down the problem space of what you could be saying. This is a definition that depends more on what you could be saying than on what you are saying. Anyways, information theory tells us that at perfect compression, every bit will carry just as much information about the overall message as any other bit. From a human point of view, some of those bits carry more important information. (The average color of a picture of a winter scene has more visual impact than the placement of the edge of a snowflake.) But the amount of information is evenly distributed.
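
A back-of-the-envelope way to see this is Shannon entropy, which measures bits of information per character. A small Perl sketch (the strings are made up for illustration): repetitive text carries far less information per character than varied text.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Shannon entropy in bits per character: H = -sum p(c) * log2 p(c).
    sub entropy {
        my ($text) = @_;
        my %count;
        $count{$_}++ for split //, $text;
        my $h = 0;
        for my $n (values %count) {
            my $p = $n / length $text;
            $h -= $p * log($p) / log(2);
        }
        return $h;
    }

    # A repetitive string packs little information per character.
    printf "%.2f bits/char\n", entropy('aaaaaaaaab');
    printf "%.2f bits/char\n", entropy('the quick brown fox');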

And so it is with programming. Well-written source-code is a textual representation of a model that is good for thinking about the problem. It will therefore be fairly efficient in its representation (although the text will be inefficient in ways that reduce the effort a human needs to understand the code). Being efficient, its functions will each convey a similar amount of surprise, and the total surprise per block is likely to be fairly large.

In short, there will be good compression in the following sense: a fixed amount of effort spent by a good programmer in trying to master the code should result in a relatively large portion of the system being understood. This is far from a compact textual representation. For instance, the human eye finds overall shapes easy to follow, so it is good to spend huge amounts of text on allowing instant pattern recognition of the overall code structure. (What portion of your source-code is taken up with spaces whose purpose is to keep a consistent indentation/brace style?)

Of course, though, some of that code will be high-order design, and some will be minor details. In terms of how much information is passed, they may be similar. But the importance differs...

Atoms as a concept for programming analysis
by dragonchild (Archbishop) on Oct 05, 2001 at 01:41 UTC
    Actually, I think that principal components is a horrible way of looking at programming. Programming is, essentially, the art of instructing a being as to what to do. This being has an IQ of 0, but perfect recall, and will do actions over and over until told to stop. The being does no analysis of what it's told to do.

    As for a human reader, the analysis is focused on atoms, which can be viewed as roughly analogous to principal components - but they're not the same thing.

    The first principal component is meant to convey the most information about the data space/solution space. The next will convey the most of whatever the first couldn't convey, and so on.

    In programming, the goal is for each atom (or component) to convey only as much information as is necessary for it to be a meaningful atom. Thus, the programmer builds up larger atoms from smaller atoms. The goal is to eventually reach the 'topmost' structure, which would be the main() function in C, for example. That function is built entirely from language syntax and calls to other atoms, whose names should reflect what that syntax or atom is doing. Thus, we don't have if doing what while would do, and vice versa.
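
    A tiny Perl sketch of the same idea (the log-scanning task and all the names are invented, just to show the shape): the topmost atom reads as a sentence built from smaller named atoms, each conveying just enough to be meaningful on its own.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Topmost atom: built entirely from calls to smaller named atoms.
        sub process_logfile {
            my ($path) = @_;
            my @lines  = read_lines($path);
            my @errors = grep { is_error_line($_) } @lines;
            report_errors(@errors);
        }

        # Smaller atoms, each doing one comprehensible thing.
        sub read_lines {
            my ($path) = @_;
            open my $fh, '<', $path or die "Can't open $path: $!";
            return <$fh>;
        }

        sub is_error_line { $_[0] =~ /\bERROR\b/ }

        sub report_errors { print "found ", scalar(@_), " error(s)\n" }

        process_logfile($ARGV[0] || 'test.log');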

    In data analysis, you want to look at the smallest number of things that give you the largest amount of knowledge of your dataset. But, you're not analyzing data. You're reading algorithms, which do not compose a dataset in the same way that observed waveforms would. To understand an algorithm, you have to understand its component parts, or atoms.

    Think of it this way - when you explain a task to someone else, say a child, you break it down into smaller tasks. You keep doing so until each task is comprehensible by the recipient. At that point, you have transmitted atoms. At no point have you attempted to convey as much information as possible in one task. Each task is of similar complexity, or contains similar amounts of information.

    ------
    We are the carpenters and bricklayers of the Information Age.

    Don't go borrowing trouble. For programmers, this means "Worry only about what you need to implement."

      I like the idea of atoms in that it captures the point that functions should be small and simple.

      But I really think the key is that good source code reflects a good conceptual model, one which compresses well. Among other details, that points out not only why you factor code, but also why you avoid repeating it.
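
      A made-up fragment shows why repetition hurts the compression (the data and names are invented):

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Invented data, purely for illustration.
          my @servers = ({ name => 'web1', status => 'up' });
          my @routers = ({ name => 'rtr1', status => 'down' });

          # The status line is spelled once, in a named atom ...
          sub print_status { printf "%-20s %s\n", $_->{name}, $_->{status} for @_ }

          # ... instead of pasting the same printf into both loops,
          # which would add text without adding information.
          print_status(@servers);
          print_status(@routers);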

        I'm not sure why the 'but' ... an atom isn't going to be repeated because it already exists. You're going to reuse an atom every time you can. (But, then again, I'm assuming logic will be used here. *shrugs*)


Re:{4} Maintainable code is the best code -- principal components
by jeroenes (Priest) on Oct 05, 2001 at 10:30 UTC
    I see what you mean.

    Want to clarify a bit, though, as I didn't say that principal components analysis was a good analogy. I rather said that coding should accomplish the opposite, that is, spreading the information across the functions, dividing it equally among them.

    However stated, the analogy goes wrong because with principal components we talk about orthogonal vectors in space, while with programming we have hierarchical functions. These each create a subspace of their own, and you just cannot do a PCA across different subspaces. Is that more or less what you meant, tilly?


    /me notes with a smile on his face how everyone approaches PCA from his own angle.... Masem from the chemical spectra point of view, tilly from an encoding point of view, while I think more in terms of pattern deviation schemes.