I was reading grinder's reply to TIEOWTTI, where he said:
Another aspect of this debate is orthogonality. The word gets bandied about a lot, and I'm not sure everyone agrees on what it means, I'm not sure I even understand it myself. But what it means for me is that something you pick up in one corner of the language can successfully be applied to another part of the language... and it will work! Perl excels at this sort of thing, like the notion of the last expression in a block is returned, and lazy evaluation and what not.
This got me thinking, because if I had been asked what orthogonality is, I would have used a mathematically based description, probably something like:
In math it means that two lines are perpendicular to each other, which means that if those lines are seen as representing variables then those variables are independent. In computing this idea of independence is extended into the abstract realm and means that two ideas or solutions are independent of each other. This sense is usually used when a programmer wants to indicate that some algorithm or solution is independent of another, to the extent that changes to one should in no way affect the other.
But as grinder said, I'm not so sure if this is a common understanding, or even that I have it correct.
What do the monks at large think orthogonality is, and how would you explain or define it if you needed to?
Yves / DeMerphq
---
Writing a good benchmark isn't as easy as it might look.
Re: (OT) On Orthogonality
by stefp (Vicar) on Apr 16, 2002 at 12:38 UTC
I should cite the insufficiently known foundational text
of the Perl credo: Natural Language Principles in Perl.
Larry talks there about indeterminate dimensionality, diagonality, and
the fractal journey:
Most problems, including linguistics problems, are a matter of ``getting from here to there'', and the geography in-between has a heavy influence on which solutions are practical. Problems tend to be solved at several levels. A typical journey might involve your legs, your car, an escalator, a moving sidewalk, a jet, maybe some more moving sidewalks or a tram, another jet, a taxi, and an elevator. At each of these levels, there aren't many ``right angles'', and the whole thing is a bit fractal in nature. In terms of language, you say something that gets close to what you want to say, and then you start refining it around the edges, just as you would first plan your itinerary between major airports, and only later worry about how to get to and from the airport.
Another way to see that is to talk about granularity.
Perl supports everything from Kleenex programs (small grain) to complex OO
(big grain).
You don't use the same style for a throw-away program as for a long-lived one.
Also, a language is a complex entity where most things are tied
together, so there is little point in talking about orthogonality.
But the designer of a language must avoid needless inconsistencies
while providing many ways to do the same thing.
Also, the problem is not so much to design the perfect language
as to leave room for growth in unforeseen dimensions.
Perl did pretty well only up until perl5, because references and OO
did not integrate well into the existing language, which had to stay backward-compatible.
So came cleaned-up scripting languages like Python and Ruby.
But perl6 will be more than a cleaned-up perl5.
--
stefp -- check out TeXmacs
wiki
Re: (OT) On Orthogonality
by FoxtrotUniform (Prior) on Apr 16, 2002 at 15:42 UTC
All right! This is one of my favourite topics -- thanks
demerphq for giving me the chance to pontificate a bit.
In some sense, I'd say that orthogonality, in software
engineering terms, is very close to the mathematical
definition: a software component is orthogonal to a set of
(other) components when it does something that none of
the others does, without duplicating any functionality
already present in the set. (If you want to get really
geeky, its "capability vector" is normal to that of each
other component in the set.)
I think that's a good starting place, but it's too
rigid. More pragmatically, two components are orthogonal
when they perform different (but possibly related, and
possibly overlapping) tasks and have no dependencies on each
other (you can call foo() before bar(),
or bar() before foo(), or just call
bar() and not worry about foo()). So my
orthogonal set now consists of components that do one
well-defined thing, do it well, and have no external
dependencies. The point here is that you can pick the
tool that you need without having to pick any others.
A good indicator for orthogonal code is when you're
writing a program, think "I have code to do that", find
the existing code, and start using it in your new program
without modifying it. If you have to tweak the interface
a bit, or include another function that doesn't solve your
problem, or otherwise bring extra crap to the table, your
code isn't orthogonal.
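To make that concrete, here is a minimal sketch (the commify sub and the numbers are my own invention, not anything from the thread) of the kind of reuse I mean: a self-contained helper with no external dependencies drops into a new program unchanged.
# Written for some earlier program: takes its input as an argument,
# touches no globals, and depends on nothing else in that program.
sub commify {
    my ($n) = @_;
    1 while $n =~ s/^(\d+)(\d{3})/$1,$2/;
    return $n;
}
# New program: "I have code to do that" -- call it as-is, no tweaking.
print commify(1234567), "\n";    # prints 1,234,567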
Update: I'm not tilly. But I wish him luck.
:-)
--
Good luck to you, tilly
:wq
I agree with FoxtrotUniform; I also think of orthogonality mostly in terms of the mathematical definition. But I want to expand on those orthogonal vectors. Take a counterexample: say you have two features that interact. Like perl's for loop and the magical while diamond while(<>). Now try to lay those two features out in a plane at right angles to each other, so that they become the x and y axes of a two-dimensional space. Notice how if you use one feature, you can't use the other feature in exactly the same way. You want to use the default $_ variable from the while loop, but oops -- you're inside the for loop and it aliased $_ to something else. You can still use the full power of both constructs, but you have to modify your use of one to accommodate the other. Going back to the axes, moving on one axis also moves you on the other.
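A tiny sketch of that interaction (the loop bodies here are my own illustration, not part of the original point): inside the foreach, $_ is aliased to the foreach's own list, so the line that while(<>) just read is temporarily out of reach.
while (<>) {            # reads the next input line into $_
    foreach (1 .. 3) {  # now $_ is aliased to 1, then 2, then 3
        # the input line is hidden here; $_ belongs to the foreach
    }
    print;              # $_ is restored once the foreach finishes
}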
It's like describing colors using "redness" and "maroonness". If you increase the maroonness of a color, you affect the redness. You can still get any color you want, but you have to adjust how you use both features at the same time. Visually, they would be two axes that have something other than 90 degrees between them.
One thing I'm noticing from others' comments in this thread is that people seem to think of perl as orthogonal. Bull! Y'all just want to say that because you've been trained that orthogonality is always good, and therefore perl must be orthogonal. Perl is one of the least orthogonal programming languages I know. It's still far more orthogonal than any human language I know, which raises an interesting question: why are human languages, which are much easier to mutate and have much lower backwards-compatibility requirements, not mathematically beautiful and orthogonal?
I venture that orthogonality of expression is unnatural to us meat brains. The mathematical advantages are real enough that we make programming languages as orthogonal as we can tolerate, but those languages that sacrifice intuitive expressibility at the altar of orthogonality are the niche languages that are largely ignored, their fanatical followers notwithstanding.
I'm not saying orthogonality is bad. It's not; it's absolutely necessary for large-scale development, and it's the best way of reducing the raw amount of stuff you have to keep in your head at one time to use a language. (Think of the English grammatical rules for past participles -- is it "have drank"? "have drunk"? "drinken"?) But we don't think of things as tasting 20% salty, 8% bitter, 32% sweet, etc. We just think it tastes like chicken.
Update: Oh, right. You're not tilly, are you? All the colors and blinking lights and voices in my head confused me.
If I write something like this:
while (<>) {
    my $hexstr = "";
    foreach (split //) {              # each character of the line
        $hexstr .= sprintf "%02X", ord;
    }
    print $hexstr;
}
Then the loops are not orthogonal. I cannot make an arbitrary change to either without considering the effect on the other (at the bare minimum the inside loop affects the outside by changing $_, and obviously the inside is dependent on the <> operator).
But if I rewrite it like this:
sub as_hexstr {
    local $_ = shift;
    my $hexstr = "";
    foreach (split //) {              # each character of the argument
        $hexstr .= sprintf "%02X", ord;
    }
    return $hexstr;
}
while (<>) {
    print as_hexstr($_);
}
Now the loops _are_ orthogonal. I can make arbitrary changes to either without affecting the other in the slightest. So to me a control structure is never inherently orthogonal (or not) to another control structure; it is only when the two structures are _used_ to achieve a goal that their orthogonality can be discussed with any meaning.
Perhaps I'm missing something here; if so, hopefully you or one of the other monks who has thought about the orthogonality of languages can straighten me out....
Yves / DeMerphq
---
Writing a good benchmark isn't as easy as it might look.
Re: (OT) On Orthogonality
by erikharrison (Deacon) on Apr 16, 2002 at 12:28 UTC
When I am talking about a programming language, what I mean by orthogonal is something close to "tiny". C, for example, is really an itty bitty language. This doesn't mean that you are rigidly set in the number of ways you can do something - it means that you are given the most basic tools and rudimentary access to things; the compiler / interpreter assumes nothing. You are given tight control over every aspect of the problem - and are expected to maintain all of those little aspects.
An orthogonal language doesn't "double up" - meaning that a tool does one thing, well. You don't get eval, AUTOLOAD, and the like. You get variables with strong typing, some mathematical operators, and strings that are really arrays of characters, because the compiler doesn't have a "string" type, only a "char" type. That is the essence of orthogonality - nothing but what you need, or what can build what you need.
Cheers,
Erik
You could also have orthogonality by having a string type and no char type. ;-) I agree that a smaller language is more likely to be orthogonal and vice versa. There's a high degree of correlation there, but it's not a rule without exceptions.
Orthogonality, as I've seen it used, is often very generalized compared to the mathematical definition. It doesn't only mean that the ways to do different things are independent of each other's results (which would be orthogonality specifically of semantics), but also that the language syntax gives you building blocks which are similar in the amount of work they do per block, and that there don't tend to be any single building blocks (constructs or keywords) that duplicate the work of a set of smaller building blocks combined in a specific way.
I'd say that Perl is fairly orthogonal semantically, meaning that one operation doesn't generally affect the operations before or after it. The use of some default variables limits this a bit, but not much. Syntactically, I'd say that some orthogonality has been sacrificed for efficiency in both programmer time and running time. There are some very big, powerful things built in which other languages would leave to be built up, while there are other, smaller things which allow those big solutions to be built up another way.
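For instance (a small example of my own, not the poster's): several builtins quietly default to $_, so operations that look unrelated end up coupled through that one variable.
while (<DATA>) {         # sets $_ to the current line
    chomp;               # chomp defaults to $_
    my @fields = split;  # split with no args splits $_ on whitespace
    print "@fields\n";
}
__DATA__
one   two   three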
So, this is where I think there's much debate over what makes a language orthogonal. There seem to me to be two different ways in which a language can be called orthogonal, with some languages being further along one curve than along the other. Personally, I think orthogonality in a language is neither necessarily good nor bad. Both types affect the efficiency of the language in certain ways. Both affect the programmer's expectations and the fulfillment of those expectations in some way. What's important is that there's a balance struck among programmer time, system time, ease of beginners learning the language, ease of intermediate users of the language advancing their knowledge without too much surprise, and the other factors that make a language useful. As we all know, if a language is perfectly designed to follow one mantra, then only the most ardent followers of that mantra will use the language. As we also know, if a language makes a few compromises which make it useful to a general population of programmers for general tasks, then it will get wide use and perhaps even benefit from further refinement and support. Ever notice there are no domains called oberonmonks.org or srmonks.org?
Re: (OT) On Orthogonality
by Dog and Pony (Priest) on Apr 16, 2002 at 12:45 UTC
Well, to me it has lots in common with the mathematical definition. Orthogonality in a project, or a program, means to me parts, modules, subroutines or whatever, on any scale, that do not interact with or affect any other such part - each is self-contained. So you can change whatever you like in one, or just remove it and replace it with something new, and everything else will just keep on truckin'.
By having isolated parts of the world like this, your part, or the part for the day, is a lot easier to work on. It has some concepts in common with OOP, and especially black box objects, but it isn't necessarily the same thing. It can be, though. :)
Worth noting is that I don't necessarily actually work this way, but this is how I read the word. :) (Note to self: Read the Pragmatic Programmer again very soon.)
Summary: Orthogonality == independence.
You have moved into a dark place.
It is pitch black. You are likely to be eaten by a grue.
I must bring up again the concept of granularity. You are talking
about orthogonality of big components that you can treat as black
box objects that you access through methods. Signatures allow you to
distinguish methods with the same name.
But if you go back to the small building blocks of a language,
the black box approach does not work. The shellish approach
(used by old Bourne shells, tcl) is to treat the language as an empty
shell that provides little more than flow control.
The cost is that you must fork processes. Also there is the problem
of multiple levels of interpretation, aggravated by the absence of
powerful quoting mechanisms like qq||.
Languages for compiled programs also delegate to black box libraries. But a "real" language like perl tries to integrate
common patterns, like the use of hashes, into the syntax.
The problem is to find enough "dimensions" in the syntax to pack
enough of these patterns in a readable way.
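A small illustration of that quoting point (my own example, not stefp's): qq|| interpolates like double quotes but lets you pick a delimiter that won't collide with the characters inside, which is exactly what becomes painful when a string has to survive several levels of interpretation in a shell.
my $name = 'world';
print qq|She said "hello, $name" and walked off.\n|;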
--
stefp -- check out TeXmacs
wiki
I am ashamed to reveal this, but I have no idea what you are talking about.
You are talking about orthogonality of big components that you can treat as black box objects that you access through methods.
No, I don't. I say that the concepts are similar at times. And I do not limit myself to big components; I even mention subroutines (which to me are usually small).
The rest... I am sure that is correct in itself. I don't really see what it has to do with orthogonality, or how that affects the validity of black box designs. Orthogonality to me has nothing to do with how a language is to be constructed, or how it is constructed. It is a way to design programs and projects. Although *writing* an interpreter for a language can of course benefit from this principle.
If I am simply stupid here, feel free to explain and elaborate, so even I can understand. :) And so I know if I should give you ++ or -- *grin*.
You have moved into a dark place.
It is pitch black. You are likely to be eaten by a grue.