Paul Graham, of Lisp fame, has a
fascinating article about technology choices. While the article targets
Lisp programmers, many of the points are (as usual) very applicable to
Perl.
He points out what should be obvious, but still needs to be clarified for
many: all languages are not created equal. We love Perl, so we know that's
true :) He says some nice things about Perl. In fact, many of his articles
mention Perl. While he is definitely a Lisp advocate, I think there is no
question that he considers Perl a preferable alternative to bloated, static
languages. He also takes some time to swipe at Design Patterns (he states that
patterns in his programs suggest to him that he is using abstractions that
aren't powerful enough).
What was particularly interesting to me was his side-by-side language
comparisons of generating an accumulator. Specifically, using a closure to do
so. He listed Lisp first, but I'll start with Perl:
sub foo {
    my ($n) = @_;
    sub { $n += shift };
}
And in Lisp:
(defun foo (n)
  (lambda (i) (incf n i)))
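Graham's article also walks through a Python version; for comparison, here is a sketch of the same accumulator in present-day Python 3 using `nonlocal` (my translation, not Graham's code; `nonlocal` did not exist when this was written, which is exactly the incomplete lexical support he was criticizing):

```python
def foo(n):
    # Return a closure over n, just like the Perl and Lisp versions.
    def accumulate(i):
        nonlocal n  # without this, Python can read n but not rebind it
        n += i
        return n
    return accumulate

acc = foo(10)
print(acc(5))  # 15
print(acc(5))  # 20 -- the closure keeps its own running total
```

Each call to `foo` produces an independent accumulator, which is the whole point of closing over a lexical variable rather than using a global.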
Rather than just drool-n-paste all of his code samples, you should go out
and read his article. His Python examples show that this is a bit clumsy in
the language due to Python's incomplete support of lexical variables (to be
fair, I don't know enough Python to comment on that). What was really
fascinating, though, was his pointing out that this particular problem cannot
be easily solved in many, perhaps most, languages. Most languages won't allow
you to return a function. Further, if the language doesn't allow lexical
scope, then returning a function wouldn't do you any good.
He did show an example of how to accomplish similar functionality in Java,
but Java's strong typing gets in the way. You would have to overload the
method with a separate method for every data type! (take that, you strong typing advocates :)
This is where I think it's interesting for Perl hackers. Yes, our language
doesn't have a large corporation backing it. Our language doesn't necessarily
have the prestige of many others. But when push comes to shove, if you need
something done, Perl is often a fantastic choice. Paul Graham points out that
if another language requires 3 times as much code, coding a similar feature
will take three times as long. Actually, I think he's mistaken here.
Development time, IMHO, does not scale linearly with the size of the program.
I don't have any statistics handy (anyone, anyone, Bueller?), so this is only a
suspicion, but the longer your program, the more bugs you're going to have. This will slow development time even more. When you're competing with
someone else and they're using bloated technologies, this is another example of
a clear potential win for Perl.
Let's assume that you are company A, using Perl, and company B uses
another language whose programs typically are three times as long as Perl
programs. Let's also assume, generously, that this means that development time
for them is three times longer. They've just spent six months adding a new
feature and touting it to the press. You've decided that you want it, you only
take two months to add it. You spend six months adding a new feature. If they
decide they want it, it will take them a year and a half to get it. By that
time, you've left them in the dust.
So, the next time some PHB says "use Foo to program in", ask him or her what
would be done with the extra budget money if you could use Perl.
Cheers,
Ovid
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.
Re: Productivity and Perl
by vladb (Vicar) on Jun 01, 2002 at 21:30 UTC
++ An interesting post and equally interesting article, Ovid.
I certainly agree that Perl is one of only a few excellent tools for productivity. There are a lot of applications out there that could be done more easily in Perl and in a shorter time span. Take, for example, all the data munching jobs that Perl can perform. Yes, I also love Perl for its elegant style. Some manage to turn their code into a pure noise nightmare, but in my view, if approached carefully and with due respect, Perl can turn into a very powerful, yet pleasing (to the eye ;) and easy-to-use tool.
"Paul Graham points out that if another language requires 3 times as much code, coding a similar feature will take three times as long. Actually, I think he's mistaken here. Development time, IMHO, does not scale linearly with the size of the program. "
Hmmm, this makes me think hard about the credibility of that individual. His statement is just plain wrong! Siding with you, I too believe that there's more of an exponential dependency there. The more lines of code I have to write, the more I'll have to debug, fix, and maintain. Also, it would be harder to add new functionality if the code base is huge. In numerous instances, a piece of Perl code as much as 10 times smaller can perform exactly the same duties as code in Java or C/C++. I'm not making this up; I've seen it with my own pair of eyes ;-).
UPDATE: hsmyers hmm, certainly he's not 'completely' wrong. However, as Ovid already mentioned, there's more to it than just a linear dependency. My point is only that the time it takes to write N lines of code is N to some power (greater than 1 at least ;). Although I don't have hard figures to support this claim (argh, it still squares with common logic), I have spent a number of years coding and this is what my experience tells me. As to whom to blame (that may be too harsh a word...), I now believe Fred, and Paul only to the extent that he 'quoted' a poor assumption. In any event, everyone is entitled to his/her own opinion. Thus I conclude my argument ;)
If your program would be three times as long in another language, it will take three times as long to write-- and you can't get around this
Again, my point is the ratio shouldn't be 3/3 ;o)
_____________________
$"=q;grep;;$,=q"grep";for(`find . -name ".saves*~"`){s;$/;;;/(.*-(\d+)
+-.*)$/;
$_=["ps -e -o pid | "," $2 | "," -v "," "];`@$_`?{print"+ $1"}:{print"
+- $1"}&&`rm $1`;
print$\;}
Re: Productivity and Perl
by educated_foo (Vicar) on Jun 02, 2002 at 00:48 UTC
I read this article, and like anything written by any language's cheerleader, found it annoyingly unfair. Regarding two points that you summarize:
Most languages won't allow
you to return a function. Further, if the language doesn't allow lexical
scope, then a returning a function wouldn't do you any good.
This is completely unfair -- objects are ugly in Lisp, so you encapsulate your data with closures. Objects are easy in Java, so you bind data to functions with anonymous classes. Yes it's more verbose, but so's everything in Java. The point is that if you can't use closures, it's only going to frustrate you to insist on using them to solve all your problems.
I think the strong typing is mostly a red herring, too. Java's lack of generics (a la C++ templates) does pose a problem, but the static typing seems just fine. Either you want to add related things, in which case they should probably implement the same interface (e.g. "Accumulable"), or you're trying to add unrelated things, in which case you're doomed anyways.
Paul Graham points out that
if another language requires 3 times as much code, coding a similar feature
will take three times as long.
I wish I were smart enough for the rate of my coding to be limited only by how fast I could type, but sadly that doesn't seem to be the case. Code that is denser and more intricate just takes more time (per line, not per unit of functionality) to produce. Take these two examples:
(define (foo x)
(call/cc (lambda (cc)
(/ 7 (if (= x 0) (cc 'bad-value) x)))))
versus
int divideSevenBy(int x)
{
    if (x == 0) {
        throw new DivideByZeroException("bad-value");
    }
    return 7 / x;
}
While the first is three lines and the second is six, if anything, the second took less time to write, not more. Having to type more characters does take more time, but even in this small example it's not the sole factor, and in a larger project, actual typing time is certainly the least of my worries (it can easily be dwarfed by "time spent figuring out what went wrong in nested macro expansions" ;).
Don't get me wrong -- the article has plenty of good things to say. But arguments like "my language is best because I can't make yours do things my language's way" deserve a quick trip to /dev/null.
/s
educated_foo wrote: arguments like "my language is best because I can't make yours do things my language's way" deserve a quick trip to /dev/null.
Yes and no. Any time I see an absolute like "any", "none", "everybody", etc., I tend to be suspicious. However, this doesn't mean the argument is completely invalid, just suspect. I think a point that Paul Graham would agree with is that a given tool is likely a superior choice for a problem if it solves that problem more succinctly than the available alternatives. Let's consider a very simplistic example.
Imagine that I want to know what a given person might steal. I might assume that they will steal stuff given the following conditions:
- That person is a thief.
- The stuff is valuable.
- The stuff is owned by someone (how do you steal something if it's not owned by anyone?).
- And the person doesn't know the person who owns the stuff they might steal.
If I were programming in Prolog, I might have the following program:
steals(PERP, STUFF) :-
thief(PERP),
valuable(STUFF),
owns(VICTIM,STUFF),
not(knows(PERP,VICTIM)).
thief(badguy).
valuable(gold).
valuable(rubies).
owns(merlyn,gold).
owns(ovid,rubies).
knows(badguy,merlyn).
It's fairly easy to read, once you know Prolog. :- is read as "if" and a comma is read as "and".
I can then ask what a given person might steal:
?- steals(badguy,X).
X = rubies
Yes
?- steals(merlyn,X).
No
So, we can see from this example that the badguy might steal rubies (not gold, since the badguy knows merlyn, who owns the gold) and that merlyn will steal nothing (given the information available). Note that at no point did we state that the badguy would actually steal rubies. The program was merely able to infer this from the available information. Now, try to program that in Perl, Java, C, etc. You can do it, but it's not going to be nearly as easy or efficient as programming in Prolog.
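To make the contrast concrete, here is a rough brute-force sketch in Python (my translation, not from the thread): the facts become data structures and the rule becomes a filter, and all the searching that Prolog does implicitly has to be spelled out by hand:

```python
# Facts, restated as plain data (names match the Prolog program).
thieves  = {"badguy"}
valuable = {"gold", "rubies"}
owns     = {"merlyn": "gold", "ovid": "rubies"}  # owner -> stuff
knows    = {("badguy", "merlyn")}

def steals(perp):
    """Hand-coded version of the Prolog rule: perp is a thief, the
    stuff is valuable, someone owns it, and the perp doesn't know
    the owner."""
    return sorted(stuff for victim, stuff in owns.items()
                  if perp in thieves
                  and stuff in valuable
                  and (perp, victim) not in knows)

# The badguy knows merlyn, so gold is excluded and only rubies remain.
print(steals("badguy"))  # ['rubies']
print(steals("merlyn"))  # []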
From this, I think it is safe to assume that an appropriate lesson should be "my programming language is a good choice for a given problem because I can use the tools it provides to solve the problem faster and easier than most other choices". Thus, we can take your quote and go further and say "my language is a superior choice for a particular type of problem because I can't make yours do things my language's way". Then it comes down to problem spaces and the tools that are appropriate for them. Javascript is often the best choice for client-side Web form validation because it's so widely supported. Java is often the best choice for Web applets for the same reason. Want to write a device driver? Put down Perl and pick up C.
I think you would agree with that conclusion as you wrote "objects are ugly in Lisp, so you encapsulate your data with closures. Objects are easy in Java, so you bind data to functions with anonymous classes.". While I could be misreading you, I took that to mean that different languages have different ways of naturally arriving at solutions. This implies to me that if a given language's approaches are more suitable for a given problem, then that language is more suitable for said problem. Rebuttals welcome :)
The danger, of course, lies in believing that "foo" is an appropriate solution for every problem. If we're unwilling to take the time to learn what else is out there, we naturally limit ourselves in how we can conceive of solutions to problems. However, I doubt that Paul Graham believes that Lisp is better for everything. Of course, just as we tend to write for a Perl audience, he writes for a Lisp audience and this makes it easy to misread what is said.
Cheers,
Ovid
One thing I think it would be fair to note is the concept of "problem space". There are many areas for which Perl would be a stupid choice. There are many areas for which Perl would be a good choice, yet other languages would be an even better choice. I suspect that most of what Perl programs typically do right now might be better served by the cleaner syntax of Python, for example.
Much of the strength in Perl lies in learning the Byzantine modules, scoping issues, context rules, etc. If you're not willing to make that commitment to the language, other choices are superior, period. However, since this is a Perl forum, it doesn't serve me well to bash Perl, which I still love, warts and all.
Cheers,
Ovid
Actually the code-length point isn't Paul Graham's. It is Fred Brooks', Paul merely happened to agree with it.
The point appears in The Mythical Man-Month and was based on audits that showed that programmers in different languages were getting the same amount of code written per day in the end, even though that code did very different amounts of stuff.
The cause appeared to be that higher-level languages offer higher-level concepts, and as a result the programmers mentally "chunk" at a higher level at first, and then find it easier to debug later because there is less code to understand. For a trivial example, how much do Perl's automatic memory management and built-in hashes reduce the amount of code needed when going from C to Perl, and speed up the programmer?
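As a toy illustration of that chunking (my example, not Brooks'): tallying word frequencies requires hand-rolled hashing, bucketing, and memory management in C, but collapses to a couple of lines in any language with built-in hashes:

```python
from collections import Counter

def word_counts(text):
    # The built-in hash table does the bucketing, resizing, and
    # memory management that a C version would spell out by hand.
    return dict(Counter(text.lower().split()))

print(word_counts("the cat and the hat"))
```

The programmer thinks "count the words" as a single chunk, rather than as a sequence of allocation and pointer operations, which is exactly the effect Brooks was describing.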
While it is easy to find plenty of counter-examples (such as your Scheme continuation versus a loop), as a rule of thumb for real projects it really does seem to work.
Actually, your 2 examples are misleading. Common Lisp (which is what Graham usually writes about) does have catch and throw. On the other hand, it doesn't have <samp>call-with-current-continuation</samp>; that's Scheme.
Of course, even with Scheme, you wouldn't be using <samp>call/cc</samp> in your code directly; you'd be using the famed macros to have something nicer.
You caught me -- my "lisp" experience such as it is consists of Scheme and Emacs Lisp. My example may have been inaccurate, but I wouldn't say it is misleading. The point is just that Java expresses the same things in more words (or lines). While writing a function in either language, you spend some time deciding to use a particular approach, then some more typing it in. I'd say the "thinking about it" part takes about the same amount of time for similar things. Then you go to type them in, and the Java one has more lines, but it's probably faster per-line to write. Last time I programmed in Java, I even had a macro for
} catch (IOException e) {
// Loathe Java
System.out.println("IO error: " + e);
}
Which gave me 4 lines (ok, 3 lines) almost for free. Certainly, these 4 lines are much more quickly generated than 4 lines of quasiquoted macro glop in Lisp.
/s
(MeowChow - Rant ON) Re: Productivity and Perl
by MeowChow (Vicar) on Jun 02, 2002 at 03:02 UTC
<rant style="Dennis Miller">
Now, I don't want to get off on a rant here, but if you intend to prove that your pet language is superior to mine for real-world work, at least back it up with a real-world example of a problem that your language solves more naturally and elegantly. And that doesn't mean you should spin some apocryphal anecdote about how Widgets Incorporated turned to your pet language to turn their business around and turned a quick buck while their competition are now taking turns at the unemployment counter turnstile. Sure, this sort of anti-FUD may collide with the particles of real FUD bouncing around a PHB's hollow cranium and help to illuminate that otherwise dark and vacuous cavity, but I'm a programmer -- I want to see code.
That said, the code example Graham offered was about as useful as the buggy four-page "hello world" program from Que's latest installment of Mastering Java Unleashed in 21 minutes for the Mentally Challenged. Demonstrating how difficult it is to create accumulators in various languages that don’t naturally support closures tells me more about the deficiencies of the author than the deficiencies of the languages in question. An accumulator is just a fancy, functional way of performing a simple imperative task, namely x+=y. Graham's argument is as convincing as the collective ramblings of the OOP zealot who insists that because your language doesn’t support introspective-metaobject-operator-inheritance, it’s not a "true object-oriented language", but at least the zealot is fun to punch in the face... again, and again, and again. And by the way, my language does support introspective-metaobject-operator-inheritance, you just have to go out and buy TheDamian's book, it's all in there, really, it is.
Others have already taken issue with Graham's assertion that there is a linear relationship between code size and development time. Even if this claim were correct, Lisp is stunningly mediocre when measured against this metric, especially for such a supposedly high-level language. Lisp is hardly renowned for its terse and succinct code; a golf contest in Lisp would be about as much fun as an eat-the-biscuit circle jerk, minus the biscuit.
Don’t get me wrong, I have nothing against Lisp, except for its verbose, parenthesis-laden, unidiomatic, overly literal, hyper-indented code that makes my eyes burn and my nose bleed. Macros are cool, and sure, it’s nice to be able to shoehorn, er... embed Lisp into your already bloated realtime pacemaker application, you know, for end-user programmability. Just don’t tell me that your language is so much more powerful than mine, while I’m getting actual tasks done and you’re busy macro-morphing Lisp into a language that still sucks but is just perfect for your problem domain, because you know what? We have a word for that around here. It's called Perl.
Of course, that's just my opinion, I could be wrong.
</rant>
ps. This node is comic relief, or at least hopes it is. Take an extra grain of salt, on me.
MeowChow
s aamecha.s a..a\u$&owag.print
(OT) Re: Productivity and Perl
by FoxtrotUniform (Prior) on Jun 02, 2002 at 08:14 UTC
I have a feeling that I'm going to get pounded for
this, but it's really starting to get on my nerves:
Yes, our language doesn't have a large corporation
backing it. Our language doesn't necessarily have the
prestige of many others. But when push comes to shove,
if you need something done, Perl is often a fantastic
choice.
Can we please lay off the "tragically
misunderstood underdog" victim mentality? Sure, plenty
of people think that "Perl's just for CGI" or "Perl's
sort of like VB, right?" or outright disdain Perl because
it's free software/open source/insert your favourite
jargon here, but when people start organizing cruises
dedicated to a language's user base, I think it's hit a
certain level of industry acceptance.
No large corporation backing Perl? What,
O'Reilly, the most respected
practical computer publisher on the planet, doesn't count?
Sure, they didn't develop the language, and they
aren't charging for it, but still....
--
The hell with paco, vote for Erudil!
/msg me if you downvote this node, please.
:wq
First: I did not downvote your node (I gave it a ++ because you raised a good point). Second, while you may be tired of "they have a better PR firm" comments, that doesn't make them untrue. And no, I don't think O'Reilly counts. How many O'Reilly books sit on the PHB's shelf? On the other hand, how many "Learn IT Management through short articles" type of magazines are there? Many, if not most, of those magazines are filled with puff pieces.
One of my best friends worked for many years with XXX, the PR firm for YYY (I deleted the names because it dawned on me that dropping them might not be wise from a litigation standpoint), and it was fascinating to hear how they worked. First, the PR person assigned to an account would learn what the selling points of a product were and, if the PR person was competent, might learn about similar points of competitors' products. Then, much of their job would be scanning trade publications for "unfavorable" articles and demanding equal space, or sending out puff pieces about the product in question (which often get printed verbatim - the Wall Street Journal is often guilty of this, I understand) and trying to arrange interviews. When you see the VP of Acme Corp being interviewed, quite often that interview was set up by an anonymous PR person calling the magazine and saying "we have someone you might want to talk to."
It was also fun listening to tales of large corporations threatening to pull advertising if unfavorable reviews were received, or sometimes the advertisers would receive advance notice of unfavorable reviews so they could pull their advertising for an issue or so. This goes on all the time and I think it's fair to say that Larry and Friends don't have the desire, or money, to play that game.
Oh, and I'm a little embarrassed at how paco got out of hand :)
Cheers,
Ovid
Re: Productivity and Perl
by Abigail-II (Bishop) on Jun 03, 2002 at 13:29 UTC
Paul Graham points out that if another language requires 3 times as much
code, coding a similar feature will take three times as long. Actually, I
think he's mistaken here. Development time, IMHO, does not scale linearly
with the size of the program. I don't have any statistics handy (anyone,
anyone, Bueller?), so this is only a suspicion, but the longer your
program, the more bugs you're going to have. This will slow development
time even more.
I do not follow this argument. Sure, if your program is longer, you will
have more bugs. But how does that translate into the claim that development
time doesn't scale linearly with program length? Why would the number of
bugs grow superlinearly?
However, I think it's a mistake to use the "development time is linear with
code size" rule when comparing development in different languages. Sure,
a Perl program might be smaller in size than a Java program. But
Perl programs are likely to contain more regular expressions than
Java programs. Regular expressions are a language in themselves -
a very dense language, to be precise. But I would not want to make the
claim that regular expressions contain fewer bugs than other code just
because they are more dense! In fact, it's probably the other way around.
I would say that dense languages tend to have more bugs per line of code
than less dense languages. Don't forget that Perl gives you a lot of rope.
That makes for shorter programs, but it also gives you more opportunity
to hang yourself.
There is also a large class of bugs that are language independent. Bugs in
the design will be bugs, regardless whether the implementation is written
in Perl, LISP or Java. Those bugs do not scale at all with the length of
the implementation.
Maintenance speed, and speed of implementing new features isn't dominated
by program size. What's far more important is a good design. I'd rather
add some new functionality in a well designed C program of 1,000,000 lines
of code than new features in some horribly written Perl program of 1,000
lines of OO-maze. And I'm a far better Perl coder than C coder. And then
we haven't even talked about the development environment. How is source
control being done? What's the testing environment, etc, etc.
Abigail