http://www.perlmonks.org?node_id=610515

Why I Use Perl (title shortened to fit in)

Walking the road to enlightenment... I found a penguin and a camel on the way.....
Fancy a yourname@perl.me.uk? Just ask!!!

Re: Interesting read: "Why I use perl and still hate dynamic language weenies too"
by Ovid (Cardinal) on Apr 17, 2007 at 14:01 UTC

    Though it's stated at the beginning of that post, it was primarily a response to this vacuous post attacking dynamic languages. Frankly, the main problem with dynamic languages is that they're still relatively new and finding their feet. As a result, there are plenty of things which quickly get frustrating about them. However, the author of the 'attack' post seems to miss the point quite a bit. For example:

    Can dynamic typing result in a reduction in code volume of 80 to 90 percent? It seems highly improbable to me. Suppose I had a piece of Java code and went through it removing all the type qualifications. Would that reduce the code volume by 80 to 90 percent? No, I don't think so. Maybe there are other omissions that dynamic typing would make possible, such as obviating adapter-style classes and some interfaces. But even including such omissions, claiming an 80 to 90 percent code saving seems a bit rich.

    The author doesn't seem to have extensive familiarity with dynamically typed languages. It's far more than removing type qualifications, adapter-style classes and the like. Consider the classic case of writing a line to a file in Java:

    import java.io.*;

    class WriteFile {
        public static void main(String args[]) {
            FileOutputStream foStream;
            PrintStream pStream;
            try {
                foStream = new FileOutputStream("somefile.txt");
                pStream = new PrintStream(foStream);
                pStream.println("This is written to a file");
                pStream.close();
            }
            catch (Exception e) {
                System.err.println("Error writing to file " + e);
            }
        }
    }

    Now let's compare that to Perl (the close is implicit when the filehandle goes out of scope, and you can 'use Fatal' to make your exceptions automatic!):

    open my $fh, ">", "somefile.txt" or die "Can't open file: $!";
    print $fh "This is written to a file\n" or die "Can't print to file: $!";

    These examples and many, many more can easily show a tremendous difference in both the length and maintainability of a "dynamic" versus "static" language (granted that those are loose terms). That's far more than just type declarations or adaptor classes.

    Here's another gem:

    Most DLs are interpreted rather than compiled. This means that your test/debug cycle no longer includes an explicit "compile" or "link" step which, DL weenies claim, makes for a "development cycle that is dramatically shorter than that of traditional tools." Maybe so, but how significant an impact does that have on overall development speed? You don't have to wait for the compile/link to occur, but you still have to wait for the thing to execute so you can test it, and interpreted languages are often slower to execute than compiled ones. So how much do you really gain overall? And there are other confounding factors. Perhaps the compile-time type checking of a static language serves to catch bugs that a DL won't pick up until runtime, in which case some of your test/debug cycles for a DL will be spent catching type-related problems that a static language would've detected before execution began.

    He missed the boat on his first criticism. I rarely work on code which is significantly slower (for me) in a dynamic versus compiled language. So I wait an extra half-second for my CLI or Web page. That really doesn't affect what I do (though for batch processes that run repeatedly, this can add up). If my CPU performance is the limiting factor, I'm not going to be using a dynamic language in the first place!

    His second point is correct that static languages often pick up bugs at compile time that dynamic languages pick up at runtime. This is certainly something which has bitten me before, but most decent programmers I know who have worked with both static and dynamic languages comment that this problem happens far less than 'static-only' proponents think. That being said, it can be a pain to track down when it does occur, but the development speed with dynamic languages seems to more than make up for this occasional issue.

    He raises other points, some good, some bad, but altogether, it doesn't seem to be a terribly well-thought out essay. It's the usual "my way is better" argument but without much substance behind it.

    Cheers,
    Ovid

    New address of my CGI Course.

      I don't know what your point of reference is when you say "relatively young", as at least this timeline of programming languages (pretty image) suggests that languages I'd put in the "dynamic" basket (little typing, runtime code modification, (bytecode) interpreter) started in 1959 (Lisp) and have been available since the late '80s (*sh+awk, 1978; Smalltalk, 1980; Perl/Tcl, 1987)... Maybe you mean "other" dynamic languages (which aren't even included in the pretty graph), like Python (1991) and Ruby (1995)...

        I knew someone was going to comment on that :)

        I don't count Lisp because it's such a radically different paradigm than Perl (though it doesn't have to be) that I don't think it translates that well. Plus, enough people have successfully ignored it that learning from it seems to be non-existent for many (despite how powerful it can be).

        As for the late 80s, I think that's still relatively young in terms of coming to grips with the problem domain. Though one might think of Smalltalk, it flared and died. Perl has really carried the torch for full-fledged programming languages which are dynamic in nature and it has numerous flaws to go with its brilliant success. This is, in large part, because of how Perl came about and Python and Ruby are largely viewed as reactions to perceived weaknesses in Perl.

        Of course, we can easily point out plenty of issues we have with Python and Ruby, but let's face it, Perl often has similar or different weaknesses. I think dynamic languages in general are still relatively young in terms of maturity and general appreciation of what they can do. For example, MJD's marvelous HOP book has really opened a lot of programmers' eyes to how powerful functional techniques are and how easy they are to bring into dynamic languages. While others have understood this before, I don't believe it was as widespread, yet that's still a relatively new book.

        I think we've barely scratched the surface of what these languages can do and we've not really come to grips (in a widespread manner) of how we can intermingle different paradigms and gain the benefits of them. Everyone's sitting in their camp and shaking their fist at others. I think Perl 6 will help out a lot here (it would be better with Parrot, but don't hold your breath) and potentially transform programming, but until then, popular dynamic languages are still learning their way.

        Cheers,
        Ovid

        New address of my CGI Course.

      I upvoted you for the time and thought you've put into this, but I'm not sure I agree with your post.

      I don't think the example you gave is a good one to debunk his argument against dynamic typing reducing code volume. First and foremost, what do the two code blocks you gave have to do with the "dynamic" nature of Perl and the "static" nature of Java? It seems to me that the gain has more to do with other aspects of the languages.

      First off: it seems unfair to have the "import" line in the Java program; surely it isn't because Perl is dynamic that file operations are primitive. Likewise, what does automatic file closing have to do with static vs. dynamic languages? I'll let Java absorb that feature as well. And surely it is in the nature of a scripting language (Perl) to allow programs with no explicit modular structure, while Java (as an OO language) requires class definitions. Nothing about class definition seems to highlight the static-ness or dynamism of a language. I've also sugared this up a bit by eliminating the extra file output stream variable. I haven't Java'd for a while, but I think I got it right.
      Here is my modified Java from your example:
      PrintStream pStream;
      try {
          pStream = new PrintStream(new FileOutputStream("somefile.txt"));
          pStream.println("This is written to a file");
      }
      catch (Exception e) {
          System.err.println("Error writing to file " + e);
      }
      To me, this seems a lot more reasonable for comparison. Perl no longer holds an obvious upper hand on code length, especially considering that we can drop the Java example from eight lines to five with some whitespace manipulation.

      As far as maintainability goes, I don't see how the Perl code is much more maintainable than the Java. Enlighten me?

      Overall, I agree with you that slinging mud at one side or the other isn't productive; both "sides" want the same thing and can learn from each other. I just didn't think your example was very convincing.

      ~dewey
        Perl no longer holds an obvious upper hand on code length, especially considering that we can drop the Java example from eight lines to five with some whitespace manipulation.

        I quite agree. As I was reading your deconstruction of the Perl/Java comparison, I thought about some code I wrote the other day.

        I'm dealing with a big system at the moment, and in one place I have some database queries that return a dozen records or so of a thousand or more columns. I wanted to look at the result sets in Excel to trace a bug. So I extracted the query and recorded the results as a tab-delimited file. Load it into Excel.

        Boom.

        Turns out Excel has some really feeble limit on the number of columns a spreadsheet may contain. So I scratched my head for a moment, and thought... hmm, just have to transpose the rows and columns in the file, and I can load the transposed data in Excel and get on with the job.

        I was going to paste the code here, but then I realised that it's sitting on a Windows server at work, so I'd have to log in through a VPN and dick around to get it out.

        And then it occurred to me that if it took me about two minutes to write the code then, I could rewrite it again from memory. Here's the code, or something close to what I wrote. I typed this in, and compiled it, and it ran the very first time. I had to ponder the push statement for about 10 seconds, but that's it. And here it is:

        use strict;
        use warnings;

        my @trans;
        while (<DATA>) {
            chomp;
            my @row = split /\t/;
            push @{ $trans[$_] }, $row[$_] for 0 .. $#row;
        }
        print join( "\t", @$_ ), "\n" for @trans;

        __DATA__
        A	B	C	D	E	F	G	H
        1	1	2	3	8	0	7	9
        3	5	3	3	5	5	3	3

        So there you have it. I imagine something like that is not going to come out at even 20 lines of Java, but if you want to take a stab at it, I'd be very interested in seeing even a rough sketch.
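        Taking a stab at that invitation: here's a rough Java sketch of the same transpose (my own, not from the thread; the class and method names are made up). It hard-codes the sample rows in place of the __DATA__ section and skips the file handling and error reporting a production version would need:

```java
import java.util.ArrayList;
import java.util.List;

public class Transpose {
    // Collect row element i into column list trans[i], growing the
    // list of columns as wider rows are encountered.
    static List<List<String>> transpose(List<String[]> rows) {
        List<List<String>> trans = new ArrayList<>();
        for (String[] row : rows) {
            for (int i = 0; i < row.length; i++) {
                if (trans.size() <= i) {
                    trans.add(new ArrayList<>());
                }
                trans.get(i).add(row[i]);
            }
        }
        return trans;
    }

    public static void main(String[] args) {
        // Stand-in for the __DATA__ section in the Perl version.
        String[] lines = { "A\tB\tC\tD", "1\t1\t2\t3", "3\t5\t3\t3" };
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            rows.add(line.split("\t"));
        }
        for (List<String> column : transpose(rows)) {
            System.out.println(String.join("\t", column));
        }
    }
}
```

        Even in this stripped-down form it runs to several times the line count of the Perl, which is roughly the point being made.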

        I'm not saying that my code is particularly clever or efficient, but it solved a specific problem in about as much time as it took me to write the code (although to be fair I did have a couple of off-by-one errors the first time around).

        That's why I program in Perl.

        • another intruder with the mooring in the heart of the Perl

        That code example was a very poor pick. The only data involved are constant string-literal arguments. How can one show that a typed language requires more code than an untyped language using code that involves no data structures?

        IMHO, the real savings of dynamic vs. static languages is in how they change your thinking patterns. For instance, when confronted with a problem that involves filtering out duplicate strings, programmers in dynamic languages reflexively reach for an associative array (a hash in Perl, though not necessarily implemented with a hashing algorithm in other languages). Users of static languages tend to create their own solutions, even if they have something like an associative array readily at hand (like java.util.HashMap). This nearly always results in the dynamic-language implementations being smaller, clearer, and more robust.
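        To make the aside concrete: the duplicate-filtering problem above can be solved in a few lines even in Java by reaching for the associative container the standard library already ships, rather than hand-rolling one. A minimal sketch (my own illustration; the class name is made up):

```java
import java.util.LinkedHashSet;
import java.util.List;

public class Dedup {
    // A LinkedHashSet drops repeated strings while preserving
    // first-seen order, which is the usual dedup behaviour wanted.
    static List<String> unique(List<String> items) {
        return List.copyOf(new LinkedHashSet<>(items));
    }

    public static void main(String[] args) {
        System.out.println(unique(List.of("a", "b", "a", "c", "b")));
        // prints [a, b, c]
    }
}
```

        The gap, as the post argues, is less about what the static language provides than about which tool its users reach for first.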

        Of note, I don't think users of true strongly-typed languages have the same problems.


        "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

      I loved some of the points that they try to make, especially those based upon their ignorance. Starting with their qualifications,

      Though I've principally used Java for the last 10 years or so, and C/C++ for five years preceding that, I have a basic familiarity with Python, having written a few utilities here and there with it.

      What about Python's other dynamic feature, runtime code modification. Can it result in a reduction in code volume of 80 to 90 percent? I find this impossible to assess, as I have insufficient familiarity with self-modifying code. But I might mention that this lack of experience is quite intentional. Even when I find myself using a language that permits runtime code modification, it is a feature I am loathe to use for it seems to me to hold great potential for producing a code base that is hard to understand, and therefore difficult to debug and maintain. Additionally, there is significant cognitive complexity that goes along with dynamically modifying code, not unlike the intrinsic difficulty that accompanies multi-threaded code. In my view, this level of complexity is to be avoided if possible.

      Translation: I have not used it, don't understand it, and take pride in my ignorance because it sounds HARD!

      Not much substance at all.

      Ovid, I think you're dead on about the hacknot article (I find most of his blogging blah) and you're the antithesis of the OP article. The OP article seemed to me to say that you should respond to criticism as you have done, rather than with fanboy zeal -- and it's that unabashed, uncritical thinking of Ruby fanboys which is going to give all DLs a bad name.

      -derby
Re: Interesting read: "Why I use perl and still hate dynamic language weenies too"
by hardburn (Abbot) on Apr 17, 2007 at 16:37 UTC

    Adding to what Ovid noted above, I think the original post makes a huge mistake in trying to compare dynamic languages only to C-style static typing, which has been absorbed into many other languages (C++, Java, C#, etc.).

    When comparing C-style typing to Perl/Ruby/Python/flavor-of-the-month dynamic language, dynamic languages usually win. If we add verboseness to the language (in this case, declaring types), it should give us something in return. The claimed return is speed (compiler has more information to perform optimizations), safety (compiler has more information on what you're trying to do and can see if you're doing something that doesn't make sense, like taking the square root of a string), and self-documentation.

    These days, the additional speed is better handled by simply throwing hardware at the problem or finding optimizations elsewhere when they're needed. If this were the only argument for C-style typing, then it would rarely be enough to overcome the cost of additional verboseness. The argument for speed was a valid one when Ritchie and the gang were writing UNIX, but today, hardware is usually cheaper than developer time.

    The safety argument is outright bunk for this style of typing. Type errors in C are just as likely to be minor annoyances (requiring a typecast, which further increases verboseness) as they are to be real problems. The false-positive rate is high enough that programmers are likely to treat all type errors as annoyances. C++/Java/C# make these problems somewhat less likely, but the type system still gets in your way more than it actually helps.
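    A small illustration of that annoyance (my own example, not from the post): the compiler rejects a narrowing assignment, the programmer appeases it with a cast, and the cast then silences a genuine loss of data:

```java
public class Narrowing {
    public static void main(String[] args) {
        long big = 5_000_000_000L;
        // int small = big;        // compile error: possible lossy conversion
        int small = (int) big;     // the cast satisfies the compiler...
        System.out.println(small); // ...but silently truncates to 705032704
    }
}
```

    Once casts become a reflex, the type error that would have caught a real bug looks just like the hundred that didn't.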

    Self-documentation is a weak argument. The additional verboseness is there for the compiler's benefit, not yours. While there might be some benefit here, I can't see it being a replacement for real documentation or good coding practices.

    Therefore, the arguments for C-style static typing over Perl-style dynamic typing are weak at best, and I can't take people seriously who try to compare these two possibilities.

    I can take seriously people who compare languages with strong type-inferencing systems (Haskell and OCaml) against dynamic languages. These languages demonstrate the safety value of strong type systems (type errors are almost always real problems), and they aren't even particularly more verbose than dynamic languages. People who still cling to C-style typing have rarely explored these alternatives.


    "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

      The false-positive rate is high enough that programmers are likely to treat all type errors as annoyances.

      I recently worked with Watcom C, where -w4 doesn't enable "W130 Possible loss of precision", defined as follows:

      This warning indicates that you may be converting an argument of one size to another, different size. For instance, you may be losing precision by passing a long argument to a function that takes a short.

      The warning needs to be explicitly enabled using -wce=130! Even the compiler is treating the type error as an annoyance!

code reuse?
by doom (Deacon) on May 03, 2007 at 18:55 UTC
    The article under discussion, Why I can use perl and hate dynamic language weenies, too, is commentary on Invasion of the Dynamic Language Weenies, which in turn is (in part) commentary on the article What's Wrong With Ruby?, written by Matthew Huntbach, a Lecturer in Computer Science at Queen Mary, University of London.

    Some of Huntbach's commentary about the current state of Java caught my eye:

    However, these days when I look at the Java section in the bookshop, there is little I understand. There are huge numbers of add-on libraries, each of which has its justification but which have been developed on a learning curve and so haven’t got it quite right. Even the core APIs are bloated, since backwards compatibility means one cannot throw away one’s first solution to a problem once it has become an integral part of the language, even if developing and using it has led to a better solution. It seems to be inevitable that once a language becomes widely used as a general purpose language, it is pushed in directions it isn’t suited to, builds up unnecessary complexity through accretions, and the urge to throw it away and start again becomes stronger. A bit like any large software system which has served its time.

    I'm getting to the point where I find things like this screamingly funny...

    Use JAVA! It will save costs by encouraging code reuse!

    Now let's throw that code away and re-write it all in the next language that will encourage code-reuse.

    And using an out-dated language like perl is obviously pointless (it's more fun to re-write everything from scratch than download a solution from CPAN).

    (I sometimes wonder how "Computer Scientists" call themselves "Scientists" with a straight face...)

Re: Interesting read: "Why I use perl and still hate dynamic language weenies too"
by Anonymous Monk on Mar 08, 2012 at 09:16 UTC

    These examples and many, many more can easily show a tremendous difference in both the length and maintainability of a "dynamic" versus "static" language (granted that those are loose terms). That's far more than just type declarations or adaptor classes.

    You missed the point. That is not a merit of the language but of the library.