PerlMonks  

RFC: A Perlesque Introduction to Haskell, Part One (DRAFT)

by FoxtrotUniform (Prior)
on Jun 23, 2004 at 20:56 UTC

Please comment. I'd especially appreciate ideas for where to go next, suggestions for a quick overview of currying, and general remarks on style.

A Perlesque Introduction to Haskell, Part One (draft)

-1. Introduction and Disclaimer

Sure, this is Perl Monks, but as a whole we seem to be pretty receptive to other languages (as long as they aren't Java or VB), and I've seen Haskell mentioned a few times. As it happens, I'm taking a grad-level functional programming course at the moment, and we're using Haskell. After a bit of Chatterbox chatter, I decided that enough people would be interested in a Perl-oriented intro to Haskell that I'd write one up. (I'm not being entirely altruistic here; I've noticed that I learn things a lot more thoroughly if I have to explain them to other people.)

A few words of warning: I don't have a lot of experience with Haskell, and very little indeed with some of its more advanced features (functors and monads, for instance). I might miss important details; I might even misrepresent important details; I'll try not to. If you notice anything that I should have said, but didn't, or any mistakes I've made, please let me know.

0. What is Haskell?

According to haskell.org,

Haskell is a computer programming language. In particular, it is a polymorphicly typed, lazy, purely functional language, quite different from most other programming languages. The language is named for Haskell Brooks Curry, whose work in mathematical logic serves as a foundation for functional languages.

Haskell lets you do a lot of the things that you probably really like about Perl, especially relating to lists and lambdas (anonymous functions). It's that list-manipulation similarity that I'm going to try to exploit to bulldoze the learning curve a bit.

haskell.org is Haskell's big web presence; it's an excellent collection of links and resources. It has a rather complete list of implementations, of which hugs and ghc are probably the most popular. It also hosts A Gentle Introduction to Haskell (which isn't especially gentle), and links to The Evolution of a Haskell Programmer, both of which you should skim through while you read these nodes.

1. Okay, What's a Polymorphically Typed, Lazy, Purely Functional Language?

1.1. Functional Languages

I haven't been able to get anyone to agree on a definition of what makes a language functional. Functional languages (Lispen, Haskell, ML, Miranda, etc) share a few useful characteristics, though:

Good list-processing facilities

Perl has a few generalized list-munging tools (map and grep come to mind, as well as for (@list) auto-binding $_). Haskell has more -- nothing you can't implement with what we have, of course (just look at Language::Functional), but well-integrated into the language. Lists aren't quite as ubiquitous in Haskell as they are in Lisp, but they're pretty close.

Ad-hoc, anonymous functions (lambdas)

You know that anonymous code block you can pass to map or grep? Yep. In Perl, you can build anonymous subs and assign them to coderefs; in functional languages, building anonymous and/or locally-scoped functions is just another idiom.
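For instance, where a Perl programmer writes map { $_ * 2 } @list, a Haskell programmer passes a lambda straight to map. A minimal sketch (the name doubled is mine, not a standard function):

```haskell
-- A lambda handed directly to map, much like Perl's map { $_ * 2 } @list.
-- 'doubled' is just an illustrative name.
doubled :: [Int] -> [Int]
doubled = map (\x -> x * 2)
```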

First-class functions

Once you have lambdas that are fun and easy to sling around, you can do a lot more with functions than just call them. You can pass them to other functions, store them in lists, build a bunch of related functions on the fly, and so on. (You can do all of this in Perl, too: for instance, storing a bunch of handler functions in a hash as a dispatch table.)

Since functions are first-class objects, there's nothing preventing you from writing functions that operate on functions (in fact, this is quite common). These are known as higher-order functions, and are rather closely tied to the idea of currying (see below).
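A tiny sketch of both ideas (the names twice and handlers are mine): a function that takes a function as an argument, and a Perl-style dispatch list of functions:

```haskell
-- A higher-order function: apply a function twice.
twice :: (a -> a) -> a -> a
twice f x = f (f x)

-- Functions are ordinary values, so we can keep them in a list
-- (the Haskell analogue of a Perl dispatch table).
handlers :: [Int -> Int]
handlers = [negate, twice (+ 1), (* 3)]
```

Applying every handler to 2 with map ($ 2) handlers gives [-2, 4, 6].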

No side effects

Well, mostly. If you ignore inconvenient ideas like I/O, Haskell is pretty much entirely free of side effects (for example, the Perl expression $foo = 5 evaluates to 5, and changes the value of $foo as a side effect -- or is it the other way around?). Proponents of this sort of programming will tell you that side-effect-free programming causes far fewer bugs; I'm going to reserve judgement on this point. Put simply, don't expect to do much in the way of assignment or iteration in Haskell. (Note that if you consider Lisp and friends to be functional languages, you probably don't consider this to be a defining point -- unless your variety of Lisp doesn't have setq, of course.)

1.2. Polymorphic Typing

Haskell is a polymorphically typed language. If that makes you think of object-oriented "strongly typed" languages like Java or C++, you're not too far off the mark. (In fact, you'll probably spend your first few months with Haskell cursing its anal-retentive type system, especially given the fact that neither hugs nor ghc produces particularly clear type-error reports.) It's not as bad as you think, though: Haskell is pretty clever at inferring types for your expressions and functions -- however, any type mismatches will be caught at compile-time, not at runtime. (This is generally supposed to be a good thing.)

Haskell's types aren't really isomorphic to your standard OO class hierarchy, though. Okay, Haskell has a hierarchy of type classes, which give you polymorphism in much the same way as a base-class pointer in C++. However, Haskell type classes don't enforce the usual data-encapsulation you'd find in OO languages; nor do they adhere to the object->method "functions attached to data" model that most of us associate with OOP.

For example, Haskell's negate function takes a number and negates it. (Big surprise, eh?) It has the type:

Pm_tut> :t negate
negate :: Num a => a -> a
which basically says, "if a is a numeric type, then negate takes something of type a and returns something of type a". The binary function (-) (operators are functions like any other) has this type:
Pm_tut> :t (-)
(-) :: Num a => a -> a -> a
which gets more interesting in section 1.4.

1.3. Lazy Evaluation

In Haskell, programming with infinite data structures is fun and easy. For instance, you can say:

nats = [0..]
and Haskell will give you a list of the natural numbers. (Does that list-range syntax look familiar, by the way?) If you want the first ten natural numbers, then, you can say:
Pm_tut> take 10 nats
[0,1,2,3,4,5,6,7,8,9]
(Of course, if you want to print all of nats, you should be prepared to wait a while.)

This works because Haskell is a lazy language. When we defined nats, the interpreter (or compiler) didn't dash right out to calculate its value; when we used nats, the interpreter only generated as many elements as we asked for.
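Laziness composes, too: we can transform an infinite list, and the result is still computed only on demand. A sketch (evens is my name):

```haskell
nats :: [Integer]
nats = [0..]

-- Still an infinite list; nothing is computed until someone asks.
evens :: [Integer]
evens = filter even nats
```

take 5 evens yields [0,2,4,6,8], and no more of nats is ever generated than needed.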

A more Perlish example of laziness is the "or die" idiom:

open FH, '<', $file or die "Can't open $file for reading: $!\n";
perl won't bother to evaluate the die call if the open succeeds -- it already knows the value of the or. This happens in Haskell, too:
const x y = x
doesn't need to evaluate its second parameter (useful if the second parameter is an infinite list, for instance) to figure out its value. (This happens more often in multi-part function patterns, which I'll get to in a minute.)

Lazy evaluation is common enough in Haskell that we tend to talk about values that can't be evaluated (either because they cause a fatal exception of some sort, or because they're infinite) -- there's a generic "black-hole" value called "bottom" (usually drawn as an inverted T; $\perp$ in LaTeX; or the awfully ugly _|_ in ascii). If you ever evaluate _|_, bad things happen -- your program terminates, or loops forever; your computer blows up; demons fly out of your nose (depending on the C compiler that built your Haskell implementation): you get the idea. (_|_ is like undef on steroids and PCP.) Thanks to Haskell's laziness, we can work happily around _|_s; as long as we don't evaluate them, everything's fine.
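We can even define our own bottom and pass it around safely, as long as nothing forces it. A sketch (bottom is my name, though the Prelude's undefined behaves much the same):

```haskell
-- An expression that never finishes evaluating: our own _|_.
bottom :: a
bottom = bottom

-- const ignores its second argument, so this terminates just fine:
safe :: Integer
safe = const 1 bottom
```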

1.4. Currying

Haskell brings another useful functional technique to the party: currying.

Remember the type of (-):

(-) :: Num a => a -> a -> a
from before? That doesn't look quite right: a function from as to as to as? Shouldn't it be more like:
(-) :: Num a => (a, a) -> a
instead? (That's actually a valid type: it says that (-) is a function from pairs of numbers to numbers, which isn't quite what we want.) Things get more interesting when you know that -> is right associative, so our function type becomes:
(-) :: Num a => a -> (a -> a)
So subtraction (the (-) function) takes a number and returns a function from numbers to numbers. In other words:
(5-)
is a function that subtracts its argument from five. (This is called an operator section, and the parentheses are mandatory. See Section 3.2.1 of AGItH for more details on sections.) We'll see more about currying later on.
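A couple more sections as a sketch (the names are mine): leaving out the left or the right operand of an infix operator yields a one-argument function.

```haskell
-- (5 -) fixes the left operand: subtract the argument from five.
subFromFive :: Integer -> Integer
subFromFive = (5 -)

-- (/ 2) fixes the right operand: divide the argument by two.
halve :: Double -> Double
halve = (/ 2)
```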

2. "Hello, world!"

The basic Haskell "Hello, world!" program isn't particularly edifying:

Pm_tut> putStr "Hello, world!\n"
Hello, world!
Pm_tut>

(Besides, it raises the question "How do you do I/O in a language that's supposedly without side effects?", the answer to which is "Magic and monads", and I really don't want to get into monads right now.) Instead of printing a string, then, I'm going to use a different simple problem as an example: factorial.

2.1. Factorial, Recursively

With a quick glance through the available operations, your first factorial function in Haskell might look like:

factorial n = if n == 0 then 1 else n * factorial (n-1)
Not too bad. The if ... then ... else ... construct is pretty clear. We're obviously defining a function (in fact, to a mathematician, this definition of factorial is probably more obvious than what you'd write in C, or Perl, or even Lisp).

Wait: we said earlier that Haskell makes a big deal about types; why didn't we have to specify a type for this function? Our Haskell interpreter inferred the type of factorial automagically; let's see what it found:

Pm_tut> :t factorial
factorial :: Num a => a -> a
What this tells us is that factorial is a function from things of type a to more things of type a, as long as type a is numeric (that is, is an instance of the Num type class). (If you have a bit of background in symbolic logic, think of the type as saying "numeric a implies a -> a".)

2.2. Factorial With Pattern-Matching

We can define factorial a bit more idiomatically:

factorial 0 = 1
factorial n = n * factorial (n-1)
Note that the two cases of the function (base and recursive) are more apparent here; there's less cruft around the actual mathematical part of the function. What happens here is that, when Haskell sees a call to factorial, it tries to match patterns from top to bottom. (If you've done any Prolog hacking, this'll look familiar, although Prolog's pattern-matching is much more powerful than Haskell's.) That means that switching the order of the patterns is a bad thing:
factorial n = n * factorial (n-1)
factorial 0 = 1
Any calls to this factorial function will not terminate: the recursive pattern always matches, so you'll never get to the base case.

Another way to do it (sound familiar?) is to do the arithmetic on the left-hand side, and coincidentally in the pattern:

factorial 0 = 1
factorial (n+1) = (n+1) * factorial n
This isn't exactly a great way to do it, but it shows that patterns in Haskell are more flexible than you might first think: they're not just simple argument lists.

2.3. Recursively, Factorial

"Wait a minute," you might ask, "all this recursion has gotta be pretty slow. Isn't there a better way?" Yes, in fact: we can use tail recursion. What tail recursion amounts to is doing all the calculations before the recursive call; that way, we don't need to keep anything on the stack, and we can optimize a set of recursive calls into a while loop (see your favourite intro-CS textbook for details). In Haskell, that looks like this:

factorial n = tr n 1
  where tr 0 f = f
        tr n f = tr (n-1) (n*f)
In the tail-recursive function tr, we're accumulating a partial result into the second parameter. Eventually, we hit the base case (n=0), and return the result.

What's more interesting is the fact that we've defined tr as a purely local function inside a where clause. This is handy for not cluttering up the local namespace. There aren't any delimiters on the where clause: the interpreter figures out what's going on based on the layout rule: basically, "indent your code the way you usually would and everything'll be fine". (Yes, this is similar to Python. No, you won't go to hell for it.) Anyway, patterns obviously work just fine in where clauses.
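The same accumulator idea can also be written with guards instead of two separate patterns. A sketch (factorialG and go are my names):

```haskell
factorialG :: Integer -> Integer
factorialG n = go n 1
  where
    -- 'acc' accumulates the partial product as we count k down to zero.
    go k acc
      | k == 0    = acc
      | otherwise = go (k - 1) (k * acc)
```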

2.4. Use The Builtins, Luke!

All this messing around with recursion is kinda fun, but if you've ever played Perl Golf you're probably wondering if there isn't a more concise way to do it. There is: Haskell's Standard Prelude (think of it as stdlib in Haskell) defines a bunch of useful functions, among them one called product:

factorial n = product [1..n]
Now that's a lot better: short and to the point. Of course, it raises a couple of questions:
  • Why just product? Why not a more general version for doing arbitrary binary operations on lists?
  • What happens when n is zero?
To answer those questions, we have to look at the definition of product.

2.5. Foldl? What's That?

The Haskell Standard Prelude defines product like this:

product = foldl (*) 1
That's not exactly enlightening. (There's a lot going on here.) What's this foldl business?

Generic list-combining, that's what. foldl takes a binary function, a start value, and a list, and "folds" the list into a scalar from the left. For instance, foldl (*) 1 [a, b, c, d] is about the same as ((((1 * a) * b) * c) * d). (There's another function foldr that does about the same, except from the right.)

So what happens when we call product with an empty list (which is what Haskell generates from [1..0])? For that, we need to look at the definition of foldl:

foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs
With an empty list, foldl just returns the starting value (in this case, 1).
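To see the left/right difference concretely, fold a non-associative operator like (-) both ways (a sketch; the names are mine):

```haskell
-- ((0 - 1) - 2) - 3 = -6: foldl groups from the left.
leftResult :: Integer
leftResult = foldl (-) 0 [1, 2, 3]

-- 1 - (2 - (3 - 0)) = 2: foldr groups from the right.
rightResult :: Integer
rightResult = foldr (-) 0 [1, 2, 3]
```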

A short digression on extensionality

Wait a minute: foldl takes three parameters (operator, start, list), but in our definition of product we only passed two! Shouldn't that be:

product xs = foldl (*) 1 xs
instead? As it turns out, it doesn't really matter, and we can blame that on something called extensionality, which basically means that we can (usually) cancel out bound variables on both sides of an equation. Extensionality eventually leads us to something called the monomorphism restriction and a lot of probably unnecessary pain. (See the Haskell Report, section 4.5.5, for all the gory details -- or just forget about it for the moment.)

There's something else we can learn from this code: we can convert an infix operator (in this case, *) to a general function by wrapping it in parens. So if we wanted to take the sum of a list of numbers, we'd do it thus:

sum xs = foldl (+) 0 xs -- this is in the Standard Prelude, too
(End-of-line comments in Haskell are introduced by two dashes followed by something that isn't punctuation. I'm sure that must've made sense to someone at some time.) Similarly, we're not restricted to folding numbers. We can find out whether a list of booleans contains any true elements:
or xs = foldl (||) False xs -- this is also in the Prelude
or contains all true elements:
and xs = foldl (&&) True xs -- so is this
If we have a function prime that tests a number for primality, we can check a list of numbers:
anyprime xs = or (map prime xs)   -- map does what you'd expect
allprime xs = and (map prime xs)
foldl and type signatures

foldl's type signature is:

foldl :: (a -> b -> a) -> a -> [b] -> a
What this says is that foldl takes a binary function on as and bs, a scalar a, and a list of bs and returns an a. It's interesting to note that the list elements don't have to be the same type as what they're folded into. For instance, we can re-write allprime like so:
andprime p n = p && prime n
allprime xs = foldl andprime True xs
If we're only going to use andprime once, we don't really want to have it clogging up the symbol table, so we can write it inline as a lambda function:
allprime xs = foldl (\p n -> p && prime n) True xs
See Section 3.1 of AGItH for more on lambdas.

But back to hello world -- er, I mean, back to factorial.

2.6. Lazy Factorials

So far, all the factorial functions we've defined have been a bit limited. They calculate one factorial, return it, and that's it. If you want to find another factorial, you're going to repeat a lot of work. Wouldn't it be nice if we could get Haskell to generate a list of factorials for us?

factorials = scanl (*) 1 [1..]
scanl is similar to foldl, but it generates a list of successively reduced values. In this case, scanl gives us [1, 1*1, (1*1)*2, ((1*1)*2)*3, ...] -- we're basically doing a foldl on successively longer prefixes of the list [1..], which generates the positive integers.
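The same trick gives running totals of any list. A sketch (runningSums is my name):

```haskell
-- Like foldl (+) 0, but keeping every intermediate sum.
runningSums :: [Integer] -> [Integer]
runningSums = scanl (+) 0
```

For example, runningSums [1,2,3,4] is [0,1,3,6,10].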

Generating a list of all factorials sounds like an impossible task, but as long as we never try to use the whole list, it isn't a problem. For instance:

Pm_tut> take 10 factorials
[1,1,2,6,24,120,720,5040,40320,362880]
Pm_tut> factorials !! 24
620448401733239439360000
take n xs returns the first n elements of the list xs, and factorials !! 24 is Haskell for list indexing (in Perl, we'd write $factorials[24]). (Did I mention that Haskell uses bignum ints by default?) What's going on here is that Haskell's generating only as many elements as it needs to in order to satisfy our requests. If we ask for the factorial of a ridiculously large number, we'll have problems:
Pm_tut> factorials !! 65536
ERROR - Garbage collection fails to reclaim sufficient space
Pm_tut>

Edit 2004 June 24:

  • s/LISP/Lisp/g -- thanks hding
  • Corrected extensionality discussion, and did s/any/or/ ; s/all/and/ -- thanks tmoertel
  • Added some examples to 1.2
  • Added a quick overview of currying.
  • Added a digression on foldl's type, with a little bit on lambda.

--
F o x t r o t U n i f o r m
Found a typo in this node? /msg me
% man 3 strfry

Re: RFC: A Perlesque Introduction to Haskell, Part One (draft)
by etcshadow (Priest) on Jun 23, 2004 at 21:19 UTC
    I've always understood the difference between functional programming and imperative programming (maybe some people don't even know that imperative is the term for "not functional") to be basically this:
    • Imperative programming means just that: you are giving orders (or "instructions", if you want to put it nicely) to the computer.
    • Functional programming is about evaluation. It's about getting answers to questions. "What is this thing squared?" "What is the representation of this data as a binary search tree?". It's not about doing things. Rather, it's about composing answers to questions. Things like ordering of events and even the passage of time itself (time in programs often being measured in terms of operations, or in terms of tracing through program execution one step at a time) don't really exist in functional programming.
    • Probably the biggest direct consequence of this is that functional programming has no concept of "state", as imperative programming does. "State" being the current value of all the variables (or memory or disk or however you want to think about it) being used by your program. "Current" is an important part of that, because these values change over time (after all, you're giving orders like "set this variable's value to 5" all over the place), and as I said... there's not the same concept of time in functional programming.

    Anyway, describing functional languages by noting some of their "common features" is really not the way to go. It's a philosophical difference, all the way.

    (Granted, as you noted, there is a limited concept of "state" in LISP, and, in general side-effect can't be completely avoided or you'd have no means by which to perform I/O... but you're getting into details, there.)

    ------------ :Wq Not an editor command: Wq
      Interesting comparison. True, in functional languages solutions tend to be in the form of an answer given a valid input - but this is the same as non-functional languages. Perl's
      sub factorial {
          my ($i) = @_;
          return 1 if ( $i <= 0 );
          return $i * factorial( $i-1 );
      }
      is barely different from the generic functional code of
      factorial 0 : 1;
      factorial i : i * factorial( i - 1 );
      Both are solutions to the question 'what is factorial(i).' So, really, I'd say that coding in general is about forming well-defined questions, writing those down on paper, and then composing answers in code.

      Incidentally, I believe most functional languages have a full-fledged concept of state. Even Prolog, that bastion of 'tell us the rules, and we'll get you an answer' can be twiddled to spit out state at every recursion (otherwise debugging would be a royal pain). And it's worth mentioning that it's not uncommon to write a functional program based on a problem that's formed in state-machine terms, just because it's so easy to handle such problems.

      But programs as 'composing answers' - I like it. Nice thought.
        Well, to address your points:

        Yes, you can write functional code in (most) imperative languages... at least those that support recursive function calls. Although really the way to make a functional-like factorial call in perl would be like:

        sub factorial { $_[0] == 1 ? 1 : $_[0] * factorial($_[0] - 1) }
        That is: each function is only a single expression. Haskell goes a little further, though, because it supports function-dispatch by pattern matching... but this is the general case of how one writes functional code in perl. No assignments, no loops, a single expression. While it's true that any decent functional language interpreter supports automatic optimization of tail-recursion, that is a property of the interpreter, not of the language itself (except to the point that specifically optimized idioms usually become a part of any language, if only in the developers' training of best practices).

        As far as the functional language interpreter having state: well of course it does. It's ultimately implemented in machine code, and machine code on any computer is imperative. It has state (memory, registers, etc). It is a sequence of commands. So on. The point is that this state is not a mechanism employed by the programmer in his/her functional programs, it is merely an artifact of how the functional language interpreter is implemented on an inherently imperative computation machine.

        Update: forgot the "- 1" in the code. Oops. I was just trying to make a point, anyway... it was obvious what I meant.

        ------------ :Wq Not an editor command: Wq
Re: RFC: A Perlesque Introduction to Haskell, Part One (draft)
by blokhead (Monsignor) on Jun 23, 2004 at 21:58 UTC
    Disclaimer: I haven't had any experience with Haskell, only OCaml, which is similar in many respects (type-inferencing and good pattern matching).. my Haskell syntax might be off at times, so bear with me.

    What you call extensionality is what I've known as currying. And a description of it will probably be easier if you talk more about type signatures and type inferencing.

    In my opinion, the absolute coolest features of modern functional languages (other than just being functional) are type inferencing and pattern matching. They are foreign concepts if your only programming background is Perl (especially type inferencing), so you should give them both a bit more time. Other cool features that you do mention are polymorphic types (Num a), and lazy evaluation of infinite objects (which I've never had any experience with).

    As for pattern-matching, the absolute coolest demonstration of this is quicksort in 2 or 3 lines of Haskell (Update: code here). It shows how elegant and powerful pattern-matching can be... especially in Haskell, which has the most expressive matching out there.
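    The version usually shown looks something like this (a sketch of the naive, not-in-place algorithm, using list comprehensions for the partitioning):

```haskell
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort smaller ++ [x] ++ qsort larger
  where smaller = [a | a <- xs, a < x]
        larger  = [a | a <- xs, a >= x]
```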

    WRT currying, it's easy to grasp by having a good understanding of what the type signatures mean -- plus it gives you a bit of insight into the language internals as well. The type signatures for multi-arg functions look like this:

    sum x y = x + y
    sum :: Num a => a -> a -> a
    Note that there are no parens in the signature a -> a -> a. This is a function of two variables, so why isn't the signature something like (a, a) -> a ?? Turns out that arrow is right-associative, so the type signature really means:
    sum :: Num a => a -> (a -> a)
    When you read it this way, you can see that sum is a function of one variable that returns another function of one variable. Under this view (the lambda-calculus view), currying is simply a natural side-effect:
    increment = sum 1
    sum 1 returned a function of one variable (a -> a), just like the type signature said it would! It's also important to notice that at this point, the type-inferencing engine may have specified the polymorphic type a to an Int type (don't know Haskell well enough to say for sure).

    Function application on multiple arguments also makes sense under this interpretation as a left-associative operation:

    sum 1 2          ## really means
    (sum 1) 2        ## which is
    (\y -> 1 + y) 2  ## 1 + 2
    Multiple arguments and currying just come naturally by looking at functions from a lambda calculus context.

    Also, MJD has a Perl-based talk about Strong typing that uses ML and gives a good example about how type inferencing helps the programmer catch bugs (I can attest to that).

    Anyway, you're making me want to write an OCaml primer (and learn Haskell) ;) Good work so far!

    blokhead

      What you call extensionality is what I've known as currying.

      Then I should do a better job.

      Extensionality says that the following two definitions are equivalent:

      foo = foldl1 f
      bar xs = foldl1 f xs -- note: xs is free on both sides
      The reason this works is that xs is free on both sides of bar: it's basically a placeholder that says "a list should go here". Both of these functions have type [a] -> a (modulo monomorphism-restriction annoyances).

      On the other hand, the function application

      foldl1 f -- note: not a definition (type is [a] -> a)
      is a curried expression: while foldl1 f xs returns a scalar, foldl1 f returns a function from lists to scalars. To me, currying means partial function application, and it's useful for producing ad-hoc functions where lambda is overkill.

      I like your point that currying is easier to understand when you know about type signatures, and thanks for the link to MJD's talk.

      --
      F o x t r o t U n i f o r m
      Found a typo in this node? /msg me
      % man 3 strfry

      As for pattern-matching, the absolute coolest demonstration of this is quicksort in 2 or 3 lines of Haskell

      Just for the fun of it, and for comparison, here is what (I think) the equivalent perl would be (for numeric sorts, I don't know if Haskell has a separate character comparison operator like Perl):

      sub qs {
          return unless @_;
          my $x = shift;
          return qs(grep($_ < $x, @_)), $x, qs(grep($_ >= $x, @_));
      }
      It can be done without the $x temp variable and the shift (saving one line), but then it's less readable IMO.

        Of course, since Perl allows in-place modification of arrays, you can implement quicksort much more efficiently than that in Perl, but that way is much more cool-looking.

        Incidentally, there is only one set of comparison operators in Haskell. To deal with the fact that Haskell is a strictly-typed language, there are multiple implementations of the same operator for different argument types. This is done through the type classes that FoxtrotUniform mentioned. The determination of which implementation to use in a given expression is made at compile-time, which is both very powerful and very annoying. Getting Haskell programs to compile is rather difficult, but once they do they stand a pretty good chance of working the first time.

        Just for the hell of it:

        sub qs { local ($x, @xs) = @_ and return qs(grep($_ < $x, @xs)), $x, qs(grep($_ >= $x, @xs)) or () }

        ihb

Re: RFC: A Perlesque Introduction to Haskell, Part One (draft)
by lachoy (Parson) on Jun 23, 2004 at 22:14 UTC
    Nice! You might be interested in the talk Tom Moertel gave to the Pittsburgh Perl Mongers a few months ago.

    Chris
    M-x auto-bs-mode

Re: RFC: A Perlesque Introduction to Haskell, Part One (draft)
by ihb (Deacon) on Jun 23, 2004 at 22:39 UTC

    Kudos for writing this introduction!

    Purpose: Depending on what the purpose of this is -- to teach Perl programmers to program in Haskell, or to teach Perl programmers Haskell's techniques -- different comments will follow. Perhaps you should state the purpose of the document clearly in the -1 chapter, beyond just that it may be interesting?

    Density: As always with introductions it's hard to make everyone happy. I was happy as it was a quick re-cap for me as I already know basic Haskell. But I think it may be a bit too dense for the average Perl programmer, especially with the strange syntax.

    Syntax: A function call in Haskell looks quite different, and the precedence is different too. As you have in your example, factorial n = n * factorial (n - 1). Those parentheses there look unnecessary to a Perl programmer. Also, two-argument functions don't use commas, which also looks strange to a Perl programmer. A paragraph on the key differences in syntax may be worth its space, and then you can move the comment about the significant indenting to a more logical place. Ideally, you wouldn't have to spend time on explaining syntax, but I think that can't be avoided successfully.

    Useful tools: I liked learning Haskell because it gave me some new cool ideas about programming languages and tools I'd like to be able to use in Perl. I think that a document like this should focus the most on parts of Haskell that may be truly hands-on useful to Perl programmers and explain other parts just enough to make the Perl relevant parts understandable. Hopefully, this will get Perl programmers even more interested in Haskell as they see applications of what they read here. Depending on your purpose you could perhaps retitle it to "Haskell idioms/design patterns in Perl" or write a sister document called that?

    Perl examples: Perhaps every Haskell example could be accompanied with a Perl translation, or something that goes as close as possible? That way Perl programmers may more easily relate to the Haskell code.

    Hope I've helped,
    ihb

Re: RFC: A Perlesque Introduction to Haskell, Part One (DRAFT)
by Errto (Vicar) on Jun 24, 2004 at 01:20 UTC

    I had the pleasure of studying with Paul Hudak, the creator of Haskell, and I remain highly intrigued by the language. I like your introduction a lot. A couple of things I would add in the relevance-to-Perl department:

    • foldl is a lot like reduce in the List::Util module, which I believe will be a builtin in Perl 6.
    • I believe there is a Perl implementation of lazy lists. Update: Per the reply below, it is Language::Functional.
    • something I'm not thinking of

    Also, I know you're trying to keep it reasonably short, but I think it would be really cool to show an example of constructing recursive polymorphic types, and the fact that you can have different data constructors with completely different type arguments. In other words, the fact that in Haskell you can implement large parts of your program logic just by the way you design your types.

    I like the lazy factorial example. My favorite lazy expressions in Haskell were the fibonacci sequence:

    fib :: [Integer]
    fib = 1 : 1 : zipWith (+) fib (tail fib)
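    Thanks to laziness, demanding only a finite prefix of that infinite list terminates (the definition is repeated here so the snippet stands alone):

    ```haskell
    fib :: [Integer]
    fib = 1 : 1 : zipWith (+) fib (tail fib)

    -- Only the demanded elements are ever computed:
    firstTen :: [Integer]
    firstTen = take 10 fib  -- [1,1,2,3,5,8,13,21,34,55]
    ```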

    and the function for inserting commas in a string of digits according to standard notation:

    inschar n c = foldr1 (\s1 s2 -> s1 ++ c : s2)
                . map (take n) . takeWhile (not . null) . iterate (drop n)

    commaize :: String -> String
    commaize = reverse . inschar 3 ',' . reverse
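    For instance (the definitions are repeated so the snippet stands alone; everything used is in the Prelude):

    ```haskell
    -- Break a string into chunks of n and join them with c.
    inschar :: Int -> Char -> String -> String
    inschar n c = foldr1 (\s1 s2 -> s1 ++ c : s2)
                . map (take n) . takeWhile (not . null) . iterate (drop n)

    -- Group from the right by reversing, inserting, and reversing back.
    commaize :: String -> String
    commaize = reverse . inschar 3 ',' . reverse

    -- commaize "1234567" == "1,234,567"
    ```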
      foldl is a lot like reduce in the List::Util module, which I believe will be a builtin in Perl 6.

      foldl is also a lot like foldl in the Language::Functional module. :-)

      Also, I know you're trying to keep it reasonably short, but I think it would be really cool to show an example of constructing recursive polymorphic types, and the fact that you can have different data constructors with completely different type arguments. In other words, the fact that in Haskell you can implement large parts of your program logic just by the way you design your types.

      I'm getting there... I'm planning to talk about Haskell's typing system in more detail in a later installment.

      --
      F o x t r o t U n i f o r m
      Found a typo in this node? /msg me
      % man 3 strfry

Inline::Haskell
by sleepingsquirrel (Hermit) on Jun 24, 2004 at 04:11 UTC
    I've always wanted to see an implementation of Inline::Haskell (a la Inline::C). And I figured functions could return references to tied file handles to take care of (possibly infinite) lazy lists.
    $fh = haskell_func();
    while (<$fh>) { ...etc... }
    @foo = map $_*2, <$fh>;
    __HASKELL__
    haskell_func = primes [2..]
      where primes (x:xs) = x : primes (filter (\n -> mod n x /= 0) xs)


    -- All code is 100% tested and functional unless otherwise noted.
Re: RFC: A Perlesque Introduction to Haskell, Part One (DRAFT)
by tmoertel (Chaplain) on Jun 24, 2004 at 04:28 UTC
    First, great introduction. Keep it coming!

    I write a fair bit of software in Haskell, and so I hope you won't mind a few suggestions. I'm numbering them to coincide with your section numbering.

    1.1 You might want to make mention of higher-order functions, i.e., functions that operate on functions. It's implied by your description of first-class functions, but it's such an important part of functional programming that you might want to call more attention to it.

    1.2 I don't find the comparison of Haskell's type system to that of C++ or Java's to be accurate. C++ and Java's type systems bring to mind "bondage and discipline" and require annoying, redundant type declarations all over the place. In Haskell, you are for the most part unshackled and can program without type annotations, reserving them for the few places where they add documentation value (or are needed to avoid ambiguity), and yet you still get their full benefit in all of your code.
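    A small sketch of what "unshackled" looks like in practice (names are illustrative):

    ```haskell
    -- No annotation needed; GHC infers the most general type,
    -- swap :: (a, b) -> (b, a), and still checks every use site.
    swap (x, y) = (y, x)

    -- An annotation written purely for its documentation value:
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)
    ```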

    2.5 You're not canceling free variables but rather bound variables. This canceling is commonly called "eta reduction." (The opposite is "lambda lifting.")

    Also, your definition of any is actually that of or. Likewise, your all is actually and. The any and all functions in the Prelude take a predicate as their first argument and a list as their second. You probably meant the following:

    anyprime xs = any prime xs
    allprime xs = all prime xs
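    To illustrate the eta reduction mentioned in 2.5 on that same definition (the prime predicate here is a naive stand-in for whatever the draft uses):

    ```haskell
    -- Naive primality test; illustrative stand-in only.
    prime :: Integer -> Bool
    prime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

    -- The bound variable xs appears last on both sides...
    anyprime :: [Integer] -> Bool
    anyprime xs = any prime xs

    -- ...so it can be cancelled (eta-reduced):
    anyprime' :: [Integer] -> Bool
    anyprime' = any prime
    ```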

    Cheers,
    Tom

Lisp (RFC: A Perlesque Introduction to Haskell, Part One (DRAFT))
by hding (Chaplain) on Jun 24, 2004 at 12:52 UTC
    Just as people no longer write PERL for Perl, Lisp (not LISP) has for quite some time been the correct way to refer to Lisp.

    2006-08-11 Retitled by planetscape, as per Monastery guidelines: one-word (or module-only) titles hinder site navigation


    Original title: 'Lisp'

      It ain't short for "Lots of Irritating Silly Parentheses"? ;-)

      Thanks FoxtrotUniform for the intro. All my comments have already been said by other people so just ++.

      Jenda
      Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.
         -- Rick Osborne

Re: RFC: A Perlesque Introduction to Haskell, Part One (DRAFT)
by FoxtrotUniform (Prior) on Jun 24, 2004 at 22:25 UTC

    Thanks for all your help, folks.

    My plan from here is to work through successively more complex examples, drawing on more and more features of Haskell as I go along (and possibly writing "Haskellish Perl" alongside the Haskell). I was thinking of moving on to a Towers of Hanoi solver from factorial, and from there to a Huffman encoder/decoder. Any other suggestions?
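    For what it's worth, a Towers of Hanoi solver in Haskell can be remarkably compact; a minimal sketch (peg names and the move representation are my own choices, not from the draft):

    ```haskell
    -- Move n discs from peg a to peg b using peg c as scratch space;
    -- return the list of (from, to) moves in order.
    hanoi :: Int -> String -> String -> String -> [(String, String)]
    hanoi 0 _ _ _ = []
    hanoi n a b c = hanoi (n - 1) a c b ++ [(a, b)] ++ hanoi (n - 1) c b a

    -- length (hanoi n a b c) == 2^n - 1
    ```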

    --
    F o x t r o t U n i f o r m
    Found a typo in this node? /msg me
    % man 3 strfry

      Since you axed, here are a few more suggestions for future topics. You might want to see if you can work in the idea of combinators, perhaps using simple parsers as a subject domain from which to draw examples. Then you could go further, if you wanted, into domain-specific embedded languages, drawing on the hyper-cool QuickCheck or Parsec or HaskellDB.

      While not particularly advanced, list comprehensions are cool stuff, especially when used recursively. (See Koen Claessen's selections/permutations implementation (scroll for it), for a fun example.)
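    One common formulation of permutations via a recursively-used list comprehension (a sketch, not necessarily Claessen's exact code):

    ```haskell
    import Data.List (delete)

    -- Pick each element in turn, then permute what's left of the list.
    perms :: Eq a => [a] -> [[a]]
    perms [] = [[]]
    perms xs = [x : p | x <- xs, p <- perms (delete x xs)]

    -- perms "abc" == ["abc","acb","bac","bca","cab","cba"]
    ```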

      Cheers,
      Tom

Re: RFC: A Perlesque Introduction to Haskell, Part One (DRAFT)
by bsb (Priest) on Jun 25, 2004 at 08:03 UTC
    I enjoyed reading your intro; it refreshed my goldfish memory.

    One thing I'd find valuable is descriptions of your "Aha!" moments, when you grasped a particular aspect of Haskell. (Although maybe this doesn't belong in an introduction; or maybe it does, especially a cross-language one.) I got one from the lazy fibonacci snail in the Haskell 98 Tutorial.

    So you get aha-s with various aspects, then you get them again at a higher-level when you discover how laziness, currying, statelessness and higher-order types all play together. Individually, the features are not that interesting, but together they're mind altering.

    Brad
