I was discussing language approaches with a student (well, a friend too, but a student in this context): some languages start from the machine and work towards making things nicer for the programmer (C is the standard example), while others start as a theory of how things should work and worry about implementation afterwards (any number of research-spawned languages, such as Smalltalk or Lisp, fit in here).
Haskell has some really great theory behind it; it feels a lot like ML (it's Hindley-Milner, after all) with all of the hacks (like the ''a equality types, or even side effects, in a sense) taken out, because the theory is strong enough to support them in a consistent fashion. Even the syntax feels really clean. All in all, the adjective that comes to mind to describe the language is "gorgeous".
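(Concretely: where Standard ML bolts on the special ''a type variable to mean "any type that supports equality", Haskell gets the same effect from an ordinary type class. A minimal sketch; elem' is a made-up name so it doesn't shadow the Prelude's elem:)

-- The Eq constraint does the job ''a does in ML, but it's just a
-- class, not a special case wired into the type system.
elem' :: Eq a => a -> [a] -> Bool
elem' x = any (== x)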
For example: one of the papers I've read recently (one of Wadler's) introduces the array monad. The problem, basically, is that in a truly functional language the array-update function ought to return the new, updated array, but you can't mutate the old array out from under anyone else holding a pointer to it. The purist solution is to copy the array every time you update it, but that's ugly and slow. The trick is that the monad threads the array through the computation so that only one reference to it is ever live, which means that, with the proper abstractions above it, you can actually implement arrays using mutation without breaking the purity of the language. I really like the idea that the theory can get complicated and strong enough that it eventually folds back over to being implemented efficiently.
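(That idea survives in GHC today as the ST monad; here's a minimal sketch, assuming GHC's Data.Array.ST, which descends from that line of work. squares is just an illustrative name:)

import Data.Array (Array)
import Data.Array.ST (newArray, writeArray, runSTArray)

-- Build a table of squares by mutating an array in place. runSTArray
-- guarantees the mutable array never escapes the computation, so from
-- the outside this is an ordinary pure function.
squares :: Int -> Array Int Int
squares n = runSTArray $ do
  arr <- newArray (0, n) 0
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. n]
  return arr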
(Background: I study Arabic in my free time; it helps that my girlfriend speaks it.) I was thinking the other day about why human languages and computer languages are difficult to learn in such different ways. I find I'm pretty good at the latter, but the former comes incredibly slowly.
I think the difference is this: human languages (when contrasted against computer languages) are all structurally more or less the same*. Especially with a bit of background in linguistics, you can identify features. For example: I hear Meena say غ (the letter is called "ghain"; it's the "gh" in Baghdad), and think, "that's a voiced velar fricative, like a German r". (Contrast with picking up a new programming language: "oh, 'let' introduces variables in this one, while 'dim' does it over here".) Or, more abstractly, I learn that definiteness (is that the right term? "a" versus "the") doesn't exist in Japanese and is inflectional in Arabic, and after a bit of consideration I can make sense of it.
What makes human languages difficult is the lexicon: the sheer quantity of words, phrases, and idioms and their meanings. Here, programming languages are all more or less the same, and tiny: you get some loops, some recursion, some variables, and you're done. So in learning a new programming language, the work is all in the new ideas; I pick up ML and learn how to write loops using recursion, or I pick up Haskell and learn why we use monads, and the actual gain in terms of quantity of terms is minimal: Haskell feels like it only has a few keywords (let, where, do, case, of, ...).
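(For what it's worth, the "loops using recursion" idiom looks like this in Haskell too; sumTo is just an illustrative name:)

-- The "loop" is a tail-recursive helper threading an accumulator.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)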
There are two lessons in this for me. One is that structure comes more naturally to me than lexicon (I did notice I'd pick up the form of new constructions in Japanese much more quickly than my classmates, probably because I'm accustomed to doing it with computers). The other goes back to Haskell: it actually seems to have a pretty sizable lexicon. It still starts with a few basic parts and combines them in powerful ways, but the lexicon consists of the different combinations of those parts.
(Right here I was gonna describe an example using mapM_, but it doesn't make much sense unless you're already In The Know. It's basically a map plus a fold, lifted to a monad, where the result of the fold is thrown away. The other way to look at it is that it applies sequence_ to the result of map. But anyway, trust me, it's rad.)
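(For the brave, a sketch of that second reading; myMapM_ is a made-up name so it doesn't clash with the Prelude's mapM_:)

-- mapM_ runs a monadic action over a list and throws the results away,
-- which is the same as sequencing the list you get back from map.
myMapM_ :: Monad m => (a -> m b) -> [a] -> m ()
myMapM_ f = sequence_ . map f

main :: IO ()
main = myMapM_ print [1, 2, 3 :: Int]  -- prints one number per line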
I think I first became curious about the language from reading graydon (long before he was on LJ, even!) writing nice things about it. So thanks!
Also: I managed to make the interpreter segfault. I tried to reproduce it with a smaller test case, but I couldn't. It only occurred when I shadowed a variable:
let c = ...
let c = ...
If I renamed the second one to c', it would run happily. (And renaming was the only change: it wasn't like I used the renamed variable differently.)
* As always, there are exceptions that get stranger and stranger: polysynthetic languages on the human side, or Prolog and J on the computer side.