# evan_tech

08:33 am, 29 Oct 03

### lambda

Last CS class was another 80-minute-long proof. As I understood it, the point was to show that expressions are guaranteed to terminate in our simply typed lambda calculus.

Most of these programming language proofs are hard for me to grasp because we programmers have such an intuitive grasp of what programming languages "do", without any hard logical understanding of them. This proof seemed like another one of those of-course-it's-true proofs. But I've been thinking about it a bit more, and it's actually pretty cool.

The reason it seems obvious is that it guarantees that *expressions*, not statements, terminate. When I try to think of an expression, I think of something like (4 + (5 * 3)), and it seems pretty clear to me that all of those will either evaluate to something or die (division by zero or whatever). But first-class functions make the problem a bit more interesting.

We still can't do anything like assign to variables, but we can still cause infinite loops. Here's an expression in lambda calculus:
(λx. x x)(λx. x x)¹.

What does it mean? Take the pieces. (λx. x x) is a function of one parameter (that's what the (λx. ...) means), and its body takes the parameter (the second x) and uses it as a function to call on that same parameter. The larger expression, which doubles that, then passes the function in as its own argument.
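One reduction step makes the loop explicit: substituting the argument for x in the body hands back exactly the expression we started with (I'm writing the step out by hand here):

```
  (λx. x x) (λx. x x)
→ (x x) with x := (λx. x x)     -- substitute the argument into the body
= (λx. x x) (λx. x x)           -- and we're right back where we started
```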

Are you confused yet? Here's the same expression in Python, which obviously borrows the lambda calculus idea but uses a more traditional syntax and might be easier to follow:
`  (lambda x: x(x))(lambda x: x(x))`

Keep in mind here that we're not doing anything obviously infinite like just declaring a function and having it call itself-- we're still just evaluating an expression! Now convince yourself that that recurses infinitely.
(Or just try running it in a Python interpreter. It'll stop with a "maximum recursion depth exceeded" message.)
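If you want to watch it die without killing your session, you can catch the failure. (A sketch: modern Python 3 raises RecursionError for this; older versions raised a plain RuntimeError.)

```python
# The self-applying expression from above, given a name for convenience.
omega = lambda x: x(x)

try:
    omega(omega)  # recurses until the interpreter gives up
except RecursionError:
    print("maximum recursion depth exceeded")
```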

Here's the same code in Ruby (a little bit uglier because of the way function calls work in Ruby):
`  proc {|x| x.call(x)}.call(proc {|x| x.call(x)})`

and in Perl (even uglier, but maybe a Perl guru can think of something better than reusing $_[0] -- and did you even know you could call subrefs like this? If you wrap the entire sub {...} in curlies and then ->() on it, it doesn't work):
`  sub { $_[0]->($_[0]) }->(sub { $_[0]->($_[0]) })`

The Ruby one dies like the Python one, but the Perl one just sat there and gobbled memory. I'm surprised my computer didn't lock up (it usually does when I write programs that do that).

So why is this interesting, aside from being a weird bit of code? The proof from class showed that with "strong" typing information, this code would be illegal, and beyond that it's actually impossible to write an expression that doesn't terminate!

Try writing something similar in C without using any casts (i.e., in a more strongly typed language) to see why this particular example wouldn't compile. We start with a straightforward declaration:
`  void x(void (*y)(void (*)(` ...er, what goes in here?
We need to be able to call y(y) in the body of the function, which means that y must be a function that can take, uh, functions of type y as the argument... the actual type recurses infinitely!
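Spelled out: if y has type T, then calling y(y) forces T to be a function type taking T itself as its argument -- say T = T -> b, where b is just a name I'm using for the result type. Expanding that equation never bottoms out:

```
  T = T -> b
    = (T -> b) -> b
    = ((T -> b) -> b) -> b
    = ...
```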

In fact, in OCaml (which handles other polymorphism, like the identity function, just fine), it fails to typecheck just for this reason:
```
  # fun x -> x x;;
  This expression has type 'a -> 'b but is here used with type 'a
```

Super neato. I wonder if the proof applies to a more powerful language like OCaml?

Speaking of proofs, they mentioned in class that someone showed C++'s template system was Turing-complete. People have reportedly constructed C++ programs that, for example, cause the *compiler* to emit the first n primes in its warnings.

Also in OCaml-land: last night, armed with a better understanding of the language, I rewrote my Brainfuck interpreter with a better approach. I use two lists to represent the tape, where the heads of each are the nearest cell of the tape on either side of the current position. Because we can only scoot over by one cell at a time in Brainfuck, this allows us to represent an infinite tape somewhat efficiently. (I'm pretty sure I stole this idea from the other OCaml BF interpreter I found a few months ago...)
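The tape idea, sketched in Python rather than OCaml (the names and details here are mine, not the actual interpreter's): keep the cell under the head on its own, with two stacks holding the cells to its left and right, nearest cell on top. A move pops from one stack and pushes onto the other, conjuring fresh zero cells whenever a stack runs dry.

```python
class Tape:
    """An "infinite" Brainfuck tape as two stacks around the head."""

    def __init__(self):
        self.left = []    # cells left of the head, nearest cell on top
        self.cur = 0      # the cell under the head
        self.right = []   # cells right of the head, nearest cell on top

    def move_right(self):
        self.left.append(self.cur)
        # The tape is conceptually infinite: invent a zero cell if needed.
        self.cur = self.right.pop() if self.right else 0

    def move_left(self):
        self.right.append(self.cur)
        self.cur = self.left.pop() if self.left else 0
```

Since Brainfuck's > and < only ever move one cell at a time, every move is a constant-time push/pop, and only the cells you've actually visited take up any memory.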

OCaml has some crazy metalanguage stuff going on². Their preprocessor manual seems as large as their language manual, and they even provide (and use) an alternate syntax for the language, to show how relatively meaningless syntax is. OCaml syntax still weirds me out, but in a sorta neat way, like Japanese.

And for those who suspect I've forsaken Ruby, fear not! I still can't think of any useful program I'd ever use OCaml for. I checked out their GTK bindings the other day and they're so weird I can't even handle it. I'm almost tempted to learn the preprocessor well enough to attempt writing a C-like syntax for it... (but really, I know that it's weird for reasons much deeper than the syntax).

¹ I get the impression this is well-known, because the OCaml manual uses an expression like it in one of their metalanguage examples where they add lambda calculus to the language.

² It'll be neat to see what this sort of functionality will cause in a more popular language like Perl6. I saw an alternate Python-ish language that ran on the Java virtual machine the other day... but that's at a much lower level than where syntax extensions work.

•  I have seen C preprocessor macros that emit primes. I'll bug mini to get the source if you'd like.
•  There are whole articles about C++ template metaprograms, e.g. at http://osl.iu.edu/~tveldhui/papers/Template-Metaprograms/meta-art.html. Too advanced for my liking, but fun to look at!