Evan Martin (evan) wrote in evan_tech,

how i/o can work in a purely functional language

Without using fancy jargon or theory, here's an attempt to explain how you can do "impure" operations (like deleting a file, or updating an array) in a purely functional language. It's actually pretty simple; once you get it, it may be a bit of a let-down. But I need to get this out of the way for some later posts to make sense.

Why would you want such a thing?
The promise of a "pure" language is that a given "variable" will never change. (The word "variable" is a little misleading in that case. People sometimes call them "bindings" instead.) So once you write "x = 3", x is fixed, and that's it. This also implies that functions never change: if foo(3) gives you 10, then foo(3) will always be 10. (People sometimes use the verb "mutate" to describe the process of changing variables -- it makes sense in contrast with the word "immutable".)

There are a few reasons you might like a language like this. For example, some argue it makes programs easier to reason about, as there are fewer moving parts. Others point out that part of the optimization process done by compilers for "normal" languages is to translate code that modifies variables into code where each mutation instead makes a new variable (you can read more about this in the Wikipedia article on static single assignment, which says that GCC 4 uses this "extensively").

Another way to look at that is that mutation and history are two sides of the same coin: in a program where things don't change, there is no difference* between "before" and "after" a given statement. This means that the compiler is free to compute things in the order it thinks is best. (C geeks, if you haven't seen it before, you'll be interested to read the paper Threads Cannot Be Implemented as a Library, which explains how some natural-seeming compiler optimizations fail in the presence of threading.)

That comment about threading is relevant, too: in a language without mutation, any little piece of code is naturally independent of what the rest of the code is doing, so pure languages parallelize well.

You still need mutation
But in the practical world you do still need to change stuff sometimes, so pure languages still need to support this. So here's what you can do. (First, a disclaimer: this is just roughly how Haskell does it. Other languages have other approaches. But I'll use a more traditional syntax just so the Haskell syntax doesn't get in the way.)

Imagine you're given two objects called printA and printB and are told that they represent outputting the characters A and B. You can't actually call them as functions in your code -- "printA()" would be an observable effect, after all. But what your code can do is talk about printA itself (note: without the parens now), which refers to the operation of printing the letter A. So you define your language such that the binding called "main" must be one of these operations, and you're halfway there.

Here's my program that prints the letter A:
  // myint is an integer
  myint = 3  
  // foo is a function from ints to ints
  foo(x) = x + 7
  // main is an operation
  main = printA
(The compiler can drop the myint and foo bits 'cause they're not actually used.)
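
For comparison, here's roughly the same program in real Haskell, where an operation is an ordinary value of type IO (). This is just a sketch; I'm building printA out of Haskell's actual putChar primitive:
  -- printA is a plain value of type IO (); defining it prints nothing.
  printA :: IO ()
  printA = putChar 'A'

  -- These bindings are unused, just as in the pseudocode version.
  myint :: Int
  myint = 3

  foo :: Int -> Int
  foo x = x + 7

  -- main just names an operation; the runtime executes whatever main is.
  main :: IO ()
  main = printA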

A real program will want to do more than one thing, of course. So we could modify our language so that main is instead expected to be a list of operations. And we might want to actually use some of our other bindings, too:
main = [printB, (foo(myint) == 5) ? printA : printB, printA]
This one prints "BBA".
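
Haskell's main isn't actually a list, but the same shape exists there: a list of operations is ordinary data of type [IO ()], and the standard function sequence_ folds it into one operation that runs the elements in order. A sketch:
  printA, printB :: IO ()
  printA = putChar 'A'
  printB = putChar 'B'

  myint :: Int
  myint = 3

  foo :: Int -> Int
  foo x = x + 7

  -- The list itself is pure data; sequence_ runs its elements in order.
  -- Since foo myint is 10, not 5, this also prints "BBA".
  main :: IO ()
  main = sequence_ [printB, if foo myint == 5 then printA else printB, printA]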

From here you can even build things that feel sorta like subroutines, too. Consider this binding:
aOrB(x) = (x == 0) ? printA : printB

This is the place where people get tripped up, so be careful: aOrB is a function. But when you call aOrB(3), it doesn't print anything. Instead, you get back an operation -- either printA or printB -- and you can then stick that operation in your to-do list that is main.
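
In Haskell the types make this explicit: aOrB takes an Int and returns an operation, and applying it performs nothing. A sketch:
  printA, printB :: IO ()
  printA = putChar 'A'
  printB = putChar 'B'

  -- A pure function from Int to operation: aOrB 3 is just the value printB.
  aOrB :: Int -> IO ()
  aOrB x = if x == 0 then printA else printB

  -- Nothing prints when aOrB is applied; only main's operation gets run.
  main :: IO ()
  main = aOrB 3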

Using the same reasoning, we could define a more basic function print(x) that gives you back an operation that would print out x. Again, be careful here: print("hello") doesn't print anything, it just gives you back the operation that would print "hello". See how it's a pure function? Passing "hello" to this print doesn't change anything and you always get back the exact same thing.
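
Haskell's real putStrLn behaves exactly like this print: applying it to a string is a pure computation that merely yields an operation.
  -- Applying putStrLn is pure: greeting is just a value, and at this
  -- point nothing has been printed.
  greeting :: IO ()
  greeting = putStrLn "hello"

  main :: IO ()
  main = greeting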

The final piece is input. Let's imagine we were given one final primitive operation, called readInt. Again, you could imagine that if "readInt()" were legal it'd read some input from stdin and return an integer, but again that is an effect and is not allowed. Using the same trick as before, we can instead talk about the operation readInt, which, if it were run, would produce an integer.

Then we rejigger our language slightly again. main is still a list of operations, but we declare that if any operation in the list is the sort that produces a value (like readInt), this value is fed into the next thing in the list to get the next operation. Here's a new program:
main = [printA, readInt, aOrB, printB]

This program prints the letter A, then waits for you to input an integer. If that integer is 0, it prints another A; otherwise it prints a B. And then, finally, it prints a B. But the crazy thing about it is that it is still completely pure: main is defined to have a single constant value. (You'll even note that, given the same inputs, it will always produce the same outputs, in the sense of what the operations do.)
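
Haskell's analogue of readInt is readLn. Haskell doesn't actually use a list-feeding rule; it sequences with operators much like the one we'll build in the next section, but for comparison here's a sketch of the same program (readLn, putChar, (>>), and (>>=) are all real Haskell):
  printA, printB :: IO ()
  printA = putChar 'A'
  printB = putChar 'B'

  aOrB :: Int -> IO ()
  aOrB x = if x == 0 then printA else printB

  -- readLn is an IO Int: the operation of reading an integer from stdin.
  -- (>>) sequences two operations; (>>=) feeds the produced value onward.
  main :: IO ()
  main = printA >> (readLn >>= aOrB) >> printB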

And that's it. You've seen conditional control flow involving I/O, but the language is still pure. Do you feel a bit cheated? (I sorta did when I first learned this -- it's like they just sidestepped the whole issue.)

Making it more pleasant
You can't pleasantly write programs with the language I outlined above, but it has all the major concepts. Let's define it in a conceptually equivalent way that makes it a bit more useful.

First, make up a syntax for anonymous functions: uh, ruby-style, I guess. With that syntax, here are two equivalent definitions of foo:
foo(n) = n + 7
foo = {|n| n + 7 }

(The bit between the pipes is the argument list.) So here's the same main program again:
main = [printA, readInt, {|x| (x == 0) ? printA : printB }, printB]
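
(For comparison, Haskell's anonymous-function syntax is a backslash, meant to suggest a lambda; here are the same two equivalent definitions of foo:)
  foo :: Int -> Int
  foo n = n + 7

  -- The same function, written with Haskell's lambda syntax.
  foo' :: Int -> Int
  foo' = \n -> n + 7

  main :: IO ()
  main = print (foo 3, foo' 3)   -- prints (10,10)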

Then let's define a magical operator "=>" that sequences two operations, feeding the result of one into a function that produces the next and giving us back the whole thing as a single operation. So an operation to print "AB" would be
printAB = (printA => {printB})
and an operation to print "ABAB" would be
printABAB = (printAB => {printAB})
and that same main again would be
main = printA => { readInt => {|x| ((x == 0) ?
         printA : printB) => { printB } } }
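
This "=>" is, almost exactly, Haskell's real (>>=) operator (pronounced "bind"). Here's the same main as a sketch; the \_ -> lambdas ignore the uninteresting value that a print operation produces:
  printA, printB :: IO ()
  printA = putChar 'A'
  printB = putChar 'B'

  -- Each lambda receives the value the previous operation produced:
  -- () for the prints, an Int for readLn.
  main :: IO ()
  main = printA >>= \_ ->
         readLn >>= \x ->
         (if (x :: Int) == 0 then printA else printB) >>= \_ ->
         printB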


More usefully, to ask for input you might define a function like:
input(prompt) = print(prompt + ": ") => { readInt }
which you could then use in a program like this:
main = input("enter a number") => {|num| print(num+num) }
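
The input helper translates almost symbol for symbol into Haskell (putStr, readLn, and print are real; the hFlush is a practical wrinkle our pseudocode glosses over):
  import System.IO (hFlush, stdout)

  -- input builds one compound operation: print the prompt, flush so it
  -- actually appears, then read an Int.
  input :: String -> IO Int
  input prompt = putStr (prompt ++ ": ") >>= \_ -> hFlush stdout >>= \_ -> readLn

  main :: IO ()
  main = input "enter a number" >>= \num -> print (num + num)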

Some interesting consequences
What we've actually done is turn control flow into something that can be talked about in the same way we talk about data. In the land of pure data we never needed a "sequence" operator, because we never needed to say "do this first, and then this other thing second". But with the addition of just one operator, all of the other tools we have are still available. To sum a list of integers, you stick a plus sign between each element in the list. To run a list of operations in order, you stick a => between each element. You can use the same code to define both.
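
Haskell makes that parallel literal: the same fold shape sums a list of integers or sequences a list of operations. A sketch (sumAll and runAll are made-up names; runAll is essentially how the standard sequence_ is defined):
  -- Summing a list: stick (+) between elements.
  sumAll :: [Int] -> Int
  sumAll = foldr (+) 0

  -- Running a list of operations: stick (>>) between elements.
  runAll :: [IO ()] -> IO ()
  runAll = foldr (>>) (return ())

  main :: IO ()
  main = runAll [print (sumAll [1, 2, 3]), putStrLn "done"]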

And we can even define our own control flow operators. Here's a function that repeats an operation until the user enters zero, and an example use of it:
repeatUntilZero(op) = op => { readInt =>
                      {|x| (x == 0) ? 0 : repeatUntilZero(op) } }

main = repeatUntilZero(print("enter a zero:"))
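
The Haskell version is nearly identical, except that both branches of an if must have the same type, so the zero case returns the do-nothing operation return () where the pseudocode returned 0:
  -- A user-defined control-flow operator: an ordinary recursive function
  -- that builds a looping operation out of smaller ones.
  repeatUntilZero :: IO () -> IO ()
  repeatUntilZero op =
    op >>= \_ ->
    readLn >>= \x ->
    if (x :: Int) == 0 then return () else repeatUntilZero op

  main :: IO ()
  main = repeatUntilZero (putStrLn "enter a zero:")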


But through all of this we're still consistent about the difference between pure computation and side effects: once you've pulled an operation into some code, there's no way for that code to do anything useful with it without returning something that includes that operation. Since we've made the sequencing of dependent operations explicit, it's clear which bits can be rearranged (as before, all of the code) and which can't (you wouldn't move stuff across the => any more than you'd rearrange the elements of a list).

This finally sets the stage for my next post, which will build from here to monads, which extend this idea in all sorts of fascinating ways.

Thanks for reading
Was that too fast or too slow? I'm never sure which parts are obvious and which aren't.

* Surely there is a difference, as running the code must do something, after all. But the difference is computational time, which isn't observable by the code -- when I write "x = 3" and then "y = 2", there's no mention in the code of how much time or processor power is spent between steps one and two.
Tags: programming languages