evan_tech

05:24 pm, 3 Feb 07

time travel

The latest Monad Reader (a Haskell magazine, sorta) has a neat article that cleverly applies lazy evaluation to get an effect that really does seem like time travel.

The problem is this: imagine you're writing a DSL atop Haskell that produces assembly language. You want to be able to write jump instructions that can jump to a label that occurs later in the code, but when you're generating the underlying machine code for a jump instruction you don't know what offset to jump to until you've generated code all the way up to the label. One obvious solution is to do two passes -- one to lay everything out and record the label offsets, then another to generate the code now that the offsets are known.
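For concreteness, here's roughly what the two-pass approach might look like on a toy instruction set (the types and names here are my invention, not the article's):

```haskell
import Data.Maybe (fromJust)

data Instr = Push Int | Jump Int | Label String | JumpTo String
  deriving (Show, Eq)

-- Pass 1: walk the program, recording each label's offset.
-- Labels are markers only and occupy no space in the output.
layout :: [Instr] -> [(String, Int)]
layout = go 0
  where
    go _ [] = []
    go pc (Label name : is) = (name, pc) : go pc is
    go pc (_ : is) = go (pc + 1) is

-- Pass 2: emit code, resolving symbolic jumps against the table.
emit :: [(String, Int)] -> [Instr] -> [Instr]
emit table = concatMap resolve
  where
    resolve (Label _) = []
    resolve (JumpTo name) = [Jump (fromJust (lookup name table))]
    resolve i = [i]

assemble2 :: [Instr] -> [Instr]
assemble2 prog = emit (layout prog) prog
```

So `assemble2 [JumpTo "end", Push 1, Label "end", Push 2]` resolves the forward jump to `Jump 2`.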

Here's where it gets clever. Rather than two passes, they instead write the code in such a way that the code-generator function's output is also passed as a parameter to the same function. (This is often referred to as data recursion; it's the same technique used in Packrat parsing, and it's something I don't yet fully grok.) So why does it work?
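A sketch of the trick on a toy instruction set (again, my own hypothetical names, not the article's code): the label table is an *output* of the traversal, and that very same table is consumed *inside* the traversal.

```haskell
import Data.Maybe (fromJust)

data Instr = Push Int | Jump Int | Label String | JumpTo String
  deriving (Show, Eq)

-- One pass: 'table' is both produced by 'go' and read by 'go'.
-- This is well-defined under lazy evaluation, because the lookups
-- that consume 'table' sit inside unforced jump parameters; building
-- the table itself never forces them.
assemble :: [Instr] -> [Instr]
assemble prog = code
  where
    (code, table) = go 0 prog

    go _ [] = ([], [])
    go pc (Label name : is) =
      let (rest, tbl) = go pc is
      in (rest, (name, pc) : tbl)          -- contribute to the table
    go pc (JumpTo name : is) =
      let (rest, tbl) = go (pc + 1) is     -- read the "future" table
      in (Jump (fromJust (lookup name table)) : rest, tbl)
    go pc (i : is) =
      let (rest, tbl) = go (pc + 1) is
      in (i : rest, tbl)
```

Here `assemble [JumpTo "end", Push 1, Label "end", Push 2]` yields `[Jump 2, Push 1, Push 2]` in a single traversal.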
As we all know Haskell is a lazy language, which means it has an unusual order of evaluation equivalent to so-called normal order evaluation. Normal order evaluation has the property that if any evaluation order reduces to a result, then normal order will also. This means one can forget what the actual order of evaluation is. So long as one can find some order of evaluation that delivers a result, then one knows the Haskell code gives the same result.

In this case, imagine [this code] first going through and computing all the instruction names, without filling in the instruction parameter values. At the same time it creates the association list of labels and locations. Imagine that only after this association list is created does the evaluator go through and fill in the parameters for the instructions. Because only the parameters for the instructions require the association list, this evaluation order works. This means that whatever order Haskell uses, it will compute the same result.
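That imagined schedule leans entirely on laziness: an instruction's "name" (its constructor) can be inspected while its parameter is still an unevaluated thunk. A tiny illustration (hypothetical names of my own):

```haskell
-- One parameter is a thunk that would crash if forced, yet we can
-- still read off all the instruction names, because 'map fst' never
-- demands the second components.
pending :: [(String, Int)]
pending = [("jump", undefined), ("push", 1)]

names :: [String]
names = map fst pending
```

Evaluating `names` gives `["jump","push"]`; the `undefined` parameter is never touched.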
The article then presents a different way of thinking about it:
That is the official explanation, but I prefer the sci-fi explanation. When we pass the output of [this function] into the input for the oracle we are actually sending the data backwards in time. So when [the code] queries the oracle we get a result from the future.

Time travel is a very dangerous business. One false move and you can create a temporal paradox that will destroy the universe (which in this case means that the computation will diverge). When programming with values from the future, it is important never, never, to do anything with the values that might change the future. This is the temporal prime directive.
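The directive has a concrete reading: the function you feed its own future output may *build structure around* that value, but must never *inspect* it. A minimal sketch using `mfix` from `Control.Monad.Fix` (my example, not the article's):

```haskell
import Control.Monad.Fix (mfix)

-- Obeys the temporal prime directive: consing onto 'xs' never
-- inspects it, so the knot ties and we get a lazy infinite list.
safe :: IO [Int]
safe = mfix (\xs -> return (1 : xs))

-- Violates it: the comparison forces the value from the future,
-- so this computation diverges (GHC reports it as <<loop>>).
paradox :: IO Int
paradox = mfix (\x -> return (if x > 0 then 1 else 2))
```

`safe` happily yields as many 1s as you demand; running `paradox` destroys (that corner of) the universe.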
They then go on to show another, even cleverer, way to express this using a GHC 6 extension (see 7.3.3) that allows monads to refer to bindings "from the future".
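That extension is, I believe, recursive do-notation (`mdo`), which desugars to `MonadFix`; a minimal sketch of the flavor (my own toy, not the article's code):

```haskell
{-# LANGUAGE RecursiveDo #-}

-- 'xs' appears on the right-hand side before it is bound --
-- the binding refers to itself "from the future". The mdo block
-- is desugared into a call to mfix under the hood.
firstThree :: IO [Int]
firstThree = mdo
  xs <- return (1 : xs)
  return (take 3 xs)
```

Running `firstThree` produces `[1,1,1]`: the binding ties the same lazy knot as the explicit `mfix` version, just with nicer syntax.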