evan_tech

A real post is eventually coming, I swear.

Outside of work I've still been playing with Haskell. It's certainly an interesting mental exercise, and using it is helping shift the way I think about computation. For one thing, lazy evaluation makes the difference between values and functions hazier. The syntax both Haskell and O'Caml (and lisp, sorta) use for bindings makes them look similar -- and yeah, functions are values, too -- but it's more than that. When I write a function like let f x = ... instead of a value like let f = ..., the evaluation of the function body is delayed, and that is a big part of what it means to be a function in these languages. (For example, how could you implement "if" without closures?) But when (like in Haskell) there's no reason to delay an evaluation (because it doesn't affect the program's result anyway), you can use values and functions interchangeably. I can define an "if" function simply:
myIf :: Bool -> a -> a -> a
myIf test truecase falsecase = if test then truecase else falsecase

which would be an incorrect definition in C, ML, lisp, etc.
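To make that concrete, here's a small sketch (my own example, not from any particular implementation) of why this definition only works under lazy evaluation: the untaken branch is never forced, so it can even be undefined.

```haskell
-- Under lazy evaluation, arguments are only evaluated when needed,
-- so the branch myIf doesn't take is never touched.
myIf :: Bool -> a -> a -> a
myIf test truecase falsecase = if test then truecase else falsecase

main :: IO ()
main = do
  -- The false branch is 'undefined', but since the test is True,
  -- it is never evaluated and this runs fine.
  putStrLn (myIf True "taken" undefined)
  -- In a strict language, both branches would be evaluated before
  -- the call, crashing here -- which is why "if" needs to be special
  -- syntax (or take explicit closures) in C, ML, lisp, etc.
```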


With more perspective I can say O'Caml is especially interesting because it has strong theory behind it (type inference, polymorphism), but its designers also made the pragmatic decisions needed to make it fast. Unfortunately, the fancy object system (which is a large part of the language) is mostly lost on me.


I found this bit from Xavier Leroy regarding running O'Caml on 64-bit G5s sorta interesting:
As others have explained, the first thing you need is a 64-bit kernel and a development environment (C compiler, linker, libraries) that handles 64-bit code. The next release of Mac OS X is rumored to offer all this.

[...]

Also, the only situations where 64-bit code is beneficial are 1- large integer arithmetic (bignums, crypto), and 2- exploiting more than 4 Gb of RAM. In all other cases, 64-bit code is actually a waste, since pointers occupy twice as much memory as with 32-bit code.

So, I expect 64-bit computing to take off when machines commonly have 4 Gb of RAM or more, which should take a few more years. Caml will have no problems adapting to this trend, since it's 64-bit clean from the start. (Caml Special Light, the ancestor of OCaml, was developed circa 1995 on a 64-bit Alpha, then backported to 32-bit architectures.) I expect that at that time our "tier 1" architectures will be x86-64 and PPC-64.
This indirectly reminds me of a comment I heard once: "many problems in computer science can be boiled down to trying to make one pointer into two". I wonder if it'd be useful to just use 32 bits of address on 64-bit machines and get the two pointers for free. Stuff that comes to mind includes being able to represent pairs (such as closures) as immediate values, though I really shouldn't even try to comment on such things because I know very little about how they're actually implemented.
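Just to sketch the bit arithmetic behind that idea -- this is a toy illustration with Word32 standing in for 32-bit addresses, not how any actual runtime does it:

```haskell
import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word32, Word64)

-- Pack two 32-bit "addresses" into one 64-bit word:
-- the first in the high 32 bits, the second in the low 32 bits.
pack :: Word32 -> Word32 -> Word64
pack hi lo = (fromIntegral hi `shiftL` 32) .|. fromIntegral lo

-- Recover both halves from the packed word.
unpack :: Word64 -> (Word32, Word32)
unpack w = (fromIntegral (w `shiftR` 32), fromIntegral (w .&. 0xFFFFFFFF))

main :: IO ()
main = print (unpack (pack 0xDEADBEEF 0xCAFEBABE))
```

So a pair of 32-bit pointers would fit in a single machine word, at the cost of limiting the heap to 4 GB of that 64-bit address space.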