April 10th, 2005

  • evan


Hot on the heels* of my post about hip languages and encroaching types, GvR writes about multimethods in Python, defining them as "a function that has multiple versions, distinguished by the type of the arguments".

I think the more pedantic definition is that they vary on the run-time type of the arguments. (But as I understand Python, that's the only sense of type that exists...) For example, you could already implement the example multimethod code he's written in multimethod-less C++, with ordinary compile-time overloading, since his examples only dispatch on types that are evident at the call site.
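GvR's decorator-based approach boils down to a table keyed on argument types, consulted at call time; here's a minimal sketch of that idea (the names here are my own, not his actual code):

```python
# A minimal sketch of runtime-type multimethod dispatch, in the
# spirit of GvR's example (hypothetical names, not his real code).
registry = {}

def multimethod(*types):
    def register(fn):
        # All versions of a function share one table of implementations,
        # keyed by the tuple of argument types they were registered with.
        impls = registry.setdefault(fn.__name__, {})
        impls[types] = fn
        def dispatch(*args):
            impl = impls.get(tuple(type(a) for a in args))
            if impl is None:
                raise TypeError("no match for %r" % (args,))
            return impl(*args)
        return dispatch
    return register

@multimethod(int, int)
def join(a, b):
    return a + b

@multimethod(str, str)
def join(a, b):
    return a + " " + b
```

The point is that the lookup happens on the run-time types of the arguments, at every call.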

The group I almost got involved with at UW was always doing research on multimethods. A random Python talk mentions the Chambers and Chen algorithm, and I was briefly in Chambers' advanced compilers class...

But I never looked into multimethods too hard 'cause they pretty much give me the heebie-jeebies. I wonder if it's just the way I've been trained to think; the canonical example (overriding the 2d-point add method with one specialized for a pair of 3d points that also adds their z coordinates) makes sense, sorta. I think all of the ML has also warped my brain away from thinking about everything in terms of objects anyway.
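Spelled out, that canonical example dispatches on the runtime types of both arguments at once, which single dispatch can't express directly. A sketch with made-up Point classes:

```python
# Hypothetical Point classes, just to illustrate the canonical example.
class Point2D:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Point3D(Point2D):
    def __init__(self, x, y, z):
        Point2D.__init__(self, x, y)
        self.z = z

def add(a, b):
    # Multimethod-style dispatch on the runtime types of *both*
    # arguments: only when both points are 3-d do we add z coordinates.
    if isinstance(a, Point3D) and isinstance(b, Point3D):
        return Point3D(a.x + b.x, a.y + b.y, a.z + b.z)
    return Point2D(a.x + b.x, a.y + b.y)
```

With real multimethods the two cases would be two separate method definitions, and the language would pick one at call time.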

* Actually his post preceded mine by a few days, but I just saw it.

code as text, trees, and computations

Since I pointed you at those slides, I'll point you at another one, related to something that's been on my mind lately: code as text.

I think the temptation to just fall back on string representations of code is strong -- and giving in is always a bad idea. It seems so natural because we're accustomed to thinking about code as a bunch of strings. So if I need to cut'n'paste some code in my editor, the reasoning goes, why not factor out the shared text and just write code that generates the code I need?
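For concreteness, the temptation looks something like this toy sketch (entirely hypothetical code):

```python
# The string-pasting temptation: build source text, then exec it.
# A toy example of the approach argued against here.
template = "def %(name)s(x):\n    return x %(op)s 1\n"

namespace = {}
exec(template % {"name": "incr", "op": "+"}, namespace)
exec(template % {"name": "decr", "op": "-"}, namespace)
```

It works, right up until the pasted fragments need quoting, indentation, or hygiene -- none of which plain strings know anything about.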

But the abstract computation that code denotes is significantly different from strings of text. In dropping to text you're effectively giving up on the language itself. Any time you need to escape the structure of the language, it means the language has failed you -- like casts in C. First we used #define for constants, then for macros, then we wanted variadic macros, and suddenly there's this whole new programming language embedded in C that is less well thought out (pass by name! it's like m4 all over again!).

The Lisp macro solution is much more elegant, because it lets you operate on the parse tree. camlp4, the O'Caml preprocessor, lets you write macros that work on the parse tree as well, but O'Caml's syntax is complicated enough that doing so is beyond mortals. (Except for graydon, but he's kinda supermortal anyway.)
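Python's own ast module gives a flavor of tree-level manipulation, though it's a far cry from real macros. A sketch:

```python
import ast

# Operate on the parse tree instead of pasting strings:
# parse an expression, rewrite its literals, then compile it.
tree = ast.parse("x + 1", mode="eval")

class DoubleConstants(ast.NodeTransformer):
    def visit_Constant(self, node):
        # Rewrite every numeric literal n into 2n.
        if isinstance(node.value, (int, float)):
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

tree = ast.fix_missing_locations(DoubleConstants().visit(tree))
code = compile(tree, "<ast>", "eval")
```

The transformation can't produce an unparseable fragment, because there are no fragments -- only well-formed tree nodes.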

Playing with Parsec teases me with more of the functional and Haskell dream: that you can work at the level of functions and computations. It's really neat that you can take a parser, letter, that accepts a single letter, then apply a function to it and get back a new parser that accepts any sequence of letters. And beyond that, they provide a whole library of parser combinators, and the combinations of parsers can be expressed with monads, so no new syntax is necessary... but I'm starry-eyed and it's all a little mysterious to me, still.
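The core idea translates even into a few lines of Python (a toy sketch, nothing like real Parsec): a parser is just a function, and a combinator builds a new parser out of an old one.

```python
# A parser is a function from (text, pos) to (value, new_pos),
# or None on failure.
def letter(text, pos):
    if pos < len(text) and text[pos].isalpha():
        return text[pos], pos + 1
    return None

def many(p):
    # Combinator: from a parser p, build a parser that applies p
    # repeatedly and collects the results.
    def parse(text, pos):
        values = []
        result = p(text, pos)
        while result is not None:
            value, pos = result
            values.append(value)
            result = p(text, pos)
        return values, pos
    return parse

word = many(letter)  # a new parser that accepts a sequence of letters
```

No new syntax, just function application -- which is exactly the appeal.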

PS: I'm fully aware that I've given no consideration to practicality. Whatever.