evan_tech

09:09 pm, 10 Jan 05

concurrent languages

Jens asked what languages I've heard about that have interesting concurrency work going on.

You can check out:
  • JoCaml (great name, by the way!) extends an older version of O'Caml "with the distributed join-calculus programming model. This model includes high-level communication and synchronising channels, mobile agents, failure detection and automatic memory management." (I don't really know what that means, but try scanning their tutorial.)
  • "Alice is a functional programming language based on Standard ML, extended with rich support for concurrent, distributed, and constraint programming. Alice extends Standard ML with several new features: ..." Be sure to look at their description of futures for a simple example of a keyword-level language extension, but they also discuss distributed objects and the other features we've come to expect from distributed programming languages.
  • "The Mozart Programming System is an advanced development platform for intelligent, distributed applications. The system is the result of a decade of research in programming language design and implementation, constraint-based inference, distributed computing, and human-computer interfaces."
  • My compilers professor Larry Snyder and some of my classmates worked on ZPL, "A Portable, High-Performance Parallel Programming Language for Science and Engineering Computations". As I understand it, ZPL programs are written as very high-level descriptions of what needs to happen to the elements of a matrix (expressions like "add to each element the element on its left"), and ZPL handles distributing that work across nodes (see the sketch just after this list). But check out their overview.
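
Here's a rough sketch of that "add to each element the element on its left" flavor in Haskell (my own illustration on plain lists, not actual ZPL, and the function name is mine):

    -- A whole-array operation in the ZPL spirit: each element gets its
    -- left neighbor added to it. No loop, no index bookkeeping; the
    -- system is free to partition the work however it likes.
    addLeftNeighbor :: Num a => [a] -> [a]
    addLeftNeighbor xs = zipWith (+) xs (0 : xs)
      -- Shifting the list right by one (padding the boundary with 0)
      -- lines each element up with its left neighbor.

    main :: IO ()
    main = print (addLeftNeighbor [1, 2, 3, 4])  -- prints [1,3,5,7]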


In general, code in functional languages tends to be naturally parallelizable, both because it expresses things at a higher level and because it avoids side effects. For example, if you initialize an array with a for loop, your compiler has to untangle what's happening in the loop before it can possibly distribute it. (Array initialization usually isn't inherently serial; it's just the way we're accustomed to expressing it.) An expression like array.initialize_each_element |i| (i * i) is more naturally divisible. The paper I linked to previously dealt with new research in Haskell concurrency.
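
As a concrete sketch (mine, not from any of the projects above), here's that element-wise initialization in Haskell using parMap from the parallel package:

    import Control.Parallel.Strategies (parMap, rseq)

    -- Element-wise initialization expressed as a map rather than a loop.
    -- Each element depends only on its own index -- no side effects, no
    -- loop-carried state -- so the runtime can farm the work out to
    -- however many cores are available. (In practice you'd chunk the
    -- list rather than spark per element, but this shows the idea.)
    squares :: Int -> [Int]
    squares n = parMap rseq (\i -> i * i) [0 .. n - 1]

    -- Build with "ghc -threaded" and run with "+RTS -N" to actually
    -- use multiple cores.
    main :: IO ()
    main = print (sum (squares 1000000))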

I think there are two sorts of concurrency that usually go hand in hand. On one end there are distributed systems that do lots of computation on each node and pass around smaller results -- stuff like what Google or LiveJournal does. The interesting work there is in systems that support RPC, like DCOM or whatever (XML-RPC, heh). At the other end you have concurrency within a single machine or a tightly coupled cluster, dealing with problems like efficient locks, synchronization, and races. But the solutions to both can often work together: Alice futures could probably be distributed across machines, though they look like they're designed for multithreaded apps.
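
To make "future" concrete, here's a minimal single-machine sketch in Haskell (my own illustration, not Alice's actual semantics, and the helper name is mine): a future is a handle to a computation already running on another thread, and asking for its value blocks until it's ready.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
    import Control.Exception (evaluate)

    -- Kick off a computation on another thread and hand back a box
    -- that will eventually hold its result.
    future :: IO a -> IO (MVar a)
    future action = do
      box <- newEmptyMVar
      _ <- forkIO (action >>= putMVar box)
      return box

    main :: IO ()
    main = do
      -- evaluate forces the sum on the forked thread, not at print time
      f <- future (evaluate (sum [1 .. 1000000 :: Int]))
      -- ... other work could happen here while the future runs ...
      result <- readMVar f  -- blocks until the future is filled
      print result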

(I get most of this sort of news from Lambda the Ultimate.)