So I rewrote it, but in C this time, in the way the author of the paper suggested (using an adjacency matrix instead of an adjacency list).
I have friends data for: barbee brad bram dl ellenlouise erik evan faith gaal lisa meena mobley patrick petit_chou toast whitaker xaotica. I figured that'd have some interesting subgroups because they're different circles of friends of mine.
To run on my 400MHz PowerBook:

    ./friends  23.79s user 0.14s system 92% cpu 25.800 total
      %   cumulative     self               self    total
     time    seconds   seconds     calls   s/call   s/call  name
    93.33     216.86    216.86      1101     0.20     0.21  matrix_step
     3.71     225.47      8.62      1100     0.01     0.01  merge
     1.60     229.18      3.71    730718     0.00     0.00  matrix_dq
     0.92     231.32      2.14   1948423     0.00     0.00  lookup
Which is not at all what I expected. matrix_step is the main iteration of the algorithm: it basically runs matrix_dq over all the edges and picks the one with the largest value.
And matrix_dq (which is called a whole lot of times) is the part that moves into floating point and does some divides... but it's not the bottleneck. The bottleneck is the iteration over the (rather sparse) graph: with n nodes (1105 in this case), you merge somewhere around n times (I end up with four groups), and each step scans the whole n-by-n matrix, so it's O(n^3) iterations overall.
Then I rewrote it the way I did it in OCaml, except I could mutate the lists: 0.35s. Kick ass.