evan_tech

06:23 pm, 20 Feb 04

graph merging

I got my first negative feedback (beyond comments on my shaving habits) from quiz section today. I usually try to talk about stuff outside the class's scope, because networking, especially as presented by school, is rather boring (the most common complaints are about the proliferation of acronyms and formulas). But for me, all that is just the fundamentals that get in the way of the more interesting stuff. We have to use all these bits and wires, but on top of them, routing and protocols become something motivated more by social factors.
I know that, at least for the classes I've taken, I'd rather read the book and have the class discuss stuff outside of it. So last section, for example, I talked about SYN flooding and why it's a problem... but it's also important for me to remember that I'm there to assist in teaching, not to diverge too much from the lectures. *shrug*

That sorta dampens my good mood, but not fully: I think I have a working implementation of that group-finding algorithm. The problem I mentioned having was actually discussed more fully in a footnote, which was disguised as a bibliography reference (how was I to know that [1] through [18] were references to papers, but [19] actually discussed the details!).

The basic idea of the algorithm is this: you have a magic cost function that tells you how good a given grouping of the nodes is. To maximize it, you start with each node in its own group, then iteratively compute the delta cost of merging the two groups at the ends of each between-group edge, and perform the best merge. Once even the best delta cost goes negative, you've hit a local maximum and you stop. (The paper discusses in depth how well this compares to maximizing the function more properly.)
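Here's a minimal sketch of that greedy loop, just to pin down the structure. The cost function is left abstract (delta_cost is a stand-in for the paper's quality measure, not something it defines by that name), and all the other names are mine:

    def greedy_group(nodes, edges, delta_cost):
        """Start with singleton groups; repeatedly perform the merge with
        the best delta cost; stop once even the best delta is negative."""
        groups = {n: {n} for n in nodes}   # group id -> member set
        group_of = {n: n for n in nodes}   # node -> group id

        while True:
            best = None
            for u, v in edges:
                a, b = group_of[u], group_of[v]
                if a == b:
                    continue               # only consider between-group edges
                d = delta_cost(groups[a], groups[b])
                if best is None or d > best[0]:
                    best = (d, a, b)
            if best is None or best[0] < 0:
                return list(groups.values())   # local maximum: stop
            _, a, b = best
            members = groups.pop(b)        # merge group b into group a
            groups[a] |= members
            for n in members:
                group_of[n] = a

This naive version rescans every edge on every pass; the adjacency-list trick below is about avoiding some of that repeated work.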

I think my implementation might be faster than the way they presented it, and it definitely uses less memory. They represent the graph as an n-by-n matrix, so merging two groups means adding their rows and columns together, which is O(n), and since a merge touches both rows and columns it can't easily be done on a sparse matrix (at least as I understand them). I represent the graph as an, um, adjacency list, and by being careful about how I merge nodes (the graph is always shrinking, sorta, and I can combine two nodes' edge lists with a simple merge) it speeds up as it progresses. My gut feeling is that it doesn't significantly change the n^2 runtime, but it was fun discussing the problem and trading ideas with Jeff. (I know so many Jeffs, sorry. This is the classmate I hang out with a lot; he's not especially knowledgeable about anything in particular, but he's got a lot of smarts and good intuition, so he usually has something to contribute on any problem.)
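Concretely, that merge step is just the classic sorted-list merge. A sketch, assuming each node keeps its neighbors as a sorted list of (neighbor, weight) pairs; the weight bookkeeping is my assumption about how to count the parallel edges that pile up as groups combine:

    def merge_edge_lists(a, b, a_edges, b_edges):
        """Merge node b's sorted (neighbor, weight) list into node a's,
        summing weights for shared neighbors and dropping the now-internal
        a<->b edges. One pass over both lists, like the merge in mergesort."""
        merged = []
        i = j = 0
        while i < len(a_edges) and j < len(b_edges):
            (an, aw), (bn, bw) = a_edges[i], b_edges[j]
            if an == bn:
                merged.append((an, aw + bw))  # shared neighbor: combine weights
                i += 1
                j += 1
            elif an < bn:
                merged.append((an, aw))
                i += 1
            else:
                merged.append((bn, bw))
                j += 1
        merged.extend(a_edges[i:])
        merged.extend(b_edges[j:])
        # a and b are now the same node, so edges between them would be self-loops
        return [(n, w) for n, w in merged if n != a and n != b]

Each merge is linear in the two lists' lengths, and since merged nodes replace their parents, there are fewer and fewer lists as the algorithm progresses.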

Now to try feeding my LJ friends data and my sex chart data into it!