One proposal was that the browser could send the URL you're visiting to a server that would look it up. That's consistent with Greasemonkey's XMLHttpRequest style, but terribly broken:
- for privacy reasons, you don't want to send every URL you visit to a central server;
- a central server couldn't handle the load of a bunch of people hitting it with every page.
It seems the right solution is to let clients periodically download an index.
Greasemonkey scripts provide regular expressions that match the URLs on which the script should run. So how would you build an index that quickly finds which scripts, out of many, match a given URL? The straightforward way is to stick every regex in an array and iterate through it, but that's O(n), and n is increasing rapidly.
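The straightforward O(n) approach might look like this; the script names and patterns here are made up for illustration:

```python
import re

# Hypothetical registry: each Greasemonkey-style script declares a
# regex for the URLs it should run on. The naive "index" is just a
# flat list we scan in full -- linear in the number of scripts.
SCRIPT_PATTERNS = [
    ("youtube-tweaks", re.compile(r"https?://(www\.)?youtube\.com/watch.*")),
    ("wiki-cleanup",   re.compile(r"https?://en\.wikipedia\.org/wiki/.*")),
    ("gmail-fixes",    re.compile(r"https?://mail\.google\.com/.*")),
]

def matching_scripts(url):
    """Return the names of every script whose pattern matches `url`."""
    return [name for name, pattern in SCRIPT_PATTERNS if pattern.match(url)]

print(matching_scripts("https://en.wikipedia.org/wiki/Regular_expression"))
# → ['wiki-cleanup']
```

Every lookup tests every pattern, so the cost grows with the size of the script collection, which is exactly the problem.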