Then I just chatted with Matt a bit about it and realized I was thinking about it wrong. His first thought was that there shouldn't be that much data involved here and I should be able to just do a brute-force scan.
But there's too much data, I protested. To start with, find across my tree with a cold disk takes tens of seconds just to enumerate all the files (over 200,000). And Visual Studio's "find in files" is also super-slow, supporting my intuition. But on second glance that number is immediately suspicious as way too many files -- it's including all sorts of files I don't care about! As far as source goes it's only around 10,000 files and under 50 MB of data.
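The gap is easy to see with a couple of find invocations (illustrative only -- the extensions and the rough 200,000-vs-10,000 counts are from my tree; yours will differ):

```shell
# Everything on disk: build output, object files, editor droppings, etc.
# (~200,000 files in my tree)
find . -type f | wc -l

# Just the C++ sources and headers (~10,000 files)
find . -name '*.cc' -o -name '*.h' | wc -l
```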
So what's faster? Git knows which files are in the repo. (A great trick for finding files by name more quickly than using find is git ls-files <pattern>.) Then I just want to limit my grep to source files. I had thought git grep didn't let you specify a filename pattern (which is why I was fooling around with find and grep in the first place), but on rereading I see I'd overlooked it in the man page; something like git grep foo -- '*.cc' '*.h' does exactly what I wanted. (The quotes keep the shell from expanding the globs, so git matches them against paths anywhere in the tree itself.) It's easy enough to stuff into a one-liner shell script so that git gs foo quickly searches my code.
On a cold disk (after flushing the cache), it takes ~11s on my laptop. But as soon as the disk is warm (and it's only 50mb of data to keep around anyway) it's 0.35s, which is plenty fast. I note that whoever wrote the grep support for git was clever enough to shell out to grep (unless your combination of passed-in flags prevents it), because you are unlikely to beat grep.
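If you want to reproduce the cold-vs-warm comparison yourself, something like this works (the drop_caches trick is Linux-specific and needs root; on other systems a reboot or remount does the same job):

```shell
# Flush the page cache so the first run actually hits a cold disk.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

time git grep foo -- '*.cc' '*.h'   # cold run: dominated by disk reads
time git grep foo -- '*.cc' '*.h'   # warm run: everything is cached now
```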
(PS/update: it turns out that M-x vc-git-grep hooks all this into the existing Emacs grep support, complete with shorthand for specifying "file extensions that look like C++ source or headers". I am humbled.)