They give an example:
Take the following theorem from Levi Ben Gershon's manuscript Maaseh Hoshev (The Art of Calculation), written in 1321:
"When you add consecutive numbers starting with 1, and the number of numbers you add is odd, the result is equal to the product of the middle number among them times the last number." It is natural for modern-day mathematicians to write this as:
Σ_{i=1}^{2k+1} i = (k+1)(2k+1)
A reader should take as much time unraveling the two-inch version as he would unraveling the two-sentence version.
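Unraveling the two-inch version really does take a moment: the sum runs over 1 through 2k+1, so the middle number is k+1 and the last is 2k+1. A quick mechanical spot-check (a sketch of mine in Python, not anything from the manuscript):

```python
def holds(k: int) -> bool:
    # Sum of 1 .. (2k+1), an odd count of consecutive numbers,
    # compared against middle number (k+1) times last number (2k+1).
    n = 2 * k + 1
    return sum(range(1, n + 1)) == (k + 1) * n

print(all(holds(k) for k in range(100)))  # prints True
```

For k = 2, that's 1+2+3+4+5 = 15 = 3 × 5: the middle number times the last.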
It occurred to me today that when I first encountered functional languages (and when I later helped teach a class that had a lot of functional languages) I (/the students) had analogous difficulty with reading the meaning out of these heavily nested S-expressions. It felt to me like they were write-only, that some programmer was taking pleasure in golfing their code down to the minimal lines.
But I gradually discovered that it is just a question of being comfortable with the abstractions. For a long time (even through college) I read sigmas as for loops because that's the way I'd understand them. But mathematicians write sigmas instead because eventually the concept of summation becomes fundamental and more verbosity would only get in the way of the meaning.
It took me a while to become comfortable writing loops as tail-recursive functions in OCaml, but eventually it became second nature. In Haskell I find you rarely use explicit recursion at all, because you're composing higher-order functions instead. It's like each new abstraction collapses a new pile of ideas.
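To make the progression concrete, here's the same sum written both ways, sketched in Python (names are mine; note CPython doesn't eliminate tail calls, so the recursive version is only illustrative of the OCaml shape):

```python
from functools import reduce

def sum_rec(xs, acc=0):
    # A loop spelled as a tail-recursive function with an accumulator,
    # the shape OCaml trains you to write.
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])

def sum_fold(xs):
    # The same loop collapsed into a single higher-order function.
    return reduce(lambda acc, x: acc + x, xs, 0)

print(sum_rec(list(range(1, 11))), sum_fold(range(1, 11)))  # prints 55 55
```

Once the fold is second nature, the explicit recursion reads like needless verbosity, the same way a for loop does next to a sigma.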
Russell was laughing at me for using accumulate() in some C++ code at work, and I eventually rewrote it as a for loop because it's important to match the culture of the code you're writing. But it's weird to go back to
for (int i = 0; i < n; ++i) dest[i] = src[i]+1;
when you've been thinking about
[x+1 for x in l]
or even (without using a language's special comprehension syntax)
map (1+) l.
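All three spellings compute the same thing; here's the equivalence checked in Python, where map takes a lambda in place of Haskell's operator section (1+):

```python
l = [1, 2, 3, 4, 5]

# The imperative loop, the comprehension, and the map, side by side.
loop = []
for x in l:
    loop.append(x + 1)

comp = [x + 1 for x in l]
mapped = list(map(lambda x: x + 1, l))

print(loop == comp == mapped)  # prints True
```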