# evan_tech

11:25 pm, 12 Nov 03

### all i have to talk about is schoolwork these days

Man, this is ugly. Multiplication is implemented recursively as repeated addition, then exponentiation on top of that as repeated multiplication. But it works! And type-checks! Give it two integers and it gives you n^m.

Sadly, recursive functions mean that it's no longer guaranteed to terminate.
```
(fn n:int.
  (rec pwr:int->int m.
    (if m > 0 then
      (rec mul:int->int y.
        (if y > 0 then
          n + (mul (y + (-1)))
        else
          0)
      ) (pwr (m + (-1)))
    else
      1
    )
  )
)
```
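For comparison, here's roughly the same thing as ordinary Python functions (a sketch of my own; `make_pwr`, `mul`, and `pwr` just mirror the bindings above):

```python
def make_pwr(n):
    """n^m via repeated multiplication, which is itself repeated addition."""
    def mul(y):
        # n * y, computed by adding n to itself y times
        if y > 0:
            return n + mul(y - 1)
        return 0

    def pwr(m):
        # n ** m, computed by multiplying n in m times
        if m > 0:
            return mul(pwr(m - 1))
        return 1

    return pwr

# e.g. make_pwr(2)(10) computes 2**10
```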
I was about to rewrite it in Python just for fun, but then I realized you can't (?) make recursive functions with Python lambdas.

•  That's a variant on the old Ackermann function (of ML textbook fame). What's scary is that by messing with recursion you can have addition, multiplication, exponentiation and then carry on... is there even a name for what happens after exponentiation?
• I meant to ask this when I first saw this post but I forgot. Why is this not guaranteed to terminate?
•  The termination proof before relied on the fact that types were limited and not recursive: a function can call a function, but the callee's type must be "smaller" than the caller (the type of the caller is a function that takes the callee's type as an argument). Recursion allows a function to call a function that's the "same size": itself.

A simple infinite loop using the above syntax is

```
(rec f:int->int x. f x) 1
```

(That is, define f(x) to be running f(x), and then call it with some integer.)

My exponentiation function probably terminates, though... maybe some creative overflowing of integers could work around that? (To be clear: the termination proof, from before recursion, showed that no matter what the functions did, they had to terminate. So I suppose tricks like overflow are a valid way of trying to break that-- they still would terminate.)

(I'm not completely solid on this stuff yet, so I apologize if I get any of it wrong.)
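(A rough Python rendering of that infinite loop, for the curious -- in the toy language it would simply spin forever, while Python surfaces the same non-termination by exhausting its call stack. The names here are mine:)

```python
import sys

# Python analogue of (rec f:int->int x. f x): f(x) is defined as f(x).
def f(x):
    return f(x)

# Keep the stack small so the failure is quick; the RecursionError is
# just Python's way of reporting that f(1) never produces a value.
sys.setrecursionlimit(100)
looped = False
try:
    f(1)
except RecursionError:
    looped = True
# looped is now True: the call never returned a result
```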
• Ahh, you can prove that recursive functions terminate. Not all of them do, naturally, but this one does (for all integers m >= 0). I think considering integer overflows is 'cheating': it has more to do with the storage limitations of the computer than with a property of the algorithm itself.
• Hm, let me be less vague and offer a proof (my induction skillz r rusty, so uh forgive any errors).

Here's what the pwr function boils down to, right?

```
pwr n m = mul n (pwr n (m - 1))
```

And here's the multiplication function:

```
mul n m = n + mul n (m - 1)
```

To prove pwr terminates, you should also prove mul terminates. Here are your base cases, for m = 0:

```
mul n 0 = 0
pwr n 0 = 1
```

which trivially terminate. As long as m converges to 0, the functions should terminate, no?

Strong induction sez: if we assume these two functions terminate for all 0 <= m <= k, then if we prove they terminate for m = k + 1, we know they terminate for all m. Let's prove mul terminates first, with m = k + 1:

```
mul n (k + 1) = n + mul n (k + 1 - 1)
              = n + mul n k
```

But we know mul n k terminates (by our assumption), and n + (value) terminates, therefore mul n (k + 1) terminates! Excellent.

So, substitute (k + 1) for m in the pwr function:

```
pwr n (k + 1) = mul n (pwr n (k + 1 - 1))
              = mul n (pwr n k)
```

The inner pwr n k terminates by our assumption, and mul n (value) terminates for any value >= 0, since the induction on mul above works for every m. So pwr n (k + 1) terminates. Therefore, pwr n m terminates for all m >= 0.
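(The recurrences in that proof are easy to spot-check mechanically; here's a small Python sketch of them -- my own transcription -- compared against the built-in operators for small non-negative inputs:)

```python
def mul(n, m):
    # mul n m = n + mul n (m - 1); base case: mul n 0 = 0
    return 0 if m == 0 else n + mul(n, m - 1)

def pwr(n, m):
    # pwr n m = mul n (pwr n (m - 1)); base case: pwr n 0 = 1
    return 1 if m == 0 else mul(n, pwr(n, m - 1))

# Both terminate (and agree with * and **) for small m >= 0:
for n in range(5):
    for m in range(5):
        assert mul(n, m) == n * m
        assert pwr(n, m) == n ** m
```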