## Tail Call Optimization

##### August 30, 2009

Tail-call optimization is a technique used by many functional programming languages that makes it practical to use recursion, rather than imperative loops, to perform iterative calculations. (It is often mentioned in the same breath as tail recursion, but the two aren't quite the same thing: tail recursion is a property of a function, while tail-call optimization is what the compiler or interpreter does with it.) When I say imperative loops, I mean using a local variable that you change the value of as you calculate each step of the iteration. A generic term for this kind of variable is an accumulator, because it accumulates the new value at each step of the iteration.

Let’s say you are hoping to become the next mega millionaire and you want to know your odds of winning. Most lotteries consist of something like: pick 5 numbers between 1 and 50. Each number can only come up once in the drawing, and if you get all 5 numbers right, you win. The order isn’t significant; you just need to match the 5 numbers.

Here’s a JavaScript function written in an imperative style to get your odds:

```
function odds(n, p) {
  var acc = 1
  for(var i = 0; i < n; i++) {
    acc *= (n - i) / (p - i)
  }
  return acc
}
```

The parameters are n, the number of numbers you have to pick, and p, the upper limit of all possible numbers to choose from. So the odds for the “pick 5 of 50” game are `4.71974173573222e-7`. That’s a decimal expressed in scientific notation. To get the more traditional-looking “1 in whatever” number, you simply take the inverse, `1/4.71974173573222e-7`, which is 1 in 2,118,760.

The mega millions actually has a twist to it with the powerball. You pick 5 numbers out of 56 and then 1 number out of 46. A number that comes up in the first 5 can also come up as the powerball. So that means the mega millions odds are `1/(odds(5,56) * odds(1,46))`, which is 1 in 175,711,536.
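To check these figures yourself, here is a self-contained sketch that restates the imperative odds function from above and runs both calculations (the `Math.round` just cleans up floating-point noise in the inverse):

```javascript
function odds(n, p) {
  var acc = 1
  for(var i = 0; i < n; i++) {
    acc *= (n - i) / (p - i)
  }
  return acc
}

// "pick 5 of 50": 1 in 2,118,760
console.log(Math.round(1 / odds(5, 50)))

// mega millions, 5 of 56 plus 1 of 46: 1 in 175,711,536
console.log(Math.round(1 / (odds(5, 56) * odds(1, 46))))
```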

One problem with our function is that it uses a mutable local variable and mutable local variables are evil. Some functional languages, such as Clojure, Haskell and Erlang, don’t even allow you to use mutable local variables. So how can we do it without the accumulator? The answer is recursion:

```
function odds(n, p) {
  if(n == 0) {
    return 1
  } else {
    return (n / p) * odds(n - 1, p - 1)
  }
}
```

This is a pretty elegant implementation of the function because it mirrors how you might think of the problem. This still works great, but it breaks down if we try to calculate the odds with much larger numbers. For example, if you try to calculate the odds of picking 10,000 numbers out of 100,000 numbers with `odds(10000, 100000)`, you will get some kind of stack overflow / too-much-recursion error, depending on your interpreter.

The reason for this becomes clear when you walk through how the interpreter calculates the value. When you call `odds(5, 56)`, the return value is `(5/56) * odds(4, 55)`. The interpreter cannot evaluate this expression until `odds(4, 55)` is calculated. So the series of expressions that gets evaluated is:

```
odds(5, 56)
(5/56) * odds(4, 55)
(5/56) * (4/55) * odds(3, 54)
(5/56) * (4/55) * (3/54) * odds(2, 53)
(5/56) * (4/55) * (3/54) * (2/53) * odds(1, 52)
(5/56) * (4/55) * (3/54) * (2/53) * (1/52) * odds(0, 51)
(5/56) * (4/55) * (3/54) * (2/53) * (1/52) * 1
```

As you can see, the lines get wider as the recursive calls build up. This is representative of how the interpreter works. As each recursive call is added, memory is temporarily used up. Once there are no more recursive calls, it can start evaluating the expressions and freeing up the memory. This works fine, but if you are performing the calculation on a large number that generates more recursive calls than the interpreter can hold in memory, then boom goes the dynamite. Most interpreters push each recursive call onto a stack, so when the stack runs out of memory, you get a Stack Overflow error.
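You can provoke this yourself with the recursive version above. A sketch (the depth needed to overflow varies by interpreter, but a million frames is deeper than any default stack I'm aware of):

```javascript
function odds(n, p) {
  if(n == 0) {
    return 1
  } else {
    return (n / p) * odds(n - 1, p - 1)
  }
}

try {
  odds(1000000, 10000000)
} catch (e) {
  // In most engines this is a RangeError: "Maximum call stack size exceeded"
  console.log(e.name)
}
```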

So how can we possibly do calculations like this recursively? The answer to that is tail-call optimization. In order to do this, we are going to have to bring back the accumulator, but in this case, it’s not going to be a mutable local variable, instead it will be a parameter to the function. Since we don’t want callers of our function to have to pass in the initial value of the accumulator, we will define two functions, one “public” and the other “private”:

```
(function() {
  var odds1 = function(n, p, acc) {
    if(n == 0) {
      return acc
    } else {
      return odds1(n - 1, p - 1, (n / p) * acc)
    }
  }
  odds = function(n, p) {
    return odds1(n, p, 1)
  }
})()
```

JavaScript doesn’t really have a public/private mechanism, but we can use closures to accomplish the same thing. First, we create an anonymous function and call it immediately after we create it. This just gives us a local scope; if we didn’t have this outer wrapping function, all of the variables would be global. Then we define a function and assign it to a local variable called `odds1`. Next we define a function and assign it to a global variable `odds`. The function `odds` is “closed over” the variable `odds1`, which means it can reference that variable even when `odds` is called outside the context it was defined in, and even though `odds1` will not be directly available from that context. To test this out, simply try to print the value of `odds` and then `odds1` from outside of the outer function. The interpreter will say `odds` is a function, but it will say `odds1` is undefined, which means `odds1` is effectively private.
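To see the privacy for yourself, here is the whole thing as a self-contained snippet. The `typeof` checks stand in for printing the values, since `typeof` is safe to use even on undeclared variables:

```javascript
var odds;  // declared up front so the wrapper below can assign to it

(function() {
  var odds1 = function(n, p, acc) {
    if(n == 0) {
      return acc
    } else {
      return odds1(n - 1, p - 1, (n / p) * acc)
    }
  }
  odds = function(n, p) {
    return odds1(n, p, 1)
  }
})()

console.log(typeof odds)   // "function"
console.log(typeof odds1)  // "undefined" -- odds1 is effectively private
```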

So when someone calls our function `odds`, it calls `odds1` with an initial value of 1 for the accumulator and then returns that value. Since JavaScript does not have tail-call optimization, that is literally what happens: `odds` waits for the value of `odds1` to be calculated and then returns it once it gets one. If you look at `odds1`, it does the same thing, returning the value from a recursive call to itself. And because JavaScript does not have tail-call optimization, this still results in some sort of stack overflow for large enough values of `n`.

What a tail-call-optimized language does is recognize that the statement `return odds1(n, p, 1)` says: call the function `odds1` with the arguments `n`, `p`, and 1, and then return the value. What the language can do instead is have the `odds` function return immediately and tell the caller to call `odds1` with the arguments `n`, `p`, and 1 in order to get the value. The important distinction between these two is that the call to `odds` is eliminated from the call stack before `odds1` is called.

The way I think of it, tail-call optimization is almost like an HTTP redirect. The call to `odds` is simply redirected to `odds1`. Once `odds` has issued a redirect, it’s out of the equation. Without tail-call optimization, the call to `odds` has to make a call to `odds1`, and while the call to `odds1` is being processed, both `odds` and the original caller are waiting for the response. If the chain of calls becomes too long, the call stack overflows.

This same idea can be applied to the recursive calls within `odds1` itself. When `odds(5, 56)` is called, it will in turn call `odds1(5, 56, 1)`. In `odds1`, the last thing it does is call itself recursively, but this time with the arguments `odds1(4, 55, (5/56))`. A function that returns a recursive call to itself as the last thing it does is said to be tail-recursive. Our original recursive `odds` function was not tail-recursive, because it needed to multiply the value of the recursive call by a value after that call returned.

Since `odds1` is tail-recursive, the call to `odds1(5, 56, 1)` can just say “call `odds1(4, 55, (5/56))` instead” and take itself off the call stack. The series of calls looks like this:

```
odds(5, 56)
odds1(5, 56, 1)
odds1(4, 55, 5/56)
odds1(3, 54, 1/154)
odds1(2, 53, 1/2772)
odds1(1, 52, 1/73458)
odds1(0, 51, 1/3819816)
```

The value of the accumulator in each of these calls is the previous accumulator multiplied by the current n divided by the current p. As you can see, the width of the expression doesn’t increase with each iteration (well, except a little bit for the width of the accumulator value). If the interpreter actually evaluates the expressions this way, there is no way to cause a stack overflow error. You could evaluate any one of these expressions independently of the others, assuming you made `odds1` public.
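Another way to see why the space stays constant: a tail-call-optimizing interpreter is effectively rewriting the tail call as parameter re-binding inside a loop. A hand-translated sketch of what `odds1` amounts to (my translation, not something the interpreter exposes):

```javascript
// Each "recursive call" just rebinds n, p, and acc and goes around again.
function odds1(n, p, acc) {
  while (n != 0) {
    acc = (n / p) * acc  // new accumulator from the old n and p
    n = n - 1
    p = p - 1
  }
  return acc
}

console.log(odds1(10000, 100000, 1)) // completes in constant stack space
```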

To see this in action, you will have to use a language with an interpreter that does tail-call optimization. Erlang is one such language. First let’s write a non-tail-recursive implementation of the odds function:

```
-module(lotto).
-compile(export_all).

odds(0, _) -> 1;
odds(N, P) -> (N / P) * odds(N - 1, P - 1).
```

Now we can see that our function works using the Erlang interactive shell:

```
$ erlc lotto.erl
$ erl
Erlang R13B01 (erts-5.7.2) [source] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.2 (abort with ^G)
1> lotto:odds(5,56).
2.6179271462290333e-7
2> 1/(lotto:odds(5,56) * lotto:odds(1,46)).
175711536.0
```

So it turns out that you have to work really hard to get the Erlang VM to blow the stack on a non-tail-recursive call, but I managed to get the error `eheap_alloc: Cannot allocate 1140328500 bytes of memory (of type "old_heap").` by trying to evaluate `lotto:odds(100000000, 1000000000).` Let’s re-write the function to be tail-recursive:

```
-module(lotto).
-export([odds/2]).

odds(N, P) -> odds(N, P, 1).

odds(0, _, Acc) -> Acc;
odds(N, P, Acc) -> odds(N - 1, P - 1, (N / P) * Acc).
```

You can see that Erlang gives us a couple of language features to make this a little cleaner than the JavaScript version. First, functions with different arities are completely different functions; arity is the number of arguments a function takes. So here we have two functions, `odds/2` and `odds/3`.

Also, pattern matching is used to distinguish the two cases. The `odds/3` function has two clauses: one where N is 0, and another to handle all other cases.

Lastly, the export directive at the top says that we only want the `odds/2` function to be public, so the `odds/3` function is private to this module.

With this definition of the odds function, if we try to evaluate `lotto:odds(100000000, 1000000000).`, it will complete, although it will take some time. The original recursive odds function takes *O(n)* time and *O(n)* space, which means the amount of time and the amount of space it takes to evaluate the function increase linearly with larger values of n. With tail-call optimization, the function still takes *O(n)* time but only *O(1)* space: the time still grows linearly with n, but the function always operates in a constant amount of space, regardless of the size of n.

So to summarize, tail-call optimization is a technique that is performed by the compiler/interpreter, usually transparently, to make tail-recursive functions operate in constant space. This is a necessary optimization in order to be able to use recursive functions to perform iterative calculations as opposed to imperative functions that use mutable local variables.

## Comments

1. Another great article. One thing to clarify is that the kaboom is not from running out of memory available for the call stack. In JavaScript, the call stack is limited by instruction depth regardless of memory usage. It's a hard and fast limit on processing depth, unfortunately. A case in point using the recommended accumulator technique can be shown using a large value for both n and p, even though the values are clearly processable in the memory allocated. In Firefox, odds(2999, 8000) returns an InternalError because the stack limit has been exceeded, not necessarily the memory limit. Safari fails at odds(100000,100000). Full lists of stack size limits at http://www.nczonline.net/blog/2009/05/19/javascript-stack-overflow-error/ (note that newer WebKit runtimes have a much higher threshold).

2. Translated to Objective-C ;-)

http://gist.github.com/188799

3. I'll preface this by saying I am no expert on this, but this conflicts a bit with how I understood all of this. My understanding is that the first example is, indeed, imperative and iterative. The second example is functional and recursive. And the third example is functional and iterative.

My understanding of a tail call is when a function is called in a tail position of a function. (ie, the last line of a function, or branch...a terminal expression) The tail call can be calling the same function but it does not need to be. Tail call optimization is then a run-time (or compile-time) optimization where the tail call shares a stack frame with the original calling function.

I don't mean that this contradicts what you are saying, but runtime differences between the two functional examples have more to do with the first being recursive and the second being iterative. Tail call optimization would make this more efficient still, but I don't think that is what is being demonstrated.

I fully realize that I could be wrong on this, but I would think that transparently changing from a recursive to iterative logic would not be reliably feasible. Well, at least doing so is not what I understood TCO to be. Am I way off on this?

4. @Jeremy

> I'll preface this by saying I am no expert on this, but this conflicts a bit with how I understood all of this. My understanding is that the first example is, indeed, imperative and iterative. The second example is functional and recursive. And the third example is functional and iterative.

Right

> My understanding of a tail call is when a function is called in a tail position of a function. (ie, the last line of a function, or branch...a terminal expression) The tail call can be calling the same function but it does not need to be. Tail call optimization is then a run-time (or compile-time) optimization where the tail call shares a stack frame with the original calling function.

Right

> I don't mean that this contradicts what you are saying, but runtime differences between the two functional examples have more to do with the first being recursive and the second being iterative. Tail call optimization would make this more efficient still, but I don't think that is what is being demonstrated.

In the recursive example, the function call isn't in the tail position, therefore it can't be optimized.

> I fully realize that I could be wrong on this, but I would think that transparently changing from a recursive to iterative logic would not be reliably feasible. Well, at least doing so is not what I understood TCO to be. Am I way off on this?

Right, I agree, I think :) The interpreter can't do tail call optimization unless the function call is in the tail position. So I think we agree on everything. Is there something specific that I said that you disagree with?

5. Hello,

thanks for the enlightening post.

I experimented with Javascript tail call optimization, implemented in Javascript itself.

It turns out to work fine, as one could expect from reading your post. In case of interest:

http://glathoud.easypagez.com/publications/tailopt-js/tailopt-js.xhtml

Best regards,

Guillaume

6. > So it turns out that you have to work really hard to get the Erlang VM to blow the stack on a non-tail-recursive call,

Not really. Call this with a few hundred thousand as an argument.

```
-module(ram_killer).
-export([die_now/1]).

die_now(0) -> 1;
die_now(X) -> 2 * die_now(X-1).
```

7. This is a very great help. I liked the way you teach us. JavaScript is quite useful.

## Comments Disabled