Node.js Presentation

February 8, 2010

Last week I presented on node.js at the Baltimore/DC JavaScript Users Group. Here's the video:

Paul Barry on Node.js at February Baltimore JavaScript Meetup from Shea Frederick on Vimeo.

To make it easier to follow along, here are the slides and the code.


Node.js talk at Baltimore/JavaScript Users Group

February 1, 2010

Just a quick heads up that I'll be talking about Node.js at this Wednesday's Baltimore/JavaScript Users Group meeting. This will be an introductory talk, but it will quickly dive into some code examples with Q & A. The talk, like Node.js itself, centers on using evented, asynchronous I/O to get better scalability out of network servers, so it should be of interest to anyone doing server-side programming in any language: Java, Perl, Ruby, Python, etc. See you there!


ExtJS On Rails

September 23, 2009

If you are looking to give ExtJS a spin, I've created a Rails Application Template to set up a Rails app with ExtJS installed. To use it, just do:

rails -m http://tinyurl.com/extjs-rb my-extjs-app

To see a sample of this in use, take a look at this Rails app. It is a port of Chapter 1 of ExtJS In Action to Rails. Assuming you have Rails 2.3.2 installed, you should be able to download the Rails app from that git repo, run script/server and then try out the app at http://localhost:3000.

UPDATE: This code actually refers to the code in the sample chapter on the Manning website. Chapter 1 in the latest version of the book has been completely rewritten, so I've included the sample chapter in the git repo. Also be sure to check out the author's blog, which covers the development of the book.


Infinite Recursion

September 2, 2009

A few days ago I posted an article on Tail Call Optimization. One really quick way to determine whether a language/VM/interpreter performs Tail Call Optimization (TCO) is to write a function that calls itself, in other words, to create an infinitely recursive function. If the function just runs forever and doesn't return, the interpreter is doing TCO; otherwise you will get some sort of stack overflow error. So I decided to test a variety of languages to see what happens when you write an infinitely recursive function. First up is Ruby:

def forever
  forever
end

forever

It was no surprise to me that running this results in this error:

run.rb:2:in `forever': stack level too deep (SystemStackError)
    from run.rb:2:in `forever'
    from run.rb:5

Ruby doesn't have TCO; I knew that. Next up, Python:

def forever(): forever()

forever()

This has a similar result, but the stack trace is a little more revealing:

Traceback (most recent call last):
  File "run.py", line 3, in <module>
    forever()
  File "run.py", line 1, in forever
    def forever(): forever()
  File "run.py", line 1, in forever
    def forever(): forever()
...
  File "run.py", line 1, in forever
    def forever(): forever()
  File "run.py", line 1, in forever
    def forever(): forever()
RuntimeError: maximum recursion depth exceeded

This shows pretty clearly what is going on: the stack frames are piling up, and eventually it gets to the point where the Python interpreter says enough is enough. Interestingly enough, this mailing list thread shows that it's completely feasible to add TCO to Python; Guido just doesn't want to add it.

JavaScript is no surprise either, but we can write our infinitely recursive function with an interesting little twist:

(function(){ arguments.callee() })()

Yes, that's right: it's an anonymous recursive function! Running this with SpiderMonkey results in InternalError: too much recursion.
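Incidentally, the same trick should also work without arguments.callee: a named function expression can refer to itself by its own name, and that name isn't visible outside the function (a small variation on the same idea):

(function forever() { forever() })()

So how about Java: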

class Run {
  public static void main(String[] args) {
    Run run = new Run();
    run.forever();
  }

  public void forever() {
    forever();
  }
} 

The stack trace for this one looks a bit like the Python one: we see a stack frame for each recursive call:

Exception in thread "main" java.lang.StackOverflowError
    at Run.forever(Run.java:8)
    at Run.forever(Run.java:8)
    at Run.forever(Run.java:8)
    at Run.forever(Run.java:8)
...

So that means no TCO on the JVM. So Scala doesn't have TCO, right?

def forever : Unit = forever

forever

Wrong. This one runs forever. Scala does TCO with some bytecode tricks which Nick Wiedenbrueck explains really well in this blog post.

Clojure is a functional Lisp saddled with the lack of TCO on the JVM, but it gets around it in a slightly different way than Scala. If you write a tail-recursive function like this:

(defn forever [] (forever))

You will get a java.lang.StackOverflowError just as you do in Java. Instead, Clojure provides a language-level feature for recursion:

(defn forever [] (recur))

Calling recur in the function makes it call the function again. recur also checks that it appears in the tail position, which turns out to be an interesting feature, because you have to explicitly say "I expect this to do TCO". In languages that do it transparently, you might think TCO is happening when it isn't, and you won't find out until you get a stack overflow error. This construct also allows us to have recursive anonymous functions:

((fn [] (recur)))

Erlang, being a functional language, has TCO as well:

-module(run).

-compile(export_all).

forever() ->
  forever().

Now, one thing all of these functional languages that have TCO (Scala, Clojure and Erlang) have in common is that while the infinitely recursive function runs, it essentially does nothing, yet it will peg your CPU utilization at nearly 100%. This next language, Haskell, being the mother of all functional languages, of course has TCO. Here's the function:

forever = forever

Yes, that's a function. And if you call it with forever, it just sits there and runs forever. But here's the crazy thing: it uses no CPU. My guess is that it has to do with lazy evaluation, but I'm not sure. Any ideas?


Tail Call Optimization

August 30, 2009

Tail-Call Optimization (closely tied to tail recursion) is a technique often used in functional programming languages that promotes the use of recursion rather than imperative loops to perform iterative calculations. When I say imperative loops, I mean using a local variable whose value you change as you calculate each step of the iteration. A generic term for this kind of variable is an accumulator, because it accumulates the new value at each step of the iteration.

Let's say you are hoping to become the next mega-millionaire and you want to know your odds of winning. Most lotteries consist of something like this: pick 5 numbers between 1 and 50. Each number can only come up once in the drawing, and if you get all 5 numbers right, you win. The order isn't significant; you just need to get the 5 numbers.

Here's a JavaScript function written in an imperative style to get your odds:

function odds(n, p) {
  var acc = 1
  for(var i = 0; i < n; i++) {
    acc *= (n - i) / (p - i)
  }
  return acc
}

The parameters are n, the number of numbers you have to pick, and p, the upper limit of all possible numbers to choose from. Each factor (n - i) / (p - i) is the chance that the next number drawn is one of your remaining picks, so the product is the probability of matching all of them. The odds for "pick 5 of 50" come out to 4.71974173573222e-7, a decimal in scientific notation. To get the more traditional-looking "1 in whatever" number, you simply take the inverse, 1/4.71974173573222e-7, which is 1 in 2,118,760.

Mega Millions actually has a twist to it with the powerball: you pick 5 numbers out of 56 and then 1 number out of 46, and a number that comes up in the first 5 can also come up as the powerball. So the Mega Millions odds are 1/(odds(5,56) * odds(1,46)), which is 1 in 175,711,536.
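Plugging the numbers from above into the function:

odds(5, 50)                      // 4.71974173573222e-7
1 / odds(5, 50)                  // ~2118760, i.e. 1 in 2,118,760
1 / (odds(5, 56) * odds(1, 46))  // ~175711536, i.e. 1 in 175,711,536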

One problem with our function is that it uses a mutable local variable and mutable local variables are evil. Some functional languages, such as Clojure, Haskell and Erlang, don't even allow you to use mutable local variables. So how can we do it without the accumulator? The answer is recursion:

function odds(n, p) {
  if(n == 0) {
    return 1
  } else {
    return (n / p) * odds(n - 1, p - 1)
  }
}

This is a pretty elegant implementation of the function, because it mirrors how you might think about the problem. It works great, but it breaks down when we try to calculate the odds for much larger numbers. For example, if you try to calculate the odds of picking 10,000 numbers out of 100,000 with odds(10000, 100000), you will get some kind of stack overflow / too much recursion error, depending on your interpreter.
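For example (the exact error depends on the engine; this is what V8 reports):

odds(5, 56)          // 2.6179271462290333e-7
odds(10000, 100000)  // RangeError: Maximum call stack size exceeded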

The reason for this becomes clear when you walk through how the interpreter calculates the value. When you say odds(5, 56), the return value is (5/56) * odds(4, 55). The interpreter cannot evaluate this expression until odds(4, 55) is calculated. So the series of expressions that get evaluated are:

odds(5, 56)
(5/56) * odds(4, 55)
(5/56) * (4/55) * odds(3, 54)
(5/56) * (4/55) * (3/54) * odds(2, 53)
(5/56) * (4/55) * (3/54) * (2/53) * odds(1, 52)
(5/56) * (4/55) * (3/54) * (2/53) * (1/52) * odds(0, 51)
(5/56) * (4/55) * (3/54) * (2/53) * (1/52) * 1

As you can see, the lines get wider as the recursive calls build up, and this is representative of how the interpreter works: as each recursive call is added, memory is temporarily used up, and only once there are no more recursive calls can it start evaluating the expressions and freeing that memory. This works fine, but if you perform the calculation on a number large enough to generate more recursive calls than the interpreter can hold in memory, then boom goes the dynamite. Most interpreters push each recursive call onto a stack, so when the stack runs out of memory, you get a stack overflow error.
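If you are curious roughly how deep your interpreter's stack goes, here is a quick, unscientific probe (the number varies widely by engine and stack size settings):

function depth(n) {
  try {
    return depth(n + 1)  // recurse until the stack gives out
  } catch(e) {
    return n             // n is roughly the maximum depth reached
  }
}

depth(0)  // some engine-dependent number, typically in the thousands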

So how can we possibly do calculations like this recursively? The answer to that is tail-call optimization. In order to do this, we are going to have to bring back the accumulator, but in this case, it's not going to be a mutable local variable, instead it will be a parameter to the function. Since we don't want callers of our function to have to pass in the initial value of the accumulator, we will define two functions, one "public" and the other "private":

(function(){
  var odds1 = function(n, p, acc) {
    if(n == 0) {
      return acc
    } else {
      return odds1(n - 1, p - 1, (n / p) * acc)
    }
  }

  odds = function(n, p) {
    return odds1(n, p, 1)
  }  
})()

JavaScript doesn't really have a public/private mechanism, but we can use closures to accomplish the same thing. First, we create an anonymous function and call it immediately, which just gives us a local scope; without this outer wrapping function, all of the variables would be global. Inside it, we define a function and assign it to a local variable called odds1, and then define a second function and assign it to a global variable, odds. The odds function is "closed over" the variable odds1, which means it can reference that variable even when odds is called outside the context it was defined in, where odds1 is not directly reachable. To test this out, try to print the value of odds and then odds1 from outside the outer function: the interpreter will say odds is a function, but it will say odds1 is undefined, which means odds1 is effectively private.
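For example, after running the snippet above:

typeof odds   // "function"
typeof odds1  // "undefined" -- typeof is safe to use on undeclared names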

So when someone calls our function odds, it calls odds1 with an initial value of 1 for the accumulator and then returns the result. Since JavaScript does not have tail-call optimization, that is literally what happens: odds waits for the value of odds1 to be calculated and then returns that value once it gets it. odds1 does the same thing, returning the value of a recursive call to itself, so this version still results in some sort of stack overflow with large enough values of n.

What a tail-call optimized language does is recognize that the statement return odds1(n, p, 1) means "call the function odds1 with the arguments n, p and 1, and then return the value". The language can instead have the odds function return immediately and tell the caller to call odds1 with the arguments n, p and 1 to get the value. The important distinction between the two is that the call to odds is eliminated from the call stack before odds1 is called.

The way I think of it, tail-call optimization is almost like an HTTP redirect: the call to odds is simply redirected to odds1, and once odds has issued the redirect, it's out of the equation. Without tail-call optimization, the call to odds has to make the call to odds1 itself, and while the call to odds1 is being processed, both odds and the original caller are waiting for the response. If the chain of calls becomes too long, the call stack overflows.

This same idea can be applied to the recursive calls within odds1 itself. When odds(5, 56) is called, it will in turn call odds1(5, 56, 1). The last thing odds1 does is call itself recursively, this time as odds1(4, 55, (5/56)). A function that returns a recursive call to itself as the last thing it does is said to be tail-recursive. Our original recursive odds function was not tail-recursive, because it still needed to multiply the value of the recursive call by something after the call returned.
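Putting the two return statements from the versions above side by side makes the difference concrete:

// not tail-recursive: there is still a multiplication to do
// after the recursive call returns
return (n / p) * odds(n - 1, p - 1)

// tail-recursive: the recursive call is the last thing that
// happens, and its value is returned untouched
return odds1(n - 1, p - 1, (n / p) * acc)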

Since odds1 is tail-recursive, the call to odds1(5, 56, 1) can just say call odds1(4, 55, (5/56)) instead and take itself off the call stack. The series of calls looks like this:

odds(5, 56)
odds1(5, 56, 1)
odds1(4, 55, 5/56)
odds1(3, 54, 1/154)
odds1(2, 53, 1/2772)
odds1(1, 52, 1/73458)
odds1(0, 51, 1/3819816)

The value of the accumulator in each of these calls is the previous value multiplied by the current n divided by the current p. As you can see, the width of the expression doesn't increase with each iteration (well, except a little for the width of the accumulator value). If the interpreter actually evaluates the expressions this way, there is no way to cause a stack overflow error; you can evaluate any one of these expressions independently of the others, assuming you make odds1 public.
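JavaScript won't make this redirect for us, but we can simulate it by hand with what's usually called a trampoline (a minimal sketch of the standard technique, not part of the code above): odds1 returns a thunk describing the next call instead of making it, and a small driver loop keeps invoking thunks until it gets a value back.

(function(){
  var trampoline = function(result) {
    // keep bouncing until we get a value back instead of a thunk
    while(typeof result === 'function') {
      result = result()
    }
    return result
  }

  var odds1 = function(n, p, acc) {
    if(n == 0) {
      return acc
    } else {
      // return a thunk for the next call instead of making it
      return function() { return odds1(n - 1, p - 1, (n / p) * acc) }
    }
  }

  odds = function(n, p) {
    return trampoline(odds1(n, p, 1))
  }
})()

odds(10000, 100000)  // completes -- the stack never grows

With numbers this large the floating point result underflows to 0, but the point is that every bounce returns to the trampoline loop before the next call is made, so the stack stays flat.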

To see the real thing in action, you will have to use a language with an interpreter that does tail-call optimization. Erlang is one such language. First, let's write a non-tail-recursive implementation of the odds function:

-module(lotto).

-compile(export_all).

odds(0, _) -> 1;
odds(N, P) -> (N / P) * odds(N - 1, P - 1).

Now we can see that our function works using the Erlang interactive shell:

$ erlc lotto.erl
$ erl
Erlang R13B01 (erts-5.7.2) [source] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.7.2  (abort with ^G)
1> lotto:odds(5,56).
2.6179271462290333e-7
2> 1/(lotto:odds(5,56) * lotto:odds(1,46)).
175711536.0

So it turns out that you have to work really hard to get the Erlang VM to blow the stack on a non-tail-recursive call, but I managed to get the error eheap_alloc: Cannot allocate 1140328500 bytes of memory (of type "old_heap") by trying to evaluate lotto:odds(100000000, 1000000000). Let's re-write the function to be tail-recursive:

-module(lotto).

-export([odds/2]).

odds(N, P) -> odds(N, P, 1).

odds(0, _, Acc) -> Acc;
odds(N, P, Acc) -> odds(N - 1, P - 1, (N / P) * Acc).

You can see that Erlang gives us a couple of language features to make this a little cleaner than the JavaScript function. First, functions with different arities are completely different functions. Arity is the number of arguments to a function. So here we have two functions, odds/2 and odds/3.

Also, pattern matching is used to distinguish the two cases: the odds/3 function has one case for when N is 0 and another that handles everything else.

Lastly, the export directive at the top says that we only want the odds/2 function to be public, so the odds/3 function is private to this module.

With this definition of the odds function, if we try to evaluate lotto:odds(100000000, 1000000000), it will complete, although it will take some time. The original recursive odds function takes O(n) time and O(n) space, meaning the amount of time and the amount of space it takes to evaluate the function both increase linearly for larger values of n. Thanks to tail-call optimization, the tail-recursive version still takes O(n) time but only O(1) space: the time still increases linearly as n increases, but the function always operates in a constant amount of space, regardless of the size of n.

So to summarize: tail-call optimization is a technique performed by the compiler/interpreter, usually transparently, to make tail-recursive functions operate in constant space. It is a necessary optimization if you want to use recursive functions for iterative calculations instead of imperative ones built on mutable local variables.

