Paul Tyma: threads vs nonblocking I/O

Notes from Sriram Srinivasan:
It used to be that non-blocking I/O was way better, both in terms of scalability (number of sockets) and individual response handling. Then Linux and Windows dramatically reduced their thread context-switching overhead; the answer now is that "you must really take a look at blocking thread I/O again: the numbers will surprise you". The person who led this charge was Paul Tyma. In summary, he says you can run 100,000 threads on Linux with terrific performance, and so you can afford a thread per connection.
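A minimal sketch of that thread-per-connection style, here as a hypothetical blocking echo server (the class name, port handling, and buffer size are illustrative, not from the talk):

```java
import java.io.*;
import java.net.*;

public class BlockingEchoServer {
    // Start an accept loop on an ephemeral port; one blocking thread per connection.
    static ServerSocket start() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();          // blocks until a connection arrives
                    new Thread(() -> handle(client)).start(); // thread per connection
                }
            } catch (IOException closed) { /* server socket closed: stop accepting */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    static void handle(Socket client) {
        try (client;
             InputStream in = client.getInputStream();
             OutputStream out = client.getOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) { // blocking read: the thread just waits
                out.write(buf, 0, n);          // echo back
            }
        } catch (IOException ignored) { }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = start();
        try (Socket s = new Socket("localhost", server.getLocalPort())) {
            s.getOutputStream().write("ping\n".getBytes());
            s.shutdownOutput();
            byte[] reply = s.getInputStream().readAllBytes();
            System.out.println(new String(reply).trim()); // prints "ping"
        }
        server.close();
    }
}
```

The per-connection logic is straight-line code: the thread simply blocks in `read` until data arrives, and the OS scheduler does the multiplexing.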
Now, if the threads were forced to context-switch rapidly because the connections are really busy, select is preferable. (Actually epoll/kqueue are vastly more efficient alternatives, and they are what Java uses under the covers; no one uses the original select any more.) Game servers are a good use for async I/O: very bursty traffic, and latency is critical. Likewise auto-completion services (such as Google's search bar), where connections are rapidly set up and torn down. These would be much better off with non-blocking I/O. Google does not use blocking I/O, nor even blocking APIs internally, as far as I am aware.
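For contrast, the non-blocking style in Java looks like an event loop over a `Selector` (backed by epoll/kqueue on Linux/BSD). This is a minimal single-connection sketch, not a production server; the demo deliberately stops after the first connection closes, and the echo assumes the reply fits in one write:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorEcho {
    // Single-threaded event loop: one thread multiplexes all channels.
    public static void serveOnce(ServerSocketChannel server) throws IOException {
        Selector selector = Selector.open();
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        boolean done = false;
        while (!done) {
            selector.select();                   // blocks until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(4096);
                    int n = client.read(buf);    // never blocks; may return 0
                    if (n > 0) {
                        buf.flip();
                        client.write(buf);       // echo (assumes one write suffices)
                    } else if (n == -1) {
                        client.close();
                        done = true;             // demo: stop after one connection ends
                    }
                }
            }
        }
        selector.close();
    }
}
```

Note how the connection's logic is already inverted into per-event fragments, which is where the programming pain discussed below comes from.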
On the other hand, the price you pay for non-blocking I/O is painful programming, even with closures built into the language. It is so much simpler to call write() knowing that the job is done when the method returns. Take a simple use case: the app wants to read an encoded packet from a socket, where the first two bytes in the byte stream are the length of the packet to follow. In a threaded environment the job is simple. If read(n) returns only after reading exactly n bytes, the use case translates to two straight-line reads: read the two-byte header, then read that many payload bytes.
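A sketch of that threaded version, using `DataInputStream.readFully`, which gives exactly the "return only after n bytes" semantics (the big-endian wire format and class name are assumptions for illustration):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PacketReader {
    // Blocking read of one length-prefixed packet: the first two bytes are the
    // big-endian length of the payload that follows.
    public static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int length = data.readUnsignedShort(); // blocks until 2 bytes arrive
        byte[] payload = new byte[length];
        data.readFully(payload);               // blocks until exactly `length` bytes arrive
        return payload;
    }
}
```

Two lines of actual logic; partial reads and waiting are handled by the blocking calls themselves.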
This would be extraordinarily messy in an async setup, especially when callbacks are involved. Hence Paul Tyma's assertion is well-taken: is it really worth taking on this messy programming complexity if the performance gains (if any) are marginal?
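To make the contrast concrete, here is the same length-prefixed protocol in callback style, against a hypothetical `AsyncChannel` interface standing in for an async socket API (the interface and names are invented for illustration). The two blocking calls become a hand-rolled state machine where every partial read re-arms a callback:

```java
import java.util.function.Consumer;

public class AsyncPacketReader {
    // Hypothetical callback-style channel: read() completes via the callback,
    // possibly with fewer bytes than requested.
    interface AsyncChannel {
        void read(byte[] dst, int off, int len, Consumer<Integer> onDone);
    }

    // Same protocol as the threaded version: 2-byte big-endian header, then payload.
    public static void readPacket(AsyncChannel ch, Consumer<byte[]> onPacket) {
        byte[] header = new byte[2];
        readFully(ch, header, 0, 2, () -> {
            int length = ((header[0] & 0xff) << 8) | (header[1] & 0xff);
            byte[] payload = new byte[length];
            readFully(ch, payload, 0, length, () -> onPacket.accept(payload));
        });
    }

    // The blocking read's implicit loop becomes explicit recursion through callbacks.
    static void readFully(AsyncChannel ch, byte[] dst, int off, int len, Runnable done) {
        if (len == 0) { done.run(); return; }
        ch.read(dst, off, len, n -> readFully(ch, dst, off + n, len - n, done));
    }
}
```

Control flow that was two sequential statements is now scattered across nested closures, and error handling (omitted here) fragments the same way.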