The Low Latency Web

March 26, 2012

500,000 Requests/Sec? Piffle! 1,000,000 Is Better

Filed under: HTTP Servers — lowlatencyweb @ 1:58 pm

Modern HTTP servers are capable of handling 500k requests/sec on commodity hardware. However, that article ignored HTTP pipelining, which can have a significant impact on performance. Buggy legacy servers and proxies prevent most browsers from enabling pipelining by default, but that is the high-latency past, not the future. In fact, we can nearly double the performance of nginx 1.0.14 by enabling pipelining:

Getting to 1M requests/sec required minor tweaking of the client and server compared to the original article. nginx’s worker_processes was reduced from 16 to 14, wrk’s threads were increased from 10 to 11, and 30M requests were made instead of 10M. Maximum performance was reached with a pipeline depth of 8 and 1,100 concurrent connections.
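The tuning above amounts to a handful of nginx directives. A hedged sketch of the relevant configuration follows; only worker_processes 14 and the listen port come from this post, and every other value is an illustrative assumption, not the original article's config:

```nginx
# Sketch only: worker_processes 14 is stated above; the rest are
# assumed values typical for a static-file benchmark, not the
# original article's actual configuration.
worker_processes 14;

events {
    worker_connections 4096;   # assumed: headroom for 1,100 concurrent clients
}

http {
    keepalive_requests 100000; # assumed: keep connections alive across many pipelined requests
    access_log off;            # assumed: logging off for benchmarking

    server {
        listen 8080;
        root /var/www;         # assumed document root containing index.html
    }
}
```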

wrk -t 11 -c N -r 30m --pipeline 8 http://localhost:8080/index.html

When pipelining is enabled, wrk counts latency as the time from the first request to the last response. In this particular environment, fewer than 400 concurrent connections result in worse latency and throughput with 8 pipelined requests; with 400 or more connections, however, both latency and throughput are significantly better.
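Pipelining simply means writing several requests on one keep-alive connection before reading any responses, so the server can process them back-to-back without per-request round trips. A minimal sketch in Python of that send-all-then-read-all pattern, including the first-request-to-last-response timing described above (the hosts, paths, and response parsing here are illustrative assumptions, not wrk's actual implementation):

```python
import socket
import time

def build_pipeline(host: str, path: str, depth: int) -> bytes:
    """Concatenate `depth` keep-alive GET requests so they can be
    written in one burst before any response is read."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    ).encode("ascii")
    return request * depth

def pipelined_round_trip(host: str, port: int, path: str, depth: int) -> float:
    """Return seconds from the first byte sent to the last response
    consumed, mirroring how wrk measures pipelined latency."""
    with socket.create_connection((host, port)) as sock:
        start = time.monotonic()
        # All requests go out before any response is read.
        sock.sendall(build_pipeline(host, path, depth))
        reader = sock.makefile("rb")
        for _ in range(depth):
            reader.readline()  # status line, e.g. b"HTTP/1.1 200 OK\r\n"
            length = 0
            # Read headers until the blank line; note Content-Length.
            for line in iter(reader.readline, b"\r\n"):
                name, _, value = line.partition(b":")
                if name.strip().lower() == b"content-length":
                    length = int(value)
            reader.read(length)  # skip the body
        return time.monotonic() - start
```

For example, `pipelined_round_trip("localhost", 8080, "/index.html", 8)` would issue a depth-8 pipeline against the nginx instance benchmarked above.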

A SPDY Future

What makes these results even more interesting is the accelerating acceptance of SPDY as a replacement for HTTP 1.1. Chrome has supported SPDY for a while, Firefox and nginx will support it soon, and there is an experimental Apache module. For dynamic content, Jetty and Netty support SPDY today, and Netty’s implementation is already in production use at Twitter.

Persistent connections and support for multiple concurrent requests are inherent to SPDY, so a move to SPDY will reduce the number and frequency of new connections, as well as the latency caused by non-pipelined request/response cycles. This will allow HTTP servers to get closer to their theoretical maximum performance, which looks to be very high indeed for nginx.



  1. Is there a performance analysis showing side by side for the same server setup:

    * HTTP/1.1
    * HTTP/1.1+SSL
    * HTTP/1.1+pipelining
    * HTTP/1.1+SSL+pipelining
    * SPDY

    PS: (“accelerating acceptance of SPDY as a replacement for HTTP 1.1”, SPDY doesn’t exactly replace HTTP. A bit misleading. SPDY is about transport (communication protocol), HTTP is about transfer (application protocol)).

    Comment by karl — March 26, 2012 @ 7:40 pm

    • SPDY is a hybrid protocol, neither completely transport nor application level. It defines an “HTTP-like” layering that isn’t compatible with HTTP 1.1 (the request/status line is transformed into headers, Connection and Keep-Alive are no longer valid, chunked encoding is no longer valid, etc.). So I’d say it’s reasonable to call SPDY a replacement protocol.

      Comment by lowlatencyweb — March 26, 2012 @ 10:20 pm
