The Low Latency Web

March 21, 2012

Modern HTTP Servers Are Fast, EC2 Is Not

Filed under: HTTP Servers — lowlatencyweb @ 9:55 pm

The previous article showed nginx 1.0.14 performance on a dedicated server from SoftLayer. That server was chosen simply because one was available; its 24GB of RAM was completely unnecessary for the test.

It would be more useful to publish results from an environment that is easy and cheap to replicate, such as Amazon EC2. The Cluster Compute Eight Extra Large Instance appears to be a good candidate, with dual Intel Xeon E5-2670 CPUs and 60.5GB of RAM. Spot prices are typically $0.54/hour.

Kernel parameters and nginx config are identical to those in the previous article, but the EC2 instance runs an Amazon Linux AMI.
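The full settings are in the previous article; as a rough sketch of the kind of tuning involved (the directive and sysctl names below are real, but the values, port, and paths are illustrative placeholders rather than the exact configuration used):

    # /etc/sysctl.conf -- widen the listen backlog and ephemeral port range
    net.core.somaxconn = 65535
    net.ipv4.ip_local_port_range = 1024 65535

    # nginx.conf -- one worker per core, no logging, cache open fds and stat() info
    worker_processes  16;
    events { worker_connections  16384; }
    http {
        access_log       off;
        open_file_cache  max=1000 inactive=20s;
        server {
            listen  8080;
            root    /var/www/html;
        }
    }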

Despite running one of the latest Intel CPUs, with four extra cores, the EC2 instance performs very poorly. Some virtualization overhead is expected; however, each cc2.8xlarge instance should have an entire physical machine to itself, given that the Intel Xeon E5-2670 supports at most a two-socket configuration.
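As discussed in the comments below, the load generator and nginx run on the same machine, which takes the network out of the equation. A representative invocation with a tool such as weighttp looks like the following; the URL, request count, and thread count are placeholders, and the connection count is the value varied between runs:

    # 10 million keep-alive requests over 1,000 concurrent connections, 16 client threads
    weighttp -n 10000000 -c 1000 -t 16 -k "http://localhost:8080/index.html"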


6 Comments »

  1. Interesting outcome… Where did you generate the load from? How did latency compare between your load generator and the two different servers respectively?

    Were you able to isolate the bottleneck on EC2?

    Comment by azhawkes — March 22, 2012 @ 3:39 am

  2. Azhawkes, all results are from the client & server running on the same machine, which eliminates the network & drivers as factors. As you can see from the size of the points, latency was also quite a bit worse on EC2 at every level.

    EC2 doesn’t give you much visibility into performance, but I can’t think of any obvious things to tune for what should be a CPU-bound test.

    Comment by lowlatencyweb — March 22, 2012 @ 4:22 am

    • Makes sense. Did you measure CPU and I/O Wait? I’m not an nginx expert but I couldn’t tell from your config whether it was caching file contents in RAM, or if it had to hit the filesystem. Seems like EC2 would fare much worse than bare metal if the test is disk-intensive.

      I’ve got some EC2 projects coming up, so I’m personally interested as well.

      Comment by azhawkes — March 22, 2012 @ 5:37 am

      • The nginx config has open_file_cache enabled, which caches the open fd and stat() info. As the original test shows, only one file, the default index.html, was accessed, and with 60GB of RAM it surely should have been completely cached.

        Comment by lowlatencyweb — March 22, 2012 @ 5:47 am

  3. […] Performance plateaus at around 150,000 requests/sec for HTML output and 180,000 requests/sec for JSON output. Latency is much greater than the ideal case of static content via nginx, but only around 6ms with 1,000 concurrent connections and below 2ms for <= 300 connections. Not bad for the JVM and a commodity server, and JSON generation performance is better than static content on EC2. […]

    Pingback by 150,000 Requests/Sec – Dynamic HTML & JSON Can Be Fast « The Low Latency Web — March 22, 2012 @ 10:45 pm

  4. […] series of articles has drawn out the cargo cultists who insist that HTTP benchmarks must be run over the network, or […]

    Pingback by A Note On Benchmarking « The Low Latency Web — March 23, 2012 @ 2:42 pm
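To check the CPU-bound assumption discussed in the comments above, per-core utilization and iowait can be sampled while the benchmark runs. A minimal sketch using standard sysstat and procps tools, not the author's procedure:

    # per-core utilization and %iowait, sampled once per second
    mpstat -P ALL 1

    # run queue length, context switches, and system-wide iowait
    vmstat 1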

