Two weeks ago we published a blog post titled Tuning initcwnd for optimum performance that showed how tuning the initial congestion window parameter (initcwnd) on the server can significantly improve TCP performance. We promised a follow-up post with data on the initcwnd setting of various CDN providers. It took us a bit longer than we hoped, but here it is. First we show you the data, then some conclusions, and finally we describe our test methodology.

Update (Nov 16, 2011): Fastly now has initcwnd set to 10 on all edge servers.
Update (May 12, 2012): Amazon CloudFront has had initcwnd set to 10 since Feb 2012; we verified this today.

CDN initcwnd setting

Below is a chart and table showing the value of the initcwnd setting on the edge servers of several popular CDNs and service providers.

initcwnd of some popular CDNs and service providers

Provider                              initcwnd
HighWinds                             16
Cotendo                               11
Fastly                                10
Google JS hosting                     10
Google Page Speed Service/App Engine  10
CloudFront                            10
CDNetworks                             8
Internap                               7
Akamai                                 6
Rackspace Cloudfiles                   4
Akamai/fbcdn                           4
Cachefly                               3
NetDNA                                 3
Cloudflare                             3
Azure                                  2
Edgecast                               2
MaxCDN                                 2
Voxel CDN (Voxcast)                    2
YUI JS hosting                         2
Level3                                 2
Limelight                              2


Highwinds, Cotendo, Fastly, Google, CloudFront, CDNetworks, Akamai and Internap have a relatively high value for initcwnd, with Highwinds leading the pack. These providers have clearly changed the setting, and we are confident they did so for performance reasons. Most CDNs have a value of 4, 3 or 2, which implies they simply use the OS default. We are surprised that even big CDN providers like EdgeCast, Level3 and Limelight use this low, default value for initcwnd.
Note: the value for initcwnd is (obviously) not the only parameter that determines CDN performance. Don't conclude that Highwinds is now the fastest CDN globally and Level3 the slowest. The initcwnd value is just *one* of the performance parameters, and we believe it is good to know its value.

Test Methodology

All tests were conducted from an EC2 instance in Singapore running Ubuntu 11.04 (kernel 2.6.38-11) with its initcwnd and initrwnd set to 40, to make sure window sizes at the receiving end were large. The test requests advertised an RWIN of 58400 bytes.
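For reference, on Linux these initial window sizes can be raised per route with ip route (initrwnd requires kernel 2.6.38 or later, which our test box has). A sketch of the kind of commands we mean; the gateway and device below are placeholders for your own default route:

```shell
# Inspect the current default route first (run as root)
ip route show

# Re-apply it with larger initial congestion and receive windows.
# 10.0.0.1 and eth0 are placeholders: substitute the gateway and
# device from your own "ip route show" output.
ip route change default via 10.0.0.1 dev eth0 initcwnd 40 initrwnd 40
```

The change takes effect for new connections immediately and does not survive a reboot.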

For each CDN, we made requests to its US edge servers to ensure a high RTT, which makes the tcpdumps easier to read. For each test, we first made a few requests to prime the cache at the edge servers. We then studied the tcpdumps and ran the entire process several times as a sanity check.
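Reading initcwnd out of a dump essentially boils down to counting the full-size data segments the server sends in its first burst, before it pauses to wait for the client's ACK. A minimal sketch of that counting logic (the function and the toy trace are our own illustration, not the actual analysis script):

```python
# Each packet is a (direction, payload_bytes) tuple in capture order:
# 'out' = client -> server, 'in' = server -> client.

def estimate_initcwnd(packets, mss=1460):
    """Estimate initcwnd by counting the server's first flight of data:
    every data segment seen before the client's first post-data ACK."""
    segments = 0.0
    seen_data = False
    for direction, payload in packets:
        if direction == 'in' and payload > 0:
            seen_data = True
            segments += payload / mss  # partial segments count fractionally
        elif direction == 'out' and seen_data:
            break  # client packet after the burst: first flight is over
    return round(segments)

# Toy trace: handshake, HTTP GET, then a 3-segment first flight.
trace = [
    ('out', 0),                                # SYN
    ('in', 0),                                 # SYN/ACK
    ('out', 0),                                # ACK
    ('out', 120),                              # HTTP GET
    ('in', 1460), ('in', 1460), ('in', 1460),  # first flight
    ('out', 0),                                # client ACK ends the flight
    ('in', 1460), ('in', 1460),                # second flight
]
print(estimate_initcwnd(trace))  # -> 3
```

The same idea is what you apply visually in the packet list: look at how many segments arrive between the request and the first client ACK.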

We used this python script to run the tests and capture the results with tcpdump. The dumps were manually analyzed in Wireshark as described here.
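The structure of such a test driver is simple: start tcpdump filtered on the edge server, make the request, then stop the capture. A hedged sketch of that shape in Python (this is our own illustration, not the script linked above; tcpdump needs root, and the host/URL are whatever edge server you are testing):

```python
import subprocess
import urllib.request

def tcpdump_cmd(host, outfile):
    # Capture only traffic to/from the edge server; -n skips DNS lookups.
    return ["tcpdump", "-n", "-i", "any", "-w", outfile, "host", host]

def capture_request(url, host, outfile):
    # Start tcpdump (requires root), fetch the URL, then stop the capture.
    dump = subprocess.Popen(tcpdump_cmd(host, outfile))
    try:
        urllib.request.urlopen(url, timeout=30).read()
    finally:
        dump.terminate()
        dump.wait()
```

The resulting pcap file can then be opened in Wireshark for the flight-counting analysis.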