In November 2011, we first published the initcwnd values of CDNs, following our blog post Tuning initcwnd for optimum performance, which showed how tuning the initial congestion window (initcwnd) on the server can significantly improve TCP performance.
We have wanted to do an update for a long time and finally found the time. Today, August 27, 2014, we publish our new data, based on tests we ran yesterday. Some CDNs are no longer in the market (Cotendo, Voxel) and new CDNs have emerged, so our list of CDNs looks a bit different than it did 2+ years ago.
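As a quick refresher (this is an illustration, not any CDN's actual configuration): on a Linux server, initcwnd is a per-route setting changed with iproute2. A minimal sketch via Python's subprocess, where the gateway and device are placeholders you would copy from your own `ip route show default` output:

```python
import subprocess

# Show the current default route; it only lists "initcwnd N" if the
# value was changed from the kernel default.
subprocess.run(["ip", "route", "show", "default"], check=True)

# Raise initcwnd on the default route (requires root).
# 192.0.2.1 / eth0 are placeholders for your own gateway and device.
subprocess.run(
    ["ip", "route", "change", "default",
     "via", "192.0.2.1", "dev", "eth0", "initcwnd", "10"],
    check=True,
)
```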
First we show the data, then some conclusions, and finally we describe our test methodology.

CDN initcwnd values

Below is a chart showing the value of the initcwnd setting on the edge servers of several CDNs.

[Chart: initcwnd values of CDNs]

Note: Google and Microsoft refer to the services they provide for loading JS libraries from their global server networks.

Conclusions

Most CDNs have an initcwnd of 10. This is the default value in the Linux kernel, and apparently many CDNs have found it to work well. CacheFly, Highwinds, MaxCDN, ChinaCache, Akamai, Limelight and Level3 have a higher initcwnd, and we are confident they raised the setting for performance reasons. Internap has a slightly lower value of 9. And then there is Microsoft's libraries CDN (ajax.aspnetcdn.com): it sends only two packets in the first round trip. That is ridiculously low and bad for performance.
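To put these numbers in perspective: the data a server can push in the first round trip is roughly initcwnd × MSS. A back-of-the-envelope sketch in Python, assuming a typical 1460-byte MSS (the actual payload per packet can be slightly lower due to TCP options):

```python
MSS = 1460  # bytes of payload per packet; a typical value, assumed here

# First-RTT payload for some of the initcwnd values observed above
for name, initcwnd in [("Microsoft libraries CDN", 2),
                       ("Internap", 9),
                       ("Linux default (most CDNs)", 10)]:
    print("%-26s initcwnd=%2d -> ~%4.1f KB in the first round trip"
          % (name, initcwnd, initcwnd * MSS / 1024.0))
```

With an initcwnd of 2, the first round trip carries less than 3 KB; at the Linux default of 10 it carries roughly 14 KB.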
CacheFly's behavior is remarkable. The edge server sends out a very large burst of packets: 70. Our test machine advertised a receive window of 262144 bytes, so CacheFly clearly takes that into account. The test file was 100 KB, and the 70 packets add up to exactly that.
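As a rough check, assuming a 1460-byte MSS: 70 × 1460 = 102,200 bytes, just under the 102,400 bytes in 100 KB, so the burst indeed matches the file size.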
Note: the value of initcwnd is (obviously) not the only parameter that determines CDN performance. Don't conclude that CacheFly is now the fastest CDN globally and Internap the slowest. The initcwnd value is just *one* of the performance parameters, and we believe it is good to know its value.

Test Methodology

All tests were conducted on a MacBook Air in The Netherlands, with a high initrwnd to make sure we had large window sizes at the receiving end.
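We don't reproduce that configuration step here; for reference, on Linux the client-side counterpart is the initrwnd route option (value in MSS units). A minimal sketch, with placeholder gateway and device:

```python
import subprocess

# Advertise a larger initial receive window on the default route
# (requires root; 192.0.2.1 / eth0 are placeholders).
subprocess.run(
    ["ip", "route", "change", "default",
     "via", "192.0.2.1", "dev", "eth0", "initrwnd", "40"],
    check=True,
)
```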

For each CDN, we made requests to a far-away POP (US West or APAC) to ensure a high RTT, which makes the tcpdumps easier to read. For each test, we first made a few requests to prime the cache on the edge server. We then studied the tcpdumps, and ran the entire process several times as a sanity check.
Some CDNs (Highwinds, CacheFly, Cloudflare and Bitgravity) do anycast for HTTP globally and use a single IP address everywhere, which means we can't target the IP address of a far-away POP in our tests. For these CDNs, we added an extra 500 ms of latency using Dummynet. No artificial packet loss was added.
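For those who want to reproduce this: on the OS X releases of that era, Dummynet was driven through ipfw (newer versions use dnctl and pf instead). A sketch of one possible setup, with a placeholder anycast IP; this is our illustration, not necessarily the exact rules used in the tests:

```python
import subprocess

# Create a Dummynet pipe with 500 ms of added delay (requires root).
subprocess.run(["ipfw", "pipe", "1", "config", "delay", "500ms"],
               check=True)

# Route all traffic to the CDN's anycast address through that pipe.
# 192.0.2.10 is a placeholder for the CDN's IP.
subprocess.run(["ipfw", "add", "100", "pipe", "1",
                "ip", "from", "any", "to", "192.0.2.10"], check=True)
```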

We used this Python script to run the tests and capture the results with tcpdump. The dumps were manually analyzed in Wireshark, as described here.
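The script itself is linked above rather than reproduced here, but the core idea is small enough to sketch. The URL and network interface below are placeholders:

```python
import subprocess
import time

URL = "http://cdn.example.com/100KB.jpg"  # placeholder test object
PCAP = "cdn_test.pcap"

# Start capturing full packets on port 80 (requires root).
tcpdump = subprocess.Popen(
    ["tcpdump", "-i", "en0", "-s", "0", "-w", PCAP, "port", "80"])
time.sleep(1)  # give tcpdump a moment to start listening

# Fetch the object twice: the first request primes the edge cache,
# the second is the one to analyze in Wireshark.
for _ in range(2):
    subprocess.run(["curl", "-s", "-o", "/dev/null", URL], check=True)

time.sleep(1)
tcpdump.terminate()
```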