We're excited to announce the all-new CDN Purge Speed Stats!
CDN Purge Speed Stats provides insight into how fast you can purge (invalidate or delete) a single cached object on various content delivery networks, based on hourly measurements from 3 global locations.
Many CDNs claim purging is 'instant', but is it really?
Is the information provided by the CDN about purge speed correct?
CDN Purge Speed Stats has the answers using extensive, accurate, real-world data.
We currently track purge speed of Amazon CloudFront, CacheFly, Edgio and Gcore,
with more CDNs coming soon.
Read on to learn more about our definition of Purge Time and how we measure purge speed, or skip all that and view the CDN Purge Speed Stats.
What is the Definition of Purge Time?
Purge Time is the time between sending the purge request to the CDN API and sending the request to the CDN that returned a cache MISS.
More details are in the next FAQ item.
How We Measure CDN Purge Speed
Our main application runs in the United States, and there is a worker application in each of 3 global locations: the United States, the United Kingdom and Hong Kong.
These applications live on the Fly.io network.
Every hour, for each CDN, the main app takes the following steps:
- Instruct the worker apps to prime the CDN cache (= get the test object into the CDN cache); log the CDN POP/server that served the cache HIT response
- Send a single-file purge request to the CDN API; log the time of sending the purge request as the PurgeStartTime
- For each location, instruct the worker apps to send requests to the CDN for the test object until a cache MISS is observed (= the CDN serves the object after a fresh origin pull, as observed from A) the CDN response header(s) that signify the cache status (HIT/MISS/...) and B) the x-cdnp-time response header, which the origin serves to the CDN and the CDN passes through to the client, and which holds the timestamp of when the origin served the response to the CDN); log the time the main app sent the last request to the CDN (via the worker) as the PurgeEndTime
- Calculate the Purge Time for each location from the delta of the location's PurgeEndTime and PurgeStartTime, rounded to a whole number of seconds
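The hourly flow above can be sketched in Python. The helper callables here are hypothetical stand-ins for the worker apps and the CDN's purge API; only the timing logic reflects the described method:

```python
import time

def purge_time_seconds(purge_start: float, purge_end: float) -> int:
    """Delta between PurgeEndTime and PurgeStartTime, rounded to whole seconds."""
    return round(purge_end - purge_start)

def measure_cdn(locations, prime, purge, poll_until_miss):
    """One hourly measurement round for a single CDN (sketch).

    prime(loc)           -- get the test object into the CDN cache via a worker
    purge()              -- send a single-file purge request to the CDN API
    poll_until_miss(loc) -- request the object until a cache MISS; returns the
                            time right before the last (MISS) request was sent
    """
    for loc in locations:
        prime(loc)                      # step 1: prime the cache, log the POP
    purge_start = time.monotonic()      # step 2: PurgeStartTime
    purge()
    results = {}
    for loc in locations:               # steps 3 and 4
        purge_end = poll_until_miss(loc)  # PurgeEndTime for this location
        results[loc] = purge_time_seconds(purge_start, purge_end)
    return results
```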
From a CDN user's perspective, a purge starts when submitting the purge request, so the PurgeStartTime is when the main app sends the request to the CDN's API.
After the purge, for each location, the main app instructs the worker app to send requests to the CDN until a cache MISS is observed.
The PurgeEndTime for the location is set to right before that last request is sent from the main app to the worker app, because the round-trip time between the main app and the origin (via worker and CDN) should not be part of the measured Purge Time.
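As a sketch, a post-purge response could be classified like this. The `x-cache` status header name is an assumption (real CDNs use different header names), and treating the `x-cdnp-time` origin timestamp as a guard against pre-purge responses is our reading of how the two signals combine:

```python
def is_post_purge_miss(headers: dict, purge_start_unix: float) -> bool:
    """Classify a CDN response as an after-purge cache MISS (sketch).

    Assumes a generic 'x-cache' style cache-status header; the actual
    header name and format differ per CDN. 'x-cdnp-time' is the origin
    timestamp that the origin serves to the CDN and the CDN passes
    through to the client.
    """
    cache_status = headers.get("x-cache", "").upper()
    if "MISS" not in cache_status:
        return False
    # The fresh origin pull must have happened after the purge request
    # was sent; otherwise the MISS does not reflect the purge.
    origin_time = float(headers.get("x-cdnp-time", 0))
    return origin_time >= purge_start_unix
```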
The worker apps send requests to the CDN in step 3 (until a cache MISS) per the following schedule:
- Immediately, so it's possible to measure a Purge Time of 0 seconds
- Every 1 second for the next 5 seconds
- Every 2 seconds for the next 10 seconds
- Every 5 seconds for the next 45 seconds
The schedule backs off with increasing intervals because, when evaluating and comparing the Purge Time of CDNs, a median value of e.g. 25 seconds provides the same insight as a median of 26 or 27 seconds. In our opinion, it's fine for the precision to decrease as the Purge Time increases.
If the worker app has not observed an after-purge cache MISS within 60 seconds, the Purge Time is set to 65 seconds.
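The polling schedule and the timeout value above can be expressed as a small sketch:

```python
TIMEOUT_PURGE_TIME = 65  # recorded when no cache MISS is seen within 60 seconds

def poll_offsets() -> list:
    """Offsets (seconds after purge) at which workers re-request the object:
    immediately, then every 1s for 5s, every 2s for the next 10s, and
    every 5s for the next 45s -- a 60-second window overall.
    """
    offsets = [0]  # poll immediately, so a Purge Time of 0 is measurable
    t = 0
    for step, duration in [(1, 5), (2, 10), (5, 45)]:
        end = t + duration
        while t < end:
            t += step
            offsets.append(t)
    return offsets
```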
We selected the US, UK and HK as locations for the worker apps primarily because we want to measure Purge Time from different continents.
Another benefit is that all major CDNs have POPs in these regions and our origin lives in these locations too,
so the probability of worker <-> CDN requests and CDN <-> origin requests timing out is very low.
We'll continuously improve CDN Purge Speed Stats based on user feedback, analysis of the CDN Purge Speed Stats logs and our creative minds :)
Currently on the roadmap:
- Add more CDNs!
- Provide detailed insights for each CDN
- Track performance of purge requests to CDN API (response time, error/timeout rate)
- Improve visualization of the data
- Measure 'purge all' too
Have an idea for how we can make CDN Purge Speed Stats even better? Let us know on Twitter