In September 2012 Netcraft reported that Amazon had become the largest hosting company in the world based on the number of web-facing computers. In the last eight months, the e-commerce company's tally of web-facing computers has grown by more than a third, reaching 158k. The number of websites hosted on these computers has also increased, from 6.8M in September 2012 to 11.6M in May 2013, a 71% increase.
Although Amazon's main business is still online retail, Amazon Web Services (AWS), its cloud computing division, has been growing in significance. In Amazon's first quarter of 2013, the Other category (which still includes AWS along with other non-retail activity) accounted for just under 5.0% of revenue, up from 3.2% at the same point in 2011. The first publicly available AWS service was launched in 2004, but it was not until 2006 that Amazon launched its two core services: S3 (data storage) and EC2 (per-hour rental of virtual computer instances). Since then, Amazon has steadily increased the number of services provided: in 2012 alone, 159 new services and features were released.
Including its retail infrastructure, the number of web-facing computers at Amazon has grown more than thirty-fold in four years: in May 2009, Netcraft found 4,600 Amazon-controlled web-facing computers; in May 2013, it found 158k web-facing computers on 164k IP addresses. Netcraft estimates the number of computers behind a group of IP addresses using a variety of heuristics based on the TCP/IP characteristics seen in the HTTP responses gathered. Those computers host more than 11.6M websites (or hostnames), corresponding to 2.1M websites with unique content (active sites). Despite being the largest hosting provider by number of web-facing computers, Amazon is dwarfed by Go Daddy, the largest provider by number of websites hosted: Go Daddy has 37M websites on just 23k web-facing computers. Go Daddy's high ratio of websites to web-facing computers may reflect its role as a registrar, for which it maintains a large network of holding pages, and its inexpensive shared hosting platform.
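Netcraft's exact heuristics are not published; as a rough illustration only, one plausible approach is to group IP addresses whose HTTP responses share the same TCP/IP fingerprint and count each group as one probable machine. The fingerprint fields below (initial TTL, TCP window size, Server header) are assumptions made for the sketch, not Netcraft's actual method:

```python
from collections import defaultdict

def estimate_computers(responses):
    """Group IP addresses whose responses share a TCP/IP fingerprint,
    treating each distinct fingerprint as one probable machine."""
    groups = defaultdict(set)
    for r in responses:
        key = (r["ttl"], r["window"], r["server"])  # illustrative fields only
        groups[key].add(r["ip"])
    return len(groups)

responses = [
    {"ip": "10.0.0.1", "ttl": 64, "window": 5840, "server": "Apache"},
    {"ip": "10.0.0.2", "ttl": 64, "window": 5840, "server": "Apache"},  # same box, extra IP
    {"ip": "10.0.0.3", "ttl": 128, "window": 8192, "server": "IIS"},
]
print(estimate_computers(responses))  # 2 distinct fingerprints -> 2 computers
```

This is why the computer count (158k) can sit below the IP address count (164k): several IP addresses may resolve to the same physical machine.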
EC2 - Elastic Compute Cloud
EC2 provides on-demand virtual-computer instances billed per hour and is currently available from all nine AWS regions. Each region may correspond to multiple physical data centres, which are structured into "Availability Zones". The two largest regions, US East (Northern Virginia) and EU West (Ireland), account for more than three-quarters of all EC2 usage as measured by Netcraft. Sydney, the newest AWS region, now accounts for just under 1% of all measured web-facing computers using AWS, having almost tripled in size in the past four months. In total, more than 156k instances power at least one hostname across 3M domains on the internet.
Launched in 2011, the GovCloud (US) region is specifically intended for more sensitive applications that require additional security and compliance with US regulations. As of May 2013, Netcraft found just 27 web-facing computers within the government cloud, some of which power www.grdregistry.org and www.govdashboard.com. Given its intended role, it would not be surprising if a large proportion of the computers used in the region are not web-facing.
Metric (EC2 Total)             | February 2013 | March 2013 | April 2013 | May 2013   | Growth (4 months)
Web-facing Computers/Instances | 141,960       | 145,648    | 152,041    | 156,225    | 10%
IP Addresses                   | 144,625       | 148,837    | 155,712    | 160,884    | 11.2%
Domains                        | 2,788,685     | 2,810,906  | 2,996,147  | 3,061,178  | 9.8%
Hostnames                      | 9,489,496     | 9,938,480  | 10,649,545 | 10,925,661 | 15.1%
Many uses of EC2, such as batch data-processing, will not be directly measurable over the internet: Netcraft measures publicly visible computers which have corresponding DNS entries and respond to HTTP requests. Netcraft's Web Server Survey is run at Amazon from the Northern Virginia region, so that region may be over-reported due to services like latency-based multi-region routing, which provide differing responses depending on topological location.
Geographic distribution of computers per EC2 region in May 2013
Data Centre (EC2 Web-facing Computers) | February 2013 | March 2013 | April 2013 | May 2013 | Growth (4 months)
Asia Pacific (Singapore)               | 6,576         | 6,805      | 6,998      | 7,290    | 10.9%
Asia Pacific (Sydney)                  | 499           | 739        | 1,129      | 1,427    | 186%
Asia Pacific (Tokyo)                   | 7,342         | 7,595      | 8,065      | 8,601    | 17.1%
EU West (Ireland)                      | 23,778        | 24,635     | 25,326     | 25,942   | 9.1%
South America (Sao Paulo)              | 2,115         | 2,263      | 2,396      | 2,655    | 25.6%
US East (Northern Virginia)            | 87,094        | 88,543     | 92,426     | 93,537   | 7.4%
US West (Northern California)          | 9,325         | 9,478      | 9,715      | 9,695    | 4%
US West (Oregon)                       | 5,217         | 5,573      | 5,965      | 7,051    | 35.2%
GovCloud (Oregon)                      | 14            | 17         | 21         | 27       | 92.9%
S3 - Simple Storage Service
S3 provides an online file storage service which can be managed programmatically via Amazon's API. Files are logically grouped into containers called buckets which can be made public and accessible over HTTP but default to being private. As with EC2, Netcraft cannot track private use of S3 but is able to survey websites using S3 publicly to serve static files and even entire websites.
Metric (S3 Total) | February 2013 | March 2013 | April 2013 | May 2013 | Growth (4 months)
Domains           | 41,782        | 42,561     | 45,721     | 48,636   | 16.4%
Hostnames         | 124,454       | 127,370    | 132,962    | 138,588  | 11.4%
In May 2013, a total of 139k hostnames were found to be hosted directly on S3, either using a subdomain of s3.amazonaws.com or using a custom CNAME pointing to S3. Of these, 24.7k hostnames, or over 18.5k domains, point to an S3 bucket configured to serve an entire website, as does mediahackers.org. Many more websites are not hosted entirely on S3, but make use of the service to serve static files such as images, stylesheets, or file downloads.
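The two hosting styles counted above can be sketched as URL construction; the endpoint hostnames follow AWS's documented US East conventions of the period, and the bucket and key names are hypothetical:

```python
def s3_urls(bucket, key):
    """Sketch of the public URL styles surveyed above. Endpoint names
    follow AWS's documented patterns for the US East region; other
    regions substitute their own endpoints."""
    return {
        # Plain bucket access as a subdomain of s3.amazonaws.com:
        "virtual_hosted": "http://%s.s3.amazonaws.com/%s" % (bucket, key),
        # Equivalent path-style form:
        "path_style": "http://s3.amazonaws.com/%s/%s" % (bucket, key),
        # Static website hosting uses a separate endpoint; a custom CNAME
        # (e.g. www.example.com) would point at this hostname.
        "website": "http://%s.s3-website-us-east-1.amazonaws.com/%s" % (bucket, key),
    }

print(s3_urls("example-bucket", "index.html")["virtual_hosted"])
# http://example-bucket.s3.amazonaws.com/index.html
```

A hostname matching either the `s3.amazonaws.com` subdomain form or a CNAME resolving to the website endpoint is what the survey counts as S3-hosted.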
One of the most widely referenced S3 hostnames serves Twitter's badges bucket, which was once a common method of displaying Twitter icons on third-party websites. Tumblr, a popular blogging platform recently acquired by Yahoo!, also makes use of S3 to host static media.
CloudFront is a Content Delivery Network which can be used to serve both dynamic and static content from 28 edge locations which are topologically closer to a site's visitors. Caching content reduces the bandwidth and performance requirements on the website's own servers and, by being topologically close to visitors, the latency associated with each HTTP request can be improved.
In the May 2013 survey, more than 63k hostnames were served via CloudFront, more than 60% of which point to an S3 bucket. Amazon uses CloudFront on its own websites, including imdb.com, and also uses it to serve images on Amazon.com. Beyond Amazon itself, the Toronto Star, a Canadian newspaper, and Piriform, maker of the utility program CCleaner, are two of the most visited sites using CloudFront amongst users of the Netcraft Toolbar.
Metric (CloudFront Total) | February 2013 | March 2013 | April 2013 | May 2013 | Growth (4 months)
Domains                   | 22,920        | 24,079     | 25,264     | 26,221   | 14.4%
Hostnames                 | 55,578        | 57,817     | 60,475     | 63,203   | 13.7%
The number of CloudFront-dedicated IP addresses and computers cannot be easily measured as different results are obtained depending on the location of the request.
Route 53 is a managed Domain Name System (DNS) hosting service. Named after the TCP and UDP port used by the protocol, Route 53 hosts DNS records which map human-readable hostnames to IP addresses. Integrated with the rest of AWS, it allows programmatic access to change DNS records in response to changes elsewhere in a customer's infrastructure. As with CloudFront, Amazon has servers providing this service in edge locations outside of its nine EC2 regions; Route 53 is available from 28 separate locations.
Metric (Route 53 Total) | February 2013 | March 2013 | April 2013 | May 2013  | Growth (4 months)
Domains                 | 136,698       | 146,635    | 161,619    | 169,111   | 23.7%
Hostnames               | 3,493,986     | 3,662,195  | 3,831,910  | 4,068,053 | 16.4%
Over the past four months there has been steady growth in the number of websites using Route 53 to host their DNS records: it now serves DNS records for 169k domains. Busy sites making use of this service include pinterest.com, a social photo-sharing website which is a heavy user of Amazon's infrastructure; MediaFire, a file uploading and sharing service; and ow.ly, a URL shortener.
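A survey can spot Route 53-hosted domains from their delegation: Route 53 assigns nameservers following an awsdns naming scheme. A minimal sketch of such a heuristic check — the regular expression reflects the observed naming convention, not an official specification:

```python
import re

# Route 53 delegates domains to nameservers of the form
# ns-NNN.awsdns-NN.{com,net,org,co.uk} (as observed in practice).
AWSDNS = re.compile(r"^ns-\d+\.awsdns-\d+\.(com|net|org|co\.uk)\.?$", re.I)

def uses_route53(nameservers):
    """True if every delegated nameserver matches the Route 53 naming
    convention -- a survey heuristic, not an authoritative check."""
    return bool(nameservers) and all(AWSDNS.match(ns) for ns in nameservers)

print(uses_route53(["ns-492.awsdns-61.com", "ns-1368.awsdns-43.org"]))  # True
print(uses_route53(["ns1.examplehost.net", "ns2.examplehost.net"]))     # False
```

The nameserver hostnames above are made-up examples following the scheme; a real check would look up the domain's actual NS records first.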
Heroku is a Platform as a Service (PaaS) provider owned by Salesforce. Whilst not operated by Amazon, it makes heavy use of AWS services, especially EC2. Heroku provides an abstracted managed environment for web developers to deploy applications in a number of different languages. In May 2013, Heroku was serving 70k domains directly (not behind a CDN) across 4,786 computers.
Metric (Heroku Total) | April 2013 | May 2013  | Growth (2 months)
Computers             | 4,293      | 4,786     | 11.5%
IP Addresses          | 4,408      | 4,972     | 12.8%
Domains               | 65,821     | 69,781    | 6%
Hostnames             | 1,094,578  | 1,102,663 | 0.7%
Heroku, as demonstrated in the results from Netcraft's survey, has been available almost exclusively from the Northern Virginia EC2 region. In April, Heroku announced availability of its service in Europe from the AWS EU West region based in Ireland. Only a limited number of Heroku customers have had access to this region during a private beta phase which explains the currently low uptake: only 1% of the computers attributed to Heroku were in the region.
IP Addresses                | April 2013 | May 2013
US East (Northern Virginia) | 4,374      | 4,915
EU West (Ireland)           | 33         | 56
Netcraft provides information on the Internet infrastructure, including the hosting industry, and web content technologies. For information on the cloud computing industry including Microsoft Azure, Rackspace Cloud, and Google App Engine, please contact email@example.com.
Certificate revocation is intended to convey a complete withdrawal of trust in an SSL certificate and thereby protect the people using a site against fraud, eavesdropping, and theft. However, some contemporary browsers handle certificate revocation so carelessly that the most frequent users of a site, and even its administrators, can continue using a revoked certificate for weeks or months without knowing anything is amiss. Recently, this situation was clearly illustrated when a busy e-commerce site was still using an intermediate certificate more than a week after its revocation.
SSL Certificates are used to secure communication between browsers and websites by providing a key with which to encrypt the traffic and by providing third-party verification of the identity of the certificate owner. There are varying levels of verification a third-party Certificate Authority (CA) may carry out, ranging from just confirming control of the domain name (Domain Validation [DV]) to more extensive identity checks (Extended Validation [EV]).
However, an SSL certificate — or any of the certificates which form a chain from the server's certificate to a trusted root installed in the browser or operating system — may need to be revoked. A certificate should be revoked when its private key has been compromised, when the owner of the certificate no longer controls the domain for which it was issued, or when the certificate was mistakenly signed. An attacker with access to an un-revoked certificate and its private key can perform a man-in-the-middle (MITM) attack by presenting the certificate to unsuspecting users, whose browsers will behave as if they were connecting to a legitimate site.
There are two main technologies for browsers to check the revocation status of a particular certificate: using the Online Certificate Status Protocol (OCSP) or looking up the certificate in a Certificate Revocation List (CRL). OCSP provides revocation information about an individual certificate from an issuing CA, whereas CRLs provide a list of revoked certificates and may be received by clients less frequently. Browser support for the two forms of revocation varies from no checking at all to the use of both methods where necessary.
On 30th April 2013 an intermediate certificate issued to Network Associates — which forms part of the chain from an individual certificate back to a trusted root — was revoked by RSA. The intermediate certificate was used to sign multiple McAfee SSL certificates including one for a busy e-commerce website, www.mcafeestore.com. Its revocation should have prevented access to all of the websites using the intermediate including the online store. However, more than a week later nobody had noticed: no tweets or news articles appeared and the certificate was still in place.
The certificate chain for mcafeestore.com, before it was replaced. The highlighted certificate, NAI SSL CA v1, was revoked on 30th April 2013
The intermediate certificate was revoked by RSA by adding its serial number, 54:99:05:bd:ca:2a:ad:e3:82:21:95:d6:aa:ee:b6:5a, to the corresponding CRL. None of the certificates in the chain provides an OCSP URL, so checking the CRL is the only option available. Once the updated CRL was published, browsers should have displayed an error message and prevented access to the website. The reality is somewhat different, however.
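Conceptually, the client-side check reduces to converting the displayed serial number to an integer and testing membership in the set of serials listed on the issuer's CRL. A simplified sketch — real CRL entries also carry revocation dates and reasons:

```python
def parse_serial(colon_hex):
    """Convert a colon-separated hexadecimal serial number, as shown in
    certificate viewers, into the integer form used for comparison."""
    return int(colon_hex.replace(":", ""), 16)

# The set of serials published on the CRL; here, the one from the article.
revoked = {parse_serial("54:99:05:bd:ca:2a:ad:e3:82:21:95:d6:aa:ee:b6:5a")}

def is_revoked(serial, crl_serials):
    """A certificate is treated as revoked if its serial appears on the
    issuer's CRL."""
    return parse_serial(serial) in crl_serials

print(is_revoked("54:99:05:bd:ca:2a:ad:e3:82:21:95:d6:aa:ee:b6:5a", revoked))  # True
```

Serial numbers are only unique per issuer, so a real client keys the lookup by issuing CA as well — omitted here for brevity.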
Business as usual in Firefox
Firefox does not download CRLs for websites which use the most popular types of SSL certificate (all types except EV, which is usually indicated with a green bar). Without downloading the CRL, Firefox is happy to carry on as usual, letting people visit the website and transfer sensitive personal information relying on a certificate that is no longer valid. In any case, even if OCSP were available, by default Firefox will only check the validity of the server's certificate and will not attempt to check the rest of the certificate chain (again, except for EV certificates).
No warnings for mobile users either on Android or iOS
Mobile browsing now makes up a significant proportion of internet use. Neither Google Chrome on Android nor Safari on iOS present a warning to the user even after being reset. Safari on iOS does not make revocation checks at all except for Extended Validation certificates and did not make requests for the CRL which would have triggered the revocation error message.
Google Chrome: [left to right] default settings, revocation checks enabled on Windows, and revocation checks enabled on Linux
Google Chrome, by default, does not make standard revocation checks for non-EV certificates. Google does aggregate a limited number of CRLs and distribute this set via its update mechanism, but, at least currently, it does not list the certificate in question or indeed any of the other certificates revoked in the same CRL. For the majority of Chrome users with the default settings, as with Firefox, nothing will appear to be amiss.
For the security conscious, Google Chrome does have the option to enable proper revocation checks, but in this case the end result depends on the platform. On Windows, Google Chrome can make use of Microsoft's CryptoAPI to fetch the CRL and it correctly prevents access to the site. However, RSA's CRL is not delivered in the conventional way: instead of providing the CRL in a binary format, it is encoded into a text-based format which is not the accepted standard. Mozilla's NSS — which is used by Firefox on all platforms and by Google Chrome on Linux — does not support the format. On Linux, Google Chrome does make a request for the CRL but cannot process the response and instead carries on as normal.
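The extra normalisation step that NSS lacks can be sketched as follows, assuming the text-based encoding is PEM-style armour around base64-encoded DER — a plausible reading of the format described above, not a confirmed detail of RSA's CRL:

```python
import base64

def crl_to_der(data):
    """Normalise a CRL to binary DER. If the bytes carry text-based PEM
    armour ('-----BEGIN X509 CRL-----'), strip it and base64-decode the
    body; otherwise assume the payload is already DER."""
    text = data.decode("ascii", errors="ignore")
    if "-----BEGIN X509 CRL-----" in text:
        body = text.split("-----BEGIN X509 CRL-----")[1]
        body = body.split("-----END X509 CRL-----")[0]
        return base64.b64decode("".join(body.split()))
    return data  # already binary DER

pem = b"-----BEGIN X509 CRL-----\nAAEC\n-----END X509 CRL-----\n"
print(crl_to_der(pem))  # b'\x00\x01\x02'
```

A client that performs this conversion before parsing, as CryptoAPI effectively does, can consume either encoding; one that expects raw DER, like NSS here, fails on the text form.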
Warning to potential customers when visiting the store at https://www.mcafeestore.com
Microsoft's web browser, Internet Explorer, is one of the most secure browsers in this context. It fetches revocation information (preferring OCSP but falling back to CRLs) for the server's certificate and the rest of the certificate chain and, as a consequence of the revocation check, it prevents the user from making their purchase on www.mcafeestore.com.
Opera preventing access to the website
Along with Internet Explorer, Opera is secure by default: it prevents access to the webpage. Opera checks the entirety of the certificate chain using either OCSP or CRLs where appropriate.
However, even with the most secure browser, the most frequent users of a secure website may be able to continue using it for weeks or months despite one of the certificates in the chain of trust having been revoked. The CRL used in this case can be cached for up to six months, leaving frequent visitors, who will have a cached copy of the CRL, in the dark about the revocation. Going by previous copies of the CRL, it may have last been generated in January 2013 and remain valid until July 2013. If that is the case and you have visited any website using the same intermediate certificate, your browser will not display any warnings and will behave as if the certificate had not been revoked. You need not have visited mcafeestore.com itself to have a cached copy of the CRL: there were 14 other websites with the same intermediate certificate in Netcraft's latest SSL survey.
Six months may sound like a long time to miss out on important revocation information, yet the browser vendors in control of the lists of trusted CAs allow CRLs covering intermediate certificates to have validity periods of up to 12 months. CRLs covering individual, or subscriber, certificates are required to be valid for at most 10 days. By its very nature, access to the private key corresponding to an intermediate certificate is more useful to an attacker: he can use it to sign a certificate for any website he chooses, rather than having access to just a single site. Browsers do have the ability to distrust certificates if they become aware of a compromise, but they may depend on slow update mechanisms to update the trusted set of certificates.
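The effect of a long-lived cached CRL can be sketched with the dates from this example; the January generation date and six-month validity are the article's estimates, not confirmed values:

```python
from datetime import datetime, timedelta

def cached_crl_still_valid(this_update, next_update, now):
    """A browser with a cached CRL only refetches once nextUpdate has
    passed, so a revocation published inside the window goes unseen."""
    return this_update <= now < next_update

# Hypothetical dates matching the article: a CRL generated in January
# 2013 with ~6 months' validity masks the 30 April revocation.
this_update = datetime(2013, 1, 7)
next_update = this_update + timedelta(days=180)
revocation = datetime(2013, 4, 30)
print(cached_crl_still_valid(this_update, next_update, revocation))  # True
```

Any client holding this cached copy would keep treating the intermediate as good until the nextUpdate date in July, months after the revocation.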
Whilst it may be expensive for an online store to be using a certificate that should not be valid, the consequences for governmental or banking websites could be more severe. If the certificate, or one of the certificates in the chain, were revoked due to a key compromise and there is an active attacker exploiting the lack of revocation checking in modern browsers, the public could be at risk for an extended period of time. The state of revocation amongst modern browsers is sufficiently fragmented to ensure that the entire concept of revocation is on shaky ground — without consistent behaviour and timely updates, if or when the certificate is finally blocked it is too late.
Netcraft waited until the certificate was replaced before publishing this article.
Early last week, Netcraft blocked a website purporting to offer online support for eBay customers. The website made use of a third-party live chat service provided by Volusion, an e-commerce outfit which also provides both free and premium hosted live chat services. By running a live chat service and asking the right questions, a fraudster could coax an unsuspecting victim into revealing sensitive information in addition to their eBay login credentials.
The agent providing "support" claimed that the chat was accessed by clicking a live chat button in eBay's order confirmation email. When Netcraft attempted to question the legitimacy of the live chat, the agent immediately disconnected. eBay's official live chat service is available to eBay members through a secure page on an ebay.com subdomain and is linked to from the eBay website.
An example fraudulent live chat impersonating eBay (left) and the legitimate version (right); both have valid SSL certificates
Later, the site showed a place-holder company logo and the eBay branding had disappeared.
This attack is interesting as several well-known companies outsource their live chat support, including Sky, a British broadcaster and ISP (LivePerson), Western Union (Oracle), and Rackspace (BoldChat). This, combined with a valid SSL certificate, could be convincing enough to deceive people accustomed to seeing third-party domain names for live chat applications. In addition, free or trial deployments can be obtained for these third-party services quickly — some without identification or credit cards — allowing a social engineer to carry out this attack easily and anonymously.
Live chat social engineering is not a novel technique for fraudsters: last December, a replacement Kindle was falsely ordered via the official Amazon live chat by a fraudster with only limited knowledge of the victim. A similar scam was seen in February this year. A forum dedicated to social engineering has a thread allegedly making offers to buy Amazon order numbers, which could be used in future attacks.
Netcraft advises people to never reveal sensitive information such as passwords or PINs in live chats, even if asked. A legitimate company will not require this information. If in doubt, challenge them to verify who they say they are. Only access live chats from companies' own sites: do not access them from third-party websites or emails.
You can protect yourself against the latest phishing attacks by installing Netcraft's Anti-Phishing Extension and help protect the internet community by reporting potential phishing sites to Netcraft by email to firstname.lastname@example.org or at http://toolbar.netcraft.com/report_url. Netcraft can also help protect both brand owners and hosting companies.
In the May 2013 survey we received responses from 672,837,096 sites, which is 23.8M more than last month.
Apache had the largest growth this month, gaining 28.3M websites and increasing its market share by 2.41 percentage points to 53.4%. The majority of this growth was attributable to Apache Traffic Server (ATS), which gained 28M websites and increased its market share from 0.03% to 4.2%. Nearly all of the Apache Traffic Server growth occurred at Go Daddy — 75% of websites hosted by Go Daddy now use ATS and Go Daddy now hosts 99% of all sites using this server software.
Originally created as a commercial product by Inktomi in 1997, Apache Traffic Server is an extensible multi-threaded event-driven caching proxy server which is claimed to scale well on modern multi-core systems. Yahoo! acquired Inktomi in 2005, and in November 2009, the project was donated to the Apache Software Foundation.
The vast majority of the ATS-served websites at Go Daddy were previously served by Microsoft IIS, resulting in a rather noticeable loss of 3.26 percentage points of market share; Microsoft IIS's market share is now 16.7%. Despite the loss at Go Daddy, Microsoft IIS gained more new sites than any competitor this month, with 43% of all new websites being served on Microsoft IIS, while accounting for only 30% of expired websites (which include inactive blogs as well as sites which no longer exist).
nginx reached a new milestone this month: it is now used by more than 100M websites, and within the Million Busiest Websites has overtaken Microsoft IIS to take second place with a market share of 13.5%. Overall, nginx's market share now stands at 15.5%, just 1.2 percentage points behind Microsoft, helped by a growth of 8.3M sites this month.
The latest stable version, nginx 1.4.0, was released last week, integrating OCSP stapling and experimental SPDY draft 2 support. nginx is used extensively by the WordPress.com blog hosting service, whose owners – Automattic – sponsored development of the ngx_http_spdy_module. Development of OCSP stapling support was sponsored by Comodo, DigiCert, and GlobalSign.
Developer | April 2013  | Percent | May 2013    | Percent | Change
Apache    | 331,112,893 | 51.01%  | 359,441,468 | 53.42%  | 2.41
Microsoft | 129,516,421 | 19.95%  | 112,303,412 | 16.69%  | -3.26
nginx     | 96,115,847  | 14.81%  | 104,411,087 | 15.52%  | 0.71
Google    | 22,707,568  | 3.50%   | 23,029,260  | 3.42%   | -0.08
Rank | Company site          | OS                  | Outage hh:mm:ss | Failed Req% | DNS   | Connect | First byte | Total
1    | Swishmail             | FreeBSD             | 0:00:00         | 0.000       | 0.106 | 0.062   | 0.124      | 0.267
2    | INetU                 | Windows Server 2008 | 0:00:00         | 0.000       | 0.125 | 0.073   | 0.236      | 0.454
3    | iWeb                  | Linux               | 0:00:00         | 0.003       | 0.127 | 0.071   | 0.142      | 0.142
4    | Server Intellect      | Windows Server 2008 | 0:00:00         | 0.003       | 0.074 | 0.092   | 0.185      | 0.464
5    | Midphase              | Linux               | 0:00:00         | 0.003       | 0.215 | 0.109   | 0.222      | 0.338
6    | Qube Managed Services | Linux               | 0:00:00         | 0.006       | 0.100 | 0.046   | 0.093      | 0.093
7    | Bigstep               | Linux               | 0:00:00         | 0.006       | 0.266 | 0.071   | 0.143      | 0.143
8    | Hyve Managed Hosting  | Linux               | 0:00:00         | 0.006       | 0.252 | 0.074   | 0.145      | 0.151
9    | Datapipe              | FreeBSD             | 0:00:00         | 0.009       | 0.068 | 0.016   | 0.032      | 0.049
10   | Pair Networks         | FreeBSD             | 0:00:00         | 0.016       | 0.231 | 0.077   | 0.157      | 0.486
Swishmail had the most reliable hosting company site in April 2013, with no failed requests. Swishmail has a presence in three New York data centres, which proved resilient in October when Swishmail stayed online throughout Hurricane Sandy, despite New York being at the centre of much of the damage. Swishmail offers a variety of managed web hosting plans in addition to its core service of enterprise-grade email hosting. Swishmail has been monitored by Netcraft since April 2007.
In second place is INetU which also had no failed requests, but it missed the top spot by just 11ms due to using the average connect time as the tie-breaker. INetU offers dedicated managed hosting services and cloud hosting services from ten data centres in the US and Europe including a new data centre in Seattle. Netcraft has been monitoring INetU since June 2003.
iWeb is in third place again following last month's success; it narrowly missed second place by having a single failed request. iWeb is based in Montréal, where it has four data centres.
Newcomers Bigstep and Midphase made their debut top-10 entries after being monitored for one month and six months respectively. Hyve placed 8th this month, its third appearance since Netcraft began monitoring it in November, having maintained 100% uptime over the past five months.
Swishmail, April's most reliable hosting company, runs its site on FreeBSD. Two other sites in this month's top ten are running FreeBSD – Datapipe, which was top last month and has an impressive 100% uptime over 7 years, and Pair Networks. Both INetU in second place, and Server Intellect in fourth place, are running Windows Server 2008. The remaining five – including iWeb in third place – use Linux.
Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24 hour period.
From a customer's point of view, the percentage of failed requests is more pertinent than outages on hosting companies' own sites, as it gives a pointer to the reliability of routing; this is why we rank the table by fewest failed requests rather than shortest periods of outage. In the event that the numbers of failed requests are equal, sites are ranked by average connection time.
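The ranking rule can be expressed as a simple sort key, using figures taken from the top of the table above:

```python
def rank(sites):
    """Order hosting sites the way the table above is ranked: fewest
    failed requests first, breaking ties on average connect time."""
    return sorted(sites, key=lambda s: (s["failed_pct"], s["connect"]))

# Both Swishmail and INetU had 0.000% failed requests in April, so the
# 62ms vs 73ms connect times decide first place (the 11ms margin noted above).
sites = [
    {"name": "INetU", "failed_pct": 0.000, "connect": 0.073},
    {"name": "Swishmail", "failed_pct": 0.000, "connect": 0.062},
    {"name": "iWeb", "failed_pct": 0.003, "connect": 0.071},
]
print([s["name"] for s in rank(sites)])  # ['Swishmail', 'INetU', 'iWeb']
```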
Information on the measurement process and current measurements is available.
Rank | Company site                           | OS               | Outage hh:mm:ss | Failed Req% | DNS   | Connect | First byte | Total
1    | ocsp.starfieldtech.com                 | Linux            | 0:00:00         | 0.003       | 0.076 | 0.024   | 0.043      | 0.043
2    | ocsp.verisign.com                      | Citrix Netscaler | 0:00:00         | 0.006       | 0.051 | 0.081   | 0.162      | 0.162
3    | ocsp.thawte.com                        | Citrix Netscaler | 0:00:00         | 0.006       | 0.041 | 0.083   | 0.164      | 0.164
4    | ocsp.godaddy.com                       | Linux            | 0:00:00         | 0.015       | 0.161 | 0.025   | 0.044      | 0.044
5    | ocsp.startssl.com/sub/class4/server/ca | Linux            | 0:00:00         | 0.018       | 0.068 | 0.011   | 0.056      | 0.056
6    | evsecure-ocsp.verisign.com             | Citrix Netscaler | 0:00:00         | 0.018       | 0.228 | 0.082   | 0.163      | 0.163
7    | ocsp.trendmicro.com/tmca               | Citrix Netscaler | 0:00:00         | 0.018       | 0.050 | 0.099   | 0.200      | 0.201
8    | evintl-ocsp.verisign.com               | Citrix Netscaler | 0:00:00         | 0.024       | 0.261 | 0.082   | 0.162      | 0.162
9    | ocsp.startssl.com/sub/class2/server/ca | Linux            | 0:00:00         | 0.027       | 0.049 | 0.011   | 0.057      | 0.057
10   | ocsp.xi.tcclass2-ii.trustcenter.de     | Linux            | 0:00:00         | 0.027       | 0.199 | 0.090   | 0.197      | 0.197
The Online Certificate Status Protocol (OCSP) is an alternative method to Certificate Revocation Lists (CRLs) for obtaining the revocation status of an individual SSL certificate. Fast and reliable OCSP responders are essential for both Certificate Authorities (CAs) and their customers — a slow OCSP response will introduce an additional delay before many browsers can start sending and receiving encrypted traffic over an HTTPS connection.
Starfield Technologies, a Go Daddy brand, had the most reliable OCSP responder last month, with only a single failed request and an average connection time of 24ms. Starfield Technologies was founded in 2003 as the technology research branch of Go Daddy, and Go Daddy customers have the option to choose which issuing organisation to use when buying an SSL certificate. Although both Go Daddy and Starfield appear to share the same OCSP responder infrastructure, ocsp.godaddy.com had five failed requests; however, this was still fewer than StartCom, Symantec, and Trend Micro. Both Go Daddy and Starfield issue certificates in all three certificate assurance categories: Domain Validation (DV), Organisation Validation (OV), and Extended Validation (EV). Starfield is most prominent in the EV sector — more than 15% of all EV certificates issued within the group are issued by Starfield — but it remains only a small part of Go Daddy's SSL certificate business: Starfield accounts for just 10% of certificates issued.
StartCom had the shortest average connect time (11ms) of all monitored CAs last month, having moved its OCSP infrastructure at the end of February. StartCom, like Entrust, now delivers its OCSP responses via the Akamai CDN (Content Delivery Network), reducing the OCSP connection overhead to a minimum by serving content from as topologically close to the client as possible. GlobalSign is a CloudFlare evangelist, using CloudFlare's CDN platform for its OCSP and CRL infrastructure as well as its own corporate website.
Many of the monitored OCSP responders are served by Citrix Netscaler devices. Citrix Netscaler is a hardware appliance that provides, amongst other features, load balancing and firewall functions. The use of such load balancing technology is no surprise — a single certificate on a popular site that does not use OCSP stapling could generate a significant number of OCSP requests, causing a CA's responder to experience high volumes of traffic.
In many circumstances, each connection to an HTTPS site could trigger multiple OCSP requests: one for the server's certificate and one for each intermediate certificate. OCSP responses are typically valid for a week, so some caching is possible. Caching can both reduce the burden on OCSP responders and improve the perceived performance of HTTPS websites, but it only helps repeat visits. OCSP stapling is designed to improve performance by allowing the website's server to "staple" the OCSP response to the TLS handshake, removing the need for the client to connect to the CA's OCSP responder.
Netcraft measures and makes available the OCSP and CRL end point response times of all the major Certificate Authorities (CAs). The performance measurements are made at fifteen minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24 hour period.