DigitalOcean becomes the second largest hosting company in the world

DigitalOcean has grown to become the second-largest hosting company in the world in terms of web-facing computers, and shows no signs of slowing down.

The virtual private server provider has shown phenomenal growth over the past two-and-a-half years. First seen in our December 2012 survey, DigitalOcean today hosts more than 163,000 web-facing computers, according to Netcraft's May 2015 Hosting Provider Server Count. This gives it a small lead over French company OVH, which has been pushed down into third place.

Amazing growth at DigitalOcean

DigitalOcean's only remaining challenge will be to usurp Amazon Web Services, which has been the largest hosting company since September 2012. However, it could be quite some time before DigitalOcean threatens to take the top spot: although DigitalOcean started growing at a faster rate than Amazon towards the end of 2013, Amazon still has more than twice as many web-facing computers as DigitalOcean today.

Nonetheless, DigitalOcean seems committed to growing as fast as it can. Since October 2014, when we reported that DigitalOcean had become the fourth largest hosting company, DigitalOcean has introduced several new features to attract developers to its platform. Its metadata service enables Droplets (virtual private servers) to query information about themselves and bootstrap new servers, and a new DigitalOcean DNS service brought more scalability and reliability to creating and resolving DNS entries, allowing near-instantaneous propagation of domain names.
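As a sketch of how a Droplet might use the metadata service: the address below (169.254.169.254) and the exact response shape are assumptions for illustration, so the example parses a canned sample offline rather than making a live request.

```python
import json

# Inside a Droplet, the metadata service is queried over plain HTTP; the
# URL and JSON layout here are illustrative assumptions, not a live call.
METADATA_URL = "http://169.254.169.254/metadata/v1.json"

sample_response = """{
    "droplet_id": 2756294,
    "hostname": "sample-droplet",
    "region": "nyc3",
    "interfaces": {"public": [{"ipv4": {"ip_address": "192.0.2.10"}}]}
}"""

metadata = json.loads(sample_response)
public_ip = metadata["interfaces"]["public"][0]["ipv4"]["ip_address"]
print(metadata["region"], public_ip)  # → nyc3 192.0.2.10
```

A bootstrap script running on a new Droplet could use values like these to configure itself without any hard-coded settings.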

Other companies are also helping to fuel growth at DigitalOcean. Mesosphere created an automated provisioning tool which lets customers use DigitalOcean's resources to create self-healing environments that offer fault tolerance and scalability with minimal configuration. Mesosphere's API makes it possible to manage thousands of Droplets as if they were a single computer, and with DigitalOcean's low pricing models and SSD-only storage, it's understandable how this arrangement can appeal to particularly power-hungry developers.

In January, DigitalOcean introduced its first non-Linux operating system, FreeBSD. Although less commonly used these days, FreeBSD has garnered a reputation for reliability; in the past it was not unusual to see web-facing FreeBSD servers with years of uptime. In April, DigitalOcean launched the second version of its API, which lets developers programmatically control their Droplets and resources within the DigitalOcean cloud by sending simple HTTP requests.
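A hedged sketch of what such an API call might look like: the endpoint, field names and values below are assumptions for illustration, and the request is only constructed here, never sent.

```python
import json

# Hypothetical "create Droplet" request against the v2 API. The token is
# a placeholder, and the payload fields are illustrative assumptions.
API_BASE = "https://api.digitalocean.com/v2"
TOKEN = "your_api_token_here"  # placeholder credential

payload = {
    "name": "example-droplet",
    "region": "fra1",            # the new Frankfurt region
    "size": "512mb",
    "image": "ubuntu-14-04-x64",
}
headers = {
    "Authorization": "Bearer " + TOKEN,
    "Content-Type": "application/json",
}
body = json.dumps(payload)

print("POST", API_BASE + "/droplets")
```

Because every action is an ordinary HTTP request with a JSON body, the same pattern works from any language or from the command line.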

DigitalOcean added a new Frankfurt region in April 2015.

More recently, DigitalOcean introduced a new European hosting region in Frankfurt, Germany. This is placed on the German Commercial Internet Exchange (DE-CIX), which is the largest internet exchange point worldwide by peak traffic, allowing Droplets hosted in this region to offer good connectivity to neighbouring countries. (An earlier announcement of an underwater Atlantis datacenter sadly turned out to be an April Fool's joke, despite the obvious benefits of free cooling).

Even so, Amazon still clearly dwarfs DigitalOcean in terms of variety of features and value-added services. Notably, Amazon offers a larger variety of operating systems on its EC2 cloud instances (including Microsoft Windows), and its global infrastructure is spread much wider. For example, EC2 instances can be hosted in the United States, Ireland, Germany, Singapore, Japan, Australia, Brazil, China or even within an isolated GovCloud (US) region, which allows US government agencies to move sensitive workloads into the cloud whilst fulfilling specific regulatory and compliance requirements. As well as these EC2 regions, Amazon also offers additional AWS Edge Locations used by its CloudFront content delivery network and its Route 53 DNS service.

Yet, as well as its low pricing, part of the appeal of using DigitalOcean could lie within its relative simplicity compared with Amazon's bewilderingly vast array of AWS services (AppStream, CloudFormation, ElastiCache, Glacier, Kinesis, Cognito, Simple Workflow Service, SimpleDB, SQS and Data Pipeline to name but a few). Signing up and provisioning a new Droplet on DigitalOcean is remarkably quick and easy, and likely fulfils the needs of many users. DigitalOcean's consistent and strong growth serves as testament to this, and will make the next year very interesting for the two at the top.

June 2015 Web Server Survey

In the June 2015 survey we received responses from 863,105,652 sites and 5,346,650 web-facing computers, representing an increase of 5.2 million websites and 65,000 additional computers.

Microsoft was responsible for the majority of this month's hostname growth, with a gain of 6.6 million sites, but only contributed 11,700 additional web-facing computers. This has caused Microsoft's market share by hostnames to overtake its declining market share by computers, with both standing at just under 30%.

Apache led this month's web-facing computer growth with a net gain of 24,800, while nginx followed closely with 22,800. This has resulted in nginx's market share increasing by 0.28 percentage points to 12.4%, while Apache's share fell slightly despite showing the largest net growth.

Apache, Microsoft and nginx together account for more than 88% of all web-facing computers in the world, making these vendors by far the most popular choices. However, nginx is the only vendor experiencing consistent increases in market share, up by 3 percentage points over the last year while both Apache and Microsoft have seen losses. The next most commonly used server is lighttpd (pronounced "lighty"), which is used by a mere 0.46% of web-facing computers.

nginx's market share has also been steadily increasing within the top million websites. Its share now stands at 21.9%, and although Apache's use within the million busiest sites has been steadily declining this decade, Apache looks likely to retain the lead for at least a few more years.

Three months after the death of Sir Terry Pratchett, approximately 84,000 websites are now serving the X-Clacks-Overhead: GNU Terry Pratchett header in tribute. Invisible to the majority of users, this HTTP header is a reference to the Discworld novel Going Postal, which features a series of communication towers called the clacks.

In the book, a similar header ("GNU John Dearheart"), is transmitted around the clacks after the inventor's son is killed in an accident while working on a clacks tower. The G means send the message on, N means do not log the message, and U means turn the message around at the end of the line and send it back again — this ensures that the message is transmitted indefinitely, allowing his son to be memorialised forever. Similarly, by transmitting Pratchett's name around the internet, the sites participating in this HTTP header tribute hope to keep his legacy alive. After all, as it says in the book, "A man is not dead while his name is still spoken."
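In the book's scheme, each flag changes how a tower handles a message. A small illustrative model (not a real protocol implementation) of how a site's header might be parsed and the flags interpreted:

```python
def relay_behaviour(flags):
    """Interpret the clacks overhead flags as described in Going Postal.
    This is a toy model for illustration, not a real protocol."""
    return {
        "pass_on": "G" in flags,              # G: send the message on
        "log": "N" not in flags,              # N: do not log the message
        "turn_around_at_end": "U" in flags,   # U: reflect at end of line
    }

header = "X-Clacks-Overhead: GNU Terry Pratchett"
flags, _, name = header.split(": ", 1)[1].partition(" ")
behaviour = relay_behaviour(flags)
print(name, behaviour)
```

A "GNU" message is therefore passed on, never logged, and reflected back at the end of the line, which is what keeps it circulating indefinitely.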

One of the most popular sites to use the X-Clacks-Overhead: GNU Terry Pratchett header is www.theguardian.com, which alone reached more than 5 million unique browsers per day in 2014. With each header taking up 40 bytes of an uncompressed HTTP response, all of the sites involved in the tribute could be generating terabytes of additional bandwidth usage every day.
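That figure can be sanity-checked with a back-of-envelope calculation; the header size comes from the article, but the per-site response volume below is an illustrative guess, not a measured number.

```python
# Rough estimate of the tribute's daily bandwidth cost.
header_bytes = len("X-Clacks-Overhead: GNU Terry Pratchett\r\n")  # 40 bytes
sites = 84000

# Suppose an average participating site serves ~1 million responses a day
# (an assumption; a site like www.theguardian.com serves far more).
responses_per_site_per_day = 1000000

daily_bytes = header_bytes * sites * responses_per_site_per_day
print("%.2f TB/day" % (daily_bytes / 1e12))  # → 3.36 TB/day
```

Even with conservative traffic assumptions, the aggregate easily reaches the terabytes-per-day range.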

Total number of websites

Web server market share

Developer     May 2015       Percent    June 2015      Percent    Change
Apache        336,813,959    39.26%     334,731,035    38.78%     -0.48
Microsoft     247,784,668    28.88%     254,408,179    29.48%     +0.59
nginx         123,697,645    14.42%     122,965,522    14.25%     -0.17
Google         20,103,068     2.34%      20,130,732     2.33%     -0.01

January 2015 Web Server Survey

In the January 2015 survey we received responses from 876,812,666 sites and 5,061,365 web-facing computers.

This is the lowest website count since last January, and the third month in a row which has seen a significant drop in the total number of websites. As was the case in the last two months, the loss was heavily concentrated at just a few hosting companies, and a single IP address that was previously hosting parked websites was responsible for over 50% of the drop.

Microsoft continues to be impacted most by the decline. Having overtaken Apache in the July 2014 survey, its market share now stands at just 27.5%, giving Apache a lead of more than 12 percentage points.

Microsoft's decline seems far less dramatic when looking at the number of web-facing computers that use its server software. A net loss of 6,200 computers this month resulted in its computer share falling by only 0.28 percentage points, while Apache's went up by 0.18 to 47.5%.

These losses included many sites running on Microsoft IIS 6.0, which, along with Windows Server 2003, will reach the end of its Extended Support period in July. Further abandonment of these platforms is therefore expected in the first half of this year, although Microsoft does offer custom support relationships which go beyond the Extended Support period.

Apache made an impressive gain of 22,000 web-facing computers this month. Half of this net growth can be attributed to the Russian social networking company V Kontakte, which hosts nearly 13,000 computers. Almost all of these were running nginx last month, but 11,000 have since defected to Apache, leaving fewer than 2,000 of V Kontakte's computers still using nginx.

OVH is still the second largest hosting company in terms of web-facing computers (although DigitalOcean is hot on its heels), but demand for its own relatively new .ovh top-level domain appears to be waning. Last month, we reported that the number of sites using the new .ovh TLD had shot up from 6,000 to 63,000. These sites were spread across just under 50,000 unique .ovh domains, and the number of domains grew by only 2,000 this month.

Only the first 50,000 .ovh domains were given away for free, while subsequent ones were charged at EUR 0.99. Although this is less than a third of the planned usual price of EUR 2.99, it shows how even a tiny cost can dramatically slow the uptake of domain registrations.

Other new top-level domains which have shown early signs of strong hostname growth include .click, .restaurant, .help, .property, .top, .gifts, .quebec, .market and .ooo, each of which was almost non-existent last month but now numbers in the thousands.

The proliferation of new top-level domains is evidently generating a lot of money for registrars and ICANN, but for some parties it has caused expenditure that was previously unnecessary. Take the new .hosting TLD for example: you would expect this domain to be of interest only to hosting companies, but US bank Wells Fargo has also registered some .hosting domains, including wellsfargo.hosting, wellsfargoadvisors.hosting and wellsfargohomemortgage.hosting. These domains are not used to serve any content, and instead redirect customers to Wells Fargo's main site at wellsfargo.com. The sole purpose of registering these domains appears to be to stop any other party from doing so, which protects the bank's brand and prevents the domains being used to host phishing sites.

In a similar move, Microsoft has also registered several .hosting domains including xbox.hosting, bing.hosting, windows.hosting, skype.hosting, kinect.hosting and dynamics.hosting. Browsing to any of these domains causes the user to be redirected to bing.com, which displays search results for the second-level string (i.e. "xbox", "windows", etc.).

Of course, with many other new TLDs continually popping up, brand protection becomes an increasingly costly exercise. Microsoft has also recently registered hundreds of other nonsensical domains which are used to redirect browsers to bing.com, such as lumia.ninja, lync.lawyer, xboxone.guitars, windowsphone.futbol, microsoft.airforce, azure.luxury, yammer.singles, xboxlive.codes, halo.tattoo, internetexplorer.fishing, and so on.

However, the race to register domain names is not always won by Microsoft — bing.click is a prime example of a domain that someone else got to first. This domain is currently offered for sale, highlighting the fact that it's not just ICANN and the registrars that stand to gain money from the influx of new TLDs.

Total number of websites

Web server market share

Developer     December 2014   Percent    January 2015   Percent    Change
Apache        358,159,405     39.11%     348,460,753    39.74%     +0.63
Microsoft     272,967,294     29.81%     241,276,347    27.52%     -2.29
nginx         132,467,763     14.47%     128,083,920    14.61%     +0.14
Google         20,011,260      2.19%      20,209,649     2.30%     +0.12

July 2015 Web Server Survey

In the July 2015 survey we received responses from 849,602,745 sites and 5,350,323 web-facing computers. This represents a net loss of 13.5 million websites, but a gain of 3,700 additional computers.

One of the most significant changes in July was the net loss of nearly 13,000 web-facing computers powered by Microsoft web server software, accompanied by a decline of more than 29 million hostnames. The loss was predominantly seen for servers running Microsoft IIS 6.0, 7.0 and 7.5. These versions of IIS are used by Windows Server 2003, which is no longer supported, and Windows Server 2008 (including 2008 R2), whose mainstream support ended in January. The latest stable release of IIS (version 8.5) is however continuing to grow, this month increasing by over 9,000 web-facing computers.

This month's decline has brought Microsoft's market share of hostnames down by nearly 3 percentage points, increasing Apache's lead. However, Apache's own market share also fell slightly, largely due to gains made by nginx and Tengine.

nginx gained 8.5 million sites this month, but more remarkably, it gained over 14,000 web-facing computers, with the largest gains in the US, China, Germany and the UK. Compounding Microsoft's losses, nearly 1.8 million existing websites switched from using Microsoft IIS to nginx in July.

nginx also fared well amongst the top million websites, where it gained a further 3,771 sites, causing losses for Apache, Microsoft and Google. Nonetheless, Apache is still used by nearly half of the top million sites, with its market share being almost 26 percentage points ahead of nginx.

Tengine now powers more websites than Google's web server software, after the number of sites using it grew by 7 million to a total of more than 25 million this month. The open source Tengine web server is based on nginx, and is used extensively by the online marketplace Taobao. It currently supports all features found in nginx 1.6.2, plus several other features required by Taobao that could not be implemented as nginx modules. Neither nginx nor Tengine supports HTTP/2 yet, but both were early supporters of Google's SPDY protocol, on which HTTP/2 is based. nginx plans to provide support for HTTP/2 by the end of this year, so Tengine may well follow suit at a later date.

Tengine 2.1.0 is the latest development version of Taobao's nginx fork, but despite being released more than six months ago, only 25,000 websites currently claim to be using it. In contrast, Tengine 1.4.2 — which was released in 2012 and is also a development version — is used by nearly 10 million sites, making it by far the most commonly deployed version. The latest stable release, Tengine 1.5.2, is the second most commonly used version, but accounts for just under 200,000 sites.

As with Apache, more than half of the sites running Tengine do not reveal which version they are running, so the true distribution of version numbers could vary greatly. For instance, 2.7 million of these version-less Tengine websites are used to host Taobao stores directly under the taobao.com domain (e.g. baobeiit.taobao.com). Given that Tengine was created by Taobao in order to provide the features it needs, it is not unreasonable to assume that these sites might be using the latest release, or at least a relatively recent one.
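Version counts like these are derived from the Server header each site returns. A minimal sketch of how such a header might be split into product and version, including the version-less case:

```python
def parse_server_header(value):
    """Split a Server header value into (product, version).
    Returns version None when the server hides it, e.g. a bare
    "Tengine" rather than "Tengine/2.1.0"."""
    product, sep, version = value.partition("/")
    return product, (version if sep else None)

print(parse_server_header("Tengine/1.4.2"))  # → ('Tengine', '1.4.2')
print(parse_server_header("Tengine"))        # → ('Tengine', None)
```

Real survey fingerprinting is more involved than this, but the version-hiding problem it faces is exactly the second case above.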

Despite being used by a large number of sites, Tengine was found on only 4,240 web-facing computers in July 2015. Three-quarters of these computers are located in China, while nearly 10% are located in the US.

Total number of websites

Web server market share

Developer     June 2015      Percent    July 2015      Percent    Change
Apache        334,731,035    38.78%     325,696,514    38.34%     -0.45
Microsoft     254,408,179    29.48%     225,282,713    26.52%     -2.96
nginx         122,965,522    14.25%     131,460,063    15.47%     +1.23
Google         20,130,732     2.33%      20,255,424     2.38%     +0.05

is.gd goes down, takes a billion shortened URLs with it

The popular is.gd URL shortening service has been offline for more than two days, taking with it more than a billion shortened URLs. Shortly before the site disappeared on Sunday, the homepage reported that its links have been accessed nearly 50 billion times.

The shortened links generated are usually not more than 18 characters long, including the protocol http://. These links are commonly used in tweets, emails, and text messages where long URLs are impractical. Despite the fact the shortened links do not work, many previously-created is.gd shortened URLs are still appearing on Twitter.

is.gd is owned and supported by UK hosting provider Memset, which planned to support it as a free service indefinitely. Notably, its sister site, v.gd, is still up and running. Other free services provided by Memset include TweetDownload, TweetDelete and the statistics calculator Tweetails.

For security reasons, both is.gd and v.gd disallow the shortening of URLs which use the data: and javascript: protocols. Nevertheless, the service is still abused by fraudsters who use the shortened URLs to direct victims to phishing sites. Some fraudsters have appended a query string to the shortened URL in an attempt to make it look similar to those used by the phishing target. For example, the following is.gd URL was used to redirect victims to a Taobao phishing site:

http://is.gd/Tb###U?2.taobao.com/item.htm?spm=2007.1000337

Throughout April, is.gd was the fifth phishiest URL shortening service. By far the phishiest was tinyurl.com, which pointed to 17 times as many phishing sites, accounting for 60% of all phishing activity amongst the top five URL shortening services. Privately-held bit.ly, Google's goo.gl and GoDaddy's x.co also pointed to more phishing sites than is.gd.

Three years ago, the is.gd service suffered a shorter outage of a few hours. This was caused by the failure of some of the virtual machines in its frontend cloud, which were responsible for accepting HTTP requests from a load balancer.

Update 21/05/2014: is.gd is now back online. An explanation for the outage can be found at http://is.gd/news.php

Certificate revocation: Why browsers remain affected by Heartbleed

More than 80,000 SSL certificates were revoked in the week following the publication of the Heartbleed bug, but the certificate revocation mechanisms used by major browsers could still leave Internet users vulnerable to impersonation attacks. Little has changed since Netcraft last reported on certificate revocation behaviour.

Why is revocation necessary?

The Heartbleed bug made it possible for remote attackers to steal private keys from vulnerable servers. Most web server access logs are unlikely to show any evidence of such a compromise, and so certificates used on previously-vulnerable web servers should be replaced without delay.

However, even if the certificate is replaced, the secure site could still be vulnerable. If the pre-Heartbleed certificate had been compromised, it will remain usable by an attacker until its natural expiry date, which could be years away. A correctly positioned attacker, with knowledge of the old certificate's private key and the ability to intercept a victim's internet traffic, can use the old certificate to impersonate the target site.

Certificate authorities can curtail the lifetime of the compromised certificate by revoking the certificate. In principle, a revoked certificate should not be trusted by browsers, which would protect users from misuse of the certificate. The realities of revocation behaviour in browsers, however, could leave some internet users vulnerable to attack with compromised certificates.

The Heartbleed bug is currently the largest cause of certificate revocations, but other reasons for revoking certificates can include the use of weak signature algorithms, fraudulent issuance, or otherwise breaching the requirements laid out by the CA/Browser Forum.

How does revocation checking work?

There are two main technologies for browsers to check the revocation status of a particular certificate: the Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRLs). OCSP provides real-time revocation information about an individual certificate from an issuing certificate authority, whereas CRLs provide a list of revoked certificates which is typically retrieved by clients less frequently.
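The difference between the two mechanisms can be sketched with a toy model: a CRL is a list fetched periodically and consulted locally, whereas OCSP asks the CA about one serial number at a time. The serial numbers and responder contents below are invented.

```python
# Revoked serials from a previously downloaded CRL (invented values).
crl = {"03:5f:11", "0a:22:9c"}

def check_via_crl(serial, crl):
    """CRL check: a purely local lookup against the cached list."""
    return "revoked" if serial in crl else "good"

def check_via_ocsp(serial, responder):
    """OCSP check: in reality an HTTP request to the CA's responder;
    here a dictionary stands in for the responder."""
    return responder.get(serial, "good")

ocsp_responder = {"03:5f:11": "revoked"}
print(check_via_crl("03:5f:11", crl))              # → revoked
print(check_via_ocsp("1b:44:e0", ocsp_responder))  # → good
```

The practical consequences discussed below (CRL size, OCSP blockability, privacy) all follow from this local-lookup versus per-certificate-query distinction.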

Of the major browsers, only Internet Explorer and Opera behave correctly in a wide variety of revocation scenarios, including where end-entity and intermediate certificates have been revoked only via a CRL or only via OCSP. The remaining browsers — Google Chrome, Safari, and Firefox — all have less consistent behaviour when checking the revocation status of SSL certificates.


Firefox blocks access to certificates which have been revoked via OCSP.

OCSP, the more recent standard, is effectively the revocation method of choice on the internet: providing the URL to a CRL in individual certificates is optional in the Baseline Requirements, and only Opera and Internet Explorer consistently check them when OCSP is not available. The latest version of Firefox removed the last vestiges of CRL checking: previously CRLs were checked only for EV certificates when OCSP failed.

Although CRLs have some disadvantages — their size for one — they do offer some key advantages over OCSP: CRLs can be downloaded ahead of time on a trusted network and, unlike OCSP, CRLs do not reveal which sites you are visiting to the certificate authority. Google's decision to disable OCSP checking by default was also partly due to these privacy concerns.

OCSP stapling is an alternative approach to distributing OCSP responses. By including a recent OCSP response in its own TLS/SSL handshake, a website can remove the need for each visitor to make a separate connection to the certificate authority. As well as improving performance, stapled responses remove the privacy concerns surrounding standard OCSP leaking user IPs to certificate authorities. However, only 24% of all SSL certificates found in the most recent Netcraft SSL survey were used on websites that stapled an OCSP response.

Google has shunned the traditional methods of revocation: whilst Chrome does check the status of EV certificates, revocation checking is not enabled by default for any other type of certificate. Instead, Chrome uses its own updating mechanism to maintain an aggregated list of revoked certificates gathered by crawling CRLs. This is a subset of all revocations and is intended to cover only the most important.

Is revocation checking useful for certificates potentially compromised by Heartbleed?

As explained by Adam Langley, online revocation checking can easily be blocked if the compromised certificate is being used in a man-in-the-middle attack. An attacker able to intercept traffic to the targeted website will likely also be able to block OCSP requests. If the victim is using a browser which does not hard-fail when an OCSP response isn't received (the default setting of all major browsers), the attacker will be able to use a revoked certificate as normal.

However, the same logic does not apply to CRLs: if the CRL was downloaded earlier on a trusted network, a revoked certificate used in a man-in-the-middle attack will not be trusted. This requires the certificate to have been revoked before the CRL was downloaded; however, many CRLs can be cached for a significant length of time (up to 10 days in the Baseline Requirements). If a new CRL is needed, its download can be blocked just as effectively as an OCSP request. When CRLs are used, an attacker cannot rely on the certificate passing validation: a subset of users, those with cached CRLs, will be prevented from continuing on the attacker's site. The same logic also applies to Google's CRLSets, including the ability to block updates.
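The caching behaviour amounts to a simple freshness check; the 10-day figure is the Baseline Requirements limit, and the timestamps below are invented for illustration.

```python
from datetime import datetime, timedelta

# The Baseline Requirements allow a CRL to be cached for up to 10 days,
# so a client can keep relying on a previously downloaded copy.
MAX_CRL_AGE = timedelta(days=10)

def crl_still_usable(this_update, now):
    """Return True while a cached CRL is within its permitted lifetime."""
    return now - this_update <= MAX_CRL_AGE

downloaded = datetime(2014, 5, 1, 12, 0)
print(crl_still_usable(downloaded, datetime(2014, 5, 8, 12, 0)))   # → True
print(crl_still_usable(downloaded, datetime(2014, 5, 12, 12, 0)))  # → False
```

Inside that window a man-in-the-middle attacker gains nothing by blocking the CRL download, because the client never needs to make one.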

As such, despite the difficulties of revocation checking in the MITM scenario, it is still critical for site owners to revoke certificates. If the certificate is revoked, an attacker's job is made that much more difficult: he must choose sites with certificates issued without a CRL distribution point (which is permissible under the Baseline Requirements) or that are not covered by Google's CRLSets, and his victims must be using a browser that checks neither. Certificates that are not revoked are unlikely to ever be included in more effective revocation methods such as CRLSets.

Should I enable revocation checking in Chrome?

Whilst OCSP is easily blocked in man-in-the-middle attacks, if revocation checking is enabled, Chrome (on both Windows and Linux) will check CRLs for certificates that do not support OCSP. It is likely that you will have cached CRLs for websites you have visited recently — if you move onto an untrusted network, you will be protected by the CRLs that were downloaded earlier. Over 4% of currently valid certificates are only revocable by CRL, including login.skype.com. Unfortunately, for the majority of sites, where OCSP is available, CRLs will not be downloaded, any OCSP requests made can be blocked, and the attacker can continue as if the certificate were not revoked.

Perfect OCSP checks: A chicken and egg problem

By default, all browsers take the "soft-fail" approach to OCSP checks: a revoked certificate will be regarded as valid if the OCSP request fails. While this sounds like unsafe behaviour, browser vendors are reluctant to force a hard-fail approach because of the problems it can cause. For example, paid-for internet connections, such as WiFi hotspots or hotel room connections, that use captive portals are one of the major chicken-and-egg scenarios: before a user can access the internet, he must visit a secure payment page, but this would fail because the OCSP responder used by the site's certificate cannot be reached until after he has paid. There are methods to resolve this problem, including OCSP stapling and less restrictive blocking; however, such solutions are unlikely to be adopted quickly.
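The soft-fail versus hard-fail distinction boils down to a single policy decision when no OCSP response arrives, sketched here as a toy decision function:

```python
def trust_certificate(ocsp_status, hard_fail=False):
    """Decide whether to proceed given an OCSP result.
    ocsp_status is 'good', 'revoked', or None when no response arrived
    (e.g. blocked by an attacker, or by a captive portal)."""
    if ocsp_status == "revoked":
        return False
    if ocsp_status is None:
        # Soft-fail browsers proceed anyway; hard-fail ones refuse.
        return not hard_fail
    return True

print(trust_certificate(None))                  # → True  (soft-fail default)
print(trust_certificate(None, hard_fail=True))  # → False
print(trust_certificate("revoked"))             # → False
```

The captive-portal problem is visible in the second call: a hard-fail browser refuses the payment page precisely when the responder is unreachable.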


Firefox can be forced to use a hard-fail approach to OCSP checking, but this setting is not enabled by default.

It is critical that OCSP responders have 100% uptime, as any outage whatsoever could provide a window of opportunity to misuse compromised revoked certificates. Netcraft publishes a list of OCSP responder sites ordered by failures over the past day. Partly due to the reliability concerns, the Mozilla Foundation suggests that there is some way to go before a hard-fail approach can be enabled by default.

Despite the drawbacks of soft-fail OCSP checking, there are circumstances in which a soft-fail approach can still be useful. For example, it might be desirable to revoke a domain-validated certificate which had been issued to a deceptive domain name (e.g. paypol.com), or when a domain changes hands. In the absence of any man-in-the-middle attackers, soft-fail OCSP is likely to be effective.

Irrevocable certificates

Browsers that do not support CRLs, such as Firefox, are not able to determine whether or not the 4% of certificates without OCSP responder URLs have been revoked. Only if an OCSP response has been stapled to the TLS connection can such browsers check the revocation status. Given the majority of certificates (76%) are served without a stapled OCSP response, such certificates are effectively irrevocable for a large proportion of internet users. As a result, the compromised certificates can be misused for fraud up until their natural expiry dates. A smaller number of certificates fail to specify URLs for either method of revocation, which makes them completely irrevocable in all browsers which rely on these technologies.

It is likely that browser vendors will be forced to take additional steps to ensure that irrevocable certificates are correctly regarded as invalid. Such measures were taken in 2011, when Mozilla released new versions of Firefox which explicitly blacklisted some of the fraudulent certificates generated by the Comodo Hacker, even though the affected certificates had already been revoked by the issuer. One of the fraudulent certificates released to the public impersonated Firefox's addons site at addons.mozilla.org. Google's CRLSet gives it the ability to distribute such revocations without relying on any certificate authority to revoke the certificate.

Accenture was using a CRL-only Extended Validation certificate on its website at https://apps.accenture.com using a vulnerable version of OpenSSL (1.0.1e). The potentially compromised certificate was subsequently replaced with a new certificate issued on 14 April, and the previous certificate (serial number 0x0100000000013b03d6adfeff5c37) was revoked. The serial number was added to the CRL at http://crl.omniroot.com/PublicSureServerEV.crl. If an attacker had managed to compromise the private key used by the old certificate, he could continue impersonating apps.accenture.com with a seemingly valid SSL certificate until its natural expiry date in November 2014 for victims using browsers which do not check CRLs, which includes Firefox 28. The only indication that revocation checking has not been completed is the lack of the EV browser cues. This certificate is present in Google's CRLSet, and so Google Chrome users are protected against its misuse.

A currently deployed EV certificate without OCSP in Firefox 28 (left). The EV browser cues are not displayed in Firefox as the revocation status has not been checked. Internet Explorer (right), which has checked the revocation status on the CRL, does display the additional green bar with the company's name.

Apple's Safari web browser also does not perform any CRL revocation checks for Extended Validation certificates despite doing so for non-EV certificates. This behaviour may be based on the Baseline Requirements and the EV guidelines, which have mandated that EV certificates contain an OCSP responder URL for some time. As a consequence, the certificate previously used on apps.accenture.com is also irrevocable in Safari. In addition, despite making no revocation checks, Safari retains the EV browser cues rather than downgrading to standard SSL.

Problems revoking intermediate certificates

Digital certificates are verified using a chain of trust. At the top of the chain is the root CA's public key, which is built into the browser. The corresponding private keys can be used by the root CA to sign an intermediate certificate one step down the chain. At the very bottom of the chain is the certificate for the website itself, which is signed by the sub-CA whose intermediate certificate is immediately above the site's certificate. A single chain of trust can have multiple intermediate certificates chained together in order to form a path from the website's certificate to a trusted root.
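The chain walk described above can be modelled in a few lines. The certificate names below are invented, and real validation also verifies signatures, validity periods, and revocation at each level, which this toy model omits.

```python
# A toy model of chain validation: each certificate names its issuer,
# and the walk must end at a root the client already trusts.
trusted_roots = {"Example Root CA"}
chain = [
    {"subject": "www.example.com", "issuer": "Example Sub CA"},
    {"subject": "Example Sub CA", "issuer": "Example Root CA"},
]

def chain_reaches_trusted_root(chain, trusted_roots):
    # Every certificate's issuer must match the subject of the next
    # certificate up the chain.
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False  # broken link in the chain
    # The topmost certificate must be signed by a built-in root.
    return chain[-1]["issuer"] in trusted_roots

print(chain_reaches_trusted_root(chain, trusted_roots))  # → True
```

Because trust flows down this chain, revoking the intermediate ("Example Sub CA" here) should invalidate everything below it, which is exactly the case the following paragraphs show browsers mishandling.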

An example of an SSL certificate's chain. This one is used by www.mcafeecustomerrewards.com.

Browsers must trust each level of the chain: all intermediate certificates in the chain must ultimately be signed by a root CA in order for the website's certificate to be trusted. Most root certificate authorities are understandably paranoid about the security of their private keys, and so root certificates are rarely compromised directly. Smaller certificate authorities, however, may not have as much funding or expertise, and may be more likely to suffer from security breaches which could result in the disclosure of an intermediate certificate's private key.

If the private key of a sub-CA's intermediate certificate is leaked, it has serious implications for the whole internet. A fraudster could use the certificate's private key to issue arbitrary publicly trusted certificates, essentially allowing him to impersonate any website on the planet. It is imperative that compromised intermediate certificates are immediately revoked, but it is difficult to achieve this in practice.

For example, when a Firefox user visits www.mcafeecustomerrewards.com, a website which has a non-EV certificate, Firefox will only make an OCSP request for the website's certificate. This means that the revoked intermediate certificate (McAfee Public CA v1) will continue to be trusted by Firefox, and the only way to resolve this would be for Mozilla to release a new version of Firefox. The same behaviour is seen in Google Chrome unless revocation checking is enabled, as the intermediate certificate is not in Google's CRLSet. When Chrome has revocation checking turned on, the certificate is correctly marked as revoked.

    Serial Number: 55A1BA093A529CB41F12EB6A1FF71EF6
        Revocation Date: Oct  7 14:03:19 2013 GMT
        CRL entry extensions:
            X509v3 CRL Reason Code:
                Cessation Of Operation
            Invalidity Date:
                Oct  7 14:03:09 2013 GMT

The entry for McAfee Public CA v1 in http://www.rsasecurity.com/products/keon/repository/certificate_status/RSA_Security_2048_v3.CRL.

www.mcafeecustomerrewards.com uses a certificate which has been signed by a revoked intermediate certificate (McAfee Public CA v1). Firefox displays the site without showing any warnings.

Google Chrome revocation bug

Although Google Chrome does not perform OCSP checks by default, it does perform them in the case of Extended Validation certificates (unless the certificate is already covered by the CRLSet). However, the Linux version of Google Chrome does not prevent access to sites using a revoked EV certificate when not covered by the CRLSet. Despite the browser sending an OCSP request and receiving a 'revoked' response, it mishandles the result and fails to block access. Instead, the EV browser cues (the green bar) are removed. Netcraft reported this apparent bug to Google in August 2013, but it was classed as low severity and has yet to be fixed on Linux.

The Windows version of Chrome (on left) behaves correctly and blocks access to a site with a revoked EV certificate. However, Chrome on Linux (on right) does not display any errors when a site uses a revoked EV certificate; it merely downgrades the UI from EV to standard SSL.

Where can we go from here?

Each of the currently available revocation methods has significant disadvantages: CRLs are potentially very large; OCSP can be blocked easily; and CRLSets are not intended to provide complete coverage. For those looking to move towards hard-fail, OCSP stapling, despite being far from pervasive, could offer the answer. When combined with must-staple, currently an Internet draft, it would enable per-site, opt-in hard-fail behaviour. However, this solution is limited by the length of time an attacker can use a cached 'good' OCSP response saved just before the certificate was revoked (the Baseline Requirements limit the validity to 10 days).

In the meantime, CRLSets, if they provided wider coverage, would be a more robust alternative to soft-fail OCSP checking. Mozilla is also looking to join Google by moving towards a CRLSet-like mechanism for some of the revocation checking in Firefox.

Even soft-fail OCSP checking can be made more robust by removing any secure indicators (such as padlocks) when visiting a site without up-to-date revocation information.