February 2016 Web Server Survey

In the February 2016 survey we received responses from 933,892,520 sites and 5,796,210 web-facing computers.

Microsoft has edged closer to Apache, with an increase of 16.1 million sites bringing its total up by 6.14% to 279 million. Apache's relatively modest growth of 0.66% has put Microsoft within 3 percentage points of Apache's leading market share of 32.8%.

In terms of web-facing computers, Apache maintains a much clearer lead with a 47.8% share of the market. Microsoft also takes second place by this metric, albeit with a share that is more than 20 percentage points behind Apache. However, both Apache and Microsoft suffered small losses in market share as nginx continues to exhibit strong growth: This month, nginx gained 21,100 computers, increasing its market share by 0.26 points to 13.96%.

nginx 1.9.11 mainline was released on 9th February. This version introduced support for dynamic modules, enabling selective loading of both third-party and some native modules at runtime. In previous versions, nginx modules had to be statically linked into an nginx binary built from source, causing the module to be loaded every time even if it was not going to be used.
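As a rough sketch of how this works in practice, a native module can now be compiled as a shared object at build time and loaded selectively at runtime (the GeoIP module and paths here are illustrative):

    # build nginx with the GeoIP module compiled as a shared object
    # (third-party modules can be built with --add-dynamic-module=/path/to/module)
    ./configure --with-http_geoip_module=dynamic

    # nginx.conf: load the module at runtime, from the main context
    load_module modules/ngx_http_geoip_module.so;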

The latest version of Microsoft Internet Information Services, IIS 10.0, is still very rare in the wild, as its primary deployment platform (Windows Server 2016) has yet to be released. Fewer than 5,000 websites are currently using IIS 10.0, and these are being served either from technical preview versions of Windows Server 2016, or from Windows 10 machines.

The latest technical preview version of Windows Server 2016 also supports a headless deployment option known as Nano Server. This is a stripped-down version of Windows Server that omits the graphical interface and a few other features which are not essential for modern web applications. As a result, it typically requires fewer updates to be installed – and consequently, fewer reboots, too.

Despite losing a small amount of market share, Apache also showed a reasonable growth of 15,600 computers. Similar to last month, a significant proportion of this growth was due to the appearance of more Western Digital My Cloud consumer storage devices.

The total number of My Cloud devices in the survey now stands at 583,400, which is 68,400 more than last month; however, the number of devices that are exposed directly to the internet grew by only 11,100.

Western Digital is using Amazon AWS to host the servers that proxy requests to My Cloud devices in Relay mode. Most of these relay servers have been configured to serve a few thousand devices each, and so the 331,000 devices that are currently using Relay mode contribute fewer than 200 computers towards Apache's total.

Interestingly, while most web-facing My Cloud devices are hosted in the US, more than half of the *.wd2go.com hostnames used by the relay servers are hosted in Amazon's EU regions.

Total number of websites

Web server market share


September 2016 Web Server Survey

In the September 2016 survey we received responses from 1,285,759,146 sites and 6,118,785 web-facing computers, reflecting large gains in both metrics: 132 million additional sites, and 138,000 more computers.

Microsoft accounted for the majority of this month's website growth with the largest gain of 97 million sites, although it showed only modest increases of 5,200 web-facing computers and 693,000 active sites.

Apache was responsible for most of this month's additional web-facing computers, increasing its count by 87,000 to 2.8 million (+3.2%). Similarly, nginx made a 3.0% gain of 30,000 computers. However, Microsoft's 0.3% gain was not enough to stop its share falling by half a percentage point to 25.3% as a result of the gains made by Apache and nginx.

Although nginx made a healthy gain in web-facing computers, it lost more than 5 million active sites and 5,600 sites within the top million. 27.6% of the busiest million sites now use nginx (down 0.56 percentage points from last month), while Apache retains its lead with a 42.5% share.

Along with nginx, all of the major web server vendors suffered losses within the top million sites, largely due to the growth of OpenResty this month. More than 10,000 of the top million sites are now using OpenResty, compared with fewer than 4,000 last month, after millions of Tumblr blogs switched from nginx. As well as tumblr.com, basecamp.com — the home of the Basecamp web-based project management tool — ranks amongst the most visited sites to use OpenResty.

Tumblr's adoption of OpenResty has caused the web server to leap up the rankings to become the seventh largest web server vendor by websites, and fifth by active sites. This month, 87% of all OpenResty sites appear under the tumblr.com domain.

Although most OpenResty sites reside under the tumblr.com domain, the number of unique domains using OpenResty also increased noticeably this month.

Switching from nginx to OpenResty is not as dramatic a change as moving to, say, Apache or Microsoft IIS. The OpenResty web application platform is built around the standard nginx core, which offers some familiarity, as well as allowing the use of third-party nginx modules. One of the key additional features provided by OpenResty is the integration of the LuaJIT compiler and many Lua libraries – this gives scope for high performance web applications to run completely within the bundled nginx server, where developers can take advantage of non-blocking I/O.
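As a minimal sketch of what this integration looks like (the location and message are illustrative), a request can be answered entirely from Lua running inside the bundled nginx worker:

    # OpenResty nginx.conf fragment: generate a response from Lua,
    # handled on nginx's non-blocking event loop
    location /hello {
        content_by_lua_block {
            ngx.say("Hello from LuaJIT inside nginx")
        }
    }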

Another web server that has gained prominence over the past year is Cowboy, a small and fast modular HTTP server written in Erlang. Optimised for low latency and low memory usage, it is currently the fifth most common web server software installed on web-facing computers that accept HTTP connections. Most of the computers used by Cowboy servers are powered by the Heroku Cloud Application Platform and hosted at Amazon Web Services.

Total number of websites

Web server market share


HTTP Public Key Pinning: You’re doing it wrong!

HTTP Public Key Pinning (HPKP) is a security feature that can prevent fraudulently issued TLS certificates from being used to impersonate existing secure websites.

Our previous article detailed how this technology works, and looked at some of the sites that have dared to use this powerful but risky feature. Notably, very few sites are making use of HPKP: Only 0.09% of the certificates in Netcraft's March 2016 SSL Survey are served with HPKP headers, which equates to fewer than 4,100 certificates in total.

But more surprisingly, around a third of these sites are using the HPKP header incorrectly, which effectively disables HPKP. Consequently, fewer than 3,000 certificates are actually protected by HPKP.

Firefox's developer console reveals that this site has failed to include a backup pin, and so its HPKP policy is ignored by the browser.
Failing to include a backup pin is the most common type of mistake made by sites that try to use HPKP.

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but it is easy for this protection to backfire with severe consequences. Fortunately, most misconfigurations simply mean that a site's HPKP policy will be ignored by browsers. The site's administrators might not realise it, but this situation is essentially the same as not using HPKP at all.

How can it go wrong?

Our previous article demonstrated a few high-profile websites that were using HPKP to varying degrees. However, plenty of other sites have bungled HPKP to the extent that it simply does not work.

Zero max-age

Every HPKP policy must specify a max-age directive, which suggests how long a browser should regard the website as a "Known Pinned Host". The most commonly used max-age value is 5184000 seconds (60 days). Nearly 1,200 servers use this value, while around 900 use 2592000 seconds (30 days).

But around 70 sites feature pointlessly short max-age values, such as 5 or 10 seconds. These durations are far too short to be effective, as a victim's browser would rapidly forget about these known pinned hosts.

Additionally, a few sites explicitly specify a max-age of zero along with their public key pins. These sites are therefore not protected by HPKP, and in some cases are needlessly sending this header with every response. It is possible that they are desperately trying to remove a previously set HPKP policy, but this approach obviously cannot be relied upon to remove cached pins from browsers that do not visit the site in the meantime. These sites would therefore have to continue using a certificate chain that conforms to their previous HPKP policy, or run the risk of locking out a few stragglers.

One of the sites that sets a zero max-age is https://vodsmarket.com. Even if this max-age were to be increased, HPKP would still not be enabled because there is only one pinned public key:

Public-Key-Pins: pin-sha256="sbKjNAOqGTDfcyW1mBsy9IOtS2XS4AE+RJsm+LcR+mU="; max-age=0;

Another example can be seen on https://wondershift.biz, which pins two certificates' public keys. Again, even if the max-age were to be increased, this policy would still not take effect because there are no backup pins specified (both of the pinned keys appear in the site's certificate chain):

Public-Key-Pins: pin-sha256="L7mpy8M0VvQcWm7Yyx1LFK/+Ao280UZkz5U38Qk5G5g=";

Wrong pin directives

Each pinned public key must be specified via a separate pin-sha256 directive, and each value must be a SHA256 hash; but more than 1% of servers that try to use HPKP fail to specify these pins correctly.
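Each pin-sha256 value should be the Base64-encoded SHA256 hash of a certificate's DER-encoded Subject Public Key Information. One common way to compute it, assuming certificate.pem holds the certificate in PEM form, is:

    openssl x509 -in certificate.pem -pubkey -noout \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256 -binary \
      | openssl enc -base64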

For example, the Department of Technology at Aichi University of Education exhibits the following header on https://www.auetech.aichi-edu.ac.jp:

Public-Key-Pins: YEnyhAxjrMAeVokI+23XQv1lzV3IBb3zs+BA2EUeLFI=";

This header appears to include a single public key hash, but it omits the pin-sha256 directive entirely. No browser will make any sense of this attempted policy.

In another example, the Fast Forward Imaging Customer Interface at https://endor.ffwimaging.com does something very peculiar. It uses a pin-sha512 directive, which is not supported by the RFC – but in any case, the value it is set to is clearly not a SHA512 hash:

Public-Key-Pins: pin-sha512="base64+info1="; max-age=31536000; includeSubDomains

Some sites try to use SHA1 public key hashes, which are also unsupported:

Public-Key-Pins: pin-sha1='ewWxG0o6PsfOgu9uOCmZ0znd8h4='; max-age=2592000; includeSubdomains

This one uses pin-sha instead of pin-sha256:

Public-Key-Pins: pin-sha="xZ4wUjthUJ0YMBsdGg/bXHUjpEec5s+tHDNnNtdkwq8=";
    max-age=5184000; includeSubDomains

And this one refers to the algorithm "SHA245", which does not exist:

Public-Key-Pins: pin-sha245="pyCA+ftfVu/P+92tEhZWnVJ4BGO78XWwNhyynshV9C4=";
    max-age=31536000; includeSubDomains

The above example was most likely a typo, as is the following example, which specifies a ping-sha256 value:

Public-Key-Pins: ping-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU=";
    max-age=2592000; includeSubDomains

These are careless mistakes, but it is notable that these types of mistake alone account for more than 1% of all certificates that set the Public-Key-Pins header. The net effect of these mistakes is that HPKP is not enabled on these sites.

Only one pinned public key

As we emphasised in our previous article, it is essential that a secure site should specify at least two public key pins when deploying HPKP. At least one of these should be a backup pin, so that the website can recover from losing control of its deployed certificate. If the website owner still possesses the private key for one of the backup certificates, the site can revert to using one of the other pinned public keys without any browsers refusing to connect.

But 25% of servers that use HPKP specify only one public key pin. This means that HPKP will not be enabled on the sites that use these certificates.

To prevent sites from inadvertently locking out all of their visitors, and to force the use of backup pins, browsers will only cache a site's pinned public keys if the Public-Key-Pins header contains two or more hashes. At least one of these must correspond to a certificate in the site's certificate chain, and at least one must be a backup pin (if a hash cannot be found in the certificate chain, the browser assumes it is a backup pin without verifying its existence).
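A well-formed policy therefore looks something like the sketch below, where the first pin matches a key in the served certificate chain and the second belongs to a backup key held offline (both values are placeholders for real Base64-encoded SHA256 hashes):

    Public-Key-Pins: pin-sha256="<hash of a key in the certificate chain>";
        pin-sha256="<hash of an offline backup key>";
        max-age=5184000; includeSubDomains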

https://xcloud.zone is an example of a site that only sets one public key pin:

Public-Key-Pins: pin-sha256="DKvbzsurIZ5t5PvMaiEGfGF8dD2MA7aTUH9dbVtTN28=";
    max-age=2592000; includeSubDomains

This single pin corresponds to the subscriber certificate issued to xcloud.zone. Despite the 30-day max-age value, this lonely public key hash will never be cached by a browser. Consequently, HPKP is not enabled on this site, and the header might as well be missing entirely.

No pins at all

As well as the 1,000+ servers that only have one pinned public key, some HPKP headers neglect to specify any pins at all, and a few try to set values that are not actually hashes (which has the same effect as not setting any pins at all). For example, the Hide My Ass! forum at https://forum.hidemyass.com sets the following:

Public-Key-Pins: pin-sha256="<Subject Public Key Information (SPKI)>";
    max-age=2592000; includeSubDomains

The ProPublica SecureDrop site at https://securedrop.propublica.org also made a subtle mistake last month by forgetting to enclose its pinned public key hashes in double-quotes:

Public-Key-Pins: max-age=86400;

The HPKP RFC mandates that the Base64-encoded public key hashes must be quoted strings, so the above policy would not have worked. ProPublica has since fixed this problem, as well as adding a third pin to the header.

ProPublica is an independent newsroom that produces investigative journalism in the public interest. It provides a SecureDrop site to allow tips or documents to be submitted securely; however, until recently the HPKP policy on this site was ineffectual.

If companies that specialise in online privacy and secure anonymous filesharing are making these kinds of mistake on their own websites, it's not surprising that so many other websites are also getting it wrong.

At least two pins, but no backup pins

A valid HPKP policy must specify at least two pins, and at least one of these must be a backup pin. A browser will assume that a pin corresponds to a backup certificate if none of the certificates in the site's certificate chain correspond to that pin.

The Samba mailing list website fails to include any backup pins. Consequently, its HPKP policy is not enforced.

The Samba mailing lists site at https://lists.samba.org specifies two pinned public key hashes, but both of these appear in its certificate chain. Consequently, a browser will not apply this policy because there is no evidence of a backup pin. HPKP is effectively disabled on this site.

Incidentally, the Let's Encrypt Authority X1 cross-signed intermediate certificate has the most commonly pinned public key in our survey: more than 9% of servers using HPKP feature it in their set of pins. It should never be pinned exclusively, however, because Let's Encrypt is not guaranteed to keep using the X1 certificate. Indeed, just a few days ago, Let's Encrypt started issuing all certificates via its new Let's Encrypt Authority X3 intermediate certificate in order to remain compatible with older Windows XP clients. Fortunately, the new X3 certificate uses the same key as the X1 certificate, and so any site that had pinned the public key of the X1 certificate will continue to be accessible when it renews its subscriber certificate, without having to change its current HPKP policy.

The next most common pin belongs to the COMODO RSA Domain Validation Secure Server CA certificate. This pin is used by more than 6% of servers in our survey, all of which – despite the use of HPKP – could be vulnerable to man-in-the-middle attacks if Comodo were to be hacked again.

Pinning only the public keys of subscriber certificates would offer the best security against these kinds of attack, but it is fairly common to also pin the keys of root and intermediate certificates to reduce the risk of "bricking" a website in the event of a key loss. This approach is very common among Let's Encrypt customers, as the default letsencrypt client software generates a new key pair each time a certificate is renewed. If the public key of the subscriber certificate were pinned, the pins would no longer match once the certificate had been renewed.

Setting HPKP policies over HTTP

Some sites set HPKP headers over unencrypted HTTP connections, which is also ineffectual. For example, the Internet Storm Center website at www.dshield.org sets the following header on its HTTP site:

Public-Key-Pins: pin-sha256="oBPvhtvElQwtqQAFCzmHX7iaOgvmPfYDRPEMP5zVMBQ=";
    max-age=2592000; report-uri="https://isc.sans.org/badkey.html"

The Public Key Pinning Extension for HTTP RFC states that browsers must ignore HPKP headers that are received over non-secure transport, and so the above header has no effect other than to consume additional bandwidth.

2.2.2.  HTTP Request Type
  Pinned Hosts SHOULD NOT include the PKP header field in HTTP
  responses conveyed over non-secure transport.  UAs MUST ignore any
  PKP header received in an HTTP response conveyed over non-secure
  transport.

One very good reason for ignoring HPKP policies that are set over unencrypted connections is to prevent "hostile pinning" by man-in-the-middle attackers. If an attacker were to inject a set of pins that the site owner does not control—and if the browser were to blindly cache these values—he would be able to create a junk policy on behalf of that website. This would prevent clients from accessing the site for a long period, without the attacker having to maintain his position as a man-in-the-middle.

If a visitor instead browses to https://www.dshield.org (using HTTPS), an HSTS policy is applied which forces future requests to use HTTPS. The HTTPS site also sets an HPKP header, which is then accepted and cached by compatible browsers. However, as the HTTP site does not automatically redirect to the HTTPS site, it is likely that many visitors will never benefit from these HSTS or HPKP policies, even though they are correctly implemented on the HTTPS site.

In another bizarre example, HPKP headers are set by the HTTP site at http://www.msvmgroup.com, even though there is no corresponding HTTPS website (it does accept connections on port 443, but does not present a subscriber certificate that is valid for this hostname).

Not quite got round to it yet...

A few sites that use the Public-Key-Pins header have not quite got around to implementing it yet, such as https://justamagic.ru, which sets the following value:

Public-Key-Pins: TODO

Using HPKP headers to broadcast skepticism

One security company's website – https://websec-test.com – uses the Public-Key-Pins header to express its own skepticism over the usefulness of HPKP:

Public-Key-Pins: This is like the most useless header I have ever seen.
    Preventing MITM, c'mon, whoever can't trust his own network shouldn't
    enter sensitive data anywhere.

Violation reports that will never be received

The Public-Key-Pins header supports an optional report-uri directive. In the event of a pin validation failure, the user's browser should send a report to this address, in addition to blocking access to the site. These reports are obviously valuable, as they will usually be the first indication that something is wrong.

However, if the report-uri address uses HTTPS and is also a known pinned host, the browser must also carry out pinning checks on this address when the report is sent. This makes it foolish to specify a report-uri that uses the same hostname as the site that is using HPKP.
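A safer arrangement is to send reports to a different hostname, ideally one whose certificate chain does not depend on the same pins. A sketch, with placeholder pins and an illustrative reporting hostname:

    Public-Key-Pins: pin-sha256="<primary pin>"; pin-sha256="<backup pin>";
        max-age=5184000; report-uri="https://hpkp-reports.example.net/report"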

An example of this configuration blunder can be seen on https://yahvehyireh.com, which sets the following Public-Key-Pins header:

Public-Key-Pins: pin-sha256="y+PfuAS+Dx0OspfM9POCW/HRIqMqsa83jeXaOECu1Ns=";
    includeSubDomains; max-age=0;

This header instructs the browser to send pinning validation failure reports to https://yahvehyireh.com/incoming/hpkp/index.php. However, if there were to be a pinning validation failure on yahvehyireh.com, then the browser would be unable to send any reports because the report-uri itself would also fail the pinning checks by virtue of using the same hostname.

Incidentally, Chrome 46 introduced support for a newer header, Public-Key-Pins-Report-Only, which instructs the browser to perform the same pinning checks as those specified by the Public-Key-Pins header, but never to block a request when pin validation fails; instead, the browser sends a report to a URL specified by a report-uri directive, and the user is allowed to continue browsing the site. This mechanism makes it safe for site administrators to test the deployment of HPKP on their sites, without inadvertently introducing a denial of service.
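A report-only deployment might look like the following sketch (again with placeholder pins and an illustrative reporting hostname); this header causes no pins to be cached, so there is no risk of locking visitors out:

    Public-Key-Pins-Report-Only: pin-sha256="<primary pin>";
        pin-sha256="<backup pin>";
        report-uri="https://hpkp-reports.example.net/report"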


The proportion of secure servers that use HPKP headers is woefully low at only 0.09%, but to make matters worse, many of these few HPKP policies have been implemented incorrectly and do not work as intended.

Without delving into developer settings, browsers offer no visible indications that a site has an invalid HPKP policy, and so it is likely that many website administrators have no idea that their attempts at implementing HPKP have failed. Around a third of the sites that attempt to set an HPKP policy have got it wrong, and consequently behave as if there was no HPKP policy at all. Every response from these servers will include the unnecessary overhead of a header containing a policy that will ultimately be ignored by all browsers.

But there is still hope for the masses: A more viable alternative to HPKP might arise from an Internet-Draft entitled TLS Server Identity Pinning with Tickets. It proposes to extend TLS with opaque tickets, similar to those being used for TLS session resumption, as a way to pin a server's identity. This feature would allow a client to ensure that it is connecting to the right server, even in the presence of a fraudulently issued certificate, but has a significant advantage over HPKP in that no manual management actions would be required. If this draft comes to fruition, and is subsequently implemented by browsers and servers, this ticket-based approach to pinning could potentially see a greater uptake than HPKP has.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

95% of HTTPS servers vulnerable to trivial MITM attacks

Only 1 in 20 HTTPS servers correctly implements HTTP Strict Transport Security, a widely supported security feature that prevents visitors from making unencrypted HTTP connections to a server.

The remaining 95% are therefore vulnerable to trivial connection hijacking attacks, which can be exploited to carry out effective phishing, pharming and man-in-the-middle attacks. An attacker can exploit these vulnerabilities whenever a user inadvertently tries to access a secure site via HTTP, and so the attacker does not even need to spoof a valid TLS certificate. Because no crypto-wizardry is required to hijack an HTTP connection, these attacks are far easier to carry out than those that target TLS, such as the recently announced DROWN attack.


The growth of HTTPS has been a mostly positive step in the evolution of the internet, enabling encrypted communications between more users and websites than ever before. Many high profile sites now use HTTPS by default, and millions of TLS certificates are currently in use on the web. With companies like Let's Encrypt offering free certificates and automated management tools, it is also easier than ever to deploy an HTTPS website that will be trusted by all modern browsers.

The primary purpose of a TLS certificate is to allow a browser to verify that it is communicating with the correct website. For example, if https://www.example.com uses a valid TLS certificate, then a man-in-the-middle attacker would not be able to hijack a browser's connection to this site unless he is also able to obtain a valid certificate for that domain.

A man-in-the-middle attack like this is generally not possible if the initial request from the customer uses HTTPS.

It would be extremely difficult for the attacker to obtain a valid certificate for a domain he does not control, and using an invalid certificate would cause the victim's browser to display an appropriate warning message. Consequently, man-in-the-middle attacks against HTTPS services are hard to pull off, and often not very successful. However, there are plenty of realistic opportunities to use the unencrypted HTTP protocol to attack most HTTPS websites.

HTTP Strict Transport Security (HSTS)

Encrypted communications are an essential requirement for banks and other financial websites, but HTTPS alone is not sufficient to defend these sites against man-in-the-middle attacks. Astonishingly, many banking websites lurk amongst the 95% of HTTPS servers that lack a simple feature, leaving them vulnerable to pharming and man-in-the-middle attacks. This missing feature is HTTP Strict Transport Security (HSTS), and only 1 in 20 secure servers currently make use of it, even though it is supported by practically all modern browsers.

Each secure website that does not implement an HSTS policy can be attacked simply by hijacking an HTTP connection that is destined for it. This is a surprisingly feasible attack vector, as there are many ways in which a user can inadvertently end up connecting via HTTP instead of HTTPS.

Manually typed URLs often result in an initial insecure request, as most users do not explicitly type in the protocol string (http:// or https://). When no protocol is given, the browser will default to HTTP – unless there is an appropriate HSTS policy in force.

To improve accessibility, most secure websites also run an HTTP service to redirect users to the corresponding HTTPS site – but this makes them particularly prone to man-in-the-middle attacks if there is no HSTS policy in force. Not only would many users be accustomed to visiting the HTTP site first, but anyone else who visits the site via an old bookmark or search engine result might also initially access the site via an insecure HTTP address. Whenever this happens, the attacker can hijack the initial HTTP request and prevent the customer being redirected to the secure HTTPS website.

This type of attack can be automated with the sslstrip tool, which transparently hijacks HTTP traffic on a network and converts HTTPS links and redirects into HTTP. This type of exploit is sometimes regarded as a protocol downgrade attack, but strictly speaking, it is not: rather than downgrading the protocol, it simply prevents the HTTP protocol being upgraded to HTTPS.

NatWest's online banking website at www.nwolb.com lacks an HSTS policy and also offers an HTTP service to redirect its customers to the HTTPS site. This setup is vulnerable to the type of man-in-the-middle attack described above.

Vulnerable sites can be attacked on a massive scale by compromising home routers or DNS servers to point the target hostname at a server that is controlled by the attacker (a so-called "pharming" attack). Some smaller scale attacks can be carried out very easily – for example, if an attacker sets up a rogue Wi-Fi access point to provide internet access to nearby victims, he can easily influence the results of their DNS lookups.

Even if a secure website uses HTTPS exclusively (i.e. with no HTTP service at all), man-in-the-middle attacks are still possible. For example, if a victim manually types www.examplebank.com into his browser's address bar—without prefixing it with https://—the browser will attempt to make an unencrypted HTTP connection to http://www.examplebank.com, even if the genuine site does not run an HTTP service. If this hostname has been pharmed, or is otherwise subjected to a man-in-the-middle attack, the attacker can nonetheless hijack the request and eavesdrop on the connection as it is relayed to the genuine secure site, or serve phishing content directly to the victim.

In short, failing to implement an HSTS policy on a secure website means attackers can carry out man-in-the-middle attacks without having to obtain a valid TLS certificate. Many victims would fall for these attacks, as they can be executed over an unencrypted HTTP connection, thus avoiding any of the browser's tell-tale warnings about invalid certificates.

Implementing HSTS: A simple one-liner

The trivial man-in-the-middle attacks described above can be thwarted by implementing an appropriate HSTS policy. A secure website can do this simply by setting a single HTTP header in its responses:

    Strict-Transport-Security: max-age=31536000;

This header can only be set over an HTTPS connection, and instructs compatible browsers to only access the site over HTTPS for the next year (31,536,000 seconds = 1 year). This is the most common max-age value, used by nearly half of all HTTPS servers. After this HSTS policy has been applied, even if a user manually prefixes the site's hostname with http://, the browser will ignore this and access the site over HTTPS instead.
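On nginx, for example, this can be achieved with a single directive in the HTTPS server block (a sketch; Apache's mod_headers offers an equivalent Header always set directive):

    # inside the server block that handles HTTPS traffic
    add_header Strict-Transport-Security "max-age=31536000" always;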

The combination of HSTS and HTTPS therefore provides a good defence against pharming attacks, as the attacker will not be able to redirect and intercept plaintext HTTP traffic when a client obeys the HSTS policy, nor will he be able to present a valid TLS certificate for the site he is impersonating.

The attacker cannot even rely on a small proportion of his victims unwisely ignoring the use of an invalid certificate, as browsers must regard this situation as a hard fail when an HSTS policy is in force. The browser will simply not let the victim access the site if it finds an invalid certificate, nor will it allow an exception to be added.

When Google Chrome encounters an invalid certificate for a site that has an effective HSTS policy, the victim is not allowed to bypass the browser's warning message or add an exception.

To prevent other types of attack, it is also wise to add the includeSubDomains directive to ensure that every possible subdomain of a site is protected by HSTS. This mitigates cookie injection and session fixation attacks that could be executed by impersonating an HTTP site on a non-existent subdomain such as foo.www.example.com, and using it to set a cookie which would be sent to the secure site at https://www.example.com. This directive can be enabled like so:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

However, some thought is required before taking the carte blanche approach of including all subdomains in an HSTS policy. The website's administrators must ensure that every single one of its subdomains supports HTTPS for at least the duration specified by the max-age parameter, otherwise users of these subdomains risk being locked out.

Setting an HSTS policy will also protect first time visitors who habitually use search bars or search engines to reach their destination. For example, typing "paypal" into Google's HTTPS search engine will yield a link to https://www.paypal.com, because Google will always link to the HTTPS version of a website if an appropriate HSTS policy exists.

HSTS preloading

HSTS is clearly an important security feature, but there are several circumstances in which it offers no protection. Because HSTS directives are delivered via an HTTP header (over an HTTPS connection), a site's policy can only take effect after the browser's first visit to the secure website.

Man-in-the-middle attackers can therefore still carry out attacks against users who have:

  • Never before visited the site.
  • Recently reinstalled their operating system.
  • Recently reinstalled their browser.
  • Switched to a new browser.
  • Switched to a new device (e.g. mobile phone).
  • Deleted their browser's cache.
  • Not visited the site within the past year (or however long the max-age period lasts).

These vulnerabilities can be eliminated by using HSTS Preloading, which ensures that the site's HSTS policy is distributed to supported browsers before the customer's first visit.

Website administrators can use the form at https://hstspreload.appspot.com/ to request that domains be included in the HSTS Preload list maintained by Google. Each site must have a valid certificate, redirect all HTTP traffic to HTTPS, and serve all subdomains over HTTPS. The HSTS header served from each site must specify a max-age of at least 18 weeks (10,886,400 seconds) and include the preload and includeSubDomains directives.
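Based on those criteria, a preload-eligible policy would look like this (using the minimum 18-week max-age):

    Strict-Transport-Security: max-age=10886400; includeSubDomains; preload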

It can take several months for domains to be reviewed and propagated to the latest stable versions of Firefox, Safari, Internet Explorer, Edge and Chrome. When domains are added to the preload list, all users of these browsers will benefit from the security offered by HSTS, even if they have never visited the sites before.


HSTS is widely supported, but not widely implemented. Nearly all modern browsers obey HSTS policies, including Internet Explorer 11, Microsoft Edge, Firefox, Chrome, Safari and Opera – yet less than 5% of secure websites enable this important security feature.

Secure websites that do not use HSTS are trivial to attack if the attacker can hijack a victim's web traffic, but it is even easier to defeat such attacks by implementing an HSTS policy. This raises the question of why so few websites are using HSTS.

The HSTS specification (RFC 6797) was published in 2012, and so it can hardly be considered a new technology any more. Nonetheless, many website administrators might still be unaware of its existence, or may not yet feel ready to commit to running an HTTPS-only website. These are probably the most significant reasons for its low uptake.

Some website administrators have even disabled HSTS by explicitly setting a max-age of 0 seconds. This has the effect of switching off any previously established HSTS policies, but this backpedalling can only take proper effect if every client revisits the secure site after the max-age has been set to zero. When a site implements an HSTS policy, it is effectively committed to maintaining its HTTPS service for as long as the largest max-age it has ever specified, otherwise it risks denying access to infrequent visitors. Nearly 4% of all HTTPS servers that use the Strict-Transport-Security header currently set a max-age of zero, including Twitter's t.co URL-shortener.

Browser support for HSTS can also introduce some privacy concerns. By initiating requests to several distinct hostnames (some of which enable HSTS), a hostile webpage can establish a "supercookie" to uniquely identify the client browser during subsequent visits, even if the user deletes the browser's conventional cookies. The browser will remember which pattern of hostnames had HSTS enabled, thus allowing the supercookie to persist. However, this privacy concern only affects clients and does not serve as an excuse for websites to avoid implementing their own HSTS policies.

Implementing an HSTS policy is very simple and there are no practical downsides when a site already operates entirely over HTTPS. This makes it even more surprising to see many banks failing to use HSTS, especially on their online banking platforms. This demonstrates poor security practices where it matters the most, as these are likely to be primary targets of pharming attacks.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

March 2016 Web Server Survey

In the March 2016 survey we received responses from 1,003,887,790 sites and 5,782,080 web-facing computers. This reflects a gain of nearly 70 million sites, but a loss of 14,100 computers.

This is the second time the total number of sites has exceeded one billion. This milestone was first reached in September 2014, although it was short-lived: by November 2014, the total had fallen back below one billion, and it stayed that way until the current month. During the intervening period, the total fell as low as 849 million sites in April 2015.

The total number of websites is typically prone to large fluctuations. Domain holding companies, typo squatters, spammers and link farmers can cause millions of sites to be deployed in a short space of time, without any significant outlay, but these types of site are intrinsically uninteresting to humans. Netcraft's active sites metric counters the effect of these by discounting sites that appear to be automatically generated. This leads to a more stable metric that better illustrates real, practical use of the web.

The number of active sites currently stands at just 171 million, meaning around 1 in 6 sites are active. The total fell by 764,000 this month, but nginx stands out as being the only major vendor to increase its active site count — by an impressive 699,000. This has increased its active sites share to 16.4%, while Apache's loss of nearly a million active sites took its leading share down to 49.2%.

Typifying nginx's rise amongst active sites, it also showed the only growth in web-facing computers amongst the major server vendors. This month's survey found more than 15,000 additional computers running nginx on the web, while Microsoft's loss of 30,000 computers was the primary cause of the overall loss in this metric. Thankfully, the majority of this decline consisted of Windows Server 2003 computers, which arguably helps improve the safety of the internet — this server software is no longer supported by Microsoft.

China accounts for over 30% of all web-facing computers that run Windows Server 2003, making it the largest user of this obsolete operating system; however, more than half of this month's Windows Server 2003 losses were seen in China, which has helped to bring this share down slightly.

Apache's computer growth was relatively modest at only 447 computers, but Microsoft's large loss caused Apache's market share to increase by 0.12 percentage points to 47.9%. nginx's gain of 15,000 computers took its market share up by 0.30 percentage points to 14.3%, but Microsoft remains a fair way ahead of nginx with a 26.6% share of the market.

Total number of websites

Web server market share


Certificate revocation: Why browsers remain affected by Heartbleed

More than 80,000 SSL certificates were revoked in the week following the publication of the Heartbleed bug, but the certificate revocation mechanisms used by major browsers could still leave Internet users vulnerable to impersonation attacks. Little has changed since Netcraft last reported on certificate revocation behaviour.

Why is revocation necessary?

The Heartbleed bug made it possible for remote attackers to steal private keys from vulnerable servers. Most web server access logs are unlikely to show any evidence of such a compromise, and so certificates used on previously-vulnerable web servers should be replaced without delay.

However, even if the certificate is replaced, the secure site could still be vulnerable. If the pre-Heartbleed certificate had been compromised, it will remain usable by an attacker until its natural expiry date, which could be years away. A correctly positioned attacker, with knowledge of the old certificate's private key and the ability to intercept a victim's internet traffic, can use the old certificate to impersonate the target site.

Certificate authorities can curtail the lifetime of the compromised certificate by revoking the certificate. In principle, a revoked certificate should not be trusted by browsers, which would protect users from misuse of the certificate. The realities of revocation behaviour in browsers, however, could leave some internet users vulnerable to attack with compromised certificates.

The Heartbleed bug is currently the largest cause of certificate revocations, but other reasons for revoking certificates can include the use of weak signature algorithms, fraudulent issuance, or otherwise breaching the requirements laid out by the CA/Browser Forum.

How does revocation checking work?

There are two main technologies for browsers to check the revocation status of a particular certificate: the Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRLs). OCSP provides real-time revocation information about an individual certificate from an issuing certificate authority, whereas CRLs provide a list of revoked certificates which is typically retrieved by clients less frequently.
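The status of an individual certificate can be checked by hand with OpenSSL. A sketch, assuming cert.pem is the certificate being checked and issuer.pem its issuing CA's certificate; the responder URL shown is illustrative, and is normally listed in the certificate's Authority Information Access extension:

    # discover the OCSP responder URL embedded in the certificate
    openssl x509 -in cert.pem -noout -ocsp_uri

    # ask the responder whether the certificate has been revoked
    openssl ocsp -issuer issuer.pem -cert cert.pem -url http://ocsp.example-ca.com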

Of the major browsers, only Internet Explorer and Opera behave correctly in a wide variety of revocation scenarios, including those where end-entity and intermediate certificates have been revoked only via a CRL or only via OCSP. The remaining browsers — Google Chrome, Safari, and Firefox — all behave less consistently when checking the revocation status of SSL certificates.

Firefox blocks access to certificates which have been revoked via OCSP.

OCSP, the more recent standard, is effectively the revocation method of choice on the internet: providing the URL to a CRL in individual certificates is optional in the Baseline Requirements, and only Opera and Internet Explorer consistently check them when OCSP is not available. The latest version of Firefox removed the last vestiges of CRL checking: previously CRLs were checked only for EV certificates when OCSP failed.

Although CRLs have some disadvantages — their size for one — they do offer some key advantages over OCSP: CRLs can be downloaded ahead of time on a trusted network and, unlike OCSP, CRLs do not reveal which sites you are visiting to the certificate authority. Google's decision to disable OCSP checking by default was also partly due to these privacy concerns.

OCSP stapling is an alternative approach to distributing OCSP responses. By including a recent OCSP response in its own TLS/SSL handshake, a website can remove the need for each visitor to make a separate connection to the certificate authority. As well as improving performance, stapled responses remove the privacy concerns surrounding standard OCSP leaking user IPs to certificate authorities. However, only 24% of all SSL certificates found in the most recent Netcraft SSL survey were used on websites that stapled an OCSP response.
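On nginx, for example, stapling can be switched on with a few directives (a sketch; the chain path is illustrative):

    # fetch OCSP responses and staple them to the TLS handshake
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/chain.pem;

Whether a given server staples responses can be verified with openssl s_client -connect hostname:443 -status.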

Google has shunned the traditional methods of revocation: whilst Chrome does check the status of EV certificates, revocation checking is not enabled by default for any other type of certificate. Instead, Chrome uses its own updating mechanism to maintain an aggregated list of revoked certificates gathered by crawling CRLs. This is a subset of all revocations and is intended to cover only the most important.

Is revocation checking useful for certificates potentially compromised by Heartbleed?

As explained by Adam Langley, online revocation checking can easily be blocked if the compromised certificate is being used in a man-in-the-middle attack. An attacker able to intercept traffic to the targeted website will likely also be able to block OCSP requests. If the victim is using a browser that does not hard-fail when an OCSP response isn't received (the default setting of all major browsers), the attacker will be able to use a revoked certificate as normal.

However, the same logic does not apply to CRLs: if the CRL was downloaded earlier on a trusted network, a revoked certificate used in a man-in-the-middle attack will not be trusted. This requires the certificate to have been revoked before the CRL was downloaded, and many CRLs can be cached for a significant length of time (up to 10 days under the Baseline Requirements). If a new CRL is needed, though, its download can be blocked just as effectively as an OCSP request. Nevertheless, when CRLs are used, an attacker cannot rely on the certificate passing validation: the subset of users with cached CRLs will be prevented from continuing on to the attacker's site. The same logic also applies to Google's CRLSets, including the ability to block updates.

As such, despite the difficulties of revocation checking in the MITM scenario, it is still critical for site owners to revoke certificates. If the certificate is revoked, an attacker's job is made that much more difficult: he must choose sites with certificates issued without a CRL distribution point (which is permissible under the Baseline Requirements) or that are not covered by Google's CRLSets, and his victims must be using a browser that checks neither. Certificates that are not revoked are unlikely to ever be included in more effective revocation methods such as CRLSets.

Should I enable revocation checking in Chrome?

Whilst OCSP is easily blocked in man-in-the-middle attacks, if revocation checking is enabled, Chrome (on both Windows and Linux) will check CRLs for certificates that do not support OCSP. It is likely that you will have cached CRLs for websites you have visited recently; if you move onto an untrusted network, you will be protected by the CRLs that were downloaded earlier. Over 4% of currently valid certificates are only revocable by CRL, including login.skype.com. Unfortunately, for the majority of sites, where OCSP is available, CRLs will not be downloaded, any OCSP requests made can be blocked, and the attacker can continue as if the certificate were not revoked.

Perfect OCSP checks: A chicken and egg problem

By default, all browsers take the "soft-fail" approach to OCSP checks. A revoked certificate will be regarded as valid if the OCSP request fails. While this sounds like unsafe behaviour, browser vendors are reluctant to force a hard-fail approach because of the problems it can cause. For example, paid-for internet connections, such as WiFi hotspots or hotel room connections, that use captive portals are one of the major chicken-and-egg scenarios. Before a user can access the internet, he must visit a secure payment page, but this would fail because the OCSP responder used by the site's certificate cannot be reached until after he has paid. There are methods to resolve this problem, including OCSP stapling and less restrictive blocking; however, such solutions are unlikely to be adopted quickly.

Firefox can be forced to use a hard-fail approach to OCSP checking, but this setting is not enabled by default.

It is critical that OCSP responders have 100% uptime, as any outage whatsoever could provide a window of opportunity to misuse compromised revoked certificates. Netcraft publishes a list of OCSP responder sites ordered by failures over the past day. Partly due to the reliability concerns, the Mozilla Foundation suggests that there is some way to go before a hard-fail approach can be enabled by default.

Despite the drawbacks of soft-fail OCSP checking, there are circumstances in which a soft-fail approach can still be useful. For example, it might be desirable to revoke a domain-validated certificate which had been issued to a deceptive domain name (e.g. paypol.com), or when a domain changes hands. In the absence of any man-in-the-middle attackers, soft-fail OCSP is likely to be effective.

Irrevocable certificates

Browsers that do not support CRLs, such as Firefox, are not able to determine whether or not the 4% of certificates without OCSP responder URLs have been revoked. Only if an OCSP response has been stapled to the TLS connection can such browsers check the revocation status. Given that the majority of certificates (76%) are served without a stapled OCSP response, such certificates are effectively irrevocable for a large proportion of internet users. As a result, compromised certificates can be misused for fraud up until their natural expiry dates. A smaller number of certificates fail to specify URLs for either method of revocation, which makes them completely irrevocable in all browsers that rely on these technologies.

It is likely that browser vendors will be forced to take additional steps to ensure that irrevocable certificates are correctly regarded as invalid. Such measures were taken in 2011, when Mozilla released new versions of Firefox which explicitly blacklisted some of the fraudulent certificates generated by the Comodo Hacker, even though the affected certificates had already been revoked by the issuer. One of the fraudulent certificates released to the public impersonated Firefox's addons site at addons.mozilla.org. Google's CRLSet gives it the ability to distribute such revocations without relying on any certificate authority to revoke the certificate.

Accenture was using a CRL-only Extended Validation certificate on its website at https://apps.accenture.com, which was running a vulnerable version of OpenSSL (1.0.1e). The potentially compromised certificate was subsequently replaced with a new certificate issued on 14 April, and the previous certificate (serial number 0x0100000000013b03d6adfeff5c37) was revoked, with the serial number added to the CRL at http://crl.omniroot.com/PublicSureServerEV.crl. If an attacker had managed to compromise the private key of the old certificate, he could continue impersonating apps.accenture.com with a seemingly valid SSL certificate until its natural expiry date in November 2014 for victims using browsers that do not check CRLs, including Firefox 28. The only indication that revocation checking has not been completed is the lack of the EV browser cues. This certificate is present in Google's CRLSet, however, so Google Chrome users are protected against its misuse.

A currently deployed EV certificate without OCSP in Firefox 28 (left). The EV browser cues are not displayed in Firefox as the revocation status has not been checked. Internet Explorer (right), which has checked the revocation status on the CRL, does display the additional green bar with the company's name.

Apple's Safari web browser also does not perform any CRL revocation checks for Extended Validation certificates despite doing so for non-EV certificates. This behaviour may be based on the Baseline Requirements and the EV guidelines, which have mandated that EV certificates contain an OCSP responder URL for some time. As a consequence, the certificate previously used on apps.accenture.com is also irrevocable in Safari. In addition, despite making no revocation checks, Safari retains the EV browser cues rather than downgrading to standard SSL.

Problems revoking intermediate certificates

Digital certificates are verified using a chain of trust. At the top of the chain is the root CA's public key, which is built into the browser. The corresponding private keys can be used by the root CA to sign an intermediate certificate one step down the chain. At the very bottom of the chain is the certificate for the website itself, which is signed by the sub-CA whose intermediate certificate is immediately above the site's certificate. A single chain of trust can have multiple intermediate certificates chained together in order to form a path from the website's certificate to a trusted root.

An example of an SSL certificate's chain. This one is used by www.mcafeecustomerrewards.com.

Browsers must trust each level of the chain: all intermediate certificates in the chain must ultimately be signed by a root CA in order for the website's certificate to be trusted. Most root certificate authorities are understandably paranoid about the security of their private keys, and so root certificates are rarely compromised directly. Smaller certificate authorities, however, may not have as much funding or expertise, and may be more likely to suffer from security breaches which could result in the disclosure of an intermediate certificate's private key.

If the private key of a sub-CA's intermediate certificate is leaked, it has serious implications for the whole internet. A fraudster could use the certificate's private key to issue arbitrary publicly trusted certificates, essentially allowing him to impersonate any website on the planet. It is imperative that compromised intermediate certificates are immediately revoked, but it is difficult to achieve this in practice.

For example, when a Firefox user visits www.mcafeecustomerrewards.com, a website which has a non-EV certificate, Firefox will only make an OCSP request for the website's certificate. This means that the revoked intermediate certificate (McAfee Public CA v1) will continue to be trusted by Firefox, and the only way to resolve this would be for Mozilla to release a new version of Firefox. The same behaviour is seen in Google Chrome unless revocation checking is enabled, as the intermediate certificate is not in Google's CRLSet. When Chrome has revocation checking turned on, the certificate is correctly marked as revoked.

    Serial Number: 55A1BA093A529CB41F12EB6A1FF71EF6
        Revocation Date: Oct  7 14:03:19 2013 GMT
        CRL entry extensions:
            X509v3 CRL Reason Code:
                Cessation Of Operation
            Invalidity Date:
                Oct  7 14:03:09 2013 GMT

The entry for McAfee Public CA v1 in http://www.rsasecurity.com/products/keon/repository/certificate_status/RSA_Security_2048_v3.CRL.
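Entries like the one above can be inspected by downloading the CRL and decoding it with OpenSSL (a sketch; use -inform PEM instead if the list is PEM-encoded):

    curl -sO http://www.rsasecurity.com/products/keon/repository/certificate_status/RSA_Security_2048_v3.CRL
    openssl crl -inform DER -in RSA_Security_2048_v3.CRL -noout -text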

www.mcafeecustomerrewards.com uses a certificate which has been signed by a revoked intermediate certificate (McAfee Public CA v1). Firefox displays the site without showing any warnings.

Google Chrome revocation bug

Although Google Chrome does not perform OCSP checks by default, it does perform them for Extended Validation certificates (unless the certificate is already covered by the CRLSet). However, the Linux version of Google Chrome does not prevent access to sites using a revoked EV certificate when it is not covered by the CRLSet. Despite the browser sending an OCSP request and receiving a 'revoked' response, it mishandles the result and fails to block access. Instead, the EV browser cues (the green bar) are removed. Netcraft reported this apparent bug to Google in August 2013, but it was classed as low severity and has yet to be fixed on Linux.

The Windows version of Chrome (on left) behaves correctly and blocks access to a site with a revoked EV certificate. However, Chrome on Linux (on right) does not display any errors when a site uses a revoked EV certificate; it merely downgrades the UI from EV to standard SSL.

Where can we go from here?

Each of the currently available revocation methods has significant disadvantages: CRLs are potentially very large; OCSP can be blocked easily; and CRLSets are not intended to provide complete coverage. For those looking to move towards hard-fail behaviour, OCSP stapling, despite being far from pervasive, could offer the answer. When combined with must-staple, currently an Internet draft, it would enable per-site, opt-in hard-fail behaviour. However, this solution is limited by the length of time an attacker can use a cached 'good' OCSP response saved just before the certificate was revoked (the Baseline Requirements limit validity to 10 days).

In the meantime, CRLSets, if they provided wider coverage, would be a more robust alternative to soft-fail OCSP checking. Mozilla is also looking to join Google by moving towards a CRLSet-like mechanism for some of the revocation checking in Firefox.

Even soft-fail OCSP checking can be made more robust by removing any secure indicators (such as padlocks) when visiting a site without up-to-date revocation information.