95% of HTTPS servers vulnerable to trivial MITM attacks

Only 1 in 20 HTTPS servers correctly implements HTTP Strict Transport Security, a widely-supported security feature that prevents visitors making unencrypted HTTP connections to a server.

The remaining 95% are therefore vulnerable to trivial connection hijacking attacks, which can be exploited to carry out effective phishing, pharming and man-in-the-middle attacks. An attacker can exploit these vulnerabilities whenever a user inadvertently tries to access a secure site via HTTP, and so the attacker does not even need to spoof a valid TLS certificate. Because no crypto-wizardry is required to hijack an HTTP connection, these attacks are far easier to carry out than those that target TLS, such as the recently announced DROWN attack.

Background

The growth of HTTPS has been a mostly positive step in the evolution of the internet, enabling encrypted communications between more users and websites than ever before. Many high profile sites now use HTTPS by default, and millions of TLS certificates are currently in use on the web. With companies like Let's Encrypt offering free certificates and automated management tools, it is also easier than ever to deploy an HTTPS website that will be trusted by all modern browsers.

The primary purpose of a TLS certificate is to allow a browser to verify that it is communicating with the correct website. For example, if https://www.example.com uses a valid TLS certificate, then a man-in-the-middle attacker would not be able to hijack a browser's connection to this site unless he is also able to obtain a valid certificate for that domain.

A man-in-the-middle attack like this is generally not possible if the initial request from the customer uses HTTPS.

It would be extremely difficult for the attacker to obtain a valid certificate for a domain he does not control, and using an invalid certificate would cause the victim's browser to display an appropriate warning message. Consequently, man-in-the-middle attacks against HTTPS services are hard to pull off, and often not very successful. However, there are plenty of realistic opportunities to use the unencrypted HTTP protocol to attack most HTTPS websites.

HTTP Strict Transport Security (HSTS)

Encrypted communications are an essential requirement for banks and other financial websites, but HTTPS alone is not sufficient to defend these sites against man-in-the-middle attacks. Astonishingly, many banking websites lurk amongst the 95% of HTTPS servers that lack a simple feature whose absence leaves them vulnerable to pharming and man-in-the-middle attacks. This missing feature is HTTP Strict Transport Security (HSTS), and only 1 in 20 secure servers currently make use of it, even though it is supported by practically all modern browsers.

Each secure website that does not implement an HSTS policy can be attacked simply by hijacking an HTTP connection that is destined for it. This is a surprisingly feasible attack vector, as there are many ways in which a user can inadvertently end up connecting via HTTP instead of HTTPS.

Manually typed URLs often result in an initial insecure request, as most users do not explicitly type in the protocol string (http:// or https://). When no protocol is given, the browser will default to HTTP – unless there is an appropriate HSTS policy in force.

To improve accessibility, most secure websites also run an HTTP service to redirect users to the corresponding HTTPS site – but this makes them particularly prone to man-in-the-middle attacks if there is no HSTS policy in force. Not only would many users be accustomed to visiting the HTTP site first, but anyone else who visits the site via an old bookmark or search engine result might also initially access the site via an insecure HTTP address. Whenever this happens, the attacker can hijack the initial HTTP request and prevent the customer being redirected to the secure HTTPS website.
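
To make this concrete, here is a minimal sketch of the kind of redirect-only HTTP configuration many sites run (assuming nginx; the hostname is a placeholder). It is exactly this plaintext hop that an attacker can intercept when no HSTS policy is cached:

    server {
        listen 80;
        server_name www.example.com;
        # Redirect every plain-HTTP request to the HTTPS site.
        # Without a cached HSTS policy, a man-in-the-middle can intercept
        # this unencrypted request and suppress or rewrite the redirect.
        return 301 https://www.example.com$request_uri;
    }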

This type of attack can be automated with the sslstrip tool, which transparently hijacks HTTP traffic on a network and converts HTTPS links and redirects into HTTP. This type of exploit is sometimes regarded as a protocol downgrade attack, but strictly speaking, it is not: rather than downgrading the protocol, it simply prevents the HTTP protocol being upgraded to HTTPS.

NatWest's online banking website at www.nwolb.com lacks an HSTS policy and also offers an HTTP service to redirect its customers to the HTTPS site. This setup is vulnerable to the type of man-in-the-middle attack described above.

Vulnerable sites can be attacked on a massive scale by compromising home routers or DNS servers to point the target hostname at a server that is controlled by the attacker (a so-called "pharming" attack). Some smaller scale attacks can be carried out very easily – for example, if an attacker sets up a rogue Wi-Fi access point to provide internet access to nearby victims, he can easily influence the results of their DNS lookups.

Even if a secure website uses HTTPS exclusively (i.e. with no HTTP service at all), man-in-the-middle attacks are still possible. For example, if a victim manually types www.examplebank.com into his browser's address bar—without prefixing it with https://—the browser will attempt to make an unencrypted HTTP connection to http://www.examplebank.com, even if the genuine site does not run an HTTP service. If this hostname has been pharmed, or is otherwise subjected to a man-in-the-middle attack, the attacker can nonetheless hijack the request and either eavesdrop on the connection as it is relayed to the genuine secure site, or serve phishing content directly to the victim.

In short, failing to implement an HSTS policy on a secure website means attackers can carry out man-in-the-middle attacks without having to obtain a valid TLS certificate. Many victims would fall for these attacks, as they can be executed over an unencrypted HTTP connection, thus avoiding any of the browser's tell-tale warnings about invalid certificates.

Implementing HSTS: A simple one-liner

The trivial man-in-the-middle attacks described above can be thwarted by implementing an appropriate HSTS policy. A secure website can do this simply by setting a single HTTP header in its responses:

    Strict-Transport-Security: max-age=31536000;

This header can only be set over an HTTPS connection, and instructs compatible browsers to access the site only over HTTPS for the next year (31,536,000 seconds = 1 year). This is the most common max-age value, used by nearly half of all HTTPS servers that set an HSTS policy. After this HSTS policy has been applied, even if a user manually prefixes the site's hostname with http://, the browser will ignore this and access the site over HTTPS instead.
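
How the header is added depends on the web server software; as a minimal sketch (assuming Apache with mod_headers enabled, or nginx), the policy above could be attached to every HTTPS response like so:

    # Apache (mod_headers), inside the HTTPS virtual host:
    Header always set Strict-Transport-Security "max-age=31536000"

    # nginx, inside the server block listening on port 443:
    add_header Strict-Transport-Security "max-age=31536000" always;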

The combination of HSTS and HTTPS therefore provides a good defence against pharming attacks, as the attacker will not be able to redirect and intercept plaintext HTTP traffic when a client obeys the HSTS policy, nor will he be able to present a valid TLS certificate for the site he is impersonating.

The attacker cannot even rely on a small proportion of his victims unwisely ignoring the use of an invalid certificate, as browsers must treat this situation as a hard failure when an HSTS policy is in force. The browser will simply not let the victim access the site if it finds an invalid certificate, nor will it allow an exception to be added.

When Google Chrome encounters an invalid certificate for a site that has an effective HSTS policy, the victim is not allowed to bypass the browser's warning message or add an exception.

To prevent other types of attack, it is also wise to add the includeSubDomains directive to ensure that every possible subdomain of a site is protected by HSTS. This mitigates cookie injection and session fixation attacks that could be executed by impersonating an HTTP site on a non-existent subdomain such as foo.www.example.com, and using it to set a cookie which would be sent to the secure site at https://www.example.com. This directive can be enabled like so:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

However, some thought is required before taking the carte blanche approach of including all subdomains in an HSTS policy. The website's administrators must ensure that every single one of its subdomains supports HTTPS for at least the duration specified by the max-age parameter, otherwise users of these subdomains risk being locked out.

Setting an HSTS policy will also protect first time visitors who habitually use search bars or search engines to reach their destination. For example, typing "paypal" into Google's HTTPS search engine will yield a link to https://www.paypal.com, because Google will always link to the HTTPS version of a website if an appropriate HSTS policy exists.

HSTS preloading

HSTS is clearly an important security feature, but there are several circumstances in which it offers no protection. Because HSTS directives are delivered via an HTTP header (over an HTTPS connection), a browser only learns of a site's HSTS policy after its first visit to the secure website.

A man-in-the-middle attacker can therefore still carry out attacks against users who have:

  • Never before visited the site.
  • Recently reinstalled their operating system.
  • Recently reinstalled their browser.
  • Switched to a new browser.
  • Switched to a new device (e.g. mobile phone).
  • Deleted their browser's cache.
  • Not visited the site within the past year (or however long the max-age period lasts).

These vulnerabilities can be eliminated by using HSTS Preloading, which ensures that the site's HSTS policy is distributed to supporting browsers before the customer's first visit.

Website administrators can use the form at https://hstspreload.appspot.com/ to request that their domains be included in the HSTS preload list maintained by Google. Each site must have a valid certificate, redirect all HTTP traffic to HTTPS, and serve all subdomains over HTTPS. The HSTS header served from each site must specify a max-age of at least 18 weeks (10,886,400 seconds) and include the preload and includeSubDomains directives.
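
In other words, a preload-ready site would serve a policy along these lines over HTTPS (a one-year max-age comfortably exceeds the 18-week minimum):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload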

It can take several months for domains to be reviewed and propagated to the latest stable versions of Firefox, Safari, Internet Explorer, Edge and Chrome. When domains are added to the preload list, all users of these browsers will benefit from the security offered by HSTS, even if they have never visited the sites before.

Conclusions

HSTS is widely supported, but not widely implemented. Nearly all modern browsers obey HSTS policies, including Internet Explorer 11, Microsoft Edge, Firefox, Chrome, Safari and Opera – yet less than 5% of secure websites enable this important security feature.

Secure websites that do not use HSTS are trivial to attack if the attacker can hijack a victim's web traffic, yet it is even easier to defeat such attacks by implementing an HSTS policy. This raises the question of why so few websites are using HSTS.

The HSTS specification (RFC 6797) was published in 2012, and so it can hardly be considered a new technology any more. Nonetheless, many website administrators might still be unaware of its existence, or may not yet feel ready to commit to running an HTTPS-only website. These are probably the most significant reasons for its low uptake.

Some website administrators have even disabled HSTS by explicitly setting a max-age of 0 seconds. This has the effect of switching off any previously established HSTS policies, but this backpedalling can only take proper effect if every client revisits the secure site after the max-age has been set to zero. When a site implements an HSTS policy, it is effectively committed to maintaining its HTTPS service for as long as the largest max-age it has ever specified, otherwise it risks denying access to infrequent visitors. Nearly 4% of all HTTPS servers that use the Strict-Transport-Security header currently set a max-age of zero, including Twitter's t.co URL-shortener.
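
Retracting a previously advertised policy in this way simply means serving the header with a zero lifetime:

    Strict-Transport-Security: max-age=0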

Browser support for HSTS can also introduce some privacy concerns. By initiating requests to several distinct hostnames (some of which enable HSTS), a hostile webpage can establish a "supercookie" to uniquely identify the client browser during subsequent visits, even if the user deletes the browser's conventional cookies. The browser will remember which pattern of hostnames had HSTS enabled, thus allowing the supercookie to persist. However, this privacy concern only affects clients and does not serve as an excuse for websites to avoid implementing their own HSTS policies.

Implementing an HSTS policy is very simple and there are no practical downsides when a site already operates entirely over HTTPS. This makes it even more surprising to see many banks failing to use HSTS, especially on their online banking platforms. This demonstrates poor security practices where it matters the most, as these are likely to be primary targets of pharming attacks.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

November 2017 Web Server Survey

In the November 2017 survey we received responses from 1,819,412,110 sites and 6,893,323 web-facing computers, reflecting a gain of 4.17M sites and 6,961 computers.

This month’s web server survey saw Microsoft’s market share amongst all sites fall by 12.64 percentage points due to a loss of 228M sites. Despite this, Microsoft still retains the largest market share by this metric, at 36.80%, with Apache trailing at 24.38%. The majority of the loss occurred at just one hosting provider, where over 190M Microsoft sites were lost.

This change isn’t reflected in the active sites metric, which saw only minor changes amongst the main web server vendors. Microsoft lost only 0.03 percentage points of its market share with a drop of 261k active sites. Apache leads in the active sites metric by a considerable margin, increasing its share slightly this month to 44.55%.

Amongst the top million busiest sites Microsoft experienced a small increase in market share, pausing its general decline in this market. nginx experienced the largest growth, gaining 2,133 sites within the top million.

nginx also saw the largest increase in number of web-facing computers, gaining 25k and pulling 1 percentage point of market share clear of Microsoft, which it overtook last month. Apache also experienced a gain in computers, albeit smaller at just 7k. It remains considerably ahead with a 42.38% market share.

New gTLDs Seen for the First Time

This month the controversial new .search gTLD being run by Google’s Charleston Road Registry subsidiary was found for the first time, with www.nic.search responding to the survey. Google hopes it will be able to run .search as a dotless domain, which would automatically redirect users to their search engine of choice. This proposal has been criticised for going against ICANN’s own rules, which prohibit this functionality due to the potential for conflicts with existing names on internal networks. This feature could also cause confusion for users who have come to expect that typing words into their address bar will perform a search query for that term.

It is currently uncertain whether or not Google will be allowed to run the .search TLD as a dotless domain; however, with the launch of the first site on this TLD this month, Google is one step closer to providing this service.

Total number of websites

Web server market share

Developer    October 2017    Percent    November 2017    Percent    Change
Microsoft    897,467,517     49.44%     669,517,177      36.80%     -12.64
Apache       340,811,235     18.78%     443,521,995      24.38%     5.60
nginx        333,942,604     18.40%     367,687,489      20.21%     1.81
Google       21,127,078      1.16%      20,333,604       1.12%      -0.05

December 2017 Web Server Survey

In the December 2017 survey we received responses from 1,734,290,608 sites across 212,870,632 unique domain names and 7,014,428 web-facing computers. This reflects a gain of 5.34 million domains and 121,000 computers.

Web Server Developers - Market Share of Domains

The number of hostnames in use on the web has been a headline metric since the inception of the Web Server Survey, but it has been subjected to quite large fluctuations in recent years. Netcraft has therefore introduced the number of unique domains as an additional metric that provides a more stable view of the web.

The domains metric is not influenced by wildcarded domains or other large numbers of sites that can be hosted under a single domain name with minimal effort; but unlike the active sites metric, the domains metric still takes account of sites that are still under construction, or running hosting company or domain registrar holding pages.

Web server market share for domains

The noticeable spike in Apache-powered domains in May 2013 was caused by the largest hosting company of the time, GoDaddy, switching a large number of its domains from Microsoft IIS to Apache Traffic Server (ATS). GoDaddy switched back to using IIS 7.5 a few months later.

Today, Apache still has the largest market share by number of domains, with 81.4 million giving it a market share of 38.2%. It also saw the largest gain this month, increasing its total by 1.53 million. This growth was closely followed by nginx, with a gain of 1.09 million domains increasing its total to 47.5 million. While Microsoft leads by overall number of hostnames, it lags in 3rd position when considering the number of unique domains those sites run on, with a total of 22.8 million.

Web-facing Computers

The number of web-facing computers provides an alternative view that corresponds more closely to the install base of each server vendor.

With 1.63 million web-facing computers, nginx is now 97,800 computers ahead of Microsoft, having taken second place in October, but Apache remains much further ahead with a total of 2.98 million. Apache experienced the largest gain of 58,000 computers this month, closely followed by nginx with 49,000, with Microsoft trailing with an increase of just 22,000.

Web server market share for computers

Web Server Updates

Microsoft's Internet Information Services platform has benefitted from a few improvements since the publication of last month's survey. The newest version of the IIS Administration API (2.2.0) introduced new endpoints that make it easy to monitor the health of a web server, as well as the individual websites and application pools running on it. There is also a new configuration endpoint for the files API, which allows the API's root folders to be configured – this means administrators no longer have to edit a file to configure which sections of the file system can be accessed via the API.

Version 1.0 of the IIS CORS Module, which works on IIS 7.5 or later, was also released in November. This enables support for the Cross-Origin Resource Sharing protocol, which lets webpages make use of resources that are hosted on other websites, such as web fonts and AJAX endpoints. If a website hosts these resources without setting a suitable CORS policy, the default same-origin policy enforced by all browsers would prevent other websites from accessing them.
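
Regardless of how the module is configured, CORS itself boils down to a header exchange between browser and server; a simplified illustration (with placeholder hostnames, not the IIS module's configuration syntax) looks like this:

    GET /fonts/corporate.woff2 HTTP/1.1
    Host: static.example.com
    Origin: https://www.example.com

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: https://www.example.com

The browser declares the requesting page's origin, and the server opts in by naming that origin in its Access-Control-Allow-Origin response header.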

The latest version of the open source LiteSpeed HTTP server, OpenLiteSpeed 1.4.28 (stable), was released on 8 November. This release adds multithreading APIs for LSIAPI – the API that allows it to support third-party modules. Although there are only 12,400 web-facing computers running LiteSpeed, these computers host 2.42 million domains. It is not clear how many of these computers are already running LiteSpeed 1.4.28, as this server does not expose version information in its headers.

lighttpd 1.4.48 was subsequently released on 11 November. This adds a new mod_authn_sasl module, which provides Simple Authentication and Security Layer (SASL) authentication similar to Apache's libapache2-mod-authn-sasl module. With 20,800 web-facing computers running lighttpd, it has a greater install base than LiteSpeed, but its market share of domains is noticeably smaller with a count of 565,000.

nginx 1.13.7 was released on 21 November, although this addresses several bugs rather than introducing any new features. There are, however, several new features in the latest version of its commercially supported product, NGINX Plus Release 14, which was announced on 12 December. This release features several improvements, including an updated live monitoring dashboard and JSON support in its nginScript scripting language; and there is also a technology preview of its extended clustering support, which lets NGINX Plus instances in a cluster share state information.

Total number of websites

Web server market share

Developer    November 2017    Percent    December 2017    Percent    Change
Microsoft    669,517,177      36.80%     535,762,813      30.89%     -5.91
Apache       443,521,995      24.38%     446,418,878      25.74%     1.36
nginx        367,687,489      20.21%     395,881,690      22.83%     2.62
Google       20,333,604       1.12%      21,308,069       1.23%      0.11

Google’s POODLE affects oodles

97% of SSL web servers are likely to be vulnerable to POODLE, a vulnerability that can be exploited in version 3 of the SSL protocol. POODLE, in common with BEAST, allows a man-in-the-middle attacker to extract secrets from SSL sessions by forcing the victim's browser into making many thousands of similar requests. As a result of the fallback behaviour in all major browsers, connections to web servers that support both SSL 3 and more modern versions of the protocol are also at risk.

The Secure Sockets Layer (SSL) protocol is used by millions of websites to protect confidential data in transit across the internet using strong cryptography. The protocol was designed by Netscape in the mid 1990s and was first released to the public as SSL 2 in February 1995. It was quickly replaced by SSL 3 in 1996 after serious security flaws were discovered. SSL 3 was replaced by the IETF-defined Transport Layer Security (TLS) version 1.0 in January 1999 with relatively few changes. Since TLS 1's release, TLS 1.1 and TLS 1.2 have succeeded it and should be used in its place wherever possible.

POODLE's bark may be worse than its bite

Unlike Heartbleed, POODLE can be used to attack client-server connections and is inherent to the protocol itself, rather than any one implementation such as OpenSSL or Microsoft's SChannel. In order to exploit it, an attacker must modify the victim's network traffic, know how the targeted secret information is structured (such as where a session cookie appears) and be able to force the victim into making a large number of requests.

Each SSL connection is split up into a number of chunks, known as SSL records. When using a block cipher, such as Triple DES in CBC mode, each block is mixed in with the next block and the record then padded to be a whole number of blocks long (8 bytes in the case of Triple DES). An attacker with network access can carefully manipulate the ordering of the cipher blocks within a record to influence the decryption and exploit the padding oracle. If the attacker has been lucky (there's a 1 in 256 chance), she will have matched the correct value for the padding length in her manipulated record and correctly guessed the value of a single byte of the secret. This can be repeated to reveal the entire targeted secret.

SSL 3's padding is particularly easy to exploit as it relies on a single byte at the end of the padding, the padding length. Consequently an attacker must force the victim to make only 256×n requests for n bytes of secret to be revealed. TLS 1.0 changed this padding mechanism, requiring the padding bytes themselves to have a specific value making the attack far less likely to succeed.

The POODLE vulnerability makes session hijacking attacks against web applications reasonably feasible for a correctly-positioned attacker. For example, a typical 32-byte session cookie could be retrieved after eavesdropping just over 8,000 HTTPS requests using SSL 3. This could be achieved by tricking the victim into visiting a specially crafted web page which uses JavaScript to send the necessary requests.

Use of SSL v3

Within the top 1,000 SSL sites, SSL 3 remained very widely supported yesterday, with 97% of SSL sites accepting an SSL 3 handshake. CitiBank and Bank of America both support SSL 3 exclusively and presumably are vulnerable.
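
Whether an individual server still accepts SSL 3 can also be checked from the command line, assuming an OpenSSL build that still includes SSL 3 support (the hostname below is a placeholder):

openssl s_client -connect www.example.com:443 -ssl3

If the handshake completes, the server is still willing to negotiate SSL 3.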

A number of SSL sites have already reacted to this vulnerability by disabling support for SSL 3, including CloudFlare and LinkedIn. On Tuesday 14th, the most common configuration within the top 1,000 SSL sites was to support SSL 3.0 all the way through to TLS 1.2, with almost two-thirds of popular sites taking this approach. One day later, this remains the most popular configuration; however, TLS 1.0 is now the minimum version for 11%.

Microsoft Internet Explorer 6 does not support TLS 1.0 or greater by default and may be the most notable victim of disabling SSL 3 internet-wide. Now 13 years old, IE6 was released with Windows XP in 2001, was the default browser in Windows Server 2003, and will remain supported as part of Windows Server 2003 until July 2015. Despite its age and the end of Microsoft's support for Windows XP, IE6 remains popular, accounting for more than 3.8% of web visits worldwide, and 12.5% in China. This vulnerability may sound the death knell for IE6 and Windows XP.

However, unless SSL 3 is completely disabled on the server side, a client supporting SSL 3 may still be vulnerable even if the server supports more recent versions of TLS. An attacker can take advantage of browser fallback behaviour to force otherwise secure connections to use SSL 3 in place of TLS version 1 or above.

SSL version negotiation

At the start of an SSL connection, the server and client mutually agree upon a version of SSL/TLS to use for the remainder of the connection. The client's first message to the server includes its maximum supported version of the protocol; the server then compares the client's maximum version against its own to pick the highest mutually supported version.

While this mechanism protects against version downgrade attacks in theory, most browsers have an additional fallback mechanism that retries a connection attempt with successively lower version numbers until it succeeds in negotiating a connection or reaches the lowest acceptable version. This additional fallback mechanism has proven necessary for practical interoperability with some TLS servers and corporate man-in-the-middle devices which, rather than gracefully downgrading when presented with an unsupported version of TLS, terminate the connection prematurely.

An attacker with appropriate network access can exploit this behaviour to force a TLS connection to be downgraded by forging Handshake Alert messages. The browser will take the Handshake Alert message as a signal that the remote server (or some intermediate device) has version negotiation bugs and the browser will retry the connection with a lower maximum version in the initial Client Hello message.

Operation of a forced downgrade to SSL 3 against a modern browser.

The fallback mechanism was previously not a security issue, as it never results in the use of a protocol version that is not supported by both the client and the server. However, clients that have not yet been updated to disable support for SSL 3 are relying on the server to have disabled SSL 3. What remains is a chicken and egg problem, where modern clients support SSL 3 in order to retain support for legacy servers, and modern servers retain support for SSL 3 for legacy clients.

There is, however, a proposed solution in the form of a signalling cipher suite value (SCSV) included in the fallback connection, which informs compatible servers that the connection is a fallback so that they can reject it unless a fallback was expected. Google Chrome and Google's web sites already support this SCSV indicator.


Firefox 32    Chrome 40      IE 11      Opera 25       Safari 7.1
TLS 1.2       TLS 1.2 x 3    TLS 1.2    TLS 1.2 x 3    TLS 1.2
TLS 1.1       TLS 1.1        -          TLS 1.1        -
TLS 1.0       TLS 1.0        TLS 1.0    TLS 1.0        TLS 1.0
SSL 3.0       SSL 3.0        SSL 3.0    SSL 3.0        SSL 3.0

Comparison of browser fallback behaviour

We tested five major browsers with an attack based on the forged Handshake Alert method outlined above, and found that each browser has a variant of this fallback behaviour. Both Chrome and Opera try TLS 1.2 three times before trying to downgrade the maximum supported version, whereas the remainder immediately started downgrading. Curiously, Internet Explorer and Safari both skip TLS 1.1 and jump straight from TLS 1.2 to TLS 1.0.

Mitigation

Mitigation can take many forms: the fallback SCSV, disabling SSL 3 fallback, disabling SSL 3 on the client side, disabling SSL 3 on the server side, and disabling CBC cipher suites in SSL 3. Each solution has its own problems, but the current trend is to disable SSL 3 entirely.

Disabling only the CBC cipher suites in SSL 3 leaves system administrators with a dilemma: RC4 is the only other practical choice, and it has its fair share of problems, making it an undesirable alternative. The SCSV requires support from both clients and servers, so it may take some time before it is deployed widely enough to mitigate this vulnerability; it will also likely not be applied to legacy browsers such as IE 6.

Apache httpd can be configured to disable SSL 3 as follows:

SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2 -SSLv2 -SSLv3

Microsoft IIS and nginx can also be configured to avoid negotiating SSL version 3.
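
For nginx, a minimal sketch would be to list only the TLS protocol versions in the relevant server block, which implicitly excludes SSL 3:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;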

Firefox can be configured to disable support for SSL 3 by altering security.tls.version.min from 0 (SSL 3) to 1 (TLS 1) in about:config.

Internet Explorer can also be configured to disable support using the Advanced tab in the Internet Options dialogue (found in the Control Panel). In a similar way, IE 6 users can also enable support for TLS 1.0.

Chrome can be configured to not use SSL 3 using a command line flag, --ssl-version-min=tls1.

Site Report

You can check which SSL sites are still using SSL 3 using the Netcraft Site Report.

HTTP Public Key Pinning: You’re doing it wrong!

HTTP Public Key Pinning (HPKP) is a security feature that can prevent fraudulently issued TLS certificates from being used to impersonate existing secure websites.

Our previous article detailed how this technology works, and looked at some of the sites that have dared to use this powerful but risky feature. Notably, very few sites are making use of HPKP: Only 0.09% of the certificates in Netcraft's March 2016 SSL Survey are served with HPKP headers, which equates to fewer than 4,100 certificates in total.

But more surprisingly, around a third of these sites are using the HPKP header incorrectly, which effectively disables HPKP. Consequently, the total number of certificates that are actually using HPKP is effectively less than 3,000.

Firefox's developer console reveals that this site has failed to include a backup pin, and so its HPKP policy is ignored by the browser.
Failing to include a backup pin is the most common type of mistake made by sites that try to use HPKP.

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but it is easy for this protection to backfire with severe consequences. Fortunately, most misconfigurations simply mean that a site's HPKP policy will be ignored by browsers. The site's administrators might not realise it, but this situation is essentially the same as not using HPKP at all.

How can it go wrong?

Our previous article demonstrated a few high-profile websites that were using HPKP to varying degrees. However, plenty of other sites have bungled HPKP to the extent that it simply does not work.

Zero max-age

Every HPKP policy must specify a max-age directive, which suggests how long a browser should regard the website as a "Known Pinned Host". The most commonly used max-age value is 5184000 seconds (60 days). Nearly 1,200 servers use this value, while around 900 use 2592000 seconds (30 days).

But around 70 sites feature pointlessly short max-age values, such as 5 or 10 seconds. These durations are far too short to be effective, as a victim's browser would rapidly forget about these known pinned hosts.

Additionally, a few sites explicitly specify a max-age of zero along with their public key pins. These sites are therefore not protected by HPKP, and are in some cases needlessly sending this header to every client request. It is possible that they are desperately trying to remove a previously set HPKP policy, but this approach obviously cannot be relied upon to remove cached pins from browsers that do not visit the site in the meantime. These sites would therefore have to continue using a certificate chain that conforms to their previous HPKP policy, or run the risk of locking out a few stragglers.

One of the sites that sets a zero max-age is https://vodsmarket.com. Even if this max-age were to be increased, HPKP would still not be enabled because there is only one pinned public key:

Public-Key-Pins: pin-sha256="sbKjNAOqGTDfcyW1mBsy9IOtS2XS4AE+RJsm+LcR+mU="; max-age=0;

Another example can be seen on https://wondershift.biz, which pins two certificates' public keys. Again, even if the max-age were to be increased, this policy would still not take effect because there are no backup pins specified (both of the pinned keys appear in the site's certificate chain):

Public-Key-Pins: pin-sha256="L7mpy8M0VvQcWm7Yyx1LFK/+Ao280UZkz5U38Qk5G5g=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains;
    max-age=0;
    report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

Wrong pin directives

Each pinned public key must be specified via a separate pin-sha256 directive, and each value must be a SHA256 hash; but more than 1% of servers that try to use HPKP fail to specify these pins correctly.
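
For reference, a correct pin value is the Base64-encoded SHA256 hash of the certificate's Subject Public Key Info. Given a PEM-encoded certificate (a hypothetical cert.pem here), it can be generated with OpenSSL along these lines:

openssl x509 -in cert.pem -pubkey -noout |
    openssl pkey -pubin -outform DER |
    openssl dgst -sha256 -binary | base64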

For example, the Department of Technology at Aichi University of Education exhibits the following header on https://www.auetech.aichi-edu.ac.jp:

Public-Key-Pins: YEnyhAxjrMAeVokI+23XQv1lzV3IBb3zs+BA2EUeLFI=";
    max-age=5184000;
    includeSubDomains

This header appears to include a single public key hash, but it omits the pin-sha256 directive entirely. No browser will make any sense of this attempted policy.

In another example, the Fast Forward Imaging Customer Interface at https://endor.ffwimaging.com does something very peculiar. It uses a pin-sha512 directive, which is not supported by the RFC – but in any case, the value it is set to is clearly not a SHA512 hash:

Public-Key-Pins: pin-sha512="base64+info1="; max-age=31536000; includeSubDomains

Some sites try to use SHA1 public key hashes, which are also unsupported:

Public-Key-Pins: pin-sha1='ewWxG0o6PsfOgu9uOCmZ0znd8h4='; max-age=2592000; includeSubdomains

This one uses pin-sha instead of pin-sha256:

Public-Key-Pins: pin-sha="xZ4wUjthUJ0YMBsdGg/bXHUjpEec5s+tHDNnNtdkwq8=";
    max-age=5184000; includeSubDomains

And this one refers to the algorithm "SHA245", which does not exist:

Public-Key-Pins: pin-sha245="pyCA+ftfVu/P+92tEhZWnVJ4BGO78XWwNhyynshV9C4=";
    max-age=31536000; includeSubDomains

The above example was most likely a typo, as is the following example, which specifies a ping-sha256 value:

Public-Key-Pins: ping-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU=";
    max-age=2592000; includeSubDomains

These are careless mistakes, but it is notable that these types of mistake alone account for more than 1% of all certificates that set the Public-Key-Pins header. The net effect of these mistakes is that HPKP is not enabled on these sites.

Only one pinned public key

As we emphasised in our previous article, it is essential that a secure site should specify at least two public key pins when deploying HPKP. At least one of these should be a backup pin, so that the website can recover from losing control of its deployed certificate. If the website owner still possesses the private key for one of the backup certificates, the site can revert to using one of the other pinned public keys without any browsers refusing to connect.

But 25% of servers that use HPKP specify only one public key pin. This means that HPKP will not be enabled on the sites that use these certificates.

To prevent sites from inadvertently locking out all of their visitors, and to force the use of backup pins, browsers should only cache a site's pinned public keys if the Public-Key-Pins header contains two or more hashes. At least one of these must correspond to a certificate that is in the site's certificate chain, and at least one must be a backup pin (if a hash cannot be found in the certificate chain, then the browser will assume it is a backup pin without verifying its existence).
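
Put another way, a policy that browsers will actually cache needs at least the following shape, where the second hash belongs to a backup key that does not appear in the served chain (both values below are placeholders):

Public-Key-Pins: pin-sha256="<hash of a key in the served certificate chain>";
    pin-sha256="<hash of an offline backup key>";
    max-age=5184000; includeSubDomains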

https://xcloud.zone is an example of a site that only sets one public key pin:

Public-Key-Pins: pin-sha256="DKvbzsurIZ5t5PvMaiEGfGF8dD2MA7aTUH9dbVtTN28=";
    max-age=2592000; includeSubDomains

This single pin corresponds to the subscriber certificate issued to xcloud.zone. Despite the 30-day max-age value, this lonely public key hash will never be cached by a browser. Consequently, HPKP is not enabled on this site, and the header might as well be missing entirely.

No pins at all

As well as the 1,000+ servers that only have one pinned public key, some HPKP headers neglect to specify any pins at all, and a few try to set values that are not actually hashes (which has the same effect as not setting any pins at all). For example, the Hide My Ass! forum at https://forum.hidemyass.com sets the following:

Public-Key-Pins: pin-sha256="<Subject Public Key Information (SPKI)>";
    max-age=2592000; includeSubDomains

The ProPublica SecureDrop site at https://securedrop.propublica.org also made a subtle mistake last month by forgetting to enclose its pinned public key hashes in double-quotes:

Public-Key-Pins: max-age=86400;
    pin-sha256=rhdxr9/utGWqudj8bNbG3sEcyMYn5wspiI5mZWkHE8A=
    pin-sha256=lT09gPUeQfbYrlxRtpsHrjDblj9Rpz+u7ajfCrg4qDM=

The HPKP RFC mandates that the Base64-encoded public key hashes must be quoted strings, so the above policy would not have worked. ProPublica has since fixed this problem, as well as adding a third pin to the header.

ProPublica is an independent newsroom that produces investigative journalism in the public interest. It provides a SecureDrop site to allow tips or documents to be submitted securely; however, until recently the HPKP policy on this site was ineffectual.

If companies that specialise in online privacy and secure anonymous filesharing are making these kinds of mistake on their own websites, it's not surprising that so many other websites are also getting it wrong.

At least two pins, but no backup pins

A valid HPKP policy must specify at least two pins, and at least one of these must be a backup pin. A browser will assume that a pin corresponds to a backup certificate if none of the certificates in the site's certificate chain correspond to that pin.

The Samba mailing list website fails to include any backup pins. Consequently, its HPKP policy is not enforced.

The Samba mailing lists site at https://lists.samba.org specifies two pinned public key hashes, but both of these appear in its certificate chain. Consequently, a browser will not apply this policy because there is no evidence of a backup pin. HPKP is effectively disabled on this site.

Incidentally, the Let's Encrypt Authority X1 cross-signed intermediate certificate has the most commonly pinned public key in our survey. More than 9% feature this in their set of pins, although it should never be pinned exclusively because Let's Encrypt is not guaranteed to always use their X1 certificate. Topically, just a few days ago, Let's Encrypt started to issue all certificates via its new Let's Encrypt Authority X3 intermediate certificate in order to be compatible with older Windows XP clients; but fortunately, the new X3 certificate uses the same keys as the X1 certificate, and so any site that had pinned the public key of the X1 certificate will continue to be accessible when it renews its subscriber certificate, without having to change its current HPKP policy.

The next most common pin belongs to the COMODO RSA Domain Validation Secure Server CA certificate. This pin is used by more than 6% of servers in our survey, all of which – despite the use of HPKP – could be vulnerable to man-in-the-middle attacks if Comodo were to be hacked again.

Pinning only the public keys of subscriber certificates would offer the best security against these kinds of attack, but it is fairly common to also pin the keys of root and intermediate certificates to reduce the risk of "bricking" a website in the event of a key loss. This approach is very common among Let's Encrypt customers, as the default letsencrypt client software generates a new key pair each time a certificate is renewed. If the public key of the subscriber certificate were to be pinned, the pinning would no longer be valid when it is renewed.

Setting HPKP policies over HTTP

Some sites set HPKP headers over unencrypted HTTP connections, which is also ineffectual. For example, the Internet Storm Center website at www.dshield.org sets the following header on its HTTP site:

Public-Key-Pins: pin-sha256="oBPvhtvElQwtqQAFCzmHX7iaOgvmPfYDRPEMP5zVMBQ=";
    pin-sha256="Ofki57ad70COg0ke3x80cbJ62Tt3c/f3skTimJdpnTw=";
    max-age=2592000; report-uri="https://isc.sans.org/badkey.html"

The Public Key Pinning Extension for HTTP RFC states that browsers must ignore HPKP headers that are received over non-secure transport, and so the above header has no effect other than to consume additional bandwidth.

2.2.2.  HTTP Request Type
  Pinned Hosts SHOULD NOT include the PKP header field in HTTP
  responses conveyed over non-secure transport.  UAs MUST ignore any
  PKP header received in an HTTP response conveyed over non-secure
  transport.

One very good reason for ignoring HPKP policies that are set over unencrypted connections is to prevent "hostile pinning" by man-in-the-middle attackers. If an attacker were to inject a set of pins that the site owner does not control—and if the browser were to blindly cache these values—he would be able to create a junk policy on behalf of that website. This would prevent clients from accessing the site for a long period, without the attacker having to maintain his position as a man-in-the-middle.

If a visitor instead browses to https://www.dshield.org (using HTTPS), an HSTS policy is applied which forces future requests to use HTTPS. The HTTPS site also sets an HPKP header which is then accepted and cached by compatible browsers. However, as the HTTP site does not automatically redirect to the HTTPS site, it is likely that many visitors will never benefit from these HSTS or HPKP polices, even though they are correctly implemented on the HTTPS site.

In another bizarre example, HPKP headers are set by the HTTP site at http://www.msvmgroup.com, even though there is no corresponding HTTPS website (it does accept connections on port 443, but does not present a subscriber certificate that is valid for this hostname).

Not quite got round to it yet...

A few sites that use the Public-Key-Pins header have not quite got around to implementing it yet, such as https://justamagic.ru, which sets the following value:

Public-Key-Pins: TODO

Using HPKP headers to broadcast skepticism

One security company's website – https://websec-test.com – uses the Public-Key-Pins header to express its own skepticisms over the usefulness of HPKP:

Public-Key-Pins: This is like the most useless header I have ever seen.
    Preventing MITM, c'mon, whoever can't trust his own network shouldn't
    enter sensitive data anywhere.

Violation reports that will never be received

The Public-Key-Pins header supports an optional report-uri directive. In the event of a pin validation failure, the user's browser should send a report to this address, in addition to blocking access to the site. These reports are obviously valuable, as they will usually be the first indication that something is wrong.

However, if the report-uri address uses HTTPS and is also a known pinned host, the browser must also carry out pinning checks on this address when the report is sent. This makes it foolish to specify a report-uri that uses the same hostname as the site that is using HPKP.

An example of this configuration blunder can be seen on https://yahvehyireh.com, which sets the following Public-Key-Pins header:

Public-Key-Pins: pin-sha256="y+PfuAS+Dx0OspfM9POCW/HRIqMqsa83jeXaOECu1Ns=";
    pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains; max-age=0;
     report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

This header instructs the browser to send pinning validation failure reports to https://yahvehyireh.com/incoming/hpkp/index.php. However, if there were to be a pinning validation failure on yahvehyireh.com, then the browser would be unable to send any reports because the report-uri itself would also fail the pinning checks by virtue of using the same hostname.

Incidentally, Chrome 46 introduced support for a newer header, Public-Key-Pins-Report-Only, which instructs the browser to perform the same pinning checks as those specified by the Public-Key-Pins header, but it will never block a request when pin validation fails; instead, the browser will send a report to the URL specified by the report-uri directive, and the user will be allowed to continue browsing the site. This mechanism makes it safe for site administrators to test the deployment of HPKP on their sites, without inadvertently introducing a denial of service.
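
Such a report-only policy takes the same form as a normal one (the hashes and reporting endpoint below are placeholders), but it can never lock visitors out:

Public-Key-Pins-Report-Only: pin-sha256="<hash of deployed key>";
    pin-sha256="<hash of backup key>";
    max-age=5184000; report-uri="https://reports.example.com/hpkp"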

Summary

The proportion of secure servers that use HPKP headers is woefully low at only 0.09%, but to make matters worse, many of these few HPKP policies have been implemented incorrectly and do not work as intended.

Without delving into developer settings, browsers offer no visible indications that a site has an invalid HPKP policy, and so it is likely that many website administrators have no idea that their attempts at implementing HPKP have failed. Around a third of the sites that attempt to set an HPKP policy have got it wrong, and consequently behave as if there was no HPKP policy at all. Every response from these servers will include the unnecessary overhead of a header containing a policy that will ultimately be ignored by all browsers.

But there is still hope for the masses: A more viable alternative to HPKP might arise from an Internet-Draft entitled TLS Server Identity Pinning with Tickets. It proposes to extend TLS with opaque tickets, similar to those being used for TLS session resumption, as a way to pin a server's identity. This feature would allow a client to ensure that it is connecting to the right server, even in the presence of a fraudulently issued certificate, but has a significant advantage over HPKP in that no manual management actions would be required. If this draft comes to fruition, and is subsequently implemented by browsers and servers, this ticket-based approach to pinning could potentially see a greater uptake than HPKP has.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

October 2017 Web Server Survey

In the October 2017 survey we received responses from 1,815,237,491 sites and 6,886,362 web-facing computers, reflecting a gain of 10.2 million sites and 88,300 computers.

Web-facing computers: nginx takes second place from Microsoft

nginx made the largest gains in websites, active sites, and web-facing computers this month, as well as increasing its presence among the top million sites. Most notably, the additional 42,100 web-facing computers it gained have taken its total up to 1.55 million computers, putting it ahead of Microsoft for the first time.

Overtaking Microsoft means that nginx is now the second largest server vendor in terms of web-facing computers. With its remarkably consistent growth, nginx is likely to retain this newfound position for some time – not least because Microsoft's web-facing computer share has been on a general decline since 2010.

In the other metrics, nginx gained 18.4 million sites, 941,000 active sites, and slightly increased its share of the top million sites to 29.43%. It stays ranked in 2nd place within the top million sites and active sites, but 3rd in all sites.

While Microsoft's loss of 3,470 web-facing computers helped propel nginx into second place, it made more significant losses in other metrics – it lost 30.1 million sites this month, although this corresponds to a loss of only 85,500 active sites.

New releases

Apache 2.4.29 was released on 23 October. This security, feature and bug fix release represents the latest version of the current 2.4.x branch. As usual, it is recommended over all previous releases, but it is difficult to track how many website administrators take heed of this advice.

For instance, many Apache servers do not reveal via their Server headers or error pages which version has been installed, while others may have been updated with backported patches that do not affect the displayed version number. Consequently, a large number of Apache servers claim to be running older versions than they really are. Only 12.6% of the 341 million sites running Apache claim to be running a 2.4.x release, whereas the true proportion is likely to be much higher, given that almost two-thirds of Apache-using sites do not disclose any version number.

Apache continues to lead the market in terms of active sites and web-facing computers, where it has market shares of 44.5% and 42.3%. It also has the largest presence among the top million sites, with 386,000 of these using Apache.

Another new release this month was LiteSpeed Web Server 5.2.2 (stable), which was released on 17 October. This includes a couple of bug fixes and improves compatibility with the latest version of the popular cPanel web-based control panel.

As well as its commercially supported LiteSpeed Web Server, LiteSpeed Technologies Inc also provides OpenLiteSpeed, which is freely available under the GPL version 3 licence. LiteSpeed is currently the 7th largest vendor in terms of hostnames and active sites: Nearly 11.5 million sites in the survey are powered by LiteSpeed, and 2.7 million (24%) of these are deemed to be active sites.

One of LiteSpeed's most prominent gains was made in November last year, when a large number of hostnames under the .science top-level domain switched to it from Taobao's Tengine web server. This caused LiteSpeed's market share of sites to leap from 0.39% to 3.29%, although it has since settled back down to 0.63%. Nonetheless, this is still noticeably larger than its share of web-facing computers, which currently stands at 0.17%.

nginx 1.13.6 (mainline) and nginx 1.12.2 (stable) were also released in October. Both releases consist solely of bug fixes.

NGINX Unit

Alongside the new releases of nginx, the nginx.org homepage unusually announced the release of a different product this month: NGINX Unit 0.2 Beta.

NGINX Unit is a lightweight dynamic web application server designed to run applications written in Python, PHP, Go, JavaScript, Java and Ruby, although the current beta release does not support all of these languages, nor does it support TLS, routing or proxying yet.

Precompiled binaries are available for CentOS 7 and Ubuntu 16.04, but as it is a beta release, it is not recommended for use in a production environment. Consequently, it is unlikely to have much of a presence on the web in the near future; also, for performance reasons, it is likely that NGINX Unit would be installed behind a regular nginx web server acting as a reverse proxy.

Total number of websites

Web server market share

Developer    September 2017    Percent    October 2017    Percent    Change
Microsoft    927,540,454       51.39%     897,467,517     49.44%     -1.94
Apache       329,105,832       18.23%     340,811,235     18.78%     0.54
nginx        315,530,746       17.48%     333,942,604     18.40%     0.92
Google       20,906,849        1.16%      21,127,078      1.16%      0.01