95% of HTTPS servers vulnerable to trivial MITM attacks

Only 1 in 20 HTTPS servers correctly implements HTTP Strict Transport Security, a widely-supported security feature that prevents visitors making unencrypted HTTP connections to a server.

The remaining 95% are therefore vulnerable to trivial connection hijacking attacks, which can be exploited to carry out effective phishing, pharming and man-in-the-middle attacks. An attacker can exploit these vulnerabilities whenever a user inadvertently tries to access a secure site via HTTP, and so the attacker does not even need to spoof a valid TLS certificate. Because no crypto-wizardry is required to hijack an HTTP connection, these attacks are far easier to carry out than those that target TLS, such as the recently announced DROWN attack.

Background

The growth of HTTPS has been a mostly positive step in the evolution of the internet, enabling encrypted communications between more users and websites than ever before. Many high profile sites now use HTTPS by default, and millions of TLS certificates are currently in use on the web. With companies like Let's Encrypt offering free certificates and automated management tools, it is also easier than ever to deploy an HTTPS website that will be trusted by all modern browsers.

The primary purpose of a TLS certificate is to allow a browser to verify that it is communicating with the correct website. For example, if https://www.example.com uses a valid TLS certificate, then a man-in-the-middle attacker would not be able to hijack a browser's connection to this site unless he is also able to obtain a valid certificate for that domain.

A man-in-the-middle attack like this is generally not possible if the initial request from the customer uses HTTPS.

It would be extremely difficult for the attacker to obtain a valid certificate for a domain he does not control, and using an invalid certificate would cause the victim's browser to display an appropriate warning message. Consequently, man-in-the-middle attacks against HTTPS services are hard to pull off, and often not very successful. However, there are plenty of realistic opportunities to use the unencrypted HTTP protocol to attack most HTTPS websites.

HTTP Strict Transport Security (HSTS)

Encrypted communications are an essential requirement for banks and other financial websites, but HTTPS alone is not sufficient to defend these sites against man-in-the-middle attacks. Astonishingly, many banking websites lurk amongst the 95% of HTTPS servers that lack a simple feature which would protect them against pharming and man-in-the-middle attacks. This missing feature is HTTP Strict Transport Security (HSTS), and only 1 in 20 secure servers currently makes use of it, even though it is supported by practically all modern browsers.

Each secure website that does not implement an HSTS policy can be attacked simply by hijacking an HTTP connection that is destined for it. This is a surprisingly feasible attack vector, as there are many ways in which a user can inadvertently end up connecting via HTTP instead of HTTPS.

Manually typed URLs often result in an initial insecure request, as most users do not explicitly type in the protocol string (http:// or https://). When no protocol is given, the browser will default to HTTP – unless there is an appropriate HSTS policy in force.

To improve accessibility, most secure websites also run an HTTP service to redirect users to the corresponding HTTPS site – but this makes them particularly prone to man-in-the-middle attacks if there is no HSTS policy in force. Not only would many users be accustomed to visiting the HTTP site first, but anyone else who visits the site via an old bookmark or search engine result might also initially access the site via an insecure HTTP address. Whenever this happens, the attacker can hijack the initial HTTP request and prevent the customer being redirected to the secure HTTPS website.
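
For illustration, the redirect issued by such an HTTP service – the very response an attacker can suppress or tamper with – typically looks like this (using the reserved example.com domain):

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/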

This type of attack can be automated with the sslstrip tool, which transparently hijacks HTTP traffic on a network and converts HTTPS links and redirects into HTTP. This type of exploit is sometimes regarded as a protocol downgrade attack, but strictly speaking, it is not: rather than downgrading the protocol, it simply prevents the HTTP protocol being upgraded to HTTPS.

NatWest's online banking website at www.nwolb.com lacks an HSTS policy and also offers an HTTP service to redirect its customers to the HTTPS site. This setup is vulnerable to the type of man-in-the-middle attack described above.

Vulnerable sites can be attacked on a massive scale by compromising home routers or DNS servers to point the target hostname at a server that is controlled by the attacker (a so-called "pharming" attack). Some smaller scale attacks can be carried out very easily – for example, if an attacker sets up a rogue Wi-Fi access point to provide internet access to nearby victims, he can easily influence the results of their DNS lookups.

Even if a secure website uses HTTPS exclusively (i.e. with no HTTP service at all), then man-in-the-middle attacks are still possible. For example, if a victim manually types www.examplebank.com into his browser's address bar—without prefixing it with https://—the browser will attempt to make an unencrypted HTTP connection to http://www.examplebank.com, even if the genuine site does not run an HTTP service. If this hostname has been pharmed, or is otherwise subjected to a man-in-the-middle attack, the attacker can hijack the request nonetheless and eavesdrop the connection as it is relayed to the genuine secure site, or serve phishing content directly to the victim.

In short, failing to implement an HSTS policy on a secure website means attackers can carry out man-in-the-middle attacks without having to obtain a valid TLS certificate. Many victims would fall for these attacks, as they can be executed over an unencrypted HTTP connection, thus avoiding any of the browser's tell-tale warnings about invalid certificates.

Implementing HSTS: A simple one-liner

The trivial man-in-the-middle attacks described above can be thwarted by implementing an appropriate HSTS policy. A secure website can do this simply by setting a single HTTP header in its responses:

    Strict-Transport-Security: max-age=31536000;

This header is delivered over an HTTPS connection (browsers ignore it if received over plain HTTP), and instructs compatible browsers to access the site only over HTTPS for the next year (31,536,000 seconds = 1 year). This is the most common max-age value, used by nearly half of the HTTPS servers that set an HSTS policy. Once this HSTS policy has been applied, even if a user manually prefixes the site's hostname with http://, the browser will ignore this and access the site over HTTPS instead.
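
How the header is emitted depends on the web server. As a minimal sketch, the following directives would add it on Apache httpd (with mod_headers enabled) and on nginx respectively; adapt them to the site's own configuration:

    # Apache httpd (requires mod_headers)
    Header always set Strict-Transport-Security "max-age=31536000"

    # nginx
    add_header Strict-Transport-Security "max-age=31536000" always;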

The combination of HSTS and HTTPS therefore provides a good defence against pharming attacks, as the attacker will not be able to redirect and intercept plaintext HTTP traffic when a client obeys the HSTS policy, nor will he be able to present a valid TLS certificate for the site he is impersonating.

The attacker cannot even rely on a small proportion of his victims unwisely ignoring the use of an invalid certificate, as browsers must treat this situation as a hard failure when an HSTS policy is in force. The browser will simply not let the victim access the site if it finds an invalid certificate, nor will it allow an exception to be added.

When Google Chrome encounters an invalid certificate for a site that has an effective HSTS policy, the victim is not allowed to bypass the browser's warning message or add an exception.

To prevent other types of attack, it is also wise to add the includeSubDomains directive to ensure that every possible subdomain of a site is protected by HSTS. This mitigates cookie injection and session fixation attacks that could be executed by impersonating an HTTP site on a non-existent subdomain such as foo.www.example.com, and using it to set a cookie which would be sent to the secure site at https://www.example.com. This directive can be enabled like so:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

However, some thought is required before taking the carte blanche approach of including all subdomains in an HSTS policy. The website's administrators must ensure that every single one of its subdomains supports HTTPS for at least the duration specified by the max-age parameter, otherwise users of these subdomains risk being locked out.

Setting an HSTS policy will also protect first time visitors who habitually use search bars or search engines to reach their destination. For example, typing "paypal" into Google's HTTPS search engine will yield a link to https://www.paypal.com, because Google will always link to the HTTPS version of a website if an appropriate HSTS policy exists.

HSTS preloading

HSTS is clearly an important security feature, but there are several circumstances under which it offers no protection. Because HSTS directives are delivered via an HTTP header (over an HTTPS connection), a browser only learns of a site's HSTS policy after its first visit to the secure website.

Men-in-the-middle can therefore still carry out attacks against users who have:

  • Never before visited the site.
  • Recently reinstalled their operating system.
  • Recently reinstalled their browser.
  • Switched to a new browser.
  • Switched to a new device (e.g. mobile phone).
  • Deleted their browser's cache.
  • Not visited the site within the past year (or however long the max-age period lasts).

These vulnerabilities can be eliminated by using HSTS Preloading, which ensures that the site's HSTS policy is distributed to supported browsers before the customer's first visit.

Website administrators can use the form at https://hstspreload.appspot.com/ to request that their domains be included in the HSTS preload list maintained by Google. Each site must have a valid certificate, redirect all HTTP traffic to HTTPS, and serve all subdomains over HTTPS. The HSTS header served from each site must specify a max-age of at least 18 weeks (10,886,400 seconds) and include both the preload and includeSubDomains directives.
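
A policy that satisfies these preloading requirements would therefore look something like this:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload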

It can take several months for domains to be reviewed and propagated to the latest stable versions of Firefox, Safari, Internet Explorer, Edge and Chrome. When domains are added to the preload list, all users of these browsers will benefit from the security offered by HSTS, even if they have never visited the sites before.

Conclusions

HSTS is widely supported, but not widely implemented. Nearly all modern browsers obey HSTS policies, including Internet Explorer 11, Microsoft Edge, Firefox, Chrome, Safari and Opera – yet less than 5% of secure websites enable this important security feature.

Secure websites that do not use HSTS are trivial to attack if the attacker can hijack a victim's web traffic, but it is even easier to defeat such attacks by implementing an HSTS policy. This raises the question of why so few websites are using HSTS.

The HSTS specification (RFC 6797) was published in 2012, and so it can hardly be considered a new technology any more. Nonetheless, many website administrators might still be unaware of its existence, or may not yet feel ready to commit to running an HTTPS-only website. These are probably the most significant reasons for its low uptake.

Some website administrators have even disabled HSTS by explicitly setting a max-age of 0 seconds. This has the effect of switching off any previously established HSTS policies, but this backpedalling can only take proper effect if every client revisits the secure site after the max-age has been set to zero. When a site implements an HSTS policy, it is effectively committed to maintaining its HTTPS service for as long as the largest max-age it has ever specified, otherwise it risks denying access to infrequent visitors. Nearly 4% of all HTTPS servers that use the Strict-Transport-Security header currently set a max-age of zero, including Twitter's t.co URL-shortener.
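
In header form, this backpedalling is simply:

    Strict-Transport-Security: max-age=0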

Browser support for HSTS can also introduce some privacy concerns. By initiating requests to several distinct hostnames (some of which enable HSTS), a hostile webpage can establish a "supercookie" to uniquely identify the client browser during subsequent visits, even if the user deletes the browser's conventional cookies. The browser will remember which pattern of hostnames had HSTS enabled, thus allowing the supercookie to persist. However, this privacy concern only affects clients and does not serve as an excuse for websites to avoid implementing their own HSTS policies.

Implementing an HSTS policy is very simple and there are no practical downsides when a site already operates entirely over HTTPS. This makes it even more surprising to see many banks failing to use HSTS, especially on their online banking platforms. This demonstrates poor security practices where it matters the most, as these are likely to be primary targets of pharming attacks.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

April 2018 Web Server Survey

In the April 2018 survey we received responses from 1,783,239,123 sites across 214,513,048 unique domain names and 7,387,066 web-facing computers. This reflects a gain of 12.8 million sites and 53,500 computers, but a loss of 261,000 domains.

Microsoft dominated this month's hostname growth, with 25.1 million additional hostnames bringing its leading market share up by 1.15 percentage points to 36.9%. Meanwhile, Apache lost 8.2 million sites and nginx lost 5.7 million.

Microsoft fared less well in most other metrics, however. Despite its large increase in hostnames, Microsoft's domain count fell by 1.4 million, and it also suffered a loss of 5,360 web-facing computers and 51,300 active sites. Nonetheless, its presence within the top million sites grew by 517 sites.

nginx may have lost 5.7 million hostnames, but it showed the strongest growth in some of the most important metrics. This included a gain of 46,700 web-facing computers, 3.8 million domains, and an additional 4,280 sites in the top million. The noticeable uptick in nginx-powered domains this month has increased its market share of domains by 1.81 percentage points to 22.5%, leaving it only 3.5 points behind Microsoft. nginx has demonstrated fairly consistent domain growth since this metric was introduced in 2009, and if these trends continue, it could feasibly take second place from Microsoft within a year.

Apache suffered losses in every metric this month, including a loss of 3.0 million domains and 1.1 million active sites, along with 2,840 sites within the top million. Nonetheless, it maintains a comfortable lead in every metric except hostnames, where its 25.6% market share is 11.4 points behind Microsoft's.

Some of the highest-traffic sites using Apache today include news website www.bbc.com; financial sites like www.xe.com and www.paypal.com; the Steam online gaming store at store.steampowered.com and its community forum at steamcommunity.com; and sites used by ad networks, like ads.pubmatic.com and c.betrad.com.

Apache Tomcat – the hidden backend

More than 450 million websites are currently using the Apache HTTP server, but this is not the only web server product offered by the Apache Software Foundation. The Apache Tomcat project provides an open source implementation of Java Servlet and JSP technologies, but its deployment is hard to quantify.

Tomcat is often used as a backend application server, with the Apache Tomcat Connectors project connecting it to other web-facing servers like Apache and Microsoft IIS. In many of these cases, Tomcat cannot be detected passively, although it may be possible to confirm its use during a web application security test – for example, by tricking the application into returning a Java stack trace.

Tomcat also includes its own native HTTP connector that allows it to be used as a standalone HTTP server, and these servers can be passively identified from their "Apache Tomcat" server headers. However, this is not a commonly used configuration: Only 10,300 websites exhibited the Apache Tomcat server header this month, and only 35 of these sites were ranked within the top million.
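
As an illustration, a standalone Tomcat instance of this kind can often be spotted from its Server response header alone; a quick check with curl (against a hypothetical host, with illustrative output) might look like this:

    curl -sI https://tomcat.example.com/ | grep -i '^Server:'
    # e.g. Server: Apache Tomcat/4.1.31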

Several different versions of Apache Tomcat are available, depending on which version of Java needs to be supported. Surprisingly, most Tomcat servers that are exposed directly to the internet are running Apache Tomcat 4.1.x, which has not been supported for several years. Actively maintained versions include 9.x, 8.5.x, 8.0.x and 7.x, although support for 8.0.x will end on 30 June 2018. The most recent versions of Apache Tomcat are 8.5.30 and 9.0.7, which were both released on 7 April.

Other new releases

The mainline branch of nginx has seen three new releases since last month's survey. nginx 1.13.10 was released on 20 March 2018, and added a few new features including the ngx_http_grpc_module module, which allows requests to be passed to a gRPC server. nginx 1.13.11 was subsequently released on 3 April, followed by nginx 1.13.12 on 10 April. These releases include a few bug fixes and an improved proxy protocol feature.

nginx also announced the release of njs 0.2.0 on 3 April. njs implements a subset of the JavaScript language, allowing location and variable handlers to be used in nginx's ngx_http_js_module and ngx_stream_js_module modules.

OpenLiteSpeed 1.4.31 (stable) and 1.5.0 RC3 were released on 11 April 2018. This open source server cannot be distinguished from the commercially available LiteSpeed Web Server, as both products use the same "LiteSpeed" server header. More than 12.5 million sites exhibit this header, across 13,600 web-facing computers.

Tengine 1.4.2

Nearly 28 million websites are using Taobao's nginx-based Tengine web server, but 74% are still running a version that was released several years ago, despite later releases including not just new features, but also security fixes. The most extensive user of Tengine 1.4.2 – which was released in November 2012 – is the Chinese cloud computing infrastructure service provider Aiyun Network.

Uptake of new Tengine releases is generally slow across the internet. The latest version, Tengine 2.2.2, was released on 26 January 2018, but only 262 sites are currently using it. Most of these sites are hosted by Internet Vision in Lithuania, while handfuls of other early adopters are hosted on low-cost cloud hosting platforms provided by Aliyun, DigitalOcean and Linode.

The poor uptake of newer releases could be partly caused by their lack of visibility on the Tengine website at tengine.taobao.org. The latest version that can be downloaded from the News section on the homepage is the 2.2.0 development version that was released in December 2016, followed by the 2.1.2 stable version from December 2015. Download links for the much-newer 2.2.1 and 2.2.2 releases can only be found on a separate download page.

cloudflare-nginx still lingers

Cloudflare's migration to its new cloudflare server header is not yet over, with more than 10,000 websites still using the old cloudflare-nginx header. These account for less than 0.07% of all Cloudflare sites in the survey, so the migration is very close to completion.

Cloudflare recently increased the size of its European network to 41 cities, expanding its global network to 151 cities across 74 countries. Its highest data centre is 2.6 km above sea level in the city of Bogotá, Colombia.

Total number of websites

Web server market share

Developer    March 2018     Percent   April 2018     Percent   Change
Microsoft    633,719,941    35.80%    658,800,756    36.94%    1.15
Apache       464,340,535    26.23%    456,169,336    25.58%    -0.65
nginx        409,124,174    23.11%    403,381,961    22.62%    -0.49
Google       21,802,670     1.23%     22,460,562     1.26%     0.03

March 2018 Web Server Survey

In the March 2018 survey we received responses from 1,770,411,187 sites across 214,774,438 unique domain names and 7,333,606 web-facing computers. This reflects a gain of 43,000 computers, and 738,000 additional domains. The total number of hostnames fell by 68.2 million, and the number of active sites fell by 3.4 million.

Domain growth this month was shared between nginx and Microsoft; nginx increased its market share by 0.15 percentage points by gaining 482,000 domains, while Microsoft gained slightly fewer with 413,000. Market leader Apache lost 568,000 domains and continues to lose market share; its share now stands at 37.6%.

Apache also lost out in the number of web-facing computers: 31,000 fewer computers were seen running its software in March, leading to a drop in market share of 0.67 percentage points. Microsoft also lost computers, seeing 100 fewer, while nginx continues to gain both computers and market share. Nearly 1.8 million web-facing computers were seen running nginx in March, giving it 24.5% of the market.

New Releases

The Apache Software Foundation announced the release of Apache 2.4.33 on 23 March; this is the first release announced since Apache 2.4.29 in October 2017. The skipped version numbers were never publicly released due to issues which came to light during the release process. The 2.4.x branch is recommended over all previous releases.

OpenLiteSpeed, the open source software behind the commercial LiteSpeed HTTP server, received releases to both its stable and latest branches in February: version 1.4.30 was released on 14 February and version 1.5.0 RC2 on 15 February. Both releases fix the same set of issues.

lighttpd version 1.4.49 was released on 11 March, adding basic support for the HTTP CONNECT method along with several bug fixes. lighttpd was seen serving sites on 500,000 unique domains in the March survey, from 21,000 web-facing computers.

Total number of websites

Web server market share

Developer    February 2018   Percent   March 2018     Percent   Change
Microsoft    634,359,419     34.50%    633,719,941    35.80%    1.29
Apache       504,701,560     27.45%    464,340,535    26.23%    -1.22
nginx        447,224,456     24.32%    409,124,174    23.11%    -1.22
Google       22,022,633      1.20%     21,802,670     1.23%     0.03

Google’s POODLE affects oodles

97% of SSL web servers are likely to be vulnerable to POODLE, a vulnerability that can be exploited in version 3 of the SSL protocol. POODLE, in common with BEAST, allows a man-in-the-middle attacker to extract secrets from SSL sessions by forcing the victim's browser into making many thousands of similar requests. As a result of the fallback behaviour in all major browsers, connections to web servers that support both SSL 3 and more modern versions of the protocol are also at risk.

The Secure Sockets Layer (SSL) protocol is used by millions of websites to protect confidential data in transit across the internet using strong cryptography. The protocol was designed by Netscape in the mid 1990s and was first released to the public as SSL 2 in February 1995. It was quickly replaced by SSL 3 in 1996 after serious security flaws were discovered. SSL 3 was replaced by the IETF-defined Transport Layer Security (TLS) version 1.0 in January 1999 with relatively few changes. Since TLS 1's release, TLS 1.1 and TLS 1.2 have succeeded it and should be used in its place wherever possible.

POODLE's bark may be worse than its bite

Unlike Heartbleed, POODLE can be used to attack client-server connections and is inherent to the protocol itself, rather than any one implementation such as OpenSSL or Microsoft's SChannel. In order to exploit it, an attacker must modify the victim's network traffic, know how the targeted secret information is structured (such as where a session cookie appears) and be able to force the victim into making a large number of requests.

Each SSL connection is split up into a number of chunks, known as SSL records. When a block cipher such as Triple DES is used in CBC mode, each plaintext block is combined with the previous ciphertext block, and the record is then padded to a whole number of blocks (8 bytes per block in the case of Triple DES). An attacker with network access can carefully manipulate the ordering of the cipher-blocks within a record to influence the decryption and exploit the padding oracle. If the attacker has been lucky (there is a 1 in 256 chance), she will have matched the correct value for the padding length in her manipulated record and correctly guessed the value of a single byte of the secret. This can be repeated to reveal the entire targeted secret.

SSL 3's padding is particularly easy to exploit as it relies on a single byte at the end of the padding: the padding length. Consequently, an attacker needs to force the victim to make only around 256×n requests to reveal n bytes of secret. TLS 1.0 changed this padding mechanism, requiring the padding bytes themselves to have a specific value, which makes the attack far less likely to succeed.

The POODLE vulnerability makes session hijacking attacks against web applications reasonably feasible for a correctly-positioned attacker. For example, a typical 32-byte session cookie could be retrieved after eavesdropping just over 8,000 HTTPS requests made using SSL 3 (on average 256 requests per byte × 32 bytes = 8,192 requests). This could be achieved by tricking the victim into visiting a specially crafted web page which uses JavaScript to send the necessary requests.

Use of SSL v3

Within the top 1,000 SSL sites, SSL 3 remained very widely supported yesterday, with 97% of SSL sites accepting an SSL 3 handshake. CitiBank and Bank of America both support SSL 3 exclusively and presumably are vulnerable.

A number of SSL sites have already reacted to this vulnerability by disabling support for SSL 3, including CloudFlare and LinkedIn. On Tuesday 14th, the most common configuration within the top 1,000 SSL sites was to support SSL 3.0 all the way through to TLS 1.2, with almost two-thirds of popular sites taking this approach. One day later, this remains the most popular configuration; however, TLS 1.0 is now the minimum version for 11%.

Microsoft Internet Explorer 6 does not enable TLS 1.0 or greater by default and may be the most notable victim of disabling SSL 3 internet-wide. Now 13 years old, IE6 was the default browser in Windows XP (released in 2001) and Windows Server 2003, and will remain supported on Windows Server 2003 until July 2015. Despite its age and the end of Microsoft's support for Windows XP, IE6 remains popular, accounting for more than 3.8% of web visits worldwide, and 12.5% in China. This vulnerability may ring the death knell for IE6 and Windows XP.

However, unless SSL 3 is completely disabled on the server side, a client supporting SSL 3 may still be vulnerable even if the server supports more recent versions of TLS. An attacker can take advantage of browser fallback behaviour to force otherwise secure connections to use SSL 3 in place of TLS version 1 or above.

SSL version negotiation

At the start of an SSL connection, servers and clients mutually agree upon a version of SSL/TLS to use for the remainder of the connection. The client's first message to the server includes its maximum supported version of the protocol; the server then compares the client's maximum version against its own to pick the highest mutually supported version.

While this mechanism protects against version downgrade attacks in theory, most browsers have an additional fallback mechanism that retries a connection attempt with successively lower version numbers until it succeeds in negotiating a connection or reaches the lowest acceptable version. This additional fallback mechanism has proven necessary for practical interoperability with some TLS servers and corporate man-in-the-middle devices which, rather than gracefully downgrading when presented with an unsupported version of TLS, instead terminate the connection prematurely.

An attacker with appropriate network access can exploit this behaviour to force a TLS connection to be downgraded by forging Handshake Alert messages. The browser will take the Handshake Alert message as a signal that the remote server (or some intermediate device) has version negotiation bugs and the browser will retry the connection with a lower maximum version in the initial Client Hello message.

Operation of a forced downgrade to SSL 3 against a modern browser.

The fallback mechanism was previously not a security issue, as it never results in the use of a protocol version that is not supported by both the client and the server. However, clients that have not yet been updated to disable support for SSL 3 are relying on the server to have disabled SSL 3. What remains is a chicken-and-egg problem: modern clients retain support for SSL 3 for the sake of legacy servers, and modern servers retain support for SSL 3 for legacy clients.

There is, however, a proposed solution in the form of an indicator (an SCSV) included in the fallback connection, which informs compatible servers that the connection is a fallback so that they can reject it unless a fallback was expected. Google Chrome and Google's websites already support this SCSV indicator.


Browser      Fallback sequence
Firefox 32   TLS 1.2, TLS 1.1, TLS 1.0, SSL 3.0
Chrome 40    TLS 1.2 (x 3), TLS 1.1, TLS 1.0, SSL 3.0
IE 11        TLS 1.2, TLS 1.0, SSL 3.0
Opera 25     TLS 1.2 (x 3), TLS 1.1, TLS 1.0, SSL 3.0
Safari 7.1   TLS 1.2, TLS 1.0, SSL 3.0

Comparison of browser fallback behaviour

We tested five major browsers with an attack based on the forged Handshake Alert method outlined above, and found that each browser has a variant of this fallback behaviour. Both Chrome and Opera try TLS 1.2 three times before downgrading the maximum supported version, whereas the rest begin downgrading immediately. Curiously, Internet Explorer and Safari both skip TLS 1.1 and jump straight from TLS 1.2 to TLS 1.0.

Mitigation

Mitigation can take many forms: the fallback SCSV, disabling SSL 3 fallback in browsers, disabling SSL 3 on the client side, disabling SSL 3 on the server side, and disabling CBC cipher suites in SSL 3. Each solution has its own problems, but the current trend is to disable SSL 3 entirely.

Disabling only the CBC cipher suites in SSL 3 leaves system administrators with a dilemma: RC4 is the only other practical choice, and it has its fair share of problems, making it an undesirable alternative. The SCSV requires support from both clients and servers, so it may take some time before it is widely deployed enough to mitigate this vulnerability; it is also unlikely ever to be applied to legacy browsers such as IE 6.

Apache httpd can be configured to disable SSL 3 as follows:

    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2 -SSLv2 -SSLv3

Microsoft IIS and nginx can also be configured to avoid negotiating SSL version 3.
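
On nginx, for example, a minimal sketch of the equivalent setting (placed in the appropriate http or server block) would be:

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;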

Firefox can be configured to disable support for SSL 3 by altering security.tls.version.min from 0 (SSL 3) to 1 (TLS 1) in about:config.

Internet Explorer can also be configured to disable SSL 3 support using the Advanced tab in the Internet Options dialogue (found in the Control Panel). In a similar way, IE 6 users can also enable support for TLS 1.0.

Chrome can be configured to not use SSL 3 using a command line flag, --ssl-version-min=tls1.

Site Report

You can check which SSL sites are still using SSL 3 by looking them up with the Netcraft Site Report.

May 2018 Web Server Survey

In the May 2018 survey we received responses from 1,584,940,345 sites, 217,875,435 unique domains, and 7,452,628 web-facing computers. This reflects a loss of 198 million sites, but a gain of 3.36 million domains and 65,600 web-facing computers.

nginx saw moderate growth this month, gaining 1.17 million unique domains. This has increased its market share of domains by 0.19 percentage points, even though it lost 44.2 million sites.

The nginx ecosystem has continued to evolve over the past month, with some notable software releases:

  • nginx 1.14.0 was released on 17 April, adding HTTP/2 server push support to the nginx stable stream (the recommended release stream for production web servers). HTTP/2 server push can improve the performance of some websites by pre-emptively pushing assets to browser clients, avoiding the need for the client to explicitly make additional GET requests for assets on a page such as images, stylesheets and JavaScript (a configuration sketch follows this list).
  • NGINX Unit 1.0, the first production-ready release, was made available on 12 April, ending the product's beta period. NGINX Unit is a web application server that can serve sandboxed Go, Perl, PHP, Python and Ruby applications on the same server. It is unusual in allowing dynamic reloading and remote configuration via a REST API rather than individual configuration files. A bugfix release, NGINX Unit 1.1, was subsequently made available on 26 April.
  • MySQL monitoring support was added to NGINX Amplify on 23 April. This commercial SaaS monitoring product from NGINX Inc. could increase the appeal of transitioning MySQL and PHP applications from Apache-based stacks to nginx-based ones.
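
As a rough sketch of what server push looks like in nginx configuration (the asset paths below are hypothetical), a site might pre-emptively push its stylesheet and script alongside the page that references them:

    location / {
        # Push these assets to the client before it explicitly requests them
        http2_push /css/style.css;
        http2_push /js/app.js;
    }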

Despite losing 59.7 million sites this month, Apache still powers sites on the largest number of unique domains. Apache is also running on 36% of the world's top 1 million websites – 12 percentage points ahead of its closest competitor, nginx.

There was also a 21% reduction in the number of websites running Microsoft web server software this month, with the majority of these losses (77%) coming from the hosting provider Raksmart, which lost 107 million of these sites. Despite this, the hosting provider gained 49,600 domains that point to Microsoft web servers. Many of the lost websites featured automatically generated content, and so were not counted as active sites.

Total number of websites

Web server market share

Developer    April 2018     Percent   May 2018       Percent   Change
Microsoft    658,800,756    36.94%    518,826,524    32.73%    -4.21
Apache       456,169,336    25.58%    396,463,723    25.01%    -0.57
nginx        403,381,961    22.62%    359,163,599    22.66%    0.04
Google       22,460,562     1.26%     22,427,752     1.42%     0.16

HTTP Public Key Pinning: You’re doing it wrong!

HTTP Public Key Pinning (HPKP) is a security feature that can prevent fraudulently issued TLS certificates from being used to impersonate existing secure websites.

Our previous article detailed how this technology works, and looked at some of the sites that have dared to use this powerful but risky feature. Notably, very few sites are making use of HPKP: Only 0.09% of the certificates in Netcraft's March 2016 SSL Survey are served with HPKP headers, which equates to fewer than 4,100 certificates in total.

But more surprisingly, around a third of these sites are using the HPKP header incorrectly, which effectively disables HPKP. Consequently, fewer than 3,000 certificates are actually protected by a working HPKP policy.

Firefox's developer console reveals that this site has failed to include a backup pin, and so its HPKP policy is ignored by the browser.
Failing to include a backup pin is the most common type of mistake made by sites that try to use HPKP.

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but it is easy for this protection to backfire with severe consequences. Fortunately, most misconfigurations simply mean that a site's HPKP policy will be ignored by browsers. The site's administrators might not realise it, but this situation is essentially the same as not using HPKP at all.

How can it go wrong?

Our previous article demonstrated a few high-profile websites that were using HPKP to varying degrees. However, plenty of other sites have bungled HPKP to the extent that it simply does not work.

Zero max-age

Every HPKP policy must include a max-age directive, which indicates how long a browser should regard the website as a "Known Pinned Host". The most commonly used max-age value is 5184000 seconds (60 days). Nearly 1,200 servers use this value, while around 900 use 2592000 seconds (30 days).

But around 70 sites feature pointlessly short max-age values, such as 5 or 10 seconds. These durations are far too short to be effective, as a victim's browser would rapidly forget about these known pinned hosts.

Additionally, a few sites explicitly specify a max-age of zero along with their public key pins. These sites are therefore not protected by HPKP, and in some cases are needlessly sending this header with every response. It is possible that they are desperately trying to remove a previously set HPKP policy, but this approach obviously cannot be relied upon to remove cached pins from browsers that do not revisit the site in the meantime. These sites would therefore have to continue using a certificate chain that conforms to their previous HPKP policy, or run the risk of locking out a few stragglers.

One of the sites that sets a zero max-age is https://vodsmarket.com. Even if this max-age were to be increased, HPKP would still not be enabled because there is only one pinned public key:

Public-Key-Pins: pin-sha256="sbKjNAOqGTDfcyW1mBsy9IOtS2XS4AE+RJsm+LcR+mU="; max-age=0;

Another example can be seen on https://wondershift.biz, which pins two certificates' public keys. Again, even if the max-age were to be increased, this policy would still not take effect because there are no backup pins specified (both of the pinned keys appear in the site's certificate chain):

Public-Key-Pins: pin-sha256="L7mpy8M0VvQcWm7Yyx1LFK/+Ao280UZkz5U38Qk5G5g=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains;
    max-age=0;
    report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

Wrong pin directives

Each pinned public key must be specified via a separate pin-sha256 directive, and each value must be a SHA256 hash; but more than 1% of servers that try to use HPKP fail to specify these pins correctly.

For example, the Department of Technology at Aichi University of Education exhibits the following header on https://www.auetech.aichi-edu.ac.jp:

Public-Key-Pins: YEnyhAxjrMAeVokI+23XQv1lzV3IBb3zs+BA2EUeLFI=";
    max-age=5184000;
    includeSubDomains

This header appears to include a single public key hash, but it omits the pin-sha256 directive entirely. No browser will make any sense of this attempted policy.

In another example, the Fast Forward Imaging Customer Interface at https://endor.ffwimaging.com does something very peculiar. It uses a pin-sha512 directive, which is not supported by the RFC – but in any case, the value it is set to is clearly not a SHA512 hash:

Public-Key-Pins: pin-sha512="base64+info1="; max-age=31536000; includeSubDomains

Some sites try to use SHA1 public key hashes, which are also unsupported:

Public-Key-Pins: pin-sha1='ewWxG0o6PsfOgu9uOCmZ0znd8h4='; max-age=2592000; includeSubdomains

This one uses pin-sha instead of pin-sha256:

Public-Key-Pins: pin-sha="xZ4wUjthUJ0YMBsdGg/bXHUjpEec5s+tHDNnNtdkwq8=";
    max-age=5184000; includeSubDomains

And this one refers to the algorithm "SHA245", which does not exist:

Public-Key-Pins: pin-sha245="pyCA+ftfVu/P+92tEhZWnVJ4BGO78XWwNhyynshV9C4=";
    max-age=31536000; includeSubDomains

The above example was most likely a typo, as is the following example, which specifies a ping-sha256 value:

Public-Key-Pins: ping-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU=";
    max-age=2592000; includeSubDomains

These are careless mistakes, but it is notable that these types of mistake alone account for more than 1% of all certificates that set the Public-Key-Pins header. The net effect of these mistakes is that HPKP is not enabled on these sites.

Only one pinned public key

As we emphasised in our previous article, it is essential that a secure site should specify at least two public key pins when deploying HPKP. At least one of these should be a backup pin, so that the website can recover from losing control of its deployed certificate. If the website owner still possesses the private key for one of the backup certificates, the site can revert to using one of the other pinned public keys without any browsers refusing to connect.

But 25% of servers that use HPKP specify only one public key pin. This means that HPKP will not be enabled on the sites that use these certificates.

To prevent sites from inadvertently locking out all of their visitors, and to force the use of backup pins, browsers should only cache a site's pinned public keys if the Public-Key-Pins header contains two or more hashes. At least one of these must correspond to a certificate that is in the site's certificate chain, and at least one must be a backup pin (if a hash cannot be found in the certificate chain, then the browser will assume it is a backup pin without verifying its existence).

https://xcloud.zone is an example of a site that only sets one public key pin:

Public-Key-Pins: pin-sha256="DKvbzsurIZ5t5PvMaiEGfGF8dD2MA7aTUH9dbVtTN28=";
    max-age=2592000; includeSubDomains

This single pin corresponds to the subscriber certificate issued to xcloud.zone. Despite the 30-day max-age value, this lonely public key hash will never be cached by a browser. Consequently, HPKP is not enabled on this site, and the header might as well be missing entirely.

No pins at all

As well as the 1,000+ servers that only have one pinned public key, some HPKP headers neglect to specify any pins at all, and a few try to set values that are not actually hashes (which has the same effect as not setting any pins at all). For example, the Hide My Ass! forum at https://forum.hidemyass.com sets the following:

Public-Key-Pins: pin-sha256="<Subject Public Key Information (SPKI)>";
    max-age=2592000; includeSubDomains

The ProPublica SecureDrop site at https://securedrop.propublica.org also made a subtle mistake last month by forgetting to enclose its pinned public key hashes in double-quotes:

Public-Key-Pins: max-age=86400;
    pin-sha256=rhdxr9/utGWqudj8bNbG3sEcyMYn5wspiI5mZWkHE8A=
    pin-sha256=lT09gPUeQfbYrlxRtpsHrjDblj9Rpz+u7ajfCrg4qDM=

The HPKP RFC mandates that the Base64-encoded public key hashes must be quoted strings, so the above policy would not have worked. ProPublica has since fixed this problem, as well as adding a third pin to the header.

ProPublica is an independent newsroom that produces investigative journalism in the public interest. It provides a SecureDrop site to allow tips or documents to be submitted securely; however, until recently the HPKP policy on this site was ineffectual.

If companies that specialise in online privacy and secure anonymous filesharing are making these kinds of mistake on their own websites, it's not surprising that so many other websites are also getting it wrong.

At least two pins, but no backup pins

A valid HPKP policy must specify at least two pins, and at least one of these must be a backup pin. A browser will assume that a pin corresponds to a backup certificate if none of the certificates in the site's certificate chain correspond to that pin.

The Samba mailing list website fails to include any backup pins. Consequently, its HPKP policy is not enforced.

The Samba mailing lists site at https://lists.samba.org specifies two pinned public key hashes, but both of these appear in its certificate chain. Consequently, a browser will not apply this policy because there is no evidence of a backup pin. HPKP is effectively disabled on this site.

Incidentally, the Let's Encrypt Authority X1 cross-signed intermediate certificate has the most commonly pinned public key in our survey: more than 9% of servers using HPKP feature it in their set of pins, although it should never be pinned exclusively because Let's Encrypt is not guaranteed to always use its X1 certificate. Topically, just a few days ago, Let's Encrypt started issuing all certificates via its new Let's Encrypt Authority X3 intermediate certificate in order to be compatible with older Windows XP clients. Fortunately, the new X3 certificate uses the same keys as the X1 certificate, so any site that had pinned the public key of the X1 certificate will continue to be accessible when it renews its subscriber certificate, without having to change its current HPKP policy.

The next most common pin belongs to the COMODO RSA Domain Validation Secure Server CA certificate. This pin is used by more than 6% of servers in our survey, all of which – despite the use of HPKP – could be vulnerable to man-in-the-middle attacks if Comodo were to be hacked again.

Pinning only the public keys of subscriber certificates would offer the best security against these kinds of attack, but it is fairly common to also pin the keys of root and intermediate certificates to reduce the risk of "bricking" a website in the event of a key loss. This approach is very common among Let's Encrypt customers, as the default letsencrypt client software generates a new key pair each time a certificate is renewed. If the public key of the subscriber certificate were to be pinned, the pinning would no longer be valid when it is renewed.
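
For reference, a pin-sha256 value is simply the Base64-encoded SHA-256 digest of a certificate's Subject Public Key Info, which can be computed from a PEM-encoded certificate with standard OpenSSL tooling (certificate.pem below is a placeholder filename):

    openssl x509 -in certificate.pem -pubkey -noout |
        openssl pkey -pubin -outform DER |
        openssl dgst -sha256 -binary | base64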

Setting HPKP policies over HTTP

Some sites set HPKP headers over unencrypted HTTP connections, which is also ineffectual. For example, the Internet Storm Center website at www.dshield.org sets the following header on its HTTP site:

Public-Key-Pins: pin-sha256="oBPvhtvElQwtqQAFCzmHX7iaOgvmPfYDRPEMP5zVMBQ=";
    pin-sha256="Ofki57ad70COg0ke3x80cbJ62Tt3c/f3skTimJdpnTw=";
    max-age=2592000; report-uri="https://isc.sans.org/badkey.html"

The Public Key Pinning Extension for HTTP RFC states that browsers must ignore HPKP headers that are received over non-secure transport, and so the above header has no effect other than to consume additional bandwidth.

2.2.2.  HTTP Request Type
  Pinned Hosts SHOULD NOT include the PKP header field in HTTP
  responses conveyed over non-secure transport.  UAs MUST ignore any
  PKP header received in an HTTP response conveyed over non-secure
  transport.

One very good reason for ignoring HPKP policies that are set over unencrypted connections is to prevent "hostile pinning" by man-in-the-middle attackers. If an attacker were to inject a set of pins that the site owner does not control—and if the browser were to blindly cache these values—he would be able to create a junk policy on behalf of that website. This would prevent clients from accessing the site for a long period, without the attacker having to maintain his position as a man-in-the-middle.

If a visitor instead browses to https://www.dshield.org (using HTTPS), an HSTS policy is applied which forces future requests to use HTTPS. The HTTPS site also sets an HPKP header which is then accepted and cached by compatible browsers. However, as the HTTP site does not automatically redirect to the HTTPS site, it is likely that many visitors will never benefit from these HSTS or HPKP polices, even though they are correctly implemented on the HTTPS site.

In another bizarre example, HPKP headers are set by the HTTP site at http://www.msvmgroup.com, even though there is no corresponding HTTPS website (it does accept connections on port 443, but does not present a subscriber certificate that is valid for this hostname).

Not quite got round to it yet...

A few sites that use the Public-Key-Pins header have not quite got around to implementing it yet, such as https://justamagic.ru, which sets the following value:

Public-Key-Pins: TODO

Using HPKP headers to broadcast skepticism

One security company's website – https://websec-test.com – uses the Public-Key-Pins header to express its own skepticisms over the usefulness of HPKP:

Public-Key-Pins: This is like the most useless header I have ever seen.
    Preventing MITM, c'mon, whoever can't trust his own network shouldn't
    enter sensitive data anywhere.

Violation reports that will never be received

The Public-Key-Pins header supports an optional report-uri directive. In the event of a pin validation failure, the user's browser should send a report to this address, in addition to blocking access to the site. These reports are obviously valuable, as they will usually be the first indication that something is wrong.

However, if the report-uri address uses HTTPS and is also a known pinned host, the browser must also carry out pinning checks on this address when the report is sent. This makes it foolish to specify a report-uri that uses the same hostname as the site that is using HPKP.

An example of this configuration blunder can be seen on https://yahvehyireh.com, which sets the following Public-Key-Pins header:

Public-Key-Pins: pin-sha256="y+PfuAS+Dx0OspfM9POCW/HRIqMqsa83jeXaOECu1Ns=";
    pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains; max-age=0;
     report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

This header instructs the browser to send pinning validation failure reports to https://yahvehyireh.com/incoming/hpkp/index.php. However, if there were to be a pinning validation failure on yahvehyireh.com, then the browser would be unable to send any reports because the report-uri itself would also fail the pinning checks by virtue of using the same hostname.

Incidentally, Chrome 46 introduced support for a newer header, Public-Key-Pins-Report-Only, which instructs the browser to perform the same pinning checks as those specified by the Public-Key-Pins header, but never to block a request when pin validation fails; instead, the browser sends a report to the URL specified by the report-uri directive, and the user is allowed to continue browsing the site. This mechanism makes it safe for site administrators to test the deployment of HPKP on their sites without inadvertently introducing a denial of service.

Summary

The proportion of secure servers that use HPKP headers is woefully low at only 0.09%, but to make matters worse, many of these few HPKP policies have been implemented incorrectly and do not work as intended.

Browsers offer no visible indication that a site has an invalid HPKP policy (short of delving into developer settings), and so it is likely that many website administrators have no idea that their attempts at implementing HPKP have failed. Around a third of the sites that attempt to set an HPKP policy have got it wrong, and consequently behave as if there were no HPKP policy at all. Every response from these servers will include the unnecessary overhead of a header containing a policy that will ultimately be ignored by all browsers.

But there is still hope for the masses: A more viable alternative to HPKP might arise from an Internet-Draft entitled TLS Server Identity Pinning with Tickets. It proposes to extend TLS with opaque tickets, similar to those being used for TLS session resumption, as a way to pin a server's identity. This feature would allow a client to ensure that it is connecting to the right server, even in the presence of a fraudulently issued certificate, but has a significant advantage over HPKP in that no manual management actions would be required. If this draft comes to fruition, and is subsequently implemented by browsers and servers, this ticket-based approach to pinning could potentially see a greater uptake than HPKP has.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.