95% of HTTPS servers vulnerable to trivial MITM attacks

Only 1 in 20 HTTPS servers correctly implements HTTP Strict Transport Security, a widely-supported security feature that prevents visitors from making unencrypted HTTP connections to a server.

The remaining 95% are therefore vulnerable to trivial connection hijacking attacks, which can be exploited to carry out effective phishing, pharming and man-in-the-middle attacks. An attacker can exploit these vulnerabilities whenever a user inadvertently tries to access a secure site via HTTP, and so the attacker does not even need to obtain a valid TLS certificate for the site. Because no crypto-wizardry is required to hijack an HTTP connection, these attacks are far easier to carry out than those that target TLS, such as the recently announced DROWN attack.

Background

The growth of HTTPS has been a mostly positive step in the evolution of the internet, enabling encrypted communications between more users and websites than ever before. Many high profile sites now use HTTPS by default, and millions of TLS certificates are currently in use on the web. With companies like Let's Encrypt offering free certificates and automated management tools, it is also easier than ever to deploy an HTTPS website that will be trusted by all modern browsers.

The primary purpose of a TLS certificate is to allow a browser to verify that it is communicating with the correct website. For example, if https://www.example.com uses a valid TLS certificate, then a man-in-the-middle attacker would not be able to hijack a browser's connection to this site unless he is also able to obtain a valid certificate for that domain.

A man-in-the-middle attack like this is generally not possible if the initial request from the customer uses HTTPS.

It would be extremely difficult for the attacker to obtain a valid certificate for a domain he does not control, and using an invalid certificate would cause the victim's browser to display an appropriate warning message. Consequently, man-in-the-middle attacks against HTTPS services are hard to pull off, and often not very successful. However, there are plenty of realistic opportunities to use the unencrypted HTTP protocol to attack most HTTPS websites.

HTTP Strict Transport Security (HSTS)

Encrypted communications are an essential requirement for banks and other financial websites, but HTTPS alone is not sufficient to defend these sites against man-in-the-middle attacks. Astonishingly, many banking websites lurk amongst the 95% of HTTPS servers that lack a simple feature which would protect them against pharming and man-in-the-middle attacks. This missing feature is HTTP Strict Transport Security (HSTS), and only 1 in 20 secure servers currently make use of it, even though it is supported by practically all modern browsers.

Each secure website that does not implement an HSTS policy can be attacked simply by hijacking an HTTP connection that is destined for it. This is a surprisingly feasible attack vector, as there are many ways in which a user can inadvertently end up connecting via HTTP instead of HTTPS.

Manually typed URLs often result in an initial insecure request, as most users do not explicitly type in the protocol string (http:// or https://). When no protocol is given, the browser will default to HTTP – unless there is an appropriate HSTS policy in force.

To improve accessibility, most secure websites also run an HTTP service to redirect users to the corresponding HTTPS site – but this makes them particularly prone to man-in-the-middle attacks if there is no HSTS policy in force. Not only are many users accustomed to visiting the HTTP site first, but anyone who arrives via an old bookmark or search engine result might also initially access the site via an insecure HTTP address. Whenever this happens, the attacker can hijack the initial HTTP request and prevent the customer from being redirected to the secure HTTPS website.

This type of attack can be automated with the sslstrip tool, which transparently hijacks HTTP traffic on a network and converts HTTPS links and redirects into HTTP. This type of exploit is sometimes described as a protocol downgrade attack, but strictly speaking it is not: rather than downgrading the protocol, it simply prevents the HTTP connection from being upgraded to HTTPS.

NatWest's online banking website at www.nwolb.com lacks an HSTS policy and also offers an HTTP service to redirect its customers to the HTTPS site. This setup is vulnerable to the type of man-in-the-middle attack described above.

Vulnerable sites can be attacked on a massive scale by compromising home routers or DNS servers to point the target hostname at a server that is controlled by the attacker (a so-called "pharming" attack). Some smaller scale attacks can be carried out very easily – for example, if an attacker sets up a rogue Wi-Fi access point to provide internet access to nearby victims, he can easily influence the results of their DNS lookups.

Even if a secure website uses HTTPS exclusively (i.e. with no HTTP service at all), man-in-the-middle attacks are still possible. For example, if a victim manually types www.examplebank.com into his browser's address bar without prefixing it with https://, the browser will attempt to make an unencrypted HTTP connection to http://www.examplebank.com, even if the genuine site does not run an HTTP service. If this hostname has been pharmed, or is otherwise subjected to a man-in-the-middle attack, the attacker can nonetheless hijack the request and either eavesdrop on the connection as it is relayed to the genuine secure site, or serve phishing content directly to the victim.

In short, failing to implement an HSTS policy on a secure website means attackers can carry out man-in-the-middle attacks without having to obtain a valid TLS certificate. Many victims would fall for these attacks, as they can be executed over an unencrypted HTTP connection, thus avoiding any of the browser's tell-tale warnings about invalid certificates.

Implementing HSTS: A simple one-liner

The trivial man-in-the-middle attacks described above can be thwarted by implementing an appropriate HSTS policy. A secure website can do this simply by setting a single HTTP header in its responses:

    Strict-Transport-Security: max-age=31536000;

This header only takes effect when it is received over an HTTPS connection, and it instructs compatible browsers to access the site solely over HTTPS for the next year (31,536,000 seconds). This is the most common max-age value, used by nearly half of all HTTPS servers. Once this HSTS policy has been applied, even if a user manually prefixes the site's hostname with http://, the browser will ignore this and access the site over HTTPS instead.
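As an illustration of how simple this is in practice, the following sketch adds the header to every response in a small Python web application. It assumes the Flask framework and an application that is already served over HTTPS; the max-age value mirrors the example above.

    # A minimal sketch (assuming the Flask framework): attach an HSTS policy
    # to every response. The header is only meaningful when the application
    # is actually served over HTTPS.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts_header(response):
        # Instruct compatible browsers to use HTTPS for this host for one year.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response

Most other web servers and frameworks offer an equivalent one-line mechanism for adding a response header.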

The combination of HSTS and HTTPS therefore provides a good defence against pharming attacks, as the attacker will not be able to redirect and intercept plaintext HTTP traffic when a client obeys the HSTS policy, nor will he be able to present a valid TLS certificate for the site he is impersonating.

The attacker cannot even rely on a small proportion of his victims unwisely ignoring the use of an invalid certificate, as browsers must treat this situation as a hard fail when an HSTS policy is in force. The browser will simply not let the victim access the site if it finds an invalid certificate, nor will it allow an exception to be added.

When Google Chrome encounters an invalid certificate for a site that has an effective HSTS policy, the victim is not allowed to bypass the browser's warning message or add an exception.

To prevent other types of attack, it is also wise to add the includeSubDomains directive to ensure that every possible subdomain of a site is protected by HSTS. This mitigates cookie injection and session fixation attacks that could be executed by impersonating an HTTP site on a non-existent subdomain such as foo.www.example.com, and using it to set a cookie which would be sent to the secure site at https://www.example.com. This directive can be enabled like so:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

However, some thought is required before taking the carte blanche approach of including all subdomains in an HSTS policy. The website's administrators must ensure that every single one of its subdomains supports HTTPS for at least the duration specified by the max-age parameter, otherwise users of these subdomains risk being locked out.

Setting an HSTS policy will also protect first time visitors who habitually use search bars or search engines to reach their destination. For example, typing "paypal" into Google's HTTPS search engine will yield a link to https://www.paypal.com, because Google will always link to the HTTPS version of a website if an appropriate HSTS policy exists.

HSTS preloading

HSTS is clearly an important security feature, but there are several circumstances in which it offers no protection. Because HSTS directives are delivered via an HTTP header (over an HTTPS connection), a site's HSTS policy can only take effect after the browser's first visit to the secure website.

Men-in-the-middle can therefore still carry out attacks against users who have:

  • Never before visited the site.
  • Recently reinstalled their operating system.
  • Recently reinstalled their browser.
  • Switched to a new browser.
  • Switched to a new device (e.g. mobile phone).
  • Deleted their browser's cache.
  • Not visited the site within the past year (or however long the max-age period lasts).

These vulnerabilities can be eliminated by using HSTS Preloading, which ensures that the site's HSTS policy is distributed to supported browsers before the customer's first visit.

Website administrators can use the form at https://hstspreload.appspot.com/ to request that their domains be included in the HSTS preload list maintained by Google. Each site must have a valid certificate, redirect all HTTP traffic to HTTPS, and serve all subdomains over HTTPS. The HSTS header served from each site must specify a max-age of at least 18 weeks (10,886,400 seconds) and include the preload and includeSubDomains directives.
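A policy that satisfies these preload requirements could look like the following, using a one-year max-age that comfortably exceeds the 18-week minimum:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload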

It can take several months for domains to be reviewed and propagated to the latest stable versions of Firefox, Safari, Internet Explorer, Edge and Chrome. When domains are added to the preload list, all users of these browsers will benefit from the security offered by HSTS, even if they have never visited the sites before.

Conclusions

HSTS is widely supported, but not widely implemented. Nearly all modern browsers obey HSTS policies, including Internet Explorer 11, Microsoft Edge, Firefox, Chrome, Safari and Opera – yet less than 5% of secure websites enable this important security feature.

Secure websites that do not use HSTS are trivial to attack if the attacker can hijack a victim's web traffic, yet defeating such attacks by implementing an HSTS policy is even easier. This raises the question of why so few websites are using HSTS.

The HSTS specification (RFC 6797) was published in 2012, and so it can hardly be considered a new technology any more. Nonetheless, many website administrators might still be unaware of its existence, or may not yet feel ready to commit to running an HTTPS-only website. These are probably the most significant reasons for its low uptake.

Some website administrators have even disabled HSTS by explicitly setting a max-age of 0 seconds. This has the effect of switching off any previously established HSTS policies, but this backpedalling can only take proper effect if every client revisits the secure site after the max-age has been set to zero. When a site implements an HSTS policy, it is effectively committed to maintaining its HTTPS service for as long as the largest max-age it has ever specified, otherwise it risks denying access to infrequent visitors. Nearly 4% of all HTTPS servers that use the Strict-Transport-Security header currently set a max-age of zero, including Twitter's t.co URL-shortener.
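For reference, this kind of back-pedalling is done by serving the same header with a zero lifetime, which clears any previously cached policy the next time a browser revisits the site over HTTPS:

    Strict-Transport-Security: max-age=0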

Browser support for HSTS can also introduce some privacy concerns. By initiating requests to several distinct hostnames (some of which enable HSTS), a hostile webpage can establish a "supercookie" to uniquely identify the client browser during subsequent visits, even if the user deletes the browser's conventional cookies. The browser will remember which pattern of hostnames had HSTS enabled, thus allowing the supercookie to persist. However, this privacy concern only affects clients and does not serve as an excuse for websites to avoid implementing their own HSTS policies.

Implementing an HSTS policy is very simple and there are no practical downsides when a site already operates entirely over HTTPS. This makes it even more surprising to see many banks failing to use HSTS, especially on their online banking platforms. This demonstrates poor security practices where it matters the most, as these are likely to be primary targets of pharming attacks.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

March 2017 Web Server Survey

In the March 2017 survey we received responses from 1,760,630,795 sites and 6,271,146 computers, reflecting a loss of 31 million sites, but a gain of 34,000 computers.

nginx was the only major web server vendor to increase its market share in all four metrics this month. Its share of websites grew by 0.49 percentage points, reaching nearly 20%, and its share of web-facing computers grew by 0.39 p.p. to 19.6%. The latter gain was driven by an increase of 31,000 in the number of computers running nginx, which was by far the largest computer growth in the survey.

Microsoft suffered the largest loss in March, falling by 70 million sites and taking its share down by more than 3 percentage points. This drop paved the way for Apache to claw back 0.91 points with its gain of 9.4 million sites. Microsoft's market share of sites now stands just below 40%, though this remains nearly as much as Apache and nginx combined.

Microsoft also suffered the largest and only loss of active sites among the major vendors, reducing its total count of active sites by 421,000. Microsoft's active sites share is now less than 9%, far behind nginx's share of 19.7% and Apache's 45.8%.

nginx was the only major vendor to increase its presence within the top million sites, increasing its count by 1,592, while Apache lost 3,502 and Microsoft lost 746.

Microsoft web servers

Among the 704 million sites that are powered by Microsoft web server software, Windows Server 2008 is still the most commonly used platform. The original version of this operating system shipped with Microsoft IIS 7.0 as its web server, while the subsequent Windows Server 2008 R2 release included IIS 7.5. More than half a billion websites are hosted on Windows Server 2008 computers (including R2), which accounts for 72% of all Windows-hosted websites.

Although its R2 version was released more than 7 years ago, Windows Server 2008 is likely to remain prevalent for several more years. Last year's launch of the Windows Server Premium Assurance program allows customers to extend Windows Server 2008's support period from 10 to 16 years, giving access to security updates (as well as "critical" and "important" bulletins) until January 2026.

A further 185 million sites are still running on Windows Server 2003 computers, which are not covered by the Windows Server Premium Assurance program. The extended support period for Windows Server 2003 ended on July 14, 2015, so unless site operators have a special agreement in place, Microsoft will no longer be issuing security updates for any version of Windows Server 2003. US-CERT warned that these unsupported installations of Windows Server 2003 are exposed to an elevated risk of cybersecurity dangers, such as malicious attacks or electronic data loss.

Microsoft's newest operating system, Windows Server 2016, may still seem in its infancy, but it is now starting to show promising growth. 80,200 sites are now being served from Windows Server 2016 machines, which is nearly 20,000 more than last month. The number of web-facing computers running Windows Server 2016 also grew by 2,509, while Windows Server 2008 lost 2,316.

Windows Server 2016's computer growth was outpaced only by Windows Server 2012, which gained 7,100 computers this month. Windows Server 2012 now accounts for 463,000 web-facing computers, which is nearly half as many as Windows Server 2008, but it is used to host far fewer websites – just 22 million compared with the 535 million sites hosted on Windows Server 2008 computers.

Total number of websites

Web server market share

Developer | February 2017 | Percent | March 2017 | Percent | Change
Microsoft | 773,552,454 | 43.16% | 704,000,530 | 39.99% | -3.18
Apache | 374,297,080 | 20.89% | 383,707,112 | 21.79% | 0.91
nginx | 348,025,788 | 19.42% | 350,540,372 | 19.91% | 0.49
Google | 18,438,702 | 1.03% | 18,849,171 | 1.07% | 0.04

February 2017 Web Server Survey

In the February 2017 survey we received responses from 1,792,104,054 sites and 6,236,791 web-facing computers, reflecting a loss of 7.9 million sites and 91,200 computers.

nginx gains sites and computers

nginx had the largest growth of both sites and web-facing computers amongst the major vendors this month, enjoying a gain of 31 million sites and 13,400 computers, while hefty losses by Microsoft and Apache led to the overall losses seen in this month’s survey. Microsoft lost 48 million sites and 9,900 computers, while Apache lost 13 million sites and 85,700 computers.

Much of the loss of web-facing computers using Apache is the result of declining numbers of Western Digital My Cloud personal storage devices being found in Netcraft's survey. These devices allowed consumers to access their files remotely using public hostnames under the wd2go.com domain. This disappearing act might have been influenced by the three My Cloud firmware updates that were released in December – the first of these changed how files are accessed from the My Cloud web and mobile apps, and the other two resolved a security vulnerability related to remote access.

Despite suffering the largest loss, Microsoft web servers power 43.2% of all sites on the internet, more than twice Apache's share. Meanwhile, nginx's growth has increased its own count to 348 million, bringing it to within striking distance of Apache. This highlights a dramatic change in fortunes for Apache, which was comfortably in first place a year ago, but is now under threat of falling into third place.

In terms of web-facing computers, Apache continues to fare well. While its 3% decline is significant in the space of a month, Apache's 2.7 million computers still give it the lion's share of the market (44.1%). This is followed by Microsoft's 1.5 million computers (24.7%), and nginx's 1.2 million (19.2%).

nginx was also the only major vendor to make a gain within the top million busiest sites. Its share grew slightly to 28.34%, while Apache suffered the largest loss of 0.21 percentage points, taking its share down to 41.41%, though Apache maintained its first-place position with a lead of 13.1 percentage points over nginx.

Apache still strong in active sites

Despite its losses elsewhere, Apache gained 887,000 active sites this month. nginx made the second largest gain, with an increase of 757,000 active sites. The active sites metric is more appropriate for some applications, as it counts websites but excludes those that contain automatically generated content such as domain holding pages.

Apache also has the largest share of this market (45.8%), with its total number of active sites now reaching almost 80 million – comfortably ahead of nginx, which takes up second place with 34 million active sites.

LiteSpeed 5.1.13 addresses DDoS vulnerability

February saw some new releases of the LiteSpeed web server. Most notably, version 5.1.13 was released on 17 February, after some LiteSpeed Enterprise customers reported service disruptions. These were caused by a surge of distributed denial of service (DDoS) attacks that specifically targeted a bug in LiteSpeed servers earlier that day. Rather impressively, it took LiteSpeed less than two hours to identify the heap buffer overflow that was responsible for the problem, push a bug fix build of 5.1.12, and release 5.1.13.

Looking ahead, it is likely that the first release in the 5.2 branch of LiteSpeed will support HTTP/2 Server Push, which could speed up some websites by allowing the server to send resources to clients before the browser has requested them. This feature has already been implemented in the second release candidate (5.2RC2), which was made available on 13 February.

LiteSpeed gained 42 million sites this month as a large number of sites under the .science gTLD reappeared. This did not have a positive impact on its computer count, however, which fell by 666 to 23,240.

Other new releases from web server vendors

Apache 2.2.32 was released on 13 January. This is the latest version in the 2.2 legacy branch, which now enforces a stricter HTTP request grammar, corresponding to RFC 7230 for request lines and request headers. This addresses a security vulnerability (CVE-2016-8743) that might have allowed malicious clients or downstream proxies to carry out response splitting and cache pollution attacks. This release also mitigates the "httpoxy" (CVE-2016-5387) issues that were already addressed in the 2.4 stable branch.

New stable and mainline versions of nginx were also released in the past month. nginx 1.10.3 stable was released on 31 January, followed by nginx 1.11.10 mainline on Valentine's Day. Both versions include several bugfixes, while the mainline release also introduces a few new features.

Meanwhile, documentation for the Microsoft IIS Administration API is now available. This REST API allows IIS instances to be configured with any HTTP client, using tools such as the one available at manage.iis.net. The rationale for providing the API is to have an open and standard interface that can be used from any platform, unlike AppCmd.exe, which can only be run on Windows.

Total number of websites

Web server market share

Developer | January 2017 | Percent | February 2017 | Percent | Change
Microsoft | 821,905,283 | 45.66% | 773,552,454 | 43.16% | -2.50
Apache | 387,211,503 | 21.51% | 374,297,080 | 20.89% | -0.63
nginx | 317,398,317 | 17.63% | 348,025,788 | 19.42% | 1.79
Google | 17,933,762 | 1.00% | 18,438,702 | 1.03% | 0.03

Let’s Encrypt and Comodo issue thousands of certificates for phishing

Certificate Authorities are still issuing tens of thousands of certificates for domain names obviously intended for use in phishing and fraud. Fraudsters are mostly using just two CAs: domain-validated certificates from Let's Encrypt and Comodo accounted for 96% of the phishing sites with a valid TLS certificate found in the first quarter of 2017.

Netcraft has blocked phishing attacks on more than 47,500 sites with a valid TLS certificate between 1st January and 31st March 2017. On 19,700 of these, Netcraft blocked the whole site rather than a specific subdirectory. 61% of the sites that were entirely blocked were using certificates issued by Let's Encrypt, and 36% by Comodo.

While some CAs, browser vendors, and commentators have argued that fraud prevention is not and should not be the role of certificate authorities, the scale of foreseeable misuse that can be combated automatically warrants further consideration of this policy. Without change, issuance of certificates for sites such as login-appleid.com-direct-apple.com and dropbox.com.login.verify.danaharperandfriends.com that are obviously intended for misuse will continue unabated.

Certificates issued by publicly-trusted CAs that have been used on phishing sites. An interactive, updating version of this graph can be found on Netcraft's Phishiest Certificate Authorities page.

Mozilla Firefox's telemetry reports that approximately 55% of all page loads are over HTTPS. The movement to a secure web is crucial to defend against the risks posed by unencrypted traffic, and easy access to trusted certificates is a key factor in the recent growth. However, this easy access also offers opportunities for fraudsters to capitalise on the perception of HTTPS as trustworthy, as demonstrated by the number of certificates issued for clearly deceptive domain names.

Looking at a small sample of these blocked phishing sites with valid TLS certificates that have high Deceptive Domain Scores:

Hostname | Certificate Authority | Target | Deceptive Domain Score
login-appleid.com-direct-apple.com | Let's Encrypt | Apple | 9.25
payepal.com-signin-country-localed.access-logons.com | Let's Encrypt | PayPal | 9.13
www.ll-airbnb.com | Symantec | Airbnb | 8.99
chaseonline.chase.com.bajpayee.com | Comodo | Chase | 10.00
payqal.limited | GoDaddy | PayPal | 9.13
lost-apple.ru | GlobalSign | Apple | 9.30
servicesonline-americanexpress.com | Let's Encrypt | American Express | 7.99
dropbox.com.login.verify.danaharperandfriends.com | Comodo | Dropbox | 9.70
update.wellsfargo.com.casaecologica.cl | Let's Encrypt | Wells Fargo | 10.00
bankofamerica.com.online.do.dbraunss.org | Comodo | Bank of America | 9.40
labanque-postalegroupe.com | Let's Encrypt | La Banque Postale | 7.10
usaa.com.983746.imexcomed.com.bo | Let's Encrypt | USAA | 10.00

In each of these examples above — and in the other statistics referenced above — the certificate authority had sight of the whole hostname that was blocked. These examples did not rely on wildcard certificates to carry out their deception. In particular, some of these examples (such as update.wellsfargo.com.casaecologica.cl) demonstrate that the certificate authority was better placed to prevent misuse than the domain registrar (who would have seen casaecologica.cl upon registration).

Let's Encrypt and Comodo are attractive to fraudsters as both offer automated, domain-validated certificates at no cost to end users. Let's Encrypt's ACME protocol allows for free automated issuance, while Comodo offers no-cost certificates via its trial certificates, cPanel AutoSSL, and its Cloudflare partnership.

While Let's Encrypt's policy on phishing and malware is to check the Safe Browsing API, this does not provide effective pre-issuance blocking: it does not match the reality of automated certificate deployment, where the certificate is likely to be issued and installed before the phishing content has been uploaded, detected, and blocked. Let's Encrypt also maintains a limited list of domain names for which it blocks issuance, which has triggered forum posts by users unable to obtain a certificate for a blocked name. All of the Let's Encrypt certificates that Netcraft found on phishing sites were issued despite the Safe Browsing check and the additional name-based blocking.

Phishing site on https://www.instagram.com-getid.com (Deceptive Domain Score is 9.46).

The use of TLS by these phishing sites is particularly dangerous, as websites that use TLS are marketed as being trustworthy and operated by legitimate organisations. Consumers have been trained to look for padlocks, security indicators, and https:// in the address bar in their browser before submitting sensitive information, such as passwords and credit card numbers, to websites.

However, a displayed padlock or "Secure" indicator alone does not imply that a site using TLS can be trusted, or is operated by a legitimate organisation. The distinction between the connection being "Secure" and the safety of providing sensitive information to the HTTPS site may be challenging to interpret for those unfamiliar with the technical underpinnings of TLS.

"Secure" vs. "Private" in Google Chrome

Demonstrating the difficulty of explaining this technical distinction, Google Chrome indicates that an HTTPS connection is using a valid TLS certificate by displaying the word "Secure" in the address bar. While the word "secure" refers to the encrypted connection's protection against eavesdroppers, this is explained in the drop-down with the word "private". The distinction between these two words is subtle, yet potentially significant for user understanding. However, it is important to note that Google has been at the forefront of research into how security indicators are perceived by internet users at large.

Mozilla Firefox's warning when selecting a password form field on a non-secure HTTP site.

Both Google Chrome and Mozilla Firefox have made recent changes to the display of password input forms on non-TLS sites — non-secure forms now trigger in-context warnings. These warnings are likely to increase the prevalence of TLS on phishing sites, with fraudsters deploying TLS to both gain the positive "Secure" indicator, and now to avoid negative indicators when collecting passwords.

Deceptive Domain Score service

Netcraft's Deceptive Domain Score service provides an automated mechanism for evaluating whether a given hostname or domain name is likely to be used to fraudulently impersonate an organisation. Crucially, this can be evaluated before issuing a certificate. Of the 19,700 hostnames with valid TLS certificates where Netcraft blocked the entire site, 72.5% scored more than 5.0 and 49% scored more than 7.0 (on a scale from 0.0 to 10.0).

Distribution of Deceptive Domain Score across blocked phishing sites with valid TLS certificates.

For comparison, a random sample of 10,000 hostnames taken from domain-validated certificates issued in February 2017 as found in Netcraft's April 2017 SSL survey, had an average score of 0.72, with 7% having a score over 5.0, and 4.4% a score over 7.0.

More information on Netcraft's Deceptive Domain Score service can be found on Netcraft's website.

HTTP Public Key Pinning: You’re doing it wrong!

HTTP Public Key Pinning (HPKP) is a security feature that can prevent fraudulently issued TLS certificates from being used to impersonate existing secure websites.

Our previous article detailed how this technology works, and looked at some of the sites that have dared to use this powerful but risky feature. Notably, very few sites are making use of HPKP: Only 0.09% of the certificates in Netcraft's March 2016 SSL Survey are served with HPKP headers, which equates to fewer than 4,100 certificates in total.

But more surprisingly, around a third of these sites are using the HPKP header incorrectly, which effectively disables HPKP. Consequently, the number of certificates that are actually protected by HPKP is fewer than 3,000.

Firefox's developer console reveals that this site has failed to include a backup pin, and so its HPKP policy is ignored by the browser.
Failing to include a backup pin is the most common type of mistake made by sites that try to use HPKP.

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but it is easy for this protection to backfire with severe consequences. Fortunately, most misconfigurations simply mean that a site's HPKP policy will be ignored by browsers. The site's administrators might not realise it, but this situation is essentially the same as not using HPKP at all.

How can it go wrong?

Our previous article demonstrated a few high-profile websites that were using HPKP to varying degrees. However, plenty of other sites have bungled HPKP to the extent that it simply does not work.

Zero max-age

Every HPKP policy must specify a max-age directive, which determines how long a browser should regard the website as a "Known Pinned Host". The most commonly used max-age value is 5184000 seconds (60 days). Nearly 1,200 servers use this value, while around 900 use 2592000 seconds (30 days).

But around 70 sites feature pointlessly short max-age values, such as 5 or 10 seconds. These durations are far too short to be effective, as a victim's browser would rapidly forget about these known pinned hosts.

Additionally, a few sites explicitly specify a max-age of zero along with their public key pins. These sites are therefore not protected by HPKP, and in some cases are needlessly sending this header in response to every client request. It is possible that they are trying to remove a previously set HPKP policy, but this approach obviously cannot be relied upon to remove cached pins from browsers that do not visit the site in the meantime. These sites would therefore have to continue using a certificate chain that conforms to their previous HPKP policy, or run the risk of locking out a few stragglers.

One of the sites that sets a zero max-age is https://vodsmarket.com. Even if this max-age were to be increased, HPKP would still not be enabled because there is only one pinned public key:

Public-Key-Pins: pin-sha256="sbKjNAOqGTDfcyW1mBsy9IOtS2XS4AE+RJsm+LcR+mU="; max-age=0;

Another example can be seen on https://wondershift.biz, which pins two certificates' public keys. Again, even if the max-age were to be increased, this policy would still not take effect because there are no backup pins specified (both of the pinned keys appear in the site's certificate chain):

Public-Key-Pins: pin-sha256="L7mpy8M0VvQcWm7Yyx1LFK/+Ao280UZkz5U38Qk5G5g=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains;
    max-age=0;
    report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

Wrong pin directives

Each pinned public key must be specified via a separate pin-sha256 directive, and each value must be a SHA256 hash; but more than 1% of servers that try to use HPKP fail to specify these pins correctly.

For example, the Department of Technology at Aichi University of Education exhibits the following header on https://www.auetech.aichi-edu.ac.jp:

Public-Key-Pins: YEnyhAxjrMAeVokI+23XQv1lzV3IBb3zs+BA2EUeLFI=";
    max-age=5184000;
    includeSubDomains

This header appears to include a single public key hash, but it omits the pin-sha256 directive entirely. No browser will make any sense of this attempted policy.

In another example, the Fast Forward Imaging Customer Interface at https://endor.ffwimaging.com does something very peculiar. It uses a pin-sha512 directive, which is not supported by the RFC – but in any case, the value it is set to is clearly not a SHA512 hash:

Public-Key-Pins: pin-sha512="base64+info1="; max-age=31536000; includeSubDomains

Some sites try to use SHA1 public key hashes, which are also unsupported:

Public-Key-Pins: pin-sha1='ewWxG0o6PsfOgu9uOCmZ0znd8h4='; max-age=2592000; includeSubdomains

This one uses pin-sha instead of pin-sha256:

Public-Key-Pins: pin-sha="xZ4wUjthUJ0YMBsdGg/bXHUjpEec5s+tHDNnNtdkwq8=";
    max-age=5184000; includeSubDomains

And this one refers to the algorithm "SHA245", which does not exist:

Public-Key-Pins: pin-sha245="pyCA+ftfVu/P+92tEhZWnVJ4BGO78XWwNhyynshV9C4=";
    max-age=31536000; includeSubDomains

The above example was most likely a typo, as is the following example, which specifies a ping-sha256 value:

Public-Key-Pins: ping-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU=";
    max-age=2592000; includeSubDomains

These are careless mistakes, but it is notable that these types of mistake alone account for more than 1% of all certificates that set the Public-Key-Pins header. The net effect of these mistakes is that HPKP is not enabled on these sites.

Only one pinned public key

As we emphasised in our previous article, it is essential that a secure site should specify at least two public key pins when deploying HPKP. At least one of these should be a backup pin, so that the website can recover from losing control of its deployed certificate. If the website owner still possesses the private key for one of the backup certificates, the site can revert to using one of the other pinned public keys without any browsers refusing to connect.

But 25% of servers that use HPKP specify only one public key pin. This means that HPKP will not be enabled on the sites that use these certificates.

To prevent sites from inadvertently locking out all of their visitors, and to force the use of backup pins, browsers should only cache a site's pinned public keys if the Public-Key-Pins header contains two or more hashes. At least one of these must correspond to a certificate that is in the site's certificate chain, and at least one must be a backup pin (if a hash cannot be found in the certificate chain, then the browser will assume it is a backup pin without verifying its existence).
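For comparison, a well-formed policy supplies at least two pin-sha256 values, one matching a key in the served certificate chain and one acting as a backup. The sketch below follows that shape; the angle-bracketed values are placeholders that must be replaced with real Base64-encoded SHA256 hashes of the relevant public keys.

Public-Key-Pins: pin-sha256="<hash of a public key in the certificate chain>";
    pin-sha256="<hash of a backup public key held in reserve>";
    max-age=5184000; includeSubDomains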

https://xcloud.zone is an example of a site that only sets one public key pin:

Public-Key-Pins: pin-sha256="DKvbzsurIZ5t5PvMaiEGfGF8dD2MA7aTUH9dbVtTN28=";
    max-age=2592000; includeSubDomains

This single pin corresponds to the subscriber certificate issued to xcloud.zone. Despite the 30-day max-age value, this lonely public key hash will never be cached by a browser. Consequently, HPKP is not enabled on this site, and the header might as well be missing entirely.

No pins at all

As well as the 1,000+ servers that only have one pinned public key, some HPKP headers neglect to specify any pins at all, and a few try to set values that are not actually hashes (which has the same effect as not setting any pins at all). For example, the Hide My Ass! forum at https://forum.hidemyass.com sets the following:

Public-Key-Pins: pin-sha256="<Subject Public Key Information (SPKI)>";
    max-age=2592000; includeSubDomains

The ProPublica SecureDrop site at https://securedrop.propublica.org also made a subtle mistake last month by forgetting to enclose its pinned public key hashes in double-quotes:

Public-Key-Pins: max-age=86400;
    pin-sha256=rhdxr9/utGWqudj8bNbG3sEcyMYn5wspiI5mZWkHE8A=
    pin-sha256=lT09gPUeQfbYrlxRtpsHrjDblj9Rpz+u7ajfCrg4qDM=

The HPKP RFC mandates that the Base64-encoded public key hashes must be quoted strings, so the above policy would not have worked. ProPublica has since fixed this problem, as well as adding a third pin to the header.

ProPublica is an independent newsroom that produces investigative journalism in the public interest. It provides a SecureDrop site to allow tips or documents to be submitted securely; however, until recently the HPKP policy on this site was ineffectual.

If companies that specialise in online privacy and secure anonymous filesharing are making these kinds of mistake on their own websites, it's not surprising that so many other websites are also getting it wrong.

At least two pins, but no backup pins

A valid HPKP policy must specify at least two pins, and at least one of these must be a backup pin. A browser will assume that a pin corresponds to a backup certificate if none of the certificates in the site's certificate chain correspond to that pin.

The Samba mailing list website fails to include any backup pins. Consequently, its HPKP policy is not enforced.

The Samba mailing lists site at https://lists.samba.org specifies two pinned public key hashes, but both of these appear in its certificate chain. Consequently, a browser will not apply this policy because there is no evidence of a backup pin. HPKP is effectively disabled on this site.

Incidentally, the Let's Encrypt Authority X1 cross-signed intermediate certificate has the most commonly pinned public key in our survey. More than 9% feature this in their set of pins, although it should never be pinned exclusively because Let's Encrypt is not guaranteed to always use their X1 certificate. Topically, just a few days ago, Let's Encrypt started to issue all certificates via its new Let's Encrypt Authority X3 intermediate certificate in order to be compatible with older Windows XP clients; but fortunately, the new X3 certificate uses the same keys as the X1 certificate, and so any site that had pinned the public key of the X1 certificate will continue to be accessible when it renews its subscriber certificate, without having to change its current HPKP policy.

The next most common pin belongs to the COMODO RSA Domain Validation Secure Server CA certificate. This pin is used by more than 6% of servers in our survey, all of which – despite the use of HPKP – could be vulnerable to man-in-the-middle attacks if Comodo were to be hacked again.

Pinning only the public keys of subscriber certificates would offer the best security against these kinds of attack, but it is fairly common to also pin the keys of root and intermediate certificates to reduce the risk of "bricking" a website in the event of a key loss. This approach is very common among Let's Encrypt customers, as the default letsencrypt client software generates a new key pair each time a certificate is renewed. If the public key of the subscriber certificate were to be pinned, the pinning would no longer be valid when it is renewed.

Setting HPKP policies over HTTP

Some sites set HPKP headers over unencrypted HTTP connections, which is also ineffectual. For example, the Internet Storm Center website at www.dshield.org sets the following header on its HTTP site:

Public-Key-Pins: pin-sha256="oBPvhtvElQwtqQAFCzmHX7iaOgvmPfYDRPEMP5zVMBQ=";
    pin-sha256="Ofki57ad70COg0ke3x80cbJ62Tt3c/f3skTimJdpnTw=";
    max-age=2592000; report-uri="https://isc.sans.org/badkey.html"

The Public Key Pinning Extension for HTTP RFC states that browsers must ignore HPKP headers that are received over non-secure transport, and so the above header has no effect other than to consume additional bandwidth.

2.2.2.  HTTP Request Type
  Pinned Hosts SHOULD NOT include the PKP header field in HTTP
  responses conveyed over non-secure transport.  UAs MUST ignore any
  PKP header received in an HTTP response conveyed over non-secure
  transport.

One very good reason for ignoring HPKP policies that are set over unencrypted connections is to prevent "hostile pinning" by man-in-the-middle attackers. If an attacker were to inject a set of pins that the site owner does not control—and if the browser were to blindly cache these values—he would be able to create a junk policy on behalf of that website. This would prevent clients from accessing the site for a long period, without the attacker having to maintain his position as a man-in-the-middle.

If a visitor instead browses to https://www.dshield.org (using HTTPS), an HSTS policy is applied which forces future requests to use HTTPS. The HTTPS site also sets an HPKP header which is then accepted and cached by compatible browsers. However, as the HTTP site does not automatically redirect to the HTTPS site, it is likely that many visitors will never benefit from these HSTS or HPKP polices, even though they are correctly implemented on the HTTPS site.

In another bizarre example, HPKP headers are set by the HTTP site at http://www.msvmgroup.com, even though there is no corresponding HTTPS website (it does accept connections on port 443, but does not present a subscriber certificate that is valid for this hostname).

Not quite got round to it yet...

A few sites that use the Public-Key-Pins header have not quite got around to implementing it yet, such as https://justamagic.ru, which sets the following value:

Public-Key-Pins: TODO

Using HPKP headers to broadcast skepticism

One security company's website – https://websec-test.com – uses the Public-Key-Pins header to express its own skepticism over the usefulness of HPKP:

Public-Key-Pins: This is like the most useless header I have ever seen.
    Preventing MITM, c'mon, whoever can't trust his own network shouldn't
    enter sensitive data anywhere.

Violation reports that will never be received

The Public-Key-Pins header supports an optional report-uri directive. In the event of a pin validation failure, the user's browser should send a report to this address, in addition to blocking access to the site. These reports are obviously valuable, as they will usually be the first indication that something is wrong.

However, if the report-uri address uses HTTPS and is also a known pinned host, the browser must carry out pinning checks on this address when the report is sent. This makes it foolish to specify a report-uri that uses the same hostname as the site that is using HPKP.

An example of this configuration blunder can be seen on https://yahvehyireh.com, which sets the following Public-Key-Pins header:

Public-Key-Pins: pin-sha256="y+PfuAS+Dx0OspfM9POCW/HRIqMqsa83jeXaOECu1Ns=";
    pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains; max-age=0;
     report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

This header instructs the browser to send pinning validation failure reports to https://yahvehyireh.com/incoming/hpkp/index.php. However, if there were to be a pinning validation failure on yahvehyireh.com, then the browser would be unable to send any reports because the report-uri itself would also fail the pinning checks by virtue of using the same hostname.

Incidentally, Chrome 46 introduced support for a newer header, Public-Key-Pins-Report-Only, which instructs the browser to perform the same pinning checks as those specified by the Public-Key-Pins header, but never to block a request when pin validation fails; instead, the browser sends a report to the URL specified by the report-uri directive, and the user is allowed to continue browsing the site. This mechanism makes it safe for site administrators to test the deployment of HPKP on their sites without inadvertently introducing a denial of service.
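A report-only policy might therefore take the following form. The pin values are placeholders as before, and the reporting endpoint shown here is a hypothetical example, ideally hosted on a different hostname so that reports can still be delivered if pin validation fails on the main site.

Public-Key-Pins-Report-Only: pin-sha256="<hash of a public key in the certificate chain>";
    pin-sha256="<hash of a backup public key>";
    report-uri="https://hpkp-reports.example.com/report"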

Summary

The proportion of secure servers that use HPKP headers is woefully low at only 0.09%, but to make matters worse, many of these few HPKP policies have been implemented incorrectly and do not work as intended.

Without delving into developer settings, browsers offer no visible indications that a site has an invalid HPKP policy, and so it is likely that many website administrators have no idea that their attempts at implementing HPKP have failed. Around a third of the sites that attempt to set an HPKP policy have got it wrong, and consequently behave as if there was no HPKP policy at all. Every response from these servers will include the unnecessary overhead of a header containing a policy that will ultimately be ignored by all browsers.

But there is still hope for the masses: A more viable alternative to HPKP might arise from an Internet-Draft entitled TLS Server Identity Pinning with Tickets. It proposes to extend TLS with opaque tickets, similar to those being used for TLS session resumption, as a way to pin a server's identity. This feature would allow a client to ensure that it is connecting to the right server, even in the presence of a fraudulently issued certificate, but has a significant advantage over HPKP in that no manual management actions would be required. If this draft comes to fruition, and is subsequently implemented by browsers and servers, this ticket-based approach to pinning could potentially see a greater uptake than HPKP has.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

LEGO vs Cybersquatters: The burden of new gTLDs

ICANN's New gTLD Program was developed to increase the amount of choice within the domain name space, and it has been unquestionably successful in that respect. Consumers and businesses alike can now register domains under hundreds of different top-level domains such as .toys, .mortgage, .software, .gifts, .london and so on.

But the launch of so many new gTLDs could be costly for brand owners, who will have to contend with even more "bad faith" registrations by cybersquatters and fraudsters. When a company fails to register its own trademarks — along with many subtle variations of those trademarks — under each new gTLD, there is a risk that someone else will, and these opportunities are often abused to acquire some of the traffic that would otherwise have gone to the brand owner's own websites. Not only does this divert money away from the legitimate brand owner, but it can also be detrimental to its reputation.

LEGO: A bigger brand than Google

LEGO is one of the brands that is most affected by bad faith registrations, as its globally-recognised name makes an attractive target for anyone who wants to piggyback on its success.

toylego.xyz was registered anonymously by a domain squatter last year. It currently shows a monetized domain holding page that has sponsored listings for LEGO-related keywords.

Early this year, LEGO regained its status as the world's most powerful brand, beating the likes of Google, Nike, Ferrari, Visa and Disney. Last year, the privately held LEGO Group increased its revenue to a record high of DKK 37.9 billion (US $5.4 billion), and its operating profit grew to DKK 12.4 billion (US $1.8 billion).

To safeguard its continued success, The LEGO Group is very protective of its trademarks, and actively seeks to prevent any misuse that could lead to confusion as to whether it sponsors or authorizes unofficial or unlicensed websites. In particular, it asserts that the use of a LEGO trademark in a domain name is an infringement of its rights.

The legostar.shop domain is a clear infringement of LEGO's asserted rights. It monetizes its content through advertising banners and Amazon affiliate links, which earn commission of up to 5%.

To deter these infringements, The LEGO Group has a legal notice that asks for Fair Play from customers and competitors alike. This philosophy mirrors the names of its own products: "LEGO" is derived from the Danish words "leg godt", which means "play well".

But of course, a polite request cannot deter all ne'er-do-wells. Many domain squatters are unlikely to take heed of legal notices when they register infringing domain names. Consequently, plenty of infringement does occur, and The LEGO Group has to expend further effort dealing with it.

WIPO to the rescue

The LEGO Group is an avid supporter of the World Intellectual Property Organization (WIPO), which it relies on to settle some of its disputes over infringing domain names. Last year, LEGO was the fourth largest filer of domain name cases, accounting for more than 1.4% of all cases handled by WIPO in 2016.

When a domain name is disputed via WIPO, the costs can vary depending on how many domains are included in the complaint, and how many panellists will be involved in considering the complaint. A dispute over a single domain name with a single panellist costs $1,500, or $4,000 with three panellists. These costs are borne solely by the complainant, while the infringing party stands only to lose the registration fee he paid for the domain.

Speculating before the speculators

With so many new gTLDs available to choose from, domain name speculators have many more opportunities than they did a few years ago. Filing disputes amongst an ever-growing landscape of TLDs could soon become a very costly exercise for brand owners.

To avoid these costs, some brand owners speculatively register their own trademarks before the domain squatters can, even if they have no practical use for them. This prevents the domains being registered by others in bad faith, and works out much cheaper than having to file disputes for each one. Legitimate trademark owners can submit claims for their domains during each new gTLD's sunrise period, before anyone else has the opportunity to register them.

LEGO Juris A/S (which does business as The LEGO Group) is the registrant of more than a hundred domains for just its "lego" string. A few examples of these include lego.world, lego.wtf, lego.video, lego.tv, lego.toys, lego.movie, lego.gift, lego.deals, lego.sucks, and even lego.porn. As long as LEGO holds on to these domains, nobody else will be able to register them. Most of these sites simply display a blank homepage, while a few redirect visitors to LEGO's main website at www.lego.com.

However, not all lego domains belong to LEGO. For example, lego.xyz is currently registered to an individual at an agricultural university in Beijing. The site previously displayed a Wishloop domain holding page, which suggested that the owner might have eventually tried to monetize it through conversions, but now the domain name does not resolve in DNS. However, the domain is still registered, and it is not clear why LEGO has not yet acted on this or many other infringing domains – perhaps it is not worth the cost or effort until an infringing site becomes popular enough to cause measurable damage.

Last year, both lego.photo and lego.pics were registered to an individual in Pennsylvania, and the latter domain was used to host a WordPress blog. Rather than being taken over by LEGO Juris A/S, both domain registrations expired and are purportedly now available for registration.

New gTLDs increase the size of the cybersquatter's playground

Speculatively registering domains before they are registered in bad faith by domain squatters can be effective in some cases, but this approach rapidly becomes less practical and too expensive when there are multiple trademarks to protect.

The LEGO Group produces its plastic construction toys under a variety of trademarked themes, such as Dimensions, Ninjago, Chima, Mixels and Mindstorms – plus several licensed brands such as Star Wars and Angry Birds. These provide even more opportunities for cybersquatters to register deceptive domain names.

LEGO owns more than 4,000 unique domains that serve websites, and many of these typify the type of strings that might be registered by domain squatters. These include thelego.movie, legominecraftsets.com, lego-star-wars.net, lego-starwars.eu, lego-starwarsshop.com, lego-starwars.de, citylego.com and more. Each of these sites serves nothing more than a blank webpage, which implies that LEGO only owns them so that others cannot. A few domains, such as www-lego.com and wwwlego.com are configured to redirect visitors to LEGO's main website at www.lego.com.

But it is clearly not feasible to defensively register all possible permutations of LEGO's brands, particularly now there are also hundreds of new gTLDs under which such domains can be registered. This situation makes the domain name dispute process seem almost unavoidable; and indeed, the total number of disputes handled by WIPO during 2016 rose by 10%.

Deciding who a domain name should belong to

When a domain name dispute is handled by the WIPO Arbitration and Mediation Center, the panel considers many factors when deciding whether the domain should be transferred to the complainant. The process is largely transparent, with the procedural history and reasons behind each decision being published on wipo.int.

Take lego-starwars.xyz as an example, which was handled in case D2015-1217. The infringing domain was registered by an individual in the United States, but she did not respond at any point during the dispute proceedings, and thus failed to show that she had any rights or legitimate interests in the disputed domain name.

Prior to filing the dispute, LEGO had attempted the much cheaper option of sending a cease-and-desist letter to the respondent, and proposed to compensate her for the expense of registering the disputed domain name; but this letter was also ignored. This contributed to the panel's decision that the domain had been registered in bad faith.

LEGO requested that the panel order the disputed domain name to be transferred, on the grounds that it is a combination of the LEGO trademark and the licensed trademark STARWARS, and that the respondent had no rights or legitimate interests in it. Although the disputed domain did not serve any content when the complaint was considered by the panel, LEGO claimed it had previously been connected to a website containing sponsored links to various online shops where LEGO products were sold.

Amongst its findings, the panel pointed out that the use of the .xyz gTLD is not relevant when assessing whether a domain is identical or confusingly similar to a trademark. This means that if the respondent had also registered the same string under dozens of other gTLDs, those domains could likewise have been transferred via WIPO's service.

Less than two months after the dispute was filed, the administrative panel ordered the lego-starwars.xyz domain to be transferred to LEGO. It has now joined LEGO's collection of websites that display nothing more than a blank page.

But many infringing domains still get away with it...

WIPO's arbitration and mediation process for domain name disputes seems effective, albeit a slow and expensive option when there are lots of infringing domains to deal with. This could explain why the LEGO Group does not take swift action against every site that tries to monetize its brand without permission.

Take playlego.xyz as an example. This domain was registered anonymously in 2015, via a WHOIS privacy service, and was used to display a selection of LEGO products sold on Amazon. These used Amazon affiliate links, so that when a visitor clicked through and subsequently bought one of the items from Amazon, the site's operator would have netted a small percentage of the sale. For the cost of a .xyz domain (which can be as little as $0.88 for a whole year), the operator could have recouped his outlay, and more, with a single sale.
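
The economics behind such a site are straightforward, as the rough calculation below shows. The commission rate and average order value are assumptions for illustration only, not figures taken from the site itself.

```python
# Assumed figures for illustration only.
domain_cost = 0.88        # first-year price of a cheap .xyz registration (USD)
commission_rate = 0.03    # assumed affiliate commission rate on toys (~3%)
average_order = 60.00     # assumed average price of a referred LEGO purchase (USD)

earnings_per_sale = commission_rate * average_order
sales_to_break_even = domain_cost / earnings_per_sale

print(f"~${earnings_per_sale:.2f} earned per referred sale; "
      f"break-even after {sales_to_break_even:.1f} sales")
```

On these assumptions, a single referred sale more than covers the cost of the domain.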

Screenshot of playlego.xyz. This domain registration has since expired.

Bad faith registrations are also capitalising on the success of The LEGO Batman Movie, which was released in February. For instance, the following domain purportedly offers the chance to stream or download the full movie for free. This is clearly dubious and not recommended.

This .xyz domain (which contains both "lego" and "batman" in its name) has clearly been registered in bad faith, as it offers free access to a pirated copy of The LEGO Batman Movie.

Nearly 7% of the domains disputed in WIPO cases last year were under the .xyz top-level domain, making it the most problematic new gTLD in terms of bad faith registrations. Nonetheless, the majority of filed disputes still concern .com domains. This is possibly because .com is still the most recognised top-level domain, and so more people are likely to end up visiting these sites as a result of typo-traffic.

But preventing bad faith registrations is arguably not always in the interests of a domain registrar, as even after a domain has expired, it can still be monetized by the registrar. As an example, thelego.science expired in March after being registered for two years. It still serves a website, which now displays a set of LEGO-related links that lead to sponsored ads paid for by various LEGO toy retailers.

thelego.science has expired, but still displays monetized search links.

Some of the infringing domain names contain high-value search keywords, which are likely to generate more money through contextual advertising. For example, the domain name lego10179.com might look like a strange choice to some, but it refers to the 5-digit set number of one of LEGO's most expensive and sought-after sets, 10179: The Ultimate Collector's Millennium Falcon. This massive 5,197-part Star Wars set retailed at $500 before it was discontinued seven years ago, but an unopened box can easily fetch several thousand dollars today.

Another very specific example is lego4184-piratesofthecaribbeanblackpearl.com, which refers to set 4184. This LEGO model ship is based on the Black Pearl from the Pirates of the Caribbean film series. The set was discontinued in 2012, but it already commands a high price on the aftermarket. This likely explains the existence of such peculiar infringing domain names, and it's also no wonder that some people consider LEGO to be a better investment than gold. To prevent misuse, the lego4184-piratesofthecaribbeanblackpearl.com domain is now registered to LEGO Juris A/S.

An eBay listing for the Black Pearl, which had an original RRP of £84.99.

Dozens of domains that contain the numbers of expensive LEGO sets, such as lego10188.com, lego10210.com and lego8043.com, are now registered to LEGO Juris A/S after previously being registered to other parties.

Other costs of gTLDs

The plethora of new gTLDs has unarguably increased the size of the cybersquatter's playground, but ICANN's new gTLD program has also drawn more than $100 million directly from brand owners who have applied for their own Brand TLDs. Around a third of all new gTLD applications are brand applications, and many of these brand owners will also have to fork out additional money to manage the application process and to pay for backend registry services.

The LEGO Group applied for its own .lego Brand TLD in 2012, in order to gain exclusive control over all .lego websites. As well as allowing it to ban cybersquatters from its own TLD, operating a Brand TLD registry has another obvious benefit: shorter, more memorable internet addresses. However, the LEGO Group does not appear to be using the .lego TLD for any of its websites yet.

Another common motivation for owning a Brand TLD is to mitigate phishing attacks, as fraudulent sites will not be able to directly leverage the trust instilled by the brand's own TLD. But remarkably, phishing attacks against LEGO's customers are practically unheard of, even though it is the world's most powerful brand, and stores payment details and loyalty credit on its online store at shop.lego.com.

LEGO's application for the .lego Brand TLD passed Initial Evaluation in 2013, and was eventually delegated in June 2016. Rather than operating the .lego gTLD itself, LEGO has opted to use Verisign as its backend registry services provider. Since the launch of ICANN's new gTLD program, more than 150 other brands have also engaged Verisign to apply for and manage their new gTLDs. Verisign is well known for its management of the .com and .net generic TLDs, which has no doubt helped to make it a popular choice as a gTLD operator.

Abandoned new gTLDs

Whether or not LEGO ends up making good use of its new gTLD has yet to be seen, but it appears that at least two brand owners have had a change of heart over having their own TLDs. The South Korean conglomerate Doosan initiated the termination of its Registry Agreement for .doosan in September 2015, and the global engineering company FLSmidth – which is headquartered in the same country as LEGO – did the same for .flsmidth in February 2016. Both of these new gTLDs made it to the point where they were successfully delegated to the internet's root zone, which suggests that the owners had already spent hundreds of thousands of dollars before deciding to abandon them.

Detecting infringements

Netcraft's Fraud Detection service can be used to find domains and content that infringe a company's rights. The service also monitors app stores, social media sites, sponsored search engine results and DMARC reports to detect additional infringements. The results of all these searches are made available via a web interface, together with detailed site information (hosting locations, registration details, etc.), and are reviewed and classified into categories such as owned by company, suspicious, benign (e.g. a mention on a news or personal site), unavailable, or phishing.
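
At its simplest, this kind of detection amounts to scanning newly observed hostnames for a protected brand string and filtering out the domains the brand owner already controls, before passing the rest to a human reviewer. The sketch below is a generic illustration of that idea and not Netcraft's implementation; the hostname feed, the allow-list and the helper names are all placeholders.

```python
# Generic brand-infringement triage sketch; all inputs are placeholders.
newly_seen_hostnames = [
    "shop.lego.com",               # legitimate
    "playlego.xyz",                # the affiliate site discussed above
    "watch-lego-batman.example",   # hypothetical stand-in for a bad faith registration
    "allegory.example",            # contains "lego" only by coincidence
]

owned_domains = {"lego.com", "lego.world", "lego.tv"}  # placeholder allow-list

def registrable_domain(hostname):
    """Crude approximation of the registrable domain (last two labels);
    a real system would consult the Public Suffix List instead."""
    return ".".join(hostname.lower().split(".")[-2:])

def needs_review(hostname, brand="lego"):
    """Flag hostnames containing the brand string that are not on the allow-list."""
    return brand in hostname.lower() and registrable_domain(hostname) not in owned_domains

for host in newly_seen_hostnames:
    if needs_review(host):
        print(f"{host}: flag for review")
```

False positives such as allegory.example are unavoidable with simple substring matching, which is why flagged results still need to be reviewed and classified into the categories described above.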