Secure websites shun HTTP Public Key Pinning

The HTTP Public Key Pinning header, or HPKP, can prevent fraudsters from using mis-issued TLS certificates. While it offers a robust defence against website impersonation, hardly any HTTPS websites are actually making use of this powerful security feature, even though it has been supported by some browsers for more than a year.

Less than 0.1% of certificates found in Netcraft's March 2016 SSL Survey were served with the HPKP header. Where it has been deployed, a third of webmasters have mistakenly set a broken HPKP policy. With so many mistakes being made, the barrier to entry is evidently high.

Even for those webmasters who have set a valid policy, a lot of ongoing care and attention is required: both routine and emergency maintenance poses a significant risk of blocking legitimate visitors, potentially for long periods of time. However, when correctly deployed and carefully maintained, HPKP is a powerful security feature.

What does HPKP defend against?

A website can defend against most man-in-the-middle attacks by deploying HTTPS, HSTS and HSTS preloading. Together, these ensure all communication to and from the website is authenticated and encrypted.

While these provide a fairly robust defence against attacks like pharming and sslstrip, there is still a line of attack open. A knowledgeable and dedicated attacker can still attack an otherwise well-defended HTTPS website if he can convince a certificate authority to fraudulently issue him a certificate for it.

HPKP can prevent a customer from accessing a spoof website when it uses a fraudulently-issued (but otherwise valid) certificate. Both sites use certificates issued by trusted CAs, but only the legitimate site's certificate chain contains a certificate that is expected by the client browser. Hashes that correspond to each pinned certificate will have been noted by the browser during a previous visit to the legitimate site.

Although it is extremely difficult for a fraudster to obtain a certificate for a domain he does not control, it is not impossible. In fact, there is ample precedent. Several certificate authorities have been breached, lax issuance policies have been discovered, and technical flaws have been exploited by attackers.

The HPKP header is motivated by the history of mis-issuance within this ecosystem. To use HPKP, website owners select a set of public keys that must appear in future connections to the site. Once a client has visited the site, it stores the HPKP policy and rejects future connections to servers that use different, non-whitelisted keys.

However, creating an HPKP policy is not sufficient on its own to defend against impersonation attacks. In particular, HPKP cannot defend against rogue root certificates installed locally on users' computers.

Both Dell and Lenovo have recently been caught deploying local root certificates to their customers, along with accompanying private keys. With this knowledge, an attacker can generate a certificate for any website and use it to impersonate that site. The victim's browser will regard the certificate as valid, regardless of the genuine site's HPKP policy.

How is HPKP used?

There are three types of key that can be pinned using HPKP:

  • The current public key of the certificate issued to a site.
  • Public keys corresponding to certificate authorities and their intermediate certificates.
  • Backup keys.

In order for browsers to accept and store a website's HPKP policy, there must be at least two pins specified. At least one pin must be in the chain of trust formed by the browser when verifying the site's certificate, and there must be at least one pin that is not in the chain (a backup pin).

Here is an example of a valid HPKP header, which sets pins for three distinct public keys (the base64 hash values below are placeholders). This policy is valid for one year and applies to all subdomains of the current origin:

  Public-Key-Pins: max-age=31536000; includeSubDomains; pin-sha256="<base64-hash-1>"; pin-sha256="<base64-hash-2>"; pin-sha256="<base64-hash-3>"
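These acceptance rules can be sketched in a few lines of Python (a simplified illustration: the function names and regular expressions are our own, and real browsers additionally check each pin against the served certificate chain, which is omitted here because it requires the actual certificates):

```python
import re

def parse_hpkp(header: str):
    """Split a Public-Key-Pins header value into its parts."""
    pins = re.findall(r'pin-sha256="([^"]+)"', header)
    match = re.search(r'max-age\s*=\s*(\d+)', header)
    max_age = int(match.group(1)) if match else None
    include_subdomains = 'includeSubDomains' in header
    return max_age, pins, include_subdomains

def is_storable(header: str) -> bool:
    """A browser only stores a policy that carries a max-age and at
    least two distinct pins (one of which must be a backup pin that is
    absent from the served chain -- a check omitted in this sketch)."""
    max_age, pins, _ = parse_hpkp(header)
    return max_age is not None and len(set(pins)) >= 2
```

For example, `is_storable('max-age=31536000; pin-sha256="aaaa="')` returns False, because a policy with a single pin must be rejected.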

Webmasters must be cautious when pinning certificate authority keys. CAs may change their issuance practices without notice, and new certificates may not use the same chain of trust as the old ones. If the new certificate chain no longer includes the pinned keys, the website will not be accessible until the HPKP policy expires.

To avoid the problems posed by using certificate authority keys, webmasters can elect to pin their own keys. This is also a risky practice if the backup key cannot be used: it may have been lost, or may no longer qualify for inclusion in certificates (for example, if a backup key is known to be a Debian weak key, CAs will not accept it for use in new certificates).

"Who dares pins"

HPKP is perfectly safe to implement when pins and certificates are well-managed, but it can also be considered rather risky when you think about what could go wrong: A small mistake could effectively wipe out an online business by preventing its own customers from accessing its website for months. Here are some of the most popular sites that are brave enough to be using HPKP today:


GitHub is the busiest site to have deployed HPKP. Well-known for taking security seriously, it sets a plethora of well-configured best-practice security headers.

One of the headers set when visiting the site is the following HPKP header:

Public-Key-Pins: max-age=300; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="JbQbUG5JMJUoI6brnx0x3vZF6jilxsapbXGVfjhN8Fg="; includeSubDomains

This HPKP policy specifies two pins, and the directive applies to all subdomains.

The first pinned key (identified by the WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18= SHA-256 hash) corresponds to the DigiCert High Assurance EV Root CA. This is the root of the chain of trust currently used by GitHub.

The second hash (JbQbUG5JMJUoI6brnx0x3vZF6jilxsapbXGVfjhN8Fg=) is GitHub's backup pin. This corresponds to the VeriSign Class 3 Public Primary Certification Authority - G5 root. As this key does not appear in GitHub's served certificate chain, it is treated as a backup pin.
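Each pin value is defined by RFC 7469 as the base64 encoding of the SHA-256 digest of a certificate's DER-encoded SubjectPublicKeyInfo structure. The computation itself is a one-liner (the helper name is illustrative):

```python
import base64
import hashlib

def pin_sha256(spki_der: bytes) -> str:
    """Compute an RFC 7469 pin: base64 of the SHA-256 digest of the
    DER-encoded SubjectPublicKeyInfo extracted from a certificate."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode('ascii')

# Extracting the SPKI bytes from a certificate is typically done with
# OpenSSL, e.g. (illustrative shell pipeline):
#   openssl x509 -in cert.pem -pubkey -noout \
#     | openssl pkey -pubin -outform der \
#     | openssl dgst -sha256 -binary | base64
```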

When GitHub wants to replace its TLS certificate, the new certificate must be signed by either DigiCert or Symantec – otherwise, none of the key hashes in the new certificate chain would match the existing HPKP policy, and its users would be blocked from accessing the site.

Pinning a pair of root certificate keys is arguably less risky than pinning one of GitHub's own backup keys, but there is a rather large trade-off. With GitHub's current HPKP policy, an attacker can still impersonate the site if he can obtain a fraudulent certificate issued by either DigiCert or Symantec. Conversely, if GitHub were to rely on backup keys that only it controlled, then the only way an attacker could impersonate the site is by compromising GitHub's private keys.

Even so, GitHub evidently remains wary — its HPKP header sets a max-age value of 300. This instructs browsers to remember the policy for no longer than 300 seconds, so in the event of a mistake, users will only be denied access for at most five minutes. However, this makes the policy practically toothless.

In the event of an attack, anybody who has not visited the real GitHub site within the past five minutes is a potential victim. Even if a user has visited GitHub within the past five minutes, being denied access might just be put down to a temporary glitch. A savvy attacker may decide to wait until five minutes after the user's last access to GitHub to ensure he will not be caught.


Mozilla is using HPKP much more effectively on its support site, as this site sets a much longer max-age attribute:

Public-Key-Pins: max-age=1296000;

This equates to 15 days, which means it will provide effective protection to anyone who visits the site at least once a fortnight.

Rather than using public key hashes that correspond to more than one certificate authority, Mozilla has chosen to pin to a single CA: both keys are controlled by DigiCert. In some respects, this is a safer policy by ensuring that only a single CA is able to issue new certificates; however, it leaves Mozilla beholden to DigiCert. If DigiCert were ever forced to stop issuance and Mozilla's certificate required replacement, visitors could be locked out of Mozilla's site for up to 15 days.


A much bolder implementation has been deployed by the Pixabay image library. Its Public-Key-Pins header specifies a max-age of one year.


Rather than pinning CA-controlled keys, Pixabay has pinned its own certificate's key, as well as a backup key held by Pixabay. This option trades complete defence against CA compromise for a significant risk if the backup pin cannot be used.

If Pixabay were to lose the private keys for both of these certificates, it would likely be catastrophic – visitors would be denied access to its site for an entire year. Pixabay has evidently decided that robust prevention of impersonation attacks is worth the risk.

Why are so few sites daring to use HPKP?

Only 0.09% of all certificates in Netcraft's March 2016 SSL Survey are using HPKP – that's fewer than 4,100 certificates in the whole world that are being delivered with the Public-Key-Pins header.

If that figure did not already seem astonishingly low, more than a quarter of these sites are using the HPKP header incorrectly, which effectively disables it. Consequently, the total number of certificates that are really using HPKP is actually fewer than 3,000.

Still in its infancy

One of the reasons why HPKP is so rarely deployed could be that it is a relatively new standard, and is still not supported by all mainstream browsers. However, this only partly explains its poor uptake on servers. Although the Public Key Pinning Extension for HTTP was not formalised until the publication of RFC 7469 in April 2015, a significant proportion of internet users have already been able to benefit from this feature since October 2014, when HPKP support was introduced to Chrome 38.

By the time HPKP support was also added to Firefox 35 in January 2015, around a quarter of all internet users were in a position to benefit from sites using HPKP. But today, HPKP remains unsupported in Internet Explorer, Edge, Safari and Opera Mini. Nonetheless, there are millions of people using browsers that do support HPKP, and the only reason they are not benefiting from this technology is because so few websites are deploying it.

Lack of awareness

Possibly the largest reason for the lack of HPKP deployment is that many website owners are simply unaware that this security feature exists, or do not realise the benefits it can bring. For many of these websites, however, HPKP would not even be the right place to start: most also lack simpler, widely supported features such as HSTS and "Secure" cookies. Implementing HPKP is largely redundant if a site does not also implement HSTS, as a man-in-the-middle attacker could still hijack unencrypted HTTP traffic and prevent the victim's browser being redirected to the HTTPS site.

Lack of understanding

Netcraft's SSL Survey shows that lots of trivial mistakes are being made when website administrators try to deploy HPKP headers, which indicates a widespread lack of understanding. The net result of these mistakes is that HPKP is not enabled on many sites.

Fear of the "HPKP Footgun"

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but as we have already discussed, it is very easy for this protection to backfire with severe consequences. A small misconfiguration could result in a website becoming inaccessible to its own customers.


HPKP offers a very strong defence against man-in-the-middle attacks, providing it is used in conjunction with HTTPS, HSTS and HSTS Preloading – but despite the obvious security benefits, hardly anyone is using it. Currently, only 0.09% of all secure websites are making use of HPKP headers.

The risk of something going wrong when deploying HPKP is hard to overlook, as a small mistake could ultimately destroy a company's business by making its website inaccessible for months. Only a few thousand secure websites have accepted this risk so far, although you could argue that it only makes sense to deploy HPKP on the largest and most visible websites. For smaller websites, the high risk of something going wrong outweighs the incredibly low risk of being attacked: fraudulently issued certificates are a very rare occurrence, and are more likely to be used to impersonate the biggest websites.

An even newer technology known as Expect-CT could potentially provide a safer and easier approach to tackling fraudulently issued certificates. Opted-in websites will be able to tell browsers to expect to see their legitimate certificates in a Certificate Transparency log. These logs are open to public scrutiny, allowing mis-issued certificates to be identified by domain owners, CAs and domain users; and fraudulently issued certificates that do not appear in the logs would not be trusted under Expect-CT. CAs would be responsible for entering correct details into these logs, thus removing the burden from website operators.

Sites that have properly configured HPKP would be extremely hard to attack in practice, although it is still not impossible. Browsers that have never visited a site before could still be vulnerable to man-in-the-middle attacks if an attacker obtains a valid certificate, because unlike with HSTS, there is no common preload list available for HPKP (it is, however, possible to request special treatment in Google Chrome).

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact Netcraft for more information.

Hook, like and sinker: Facebook serves up its own phish

Fraudsters are abusing Facebook's app platform to carry out some remarkably convincing phishing attacks against Facebook users.

A phishing site displayed on the real Facebook website.

Masquerading as a Facebook Page Verification form, this phishing attack leverages Facebook's own trusted TLS certificate that is valid for all subdomains. This makes the page appear legitimate, even to many seasoned internet users; however, the verification form is actually served via an iframe from an external site hosted by HostGator. The external website also uses HTTPS to serve the fraudulent content, so no warnings are displayed by the browser.

The phishing attack does not require the victim to be already logged in.

This phishing attack works regardless of whether the victim is already logged in, so there is little chance of a victim being suspicious of being asked to log in twice in immediate succession.

The source code of the phishing content reveals that it sends the stolen credentials directly to the fraudster's website.

To win over anyone who remains slightly suspicious, the phishing site always pretends that the first set of submitted credentials were incorrect. A suspicious user might deliberately submit an incorrect username and password in order to test whether the form is legitimate, and the following error message could make them believe that the credentials really are being checked by Facebook.

The phishing site always pretends the first submitted credentials are incorrect. Note that it now also asks for the victim's date of birth.

Those who were slightly suspicious might then believe it is safe to enter their real username and password. Anyone else who had already entered the correct credentials would probably just think they had made a mistake and try again. After the second attempt, the phishing site will act as if the correct credentials had been submitted:

On the second attempt, the phishing site will ask the victim to wait up to 24 hours.

The final response indicates that the victim will have to wait up to 24 hours for their submission to be approved. Without instant access to the content they were trying to view, the victim will probably carry on doing something else until they receive the promised email notification.

But of course, this email will never arrive. By this point, the fraudster already has the victim's credentials and is just using this tactic to buy himself some time. He can either use the stolen Facebook credentials himself, or sell them to others who might monetize them by posting spam or trying to trick victims' friends into helping them out of trouble by transferring money. If more victims are required, then the compromised accounts could also be used to propagate the attack to thousands of other Facebook users.

Some of Facebook's security settings.

However, Facebook does provide some features that could make these attacks harder to pull off. For example, if login alerts are enabled, the victim will be notified that their account has been logged into from a different location – this might at least make the victim aware that something untoward is going on. Although not enabled by default, users can completely thwart this particular attack by activating Facebook's login approvals feature, which requires a security code to be entered when logging in from unknown browsers. Only the victim will know this code, and so the fraudster will not be able to log in.

DigitalOcean becomes the second largest hosting company in the world

DigitalOcean has grown to become the second-largest hosting company in the world in terms of web-facing computers, and shows no signs of slowing down.

The virtual private server provider has shown phenomenal growth over the past two-and-a-half years. First seen in our December 2012 survey, DigitalOcean today hosts more than 163,000 web-facing computers, according to Netcraft's May 2015 Hosting Provider Server Count. This gives it a small lead over French company OVH, which has been pushed down into third place.

Amazing growth at DigitalOcean

DigitalOcean's only remaining challenge will be to usurp Amazon Web Services, which has been the largest hosting company since September 2012. However, it could be quite some time until we see DigitalOcean threatening to gain this ultimate victory: although DigitalOcean started growing at a faster rate than Amazon towards the end of 2013, Amazon still has more than twice as many web-facing computers as DigitalOcean today.

Nonetheless, DigitalOcean seems committed to growing as fast as it can. Since October 2014, when we reported that DigitalOcean had become the fourth largest hosting company, DigitalOcean has introduced several new features to attract developers to its platform. Its metadata service enables Droplets (virtual private servers) to query information about themselves and bootstrap new servers, and a new DigitalOcean DNS service brought more scalability and reliability to creating and resolving DNS entries, allowing near-instantaneous propagation of domain names.

Other companies are also helping to fuel growth at DigitalOcean. Mesosphere created an automated provisioning tool which lets customers use DigitalOcean's resources to create self-healing environments that offer fault tolerance and scalability with minimal configuration. Mesosphere's API makes it possible to manage thousands of Droplets as if they were a single computer, and with DigitalOcean's low pricing models and SSD-only storage, it's understandable how this arrangement can appeal to particularly power-hungry developers.

In January, DigitalOcean introduced its first non-Linux operating system, FreeBSD. Although less commonly used these days, FreeBSD has garnered a reputation for reliability, and in the past it was not unusual to see web-facing FreeBSD servers with literally years of uptime. In April, DigitalOcean launched the second version of its API, which lets developers programmatically control their Droplets and resources within the DigitalOcean cloud by sending simple HTTP requests.

DigitalOcean added a new Frankfurt region in April 2015.

More recently, DigitalOcean introduced a new European hosting region in Frankfurt, Germany. This is placed on the German Commercial Internet Exchange (DE-CIX), which is the largest internet exchange point worldwide by peak traffic, allowing Droplets hosted in this region to offer good connectivity to neighbouring countries. (An earlier announcement of an underwater Atlantis datacenter sadly turned out to be an April Fool's joke, despite the obvious benefits of free cooling).

Even so, Amazon still clearly dwarfs DigitalOcean in terms of variety of features and value-added services. Notably, Amazon offers a larger variety of operating systems on its EC2 cloud instances (including Microsoft Windows), and its global infrastructure is spread much wider. For example, EC2 instances can be hosted in America, Ireland, Germany, Singapore, Japan, Australia, Brazil, China or even within an isolated GovCloud (US) region, which allows US government agencies to move sensitive workloads into the cloud whilst fulfilling specific regulatory and compliance requirements. As well as these EC2 regions, Amazon also offers additional AWS Edge Locations to be used by its CloudFront content delivery network and its Route 53 DNS service.

Yet, as well as its low pricing, part of the appeal of using DigitalOcean could lie within its relative simplicity compared with Amazon's bewilderingly vast array of AWS services (AppStream, CloudFormation, ElastiCache, Glacier, Kinesis, Cognito, Simple Workflow Service, SimpleDB, SQS and Data Pipeline to name but a few). Signing up and provisioning a new Droplet on DigitalOcean is remarkably quick and easy, and likely fulfils the needs of many users. DigitalOcean's consistent and strong growth serves as testament to this, and will make the next year very interesting for the two at the top.

March 2016 Web Server Survey

In the March 2016 survey we received responses from 1,003,887,790 sites and 5,782,080 web-facing computers. This reflects a gain of nearly 70 million sites, but a loss of 14,100 computers.

This is the second time the total number of sites has reached more than a billion. This milestone was first reached in September 2014, although it was short-lived: By November 2014, the total fell back below one billion, and had stayed that way until the current month. During the intervening period, the total fell as low as 849 million sites in April 2015.

The total number of websites is typically prone to large fluctuations. Domain holding companies, typo squatters, spammers and link farmers can cause millions of sites to be deployed in a short space of time, without any significant outlay, but these types of site are intrinsically uninteresting to humans. Netcraft's active sites metric counters the effect of these by discounting sites that appear to be automatically generated. This leads to a more stable metric that better illustrates real, practical use of the web.

The number of active sites currently stands at just 171 million, meaning around 1 in 6 sites are active. The total fell by 764,000 this month, but nginx stands out as being the only major vendor to increase its active site count — by an impressive 699,000. This has increased its active sites share to 16.4%, while Apache's loss of nearly a million active sites took its leading share down to 49.2%.

Typifying nginx's rise amongst active sites, it also showed the only growth in web-facing computers amongst the major server vendors. This month's survey found more than 15,000 additional computers running nginx on the web, while Microsoft's loss of 30,000 computers was the primary cause of the overall loss in this metric. Thankfully, the majority of this decline consisted of Windows Server 2003 computers, which arguably helps improve the safety of the internet — this server software is no longer supported by Microsoft.

China accounts for over 30% of all web-facing computers that run Windows Server 2003, making it the largest user of this obsolete operating system; however, more than half of this month's Windows Server 2003 losses were seen in China, which has helped to bring this share down slightly.

Apache's computer growth was relatively modest at only 447 computers, but Microsoft's large loss caused Apache's market share to increase by 0.12 percentage points to 47.9%. nginx's gain of 15,000 computers took its market share up by 0.30 points to 14.3%, but Microsoft remains a fair way ahead of nginx with a 26.6% share of the market.

Total number of websites

Web server market share


95% of HTTPS servers vulnerable to trivial MITM attacks

Only 1 in 20 HTTPS servers correctly implements HTTP Strict Transport Security, a widely-supported security feature that prevents visitors making unencrypted HTTP connections to a server.

The remaining 95% are therefore vulnerable to trivial connection hijacking attacks, which can be exploited to carry out effective phishing, pharming and man-in-the-middle attacks. An attacker can exploit these vulnerabilities whenever a user inadvertently tries to access a secure site via HTTP, and so the attacker does not even need to spoof a valid TLS certificate. Because no crypto-wizardry is required to hijack an HTTP connection, these attacks are far easier to carry out than those that target TLS, such as the recently announced DROWN attack.


The growth of HTTPS has been a mostly positive step in the evolution of the internet, enabling encrypted communications between more users and websites than ever before. Many high profile sites now use HTTPS by default, and millions of TLS certificates are currently in use on the web. With companies like Let's Encrypt offering free certificates and automated management tools, it is also easier than ever to deploy an HTTPS website that will be trusted by all modern browsers.

The primary purpose of a TLS certificate is to allow a browser to verify that it is communicating with the correct website. For example, if a website uses a valid TLS certificate, then a man-in-the-middle attacker would not be able to hijack a browser's connection to that site unless he is also able to obtain a valid certificate for the same domain.

A man-in-the-middle attack like this is generally not possible if the initial request from the customer uses HTTPS.

It would be extremely difficult for the attacker to obtain a valid certificate for a domain he does not control, and using an invalid certificate would cause the victim's browser to display an appropriate warning message. Consequently, man-in-the-middle attacks against HTTPS services are hard to pull off, and often not very successful. However, there are plenty of realistic opportunities to use the unencrypted HTTP protocol to attack most HTTPS websites.

HTTP Strict Transport Security (HSTS)

Encrypted communications are an essential requirement for banks and other financial websites, but HTTPS alone is not sufficient to defend these sites against man-in-the-middle attacks. Astonishingly, many banking websites lurk amongst the 95% of HTTPS servers that lack a simple feature which would close this gap, leaving them still vulnerable to pharming and man-in-the-middle attacks. This missing feature is HTTP Strict Transport Security (HSTS), and only 1 in 20 secure servers currently make use of it, even though it is supported by practically all modern browsers.

Each secure website that does not implement an HSTS policy can be attacked simply by hijacking an HTTP connection that is destined for it. This is a surprisingly feasible attack vector, as there are many ways in which a user can inadvertently end up connecting via HTTP instead of HTTPS.

Manually typed URLs often result in an initial insecure request, as most users do not explicitly type in the protocol string (http:// or https://). When no protocol is given, the browser will default to HTTP – unless there is an appropriate HSTS policy in force.

To improve accessibility, most secure websites also run an HTTP service to redirect users to the corresponding HTTPS site – but this makes them particularly prone to man-in-the-middle attacks if there is no HSTS policy in force. Not only would many users be accustomed to visiting the HTTP site first, but anyone else who visits the site via an old bookmark or search engine result might also initially access the site via an insecure HTTP address. Whenever this happens, the attacker can hijack the initial HTTP request and prevent the customer being redirected to the secure HTTPS website.

This type of attack can be automated with the sslstrip tool, which transparently hijacks HTTP traffic on a network and converts HTTPS links and redirects into HTTP. This type of exploit is sometimes regarded as a protocol downgrade attack, but strictly speaking, it is not: rather than downgrading the protocol, it simply prevents the HTTP protocol being upgraded to HTTPS.
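The core trick can be illustrated with a toy sketch (a deliberate oversimplification: the real sslstrip tool also relays each hijacked request upstream over HTTPS and rewrites redirect headers, none of which is shown here):

```python
def strip_https(html: str) -> str:
    """Toy illustration of sslstrip's rewriting step: downgrade HTTPS
    links in an intercepted, unencrypted response so the victim's
    browser keeps talking plain HTTP to the attacker."""
    return html.replace("https://", "http://")

# Every secure link in the hijacked page now points at plain HTTP:
page = '<a href="https://bank.example/login">Log in</a>'
stripped = strip_https(page)  # '<a href="http://bank.example/login">Log in</a>'
```

Because the victim's browser never attempts an HTTPS connection, no certificate warning is ever triggered; an HSTS policy defeats this by forcing the upgrade client-side.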

NatWest's online banking website lacks an HSTS policy and also offers an HTTP service to redirect its customers to the HTTPS site. This setup is vulnerable to the type of man-in-the-middle attack described above.

Vulnerable sites can be attacked on a massive scale by compromising home routers or DNS servers to point the target hostname at a server that is controlled by the attacker (a so-called "pharming" attack). Some smaller scale attacks can be carried out very easily – for example, if an attacker sets up a rogue Wi-Fi access point to provide internet access to nearby victims, he can easily influence the results of their DNS lookups.

Even if a secure website uses HTTPS exclusively (i.e. with no HTTP service at all), man-in-the-middle attacks are still possible. For example, if a victim manually types the site's hostname into his browser's address bar—without prefixing it with https://—the browser will attempt to make an unencrypted HTTP connection, even if the genuine site does not run an HTTP service. If the hostname has been pharmed, or is otherwise subjected to a man-in-the-middle attack, the attacker can hijack the request nonetheless and eavesdrop on the connection as it is relayed to the genuine secure site, or serve phishing content directly to the victim.

In short, failing to implement an HSTS policy on a secure website means attackers can carry out man-in-the-middle attacks without having to obtain a valid TLS certificate. Many victims would fall for these attacks, as they can be executed over an unencrypted HTTP connection, thus avoiding any of the browser's tell-tale warnings about invalid certificates.

Implementing HSTS: A simple one-liner

The trivial man-in-the-middle attacks described above can be thwarted by implementing an appropriate HSTS policy. A secure website can do this simply by setting a single HTTP header in its responses:

    Strict-Transport-Security: max-age=31536000;

This header can only be set over an HTTPS connection, and instructs compatible browsers to only access the site over HTTPS for the next year (31,536,000 seconds = 1 year). This is the most common max-age value, used by nearly half of all HTTPS servers. After this HSTS policy has been applied, even if a user manually prefixes the site's hostname with http://, the browser will ignore this and access the site over HTTPS instead.
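A compatible browser effectively reads the header as in this minimal sketch (the function name is illustrative; real implementations follow the full RFC 6797 grammar, which this regex-based version only approximates):

```python
import re

def parse_hsts(header: str):
    """Parse a Strict-Transport-Security value into
    (max_age_seconds, include_subdomains). Returns None when the
    mandatory max-age directive is missing, since such a policy
    must be ignored."""
    match = re.search(r'max-age\s*=\s*"?(\d+)"?', header, re.IGNORECASE)
    if match is None:
        return None
    include = re.search(r'includeSubDomains', header, re.IGNORECASE) is not None
    return int(match.group(1)), include

# The one-year policy from the example above:
# parse_hsts("max-age=31536000;") -> (31536000, False)
```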

The combination of HSTS and HTTPS therefore provides a good defence against pharming attacks, as the attacker will not be able to redirect and intercept plaintext HTTP traffic when a client obeys the HSTS policy, nor will he be able to present a valid TLS certificate for the site he is impersonating.

The attacker cannot even rely on a small proportion of his victims unwisely ignoring the use of an invalid certificate, as browsers must regard this situation as a hard fail when an HSTS policy is in force. The browser will simply not let the victim access the site if it finds an invalid certificate, nor will it allow an exception to be added.

When Google Chrome encounters an invalid certificate for a site that has an effective HSTS policy, the victim is not allowed to bypass the browser's warning message or add an exception.

To prevent other types of attack, it is also wise to add the includeSubDomains directive to ensure that every possible subdomain of a site is protected by HSTS. This mitigates cookie injection and session fixation attacks that could be executed by impersonating an HTTP site on a non-existent subdomain, and using it to set a cookie which would be sent to the secure site. This directive can be enabled like so:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

However, some thought is required before taking the carte blanche approach of including all subdomains in an HSTS policy. The website's administrators must ensure that every single one of its subdomains supports HTTPS for at least the duration specified by the max-age parameter, otherwise users of these subdomains risk being locked out.
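From the browser's side, interpreting the header amounts to splitting it into directives. The following sketch (our own illustrative code, not any browser's actual implementation) shows how the two directives discussed above might be extracted:

```python
def parse_hsts(value):
    """Parse a Strict-Transport-Security header value.

    Returns (max_age, include_subdomains), where max_age is None
    if the header carried no max-age directive. Directive names are
    case-insensitive per RFC 6797.
    """
    max_age = None
    include_subdomains = False
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive.lower() == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains
```

For example, parse_hsts("max-age=31536000; includeSubDomains") yields (31536000, True).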

Setting an HSTS policy will also protect first time visitors who habitually use search bars or search engines to reach their destination. For example, typing "paypal" into Google's HTTPS search engine will yield a link to the HTTPS version of PayPal's website, because Google will always link to the HTTPS version of a website if an appropriate HSTS policy exists.

HSTS preloading

HSTS is clearly an important security feature, but there are several circumstances under which its benefits do not apply. Because HSTS directives are delivered via an HTTP header (over an HTTPS connection), HSTS can only instruct a browser to use HTTPS after that browser's first visit to the secure website.

Man-in-the-middle attacks can therefore still be carried out against users who have:

  • Never before visited the site.
  • Recently reinstalled their operating system.
  • Recently reinstalled their browser.
  • Switched to a new browser.
  • Switched to a new device (e.g. mobile phone).
  • Deleted their browser's cache.
  • Not visited the site within the past year (or however long the max-age period lasts).

These vulnerabilities can be eliminated by using HSTS Preloading, which ensures that the site's HSTS policy is distributed to supported browsers before the customer's first visit.

Website administrators can use Google's submission form to request that their domains be included in the HSTS preload list maintained by Google. Each site must have a valid certificate, redirect all HTTP traffic to HTTPS, and serve all subdomains over HTTPS. The HSTS header served from each site must specify a max-age of at least 18 weeks (10,886,400 seconds) and include the preload and includeSubDomains directives.
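The header portion of these requirements can be checked mechanically. The sketch below (an illustrative helper of our own; it does not verify the certificate, redirect, or subdomain requirements, which need live checks against the site) tests whether a header value satisfies the preload list's stated criteria:

```python
def preload_eligible(header_value):
    """Check whether an HSTS header value meets the preload list's
    header requirements: a max-age of at least 18 weeks (10,886,400
    seconds), plus the includeSubDomains and preload directives.
    """
    directives = [d.strip().lower() for d in header_value.split(";")]
    max_age = 0
    for d in directives:
        if d.startswith("max-age="):
            try:
                max_age = int(d.split("=", 1)[1])
            except ValueError:
                return False  # malformed max-age
    return (max_age >= 10886400
            and "includesubdomains" in directives
            and "preload" in directives)
```

For instance, "max-age=31536000; includeSubDomains; preload" passes, while a header lacking the preload directive, or specifying too short a max-age, does not.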

It can take several months for domains to be reviewed and propagated to the latest stable versions of Firefox, Safari, Internet Explorer, Edge and Chrome. When domains are added to the preload list, all users of these browsers will benefit from the security offered by HSTS, even if they have never visited the sites before.


HSTS is widely supported, but not widely implemented. Nearly all modern browsers obey HSTS policies, including Internet Explorer 11, Microsoft Edge, Firefox, Chrome, Safari and Opera – yet less than 5% of secure websites enable this important security feature.

Secure websites that do not use HSTS are trivial to attack if the attacker can hijack a victim's web traffic, yet it is even easier for a site to defeat such attacks by implementing an HSTS policy. This raises the question of why so few websites are using HSTS.

The HSTS specification (RFC 6797) was published in 2012, and so it can hardly be considered a new technology any more. Nonetheless, many website administrators might still be unaware of its existence, or may not yet feel ready to commit to running an HTTPS-only website. These are probably the most significant reasons for its low uptake.

Some website administrators have even disabled HSTS by explicitly setting a max-age of 0 seconds. This has the effect of switching off any previously established HSTS policies, but this backpedalling can only take proper effect if every client revisits the secure site after the max-age has been set to zero. When a site implements an HSTS policy, it is effectively committed to maintaining its HTTPS service for as long as the largest max-age it has ever specified, otherwise it risks denying access to infrequent visitors. Nearly 4% of all HTTPS servers that use the Strict-Transport-Security header currently set a max-age of zero, including Twitter's URL-shortener.
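The mechanics of this backpedalling can be sketched with a toy model of a browser's per-host HSTS store (HSTSStore is a hypothetical class for illustration, not any real browser API). A max-age of 0 evicts the stored policy, but only for clients that actually revisit the site; everyone else keeps enforcing the old policy until it expires:

```python
import time

class HSTSStore:
    """Toy model of a browser's per-host HSTS policy store."""

    def __init__(self):
        self.policies = {}  # hostname -> policy expiry timestamp

    def observe(self, host, max_age):
        """Record the HSTS policy seen on an HTTPS response from host."""
        if max_age == 0:
            self.policies.pop(host, None)  # site has withdrawn its policy
        else:
            self.policies[host] = time.time() + max_age

    def must_use_https(self, host):
        return self.policies.get(host, 0) > time.time()

store = HSTSStore()
store.observe("example.net", 31536000)       # hypothetical site sets a one-year policy
assert store.must_use_https("example.net")
store.observe("example.net", 0)              # site backpedals with max-age=0
assert not store.must_use_https("example.net")
# A client that never revisits the site keeps enforcing the old policy
# until its original max-age expires.
```

This is why a site that later abandons HTTPS risks locking out infrequent visitors: their stored policies cannot be cleared remotely.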

Browser support for HSTS can also introduce some privacy concerns. By initiating requests to several distinct hostnames (some of which enable HSTS), a hostile webpage can establish a "supercookie" to uniquely identify the client browser during subsequent visits, even if the user deletes the browser's conventional cookies. The browser will remember which pattern of hostnames had HSTS enabled, thus allowing the supercookie to persist. However, this privacy concern only affects clients and does not serve as an excuse for websites to avoid implementing their own HSTS policies.

Implementing an HSTS policy is very simple and there are no practical downsides when a site already operates entirely over HTTPS. This makes it even more surprising to see many banks failing to use HSTS, especially on their online banking platforms. This demonstrates poor security practices where it matters the most, as these are likely to be primary targets of pharming attacks.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact Netcraft for more information.

September 2015 Web Server Survey

In the September 2015 survey we received responses from 892,743,625 sites and 5,438,101 web-facing computers. Both of these key metrics grew this month, with net gains of 18 million sites and 47,000 computers.

Microsoft made by far the largest gain in hostnames this month, with an additional 33.6 million sites bringing its total up to 265 million. Combined with a 15.9 million loss in Apache-powered sites, the difference between Microsoft's and Apache's market shares has now halved: Microsoft's share went up by 3.22 percentage points to 29.68%, while Apache's fell by 2.55 to 34.96%, reducing Apache's lead to just over five percentage points.

However, September's growth in web-facing computers paints a different picture, with Apache's net gain of 19,800 computers being more than six times higher than Microsoft's. Despite this, both Microsoft and Apache lost market share this month, while nginx – which gained the most web-facing computers in September (+22,100) – grew its share to nearly 13%. With an additional 6.9 million sites bumping its site share up to 15.60%, nginx was the only major server vendor to increase its market share in both sites and computers this month.

Despite no longer being supported by Microsoft, IIS 6.0 (which typically runs on Windows Server 2003) has continued to grow: the number of websites using it rose by 19% and accounted for much of the overall Microsoft hostname growth this month. 153 million websites are now using Microsoft IIS 6.0, compared with 129 million in July; however, the number of web-facing computers using IIS 6.0 has fallen by 6%, and the number of active sites fell by 16%.

China accounted for around a third of this month's overall growth in web-facing computers, outpacing the growth seen in the United States and Germany by a factor of three. Even so, China was responsible for only a tiny fraction of this month's site growth. Microsoft Windows continues to be the preferred hosting platform for computers in China, where it is currently used by 42% of all web-facing computers and 43% of all sites. Astoundingly, over half of these Windows computers are running Windows Server 2003, and some Chinese hosting providers continue to provide new installations of this deprecated operating system.

Amongst the world's top million websites, nginx has continued to increase its market share and now powers more than twice as many sites as Microsoft. Apache's share has been steadily declining over the past few years mostly as a result of nginx's gains, but it looks set to remain the dominant server vendor within the top million for a while longer, as it is still used by more than twice as many sites as nginx.

Total number of websites

Web server market share

Developer | August 2015 | Percent | September 2015 | Percent | Change