February 2016 Web Server Survey

In the February 2016 survey we received responses from 933,892,520 sites and 5,796,210 web-facing computers.

Microsoft has edged closer towards Apache, with an increase of 16.1 million sites bringing its total up by 6.14% to 279 million. Apache's relatively modest growth of 0.66% has put Microsoft within 3 percentage points of Apache's leading market share of 32.8%.

In terms of web-facing computers, Apache maintains a much clearer lead with a 47.8% share of the market. Microsoft also takes second place by this metric, albeit with a share that is more than 20 percentage points behind Apache. However, both Apache and Microsoft suffered small losses in market share as nginx continues to exhibit strong growth: This month, nginx gained 21,100 computers, increasing its market share by 0.26 points to 13.96%.

nginx 1.9.11 mainline was released on 9th February. This version introduced support for dynamic modules, enabling selective loading of both third-party and some native modules at runtime. In previous versions, nginx modules had to be statically linked into an nginx binary built from source, causing the module to be loaded every time even if it was not going to be used.
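For example, a native module built as a shared object (by passing =dynamic to the relevant configure flag when building nginx) can then be loaded from the main configuration file at startup. A minimal sketch, with an illustrative module name:

# nginx.conf – load a module built as a shared object (nginx 1.9.11 or later);
# assumes nginx was built with: ./configure --with-http_image_filter_module=dynamic
load_module modules/ngx_http_image_filter_module.so;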

The latest version of Microsoft Internet Information Services, IIS 10.0, is still very rare in the wild, as its primary deployment platform (Windows Server 2016) has yet to be released. Fewer than 5,000 websites are currently using IIS 10.0, and these are being served either from technical preview versions of Windows Server 2016, or from Windows 10 machines.

The latest technical preview version of Windows Server 2016 also supports a headless deployment option known as Nano Server. This is a stripped-down version of Windows Server, without a graphical interface and a few other features that are not essential for modern web applications. As a result, it typically requires fewer updates to be installed – and consequently, fewer reboots, too.

Despite losing a small amount of market share, Apache also showed a reasonable growth of 15,600 computers. Similar to last month, a significant proportion of this growth was due to the appearance of more Western Digital My Cloud consumer storage devices.

The total number of My Cloud devices in the survey now stands at 583,400, which is 68,400 more than last month; however, the number of devices that are exposed directly to the internet grew by only 11,100.

Western Digital is using Amazon AWS to host the servers that proxy requests to My Cloud devices in Relay mode. Most of these relay servers have been configured to serve a few thousand devices each, and so the 331,000 devices that are currently using Relay mode contribute fewer than 200 computers towards Apache's total.

Interestingly, while most web-facing My Cloud devices are hosted in the US, more than half of the *.wd2go.com hostnames used by the relay servers are hosted in Amazon's EU regions.

Total number of websites

Web server market share

Developer    January 2016    Percent    February 2016    Percent    Change
Apache       304,271,061     33.56%     306,292,557      32.80%     -0.76
Microsoft    262,471,886     28.95%     278,593,041      29.83%      0.88
nginx        141,443,630     15.60%     137,459,391      14.72%     -0.88
Google        20,799,087      2.29%      20,640,058       2.21%     -0.08

HTTP Public Key Pinning: You’re doing it wrong!

HTTP Public Key Pinning (HPKP) is a security feature that can prevent fraudulently issued TLS certificates from being used to impersonate existing secure websites.

Our previous article detailed how this technology works, and looked at some of the sites that have dared to use this powerful but risky feature. Notably, very few sites are making use of HPKP: Only 0.09% of the certificates in Netcraft's March 2016 SSL Survey are served with HPKP headers, which equates to fewer than 4,100 certificates in total.

But more surprisingly, around a third of these sites are using the HPKP header incorrectly, which effectively disables HPKP. Consequently, fewer than 3,000 certificates are actually protected by a working HPKP policy.

Firefox's developer console reveals that this site has failed to include a backup pin, and so its HPKP policy is ignored by the browser.
Failing to include a backup pin is the most common type of mistake made by sites that try to use HPKP.

HPKP is the best way of protecting a site from being impersonated by mis-issued certificates, but it is easy for this protection to backfire with severe consequences. Fortunately, most misconfigurations simply mean that a site's HPKP policy will be ignored by browsers. The site's administrators might not realise it, but this situation is essentially the same as not using HPKP at all.

How can it go wrong?

Our previous article demonstrated a few high-profile websites that were using HPKP to varying degrees. However, plenty of other sites have bungled HPKP to the extent that it simply does not work.

Zero max-age

Every HPKP policy must specify a max-age directive, which suggests how long a browser should regard the website as a "Known Pinned Host". The most commonly used max-age value is 5184000 seconds (60 days). Nearly 1,200 servers use this value, while around 900 use 2592000 seconds (30 days).

But around 70 sites feature pointlessly short max-age values, such as 5 or 10 seconds. These durations are far too short to be effective, as a victim's browser would rapidly forget about these known pinned hosts.

Additionally, a few sites explicitly specify a max-age of zero along with their public key pins. These sites are therefore not protected by HPKP, and are in some cases needlessly sending this header to every client request. It is possible that they are desperately trying to remove a previously set HPKP policy, but this approach obviously cannot be relied upon to remove cached pins from browsers that do not visit the site in the meantime. These sites would therefore have to continue using a certificate chain that conforms to their previous HPKP policy, or run the risk of locking out a few stragglers.

One of the sites that sets a zero max-age is https://vodsmarket.com. Even if this max-age were to be increased, HPKP would still not be enabled because there is only one pinned public key:

Public-Key-Pins: pin-sha256="sbKjNAOqGTDfcyW1mBsy9IOtS2XS4AE+RJsm+LcR+mU="; max-age=0;

Another example can be seen on https://wondershift.biz, which pins two certificates' public keys. Again, even if the max-age were to be increased, this policy would still not take effect because there are no backup pins specified (both of the pinned keys appear in the site's certificate chain):

Public-Key-Pins: pin-sha256="L7mpy8M0VvQcWm7Yyx1LFK/+Ao280UZkz5U38Qk5G5g=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains;
    max-age=0;
    report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

Wrong pin directives

Each pinned public key must be specified via a separate pin-sha256 directive, and each value must be a SHA256 hash; but more than 1% of servers that try to use HPKP fail to specify these pins correctly.
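Each pin value is the Base64 encoding of the SHA256 digest of the certificate's DER-encoded Subject Public Key Info (SPKI). As a rough illustration of what a correct value looks like, the following sketch (which assumes the SPKI bytes have already been extracted from a certificate, e.g. with OpenSSL) computes a pin:

```python
import base64
import hashlib

def pin_sha256(spki_der: bytes) -> str:
    """Compute an HPKP pin value: Base64(SHA256(DER-encoded SPKI))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# A SHA256 digest is 32 bytes, so a valid pin is always 44 characters
# of Base64 ending in a single '=' padding character.
print(len(pin_sha256(b"example SPKI bytes")))  # 44
```

Because every valid pin-sha256 value has this fixed shape, a 44-character Base64 string ending in "=" is a quick sanity check when inspecting a header.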

For example, the Department of Technology at Aichi University of Education exhibits the following header on https://www.auetech.aichi-edu.ac.jp:

Public-Key-Pins: YEnyhAxjrMAeVokI+23XQv1lzV3IBb3zs+BA2EUeLFI=";
    max-age=5184000;
    includeSubDomains

This header appears to include a single public key hash, but it omits the pin-sha256 directive entirely. No browser will make any sense of this attempted policy.

In another example, the Fast Forward Imaging Customer Interface at https://endor.ffwimaging.com does something very peculiar. It uses a pin-sha512 directive, which is not supported by the RFC – but in any case, the value it is set to is clearly not a SHA512 hash:

Public-Key-Pins: pin-sha512="base64+info1="; max-age=31536000; includeSubDomains

Some sites try to use SHA1 public key hashes, which are also unsupported:

Public-Key-Pins: pin-sha1='ewWxG0o6PsfOgu9uOCmZ0znd8h4='; max-age=2592000; includeSubdomains

This one uses pin-sha instead of pin-sha256:

Public-Key-Pins: pin-sha="xZ4wUjthUJ0YMBsdGg/bXHUjpEec5s+tHDNnNtdkwq8=";
    max-age=5184000; includeSubDomains

And this one refers to the algorithm "SHA245", which does not exist:

Public-Key-Pins: pin-sha245="pyCA+ftfVu/P+92tEhZWnVJ4BGO78XWwNhyynshV9C4=";
    max-age=31536000; includeSubDomains

The above example was most likely a typo, as is the following example, which specifies a ping-sha256 value:

Public-Key-Pins: ping-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU=";
    max-age=2592000; includeSubDomains

These are careless mistakes, but it is notable that these types of mistake alone account for more than 1% of all certificates that set the Public-Key-Pins header. The net effect of these mistakes is that HPKP is not enabled on these sites.

Only one pinned public key

As we emphasised in our previous article, it is essential that a secure site should specify at least two public key pins when deploying HPKP. At least one of these should be a backup pin, so that the website can recover from losing control of its deployed certificate. If the website owner still possesses the private key for one of the backup certificates, the site can revert to using one of the other pinned public keys without any browsers refusing to connect.
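A correctly formed policy therefore contains at least one pin that matches the served certificate chain and at least one backup pin. For example (the pin values here are descriptive placeholders rather than real hashes):

Public-Key-Pins: pin-sha256="<pin matching a key in the certificate chain>";
    pin-sha256="<pin of a backup key held securely offline>";
    max-age=5184000; includeSubDomains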

But 25% of servers that use HPKP specify only one public key pin, which means that HPKP will not actually be enabled on these sites.

To prevent sites from inadvertently locking out all of their visitors, and to force the use of backup pins, browsers should only cache a site's pinned public keys if the Public-Key-Pins header contains two or more hashes. At least one of these must correspond to a certificate that is in the site's certificate chain, and at least one must be a backup pin (if a hash cannot be found in the certificate chain, then the browser will assume it is a backup pin without verifying its existence).
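These acceptance rules can be sketched in a few lines (a simplification of the browser logic, with illustrative pin values, not an exact implementation):

```python
def policy_accepted(header_pins: set, chain_pins: set) -> bool:
    """Return True if a browser would cache this HPKP policy.

    header_pins: pin-sha256 values taken from the Public-Key-Pins header
    chain_pins:  pin values computed from the served certificate chain
    """
    if len(header_pins) < 2:
        return False                        # a single pin is never accepted
    matches_chain = header_pins & chain_pins
    backups = header_pins - chain_pins      # assumed to be backup pins
    return bool(matches_chain) and bool(backups)

# One pin from the chain plus one pin not in the chain: accepted.
print(policy_accepted({"AAA=", "BBB="}, {"AAA=", "CCC="}))  # True
# Both pins appear in the chain, so there is no backup: rejected.
print(policy_accepted({"AAA=", "CCC="}, {"AAA=", "CCC="}))  # False
```

Note that the backup pin is accepted on trust: the browser has no way to verify that a hash absent from the chain corresponds to a real key pair the site owner controls.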

https://xcloud.zone is an example of a site that only sets one public key pin:

Public-Key-Pins: pin-sha256="DKvbzsurIZ5t5PvMaiEGfGF8dD2MA7aTUH9dbVtTN28=";
    max-age=2592000; includeSubDomains

This single pin corresponds to the subscriber certificate issued to xcloud.zone. Despite the 30-day max-age value, this lonely public key hash will never be cached by a browser. Consequently, HPKP is not enabled on this site, and the header might as well be missing entirely.

No pins at all

As well as the 1,000+ servers that only have one pinned public key, some HPKP headers neglect to specify any pins at all, and a few try to set values that are not actually hashes (which has the same effect as not setting any pins at all). For example, the Hide My Ass! forum at https://forum.hidemyass.com sets the following:

Public-Key-Pins: pin-sha256="<Subject Public Key Information (SPKI)>";
    max-age=2592000; includeSubDomains

The ProPublica SecureDrop site at https://securedrop.propublica.org also made a subtle mistake last month by forgetting to enclose its pinned public key hashes in double-quotes:

Public-Key-Pins: max-age=86400;
    pin-sha256=rhdxr9/utGWqudj8bNbG3sEcyMYn5wspiI5mZWkHE8A=
    pin-sha256=lT09gPUeQfbYrlxRtpsHrjDblj9Rpz+u7ajfCrg4qDM=

The HPKP RFC mandates that the Base64-encoded public key hashes must be quoted strings, so the above policy would not have worked. ProPublica has since fixed this problem, as well as adding a third pin to the header.

ProPublica is an independent newsroom that produces investigative journalism in the public interest. It provides a SecureDrop site to allow tips or documents to be submitted securely; however, until recently the HPKP policy on this site was ineffectual.

If companies that specialise in online privacy and secure anonymous filesharing are making these kinds of mistake on their own websites, it's not surprising that so many other websites are also getting it wrong.

At least two pins, but no backup pins

A valid HPKP policy must specify at least two pins, and at least one of these must be a backup pin. A browser will assume that a pin corresponds to a backup certificate if none of the certificates in the site's certificate chain correspond to that pin.

The Samba mailing list website fails to include any backup pins. Consequently, its HPKP policy is not enforced.

The Samba mailing lists site at https://lists.samba.org specifies two pinned public key hashes, but both of these appear in its certificate chain. Consequently, a browser will not apply this policy because there is no evidence of a backup pin. HPKP is effectively disabled on this site.

Incidentally, the Let's Encrypt Authority X1 cross-signed intermediate certificate has the most commonly pinned public key in our survey, featuring in more than 9% of pin sets. It should never be pinned exclusively, however, as Let's Encrypt is not guaranteed to keep using its X1 certificate. Indeed, just a few days ago, Let's Encrypt started to issue all certificates via its new Let's Encrypt Authority X3 intermediate certificate in order to be compatible with older Windows XP clients. Fortunately, the new X3 certificate uses the same keys as the X1 certificate, so any site that had pinned the public key of the X1 certificate will continue to be accessible when it renews its subscriber certificate, without having to change its current HPKP policy.

The next most common pin belongs to the COMODO RSA Domain Validation Secure Server CA certificate. This pin is used by more than 6% of servers in our survey, all of which – despite the use of HPKP – could be vulnerable to man-in-the-middle attacks if Comodo were to be hacked again.

Pinning only the public keys of subscriber certificates would offer the best security against these kinds of attack, but it is fairly common to also pin the keys of root and intermediate certificates to reduce the risk of "bricking" a website in the event of a key loss. This approach is very common among Let's Encrypt customers, as the default letsencrypt client software generates a new key pair each time a certificate is renewed. If the public key of the subscriber certificate were to be pinned, the pinning would no longer be valid when it is renewed.

Setting HPKP policies over HTTP

Some sites set HPKP headers over unencrypted HTTP connections, which is also ineffectual. For example, the Internet Storm Center website at www.dshield.org sets the following header on its HTTP site:

Public-Key-Pins: pin-sha256="oBPvhtvElQwtqQAFCzmHX7iaOgvmPfYDRPEMP5zVMBQ=";
    pin-sha256="Ofki57ad70COg0ke3x80cbJ62Tt3c/f3skTimJdpnTw=";
    max-age=2592000; report-uri="https://isc.sans.org/badkey.html"

The Public Key Pinning Extension for HTTP RFC states that browsers must ignore HPKP headers that are received over non-secure transport, and so the above header has no effect other than to consume additional bandwidth.

2.2.2.  HTTP Request Type
  Pinned Hosts SHOULD NOT include the PKP header field in HTTP
  responses conveyed over non-secure transport.  UAs MUST ignore any
  PKP header received in an HTTP response conveyed over non-secure
  transport.

One very good reason for ignoring HPKP policies that are set over unencrypted connections is to prevent "hostile pinning" by man-in-the-middle attackers. If an attacker were to inject a set of pins that the site owner does not control—and if the browser were to blindly cache these values—he would be able to create a junk policy on behalf of that website. This would prevent clients from accessing the site for a long period, without the attacker having to maintain his position as a man-in-the-middle.

If a visitor instead browses to https://www.dshield.org (using HTTPS), an HSTS policy is applied which forces future requests to use HTTPS. The HTTPS site also sets an HPKP header, which is then accepted and cached by compatible browsers. However, as the HTTP site does not automatically redirect to the HTTPS site, it is likely that many visitors will never benefit from these HSTS or HPKP policies, even though they are correctly implemented on the HTTPS site.

In another bizarre example, HPKP headers are set by the HTTP site at http://www.msvmgroup.com, even though there is no corresponding HTTPS website (it does accept connections on port 443, but does not present a subscriber certificate that is valid for this hostname).

Not quite got round to it yet...

A few sites that use the Public-Key-Pins header have not quite got around to implementing it yet, such as https://justamagic.ru, which sets the following value:

Public-Key-Pins: TODO

Using HPKP headers to broadcast skepticism

One security company's website – https://websec-test.com – uses the Public-Key-Pins header to express its own skepticism over the usefulness of HPKP:

Public-Key-Pins: This is like the most useless header I have ever seen.
    Preventing MITM, c'mon, whoever can't trust his own network shouldn't
    enter sensitive data anywhere.

Violation reports that will never be received

The Public-Key-Pins header supports an optional report-uri directive. In the event of a pin validation failure, the user's browser should send a report to this address, in addition to blocking access to the site. These reports are obviously valuable, as they will usually be the first indication that something is wrong.

However, if the report-uri address uses HTTPS and is also a known pinned host, the browser must carry out pinning checks on this address when the report is sent. This makes it foolish to specify a report-uri that uses the same hostname as the site that is using HPKP.

An example of this configuration blunder can be seen on https://yahvehyireh.com, which sets the following Public-Key-Pins header:

Public-Key-Pins: pin-sha256="y+PfuAS+Dx0OspfM9POCW/HRIqMqsa83jeXaOECu1Ns=";
    pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=";
    pin-sha256="EohwrK1N7rr3bRQphPj4j2cel+B2d0NNbM9PWHNDXpM=";
    includeSubDomains; max-age=0;
     report-uri="https://yahvehyireh.com/incoming/hpkp/index.php"

This header instructs the browser to send pinning validation failure reports to https://yahvehyireh.com/incoming/hpkp/index.php. However, if there were to be a pinning validation failure on yahvehyireh.com, then the browser would be unable to send any reports because the report-uri itself would also fail the pinning checks by virtue of using the same hostname.

Incidentally, Chrome 46 introduced support for a newer header, Public-Key-Pins-Report-Only, which instructs the browser to perform the same pinning checks as the Public-Key-Pins header, but never to block a request when pin validation fails; instead, the browser sends a report to the URL specified by the report-uri directive and the user is allowed to continue browsing the site. This mechanism makes it safe for site administrators to test an HPKP deployment without inadvertently introducing a denial of service.
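A site could therefore trial its policy with a header along the following lines before switching to enforcement (the pin values are placeholders, and the report-uri deliberately uses a different hostname so that reports can always be delivered):

Public-Key-Pins-Report-Only: pin-sha256="<primary pin>";
    pin-sha256="<backup pin>"; max-age=5184000;
    report-uri="https://hpkp-reports.example.com/report"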

Summary

The proportion of secure servers that use HPKP headers is woefully low at only 0.09%, but to make matters worse, many of these few HPKP policies have been implemented incorrectly and do not work as intended.

Without delving into developer settings, browsers offer no visible indications that a site has an invalid HPKP policy, and so it is likely that many website administrators have no idea that their attempts at implementing HPKP have failed. Around a third of the sites that attempt to set an HPKP policy have got it wrong, and consequently behave as if there was no HPKP policy at all. Every response from these servers will include the unnecessary overhead of a header containing a policy that will ultimately be ignored by all browsers.

But there is still hope for the masses: A more viable alternative to HPKP might arise from an Internet-Draft entitled TLS Server Identity Pinning with Tickets. It proposes to extend TLS with opaque tickets, similar to those being used for TLS session resumption, as a way to pin a server's identity. This feature would allow a client to ensure that it is connecting to the right server, even in the presence of a fraudulently issued certificate, but has a significant advantage over HPKP in that no manual management actions would be required. If this draft comes to fruition, and is subsequently implemented by browsers and servers, this ticket-based approach to pinning could potentially see a greater uptake than HPKP has.

Netcraft offers a range of services that can be used to detect and defeat large-scale pharming attacks, and security testing services that identify man-in-the-middle vulnerabilities in web applications and mobile apps. Contact security-sales@netcraft.com for more information.

August 2016 Web Server Survey

In the August 2016 survey we received responses from 1,153,659,413 sites and 5,980,524 web-facing computers. This reflects an increase of 80 million sites, but a loss of 78,000 computers.

While the overall number of sites increased this month, this growth was not felt evenly by each web server vendor: Microsoft gained the largest number of sites with an increase of 66 million, while second-placed Apache lost 41 million sites. Tengine, the nginx-based web server from Chinese online shopping giant Taobao, gained 28 million sites.

Whilst there were large changes in the total number of sites, these were accompanied by much more modest changes in active sites – a more stable metric designed to ignore automatically generated bulk content. Apache and Microsoft both suffered small drops in the number of active sites, -0.5% and -0.8% respectively, whilst Tengine and nginx gained 120,000 (7.3%) and 81,000 (0.2%) respectively.

The majority of this month's drop in web-facing computers was among machines running Apache, which saw a decrease of just over 107,000 (3.8%). One of the primary contributors to this drop was the loss of a large number of consumer-NAS devices running Apache. While these devices have steadily increased in number since the start of 2016, this month has seen a marked decline. These devices are mostly connected via home internet lines and are therefore likely to come and go from month to month. As a result, the Apache losses this month are spread over a large number of consumer ISPs. On the other hand, Apache continued to see growth amongst web hosting providers.

nginx gained 24,000 web-facing computers, the largest gain of any vendor this month, once more boosting its market share, which now stands at 17.0%. Given Apache's large loss this month, Microsoft also experienced a small increase in market share, despite losing 4,000 web-facing computers.

Windows Server 2016 — which will be the main platform for Microsoft IIS 10.0 — is edging closer to its official launch at Microsoft's Ignite conference in September. In the meantime, developers can try out many of the new features in IIS 10 by either installing the latest Windows Server 2016 Technical Preview 5, or by installing the self-contained IIS 10.0 Express on Windows 7 SP1 or later.

More than 11,000 websites are already using Microsoft IIS 10.0, with almost all of these sites using a version of Windows Server 2016.

The previous month saw two new releases of the mainline version of nginx, mostly incorporating bug fixes and feature additions, while the release of Apache 2.4.23 addressed a security issue which could have allowed clients to gain unauthorised access to protected resources if a server was configured to use HTTP/2.

Several web servers were also updated following the disclosure of a set of vulnerabilities dubbed httpoxy. These vulnerabilities can affect web applications running in CGI or CGI-compatible environments.

The vulnerability stems from a simple namespace conflict: as is the custom for CGI applications, the client-provided Proxy request header is placed into an HTTP_PROXY environment variable, which the application may then trust and use to configure an outgoing proxy.
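The collision can be sketched with the standard CGI header-to-environment mapping (a simplified illustration, not any particular server's code):

```python
def cgi_environ(request_headers: dict) -> dict:
    """Map HTTP request headers to CGI meta-variables (RFC 3875 style)."""
    env = {}
    for name, value in request_headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

# An attacker-supplied "Proxy" request header lands in HTTP_PROXY, the
# same variable many HTTP client libraries read for outgoing proxy settings.
env = cgi_environ({"Host": "example.com", "Proxy": "http://attacker.example:8080"})
print(env["HTTP_PROXY"])  # http://attacker.example:8080
```

Any code path that reads HTTP_PROXY from the environment while handling such a request will unknowingly route its outgoing requests through the attacker's proxy.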

This type of vulnerability was first discovered in libwww-perl more than 15 years ago, but in July it was found to be still exploitable in PHP and many other modern languages and libraries. Successful exploitation of these issues could allow a remote attacker to proxy outgoing HTTP requests made by a vulnerable web application, which may expose sensitive data.

To mitigate the httpoxy vulnerability, Apache 2.4.24-dev avoids populating the HTTP_PROXY variable from a Proxy header in httpd CGI environments. Similar mitigations have also been implemented in Lighttpd 1.4.41 and LiteSpeed, while nginx and Varnish have published mitigation advice.

Total number of websites

Web server market share

Developer    July 2016       Percent    August 2016      Percent    Change
Microsoft    378,655,759     35.26%     445,105,755      38.58%      3.32
Apache       340,551,074     31.72%     300,028,832      26.01%     -5.71
nginx        170,896,716     15.92%     181,606,297      15.74%     -0.17
Google        22,552,901      2.10%      22,111,431       1.92%     -0.18

July 2016 Web Server Survey

In the July 2016 survey we received responses from 1,073,777,722 sites and 6,058,513 web-facing computers. This reflects an increase of 28 million sites and 107,000 computers.

Microsoft and Apache continue to fluctuate between first and second place for the total number of websites, with Microsoft retaking the lead this month after an increase of 36 million sites and the loss of 20 million Apache sites. However, Microsoft leads only in the hostnames metric; Apache comes out on top by a considerable margin in all other areas: active sites, domains, IP addresses and web-facing computers.

Looking at the number of active sites – a more stable metric created by Netcraft to ignore automatically generated bulk content – Apache leads with a 46.4% market share. nginx is the second most popular vendor, with a 21.8% market share, and is the only major vendor to consistently gain share in recent months. A similar percentage of Apache's and nginx's sites are deemed to be active: 23.7% and 22.1% respectively. In comparison, only 4.5% of sites using Microsoft server software are considered active, leaving Microsoft in third place by this metric with a 9.8% market share.

Apache, Microsoft and nginx all gained web-facing computers this month; however, nginx once again saw the only increase in market share. nginx gained 46,000 computers, a growth of 4.8% on last month, and now stands just shy of 1 million web-facing computers. Apache and Microsoft gained 31,000 and 11,000 computers.

nginx also continues to gain market share among the top million busiest websites, where it is now used by 27.9% of sites. Apache, while still holding a dominant position, continues to slowly lose market share, falling 0.55 percentage points to 43.2%.

Total number of websites

Web server market share

Developer    June 2016       Percent    July 2016        Percent    Change
Microsoft    342,605,666     32.77%     378,655,759      35.26%      2.50
Apache       360,458,018     34.48%     340,551,074      31.72%     -2.76
nginx        169,316,547     16.19%     170,896,716      15.92%     -0.28
Google        21,662,673      2.07%      22,552,901       2.10%      0.03

March 2016 Web Server Survey

In the March 2016 survey we received responses from 1,003,887,790 sites and 5,782,080 web-facing computers. This reflects a gain of nearly 70 million sites, but a loss of 14,100 computers.

This is the second time the total number of sites has reached more than a billion. This milestone was first reached in September 2014, although it was short-lived: By November 2014, the total fell back below one billion, and had stayed that way until the current month. During the intervening period, the total fell as low as 849 million sites in April 2015.

The total number of websites is typically prone to large fluctuations. Domain holding companies, typo squatters, spammers and link farmers can cause millions of sites to be deployed in a short space of time, without any significant outlay, but these types of site are intrinsically uninteresting to humans. Netcraft's active sites metric counters the effect of these by discounting sites that appear to be automatically generated. This leads to a more-stable metric that better illustrates real, practical use of the web.

The number of active sites currently stands at just 171 million, meaning around 1 in 6 sites are active. The total fell by 764,000 this month, but nginx stands out as being the only major vendor to increase its active site count — by an impressive 699,000. This has increased its active sites share to 16.4%, while Apache's loss of nearly a million active sites took its leading share down to 49.2%.

Typifying nginx's rise amongst active sites, it also showed the only growth in web-facing computers amongst the major server vendors. This month's survey found more than 15,000 additional computers running nginx on the web, while Microsoft's loss of 30,000 computers was the primary cause of the overall loss in this metric. Thankfully, the majority of this decline consisted of Windows Server 2003 computers, which arguably helps improve the safety of the internet — this server software is no longer supported by Microsoft.

China accounts for over 30% of all web-facing computers that run Windows Server 2003, making it the largest user of this obsolete operating system; however, more than half of this month's Windows Server 2003 losses were seen in China, which has helped to bring this share down slightly.

Apache's computer growth was relatively modest at only 447 computers, but Microsoft's large loss caused Apache's market share to increase by 0.12 to 47.9%. nginx's gain of 15,000 computers took its market share up by 0.30 to 14.3%, but Microsoft remains a fair way ahead of nginx with a 26.6% share of the market.

Total number of websites

Web server market share

Developer    February 2016   Percent    March 2016       Percent    Change
Apache       306,292,557     32.80%     325,285,185      32.40%     -0.39
Microsoft    278,593,041     29.83%     317,761,318      31.65%      1.82
nginx        137,459,391     14.72%     143,464,293      14.29%     -0.43
Google        20,640,058      2.21%      20,790,767       2.07%     -0.14

DigitalOcean becomes the second largest hosting company in the world

DigitalOcean has grown to become the second-largest hosting company in the world in terms of web-facing computers, and shows no signs of slowing down.

The virtual private server provider has shown phenomenal growth over the past two-and-a-half years. First seen in our December 2012 survey, DigitalOcean today hosts more than 163,000 web-facing computers, according to Netcraft's May 2015 Hosting Provider Server Count. This gives it a small lead over French company OVH, which has been pushed down into third place.

Amazing growth at DigitalOcean

DigitalOcean's only remaining challenge will be to usurp Amazon Web Services, which has been the largest hosting company since September 2012. However, it could be quite some time until we see DigitalOcean threatening to take the top spot: although DigitalOcean started growing at a faster rate than Amazon towards the end of 2013, Amazon still has more than twice as many web-facing computers as DigitalOcean today.

Nonetheless, DigitalOcean seems committed to growing as fast as it can. Since October 2014, when we reported that DigitalOcean had become the fourth largest hosting company, DigitalOcean has introduced several new features to attract developers to its platform. Its metadata service enables Droplets (virtual private servers) to query information about themselves and bootstrap new servers, and a new DigitalOcean DNS service brought more scalability and reliability to creating and resolving DNS entries, allowing near-instantaneous propagation of domain names.

Other companies are also helping to fuel growth at DigitalOcean. Mesosphere created an automated provisioning tool which lets customers use DigitalOcean's resources to create self-healing environments that offer fault tolerance and scalability with minimal configuration. Mesosphere's API makes it possible to manage thousands of Droplets as if they were a single computer, and with DigitalOcean's low pricing models and SSD-only storage, it's understandable how this arrangement can appeal to particularly power-hungry developers.

In January, DigitalOcean introduced its first non-Linux operating system, FreeBSD. Although less commonly used these days, FreeBSD has garnered a reputation for reliability, and in the past it was not unusual to see web-facing FreeBSD servers with years of uptime. In April, DigitalOcean launched the second version of its API, which lets developers programmatically control their Droplets and resources within the DigitalOcean cloud by sending simple HTTP requests.

DigitalOcean added a new Frankfurt region in April 2015.

More recently, DigitalOcean introduced a new European hosting region in Frankfurt, Germany. This region connects to the German Commercial Internet Exchange (DE-CIX), the largest internet exchange point worldwide by peak traffic, allowing Droplets hosted there to offer good connectivity to neighbouring countries. (An earlier announcement of an underwater Atlantis datacenter sadly turned out to be an April Fool's joke, despite the obvious benefits of free cooling.)

Even so, Amazon still clearly dwarfs DigitalOcean in terms of variety of features and value-added services. Notably, Amazon offers a larger variety of operating systems on its EC2 cloud instances (including Microsoft Windows), and its global infrastructure is spread much wider. For example, EC2 instances can be hosted in America, Ireland, Germany, Singapore, Japan, Australia, Brazil, China or even within the isolated AWS GovCloud (US) region, which allows US government agencies to move sensitive workloads into the cloud whilst fulfilling specific regulatory and compliance requirements. As well as these EC2 regions, Amazon also offers additional AWS Edge Locations used by its CloudFront content delivery network and its Route 53 DNS service.

Yet, as well as its low pricing, part of the appeal of using DigitalOcean could lie within its relative simplicity compared with Amazon's bewilderingly vast array of AWS services (AppStream, CloudFormation, ElastiCache, Glacier, Kinesis, Cognito, Simple Workflow Service, SimpleDB, SQS and Data Pipeline to name but a few). Signing up and provisioning a new Droplet on DigitalOcean is remarkably quick and easy, and likely fulfils the needs of many users. DigitalOcean's consistent and strong growth serves as testament to this, and will make the next year very interesting for the two at the top.