Most Reliable Hosting Company Sites in October 2016

Rank | Company | OS | Outage (hh:mm:ss) | Failed Req (%) | DNS (s) | Connect (s) | First byte (s) | Total (s)
-----|---------|----|-------------------|----------------|---------|-------------|----------------|----------
1 | Qube Managed Services | Linux | 0:00:00 | 0.000 | 0.153 | 0.059 | 0.122 | 0.122
2 | One.com | Linux | 0:00:00 | 0.008 | 0.208 | 0.038 | 0.112 | 0.112
3 | Hyve Managed Hosting | Linux | 0:00:00 | 0.013 | 0.104 | 0.058 | 0.123 | 0.123
4 | ServerStack | Linux | 0:00:00 | 0.013 | 0.132 | 0.061 | 0.124 | 0.124
5 | CWCS | Linux | 0:00:00 | 0.013 | 0.214 | 0.071 | 0.170 | 0.170
6 | Datapipe | Linux | 0:00:00 | 0.017 | 0.149 | 0.012 | 0.024 | 0.031
7 | Webair Internet Development | Linux | 0:00:00 | 0.017 | 0.160 | 0.052 | 0.105 | 0.107
8 | Pickaweb | Linux | 0:00:00 | 0.021 | 0.141 | 0.005 | 0.154 | 0.154
9 | SimpleServers | Linux | 0:00:00 | 0.021 | 0.126 | 0.005 | 0.155 | 0.155
10 | XILO Communications Ltd. | Linux | 0:00:00 | 0.021 | 0.234 | 0.067 | 0.136 | 0.136

See full table

Qube Managed Services had the most reliable hosting company site in October, successfully responding to every request we made during the month. Qube has been in the top ten for eight months so far this year, and recently selected Epsilon to deliver its CloudLX connectivity platform, which will offer its customers direct access to on-demand Ethernet services.

One.com took second place in October with just two failed requests. This managed hosting provider was recently awarded Editor's Choice in Netzsieger's 2016 hosting comparison for its product coverage, performance, security, and support.

Hyve Managed Hosting came in third place with just three failed requests in October. This UK-based hosting provider specialises in mission-critical hosting, which is reflected in its 100% uptime since we started monitoring www.hyve.com in July. Fourth-place ServerStack and fifth-place CWCS also had three failed requests in October, with the tie broken by average connection time.

Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen-minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24-hour period.
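
The exact monitoring pipeline is not published; the following is only a minimal Python sketch of the averaging step described above, with invented sample data and an assumed record layout (96 fifteen-minute samples cover a 24-hour window):

```python
from datetime import datetime, timedelta

# Minimal sketch (not Netcraft's actual code) of averaging the most recent
# 24 hours of fifteen-minute measurements pooled from several vantage points.
def rolling_average(samples, now, window=timedelta(hours=24)):
    """samples: list of (timestamp, seconds) tuples from all vantage points."""
    recent = [value for ts, value in samples if now - ts <= window]
    return sum(recent) / len(recent) if recent else None

# Example with invented connect-time samples: 96 intervals of 15 minutes = 24 hours.
now = datetime(2016, 10, 31, 23, 45)
connect_samples = [(now - timedelta(minutes=15 * i), 0.059) for i in range(96)]
print(round(rolling_average(connect_samples, now), 3))  # 0.059
```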

From a customer's point of view, the percentage of failed requests is more pertinent than the length of outages on a hosting company's own site, as it gives a better indication of the reliability of routing. This is why we choose to rank the table by fewest failed requests rather than by shortest period of outage. Where the number of failed requests is equal, sites are ranked by average connection time.
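
As an illustration only, here is a short Python sketch of that ranking rule, using the failed-request percentages and connect times from the October table above; the record layout is an assumption made for this example:

```python
# Illustrative sketch of the ranking rule: fewest failed requests first,
# ties broken by average connection time. Figures are taken from the
# October 2016 table above; the data structure is assumed for this example.
sites = [
    {"company": "CWCS", "failed_pct": 0.013, "connect": 0.071},
    {"company": "Hyve Managed Hosting", "failed_pct": 0.013, "connect": 0.058},
    {"company": "ServerStack", "failed_pct": 0.013, "connect": 0.061},
    {"company": "One.com", "failed_pct": 0.008, "connect": 0.038},
    {"company": "Qube Managed Services", "failed_pct": 0.000, "connect": 0.059},
]

ranked = sorted(sites, key=lambda s: (s["failed_pct"], s["connect"]))
for rank, site in enumerate(ranked, start=1):
    print(rank, site["company"])
# 1 Qube Managed Services, 2 One.com, 3 Hyve Managed Hosting,
# 4 ServerStack, 5 CWCS -- matching the top five in the table.
```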

Information on the measurement process and current measurements is available.

The Chancellor of the Exchequer sets out plans for the UK Government to work with Netcraft

Philip Hammond, Chancellor of the Exchequer.

Netcraft is in the news today after the Chancellor of the Exchequer announced plans to work with us to develop better automatic defences to reduce the impact of cyber attacks affecting the UK.

There is coverage in the Independent, which says "Mr Hammond will set out plans to work in partnership with firms such as internet security company Netcraft to develop better automatic defences"; Ars Technica notes that "Number 11 said it would work closely with industry partners such as Bath-based Netcraft"; and TechCrunch observes: "One company the government is highlighting here is Netcraft for 'automated defence techniques to reduce the impact of cyber-attacks.'"

Her Majesty's Government's own announcement outlines Netcraft's role in the cyber security strategy:

The strategy sets out how government will strengthen its own defences as well as making sure industry takes the right steps to protect Critical National Infrastructure in sectors like energy and transport. We will do this through working in partnership with industry - including companies such as the innovative SME Netcraft - to use automated defence techniques to reduce the impact of cyber-attacks [...]. Previously a website serving web-inject malware would stay active for over a month - now it is less than two days. UK-based phishing sites would remain active for a day - now it is less than an hour. And phishing sites impersonating government's own departments would have stayed active for two days - now it is less than 5 hours.

October 2016 Web Server Survey

In the October 2016 survey we received responses from 1,429,331,486 sites and 6,144,093 web-facing computers. This reflects a large increase of 144 million sites, and a more modest increase of 25,300 computers.

Microsoft once again saw the largest increase in websites this month, gaining 95 million. Apache and nginx accounted for the majority of the remaining growth, gaining 25 million and 11 million sites respectively. Despite Microsoft's large gain of websites, it lost both web-facing computers (-17,700) and active sites (-1.2 million).

Apache saw the largest increase of active sites this month, gaining 1.8 million, while nginx gained 400,000, the second largest growth. These gains, coupled with Microsoft’s loss of 1.2 million active sites, led to Microsoft’s share of active sites dropping to 9.27%, the first time that it has fallen below 10%. Apache increased its market share by 0.19 percentage points and continues to dominate, now with 46.30% of the active sites.

The largest increase in web-facing computers was made by nginx, gaining 20,000. Despite now having more than twice as many active sites as Microsoft, nginx remains in third place by number of web-facing computers with 17.41% of the market, compared to Microsoft's 24.91%. Apache leads, running on 45.97% of all web-facing computers; however, both Apache and Microsoft are gradually losing market share to nginx.

Within the million busiest sites, the long-term trend is the ascent of nginx, at the expense of both Apache and Microsoft. This month continues that trend, with Apache losing 0.13 percentage points, Microsoft losing 0.14, and nginx gaining 0.20. However, Apache still leads by a significant margin over second-placed nginx, with 146,000 more of the million busiest sites using Apache.

Total number of websites

Web server market share

Developer | September 2016 | Percent | October 2016 | Percent | Change (pp)
----------|----------------|---------|--------------|---------|------------
Microsoft | 542,498,796 | 42.19% | 637,583,717 | 44.61% | +2.41
Apache | 316,042,289 | 24.58% | 340,793,662 | 23.84% | -0.74
nginx | 186,529,038 | 14.51% | 196,861,415 | 13.77% | -0.73
Google | 21,467,729 | 1.67% | 21,516,308 | 1.51% | -0.16
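
The percentages in this table are simply each developer's site count divided by that month's survey total, and the change column is the difference between the unrounded shares. A quick Python check of the Microsoft row, using the survey totals quoted in the opening paragraphs of the October and September surveys:

```python
# Rough check of the market-share arithmetic using the survey totals and
# Microsoft site counts quoted above. The published change is computed from
# unrounded shares, so it can differ by 0.01 pp from subtracting the rounded
# percentages shown in the table.
sept_total, oct_total = 1_285_759_146, 1_429_331_486
sept_sites, oct_sites = 542_498_796, 637_583_717

sept_share = 100 * sept_sites / sept_total   # ~42.19%
oct_share = 100 * oct_sites / oct_total      # ~44.61%
change = oct_share - sept_share              # ~+2.41 percentage points

print(f"{sept_share:.2f}% -> {oct_share:.2f}% ({change:+.2f} pp)")
```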

Most Reliable Hosting Company Sites in September 2016

Rank | Company | OS | Outage (hh:mm:ss) | Failed Req (%) | DNS (s) | Connect (s) | First byte (s) | Total (s)
-----|---------|----|-------------------|----------------|---------|-------------|----------------|----------
1 | Datapipe | Linux | 0:00:00 | 0.013 | 0.149 | 0.012 | 0.025 | 0.031
2 | Memset | Linux | 0:00:00 | 0.018 | 0.155 | 0.063 | 0.187 | 0.315
3 | krystal.co.uk | Linux | 0:00:00 | 0.022 | 0.144 | 0.073 | 0.152 | 0.153
4 | www.viawest.com | Linux | 0:00:00 | 0.026 | 0.300 | 0.005 | 0.199 | 0.199
5 | EveryCity | unknown | 0:00:00 | 0.026 | 0.113 | 0.069 | 0.142 | 0.142
6 | Netcetera | unknown | 0:00:00 | 0.026 | 0.095 | 0.083 | 0.169 | 0.169
7 | Anexia | Linux | 0:00:00 | 0.026 | 0.213 | 0.083 | 0.180 | 0.180
8 | Pair Networks | FreeBSD | 0:00:00 | 0.031 | 0.246 | 0.069 | 0.142 | 0.142
9 | Multacom | Linux | 0:00:00 | 0.031 | 0.196 | 0.101 | 0.204 | 0.347
10 | XILO Communications Ltd. | Linux | 0:00:00 | 0.035 | 0.228 | 0.066 | 0.138 | 0.138

See full table

Datapipe retained the top spot in September, successfully responding to all but three requests. Datapipe has been in the top ten for eight of the last nine months and has a 100% uptime record over the last ten years. In August, Datapipe announced its acquisition of UK managed cloud services provider Adapt, a move intended to further strengthen its European presence.

Memset took second place with four failed requests. It was the UK's first accredited carbon-neutral web host and, in July, won the Best Dedicated Hosting Award at the ISPA Awards for the second year running. The judges were impressed by Memset's response times, technical support, carbon neutrality, and its adoption of IPv6 as standard.

In third place with five failed requests is Krystal, a UK-based hosting provider boasting 100% SSD hosting.

Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen-minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24-hour period.

From a customer's point of view, the percentage of failed requests is more pertinent than the length of outages on a hosting company's own site, as it gives a better indication of the reliability of routing. This is why we choose to rank the table by fewest failed requests rather than by shortest period of outage. Where the number of failed requests is equal, sites are ranked by average connection time.

Information on the measurement process and current measurements is available.

September 2016 Web Server Survey

In the September 2016 survey we received responses from 1,285,759,146 sites and 6,118,785 web-facing computers, reflecting large gains in both metrics: 132 million additional sites, and 138,000 more computers.

Microsoft made up the majority of this month's website growth, with the largest gain of 97 million sites, although it showed only modest increases of 5,200 web-facing computers and 693,000 active sites.

Apache was responsible for most of this month's additional web-facing computers, increasing its count by 87,000 to 2.8 million (+3.2%). Similarly, nginx made a 3.0% gain of 30,000 computers. However, Microsoft's 0.3% gain was not enough to stop its share falling by half a percentage point to 25.3% as a result of the gains made by Apache and nginx.

Although nginx made a healthy gain in web-facing computers, it lost more than 5 million active sites and 5,600 sites within the top million. nginx is now used by 27.6% of the busiest million sites (down 0.56 percentage points from last month), while Apache retains its lead with a 42.5% share.

Along with nginx, all of the major web server vendors suffered losses within the top million sites, largely due to the growth of OpenResty this month. More than 10,000 of the top million sites are now using OpenResty, compared with fewer than 4,000 last month, after millions of Tumblr blogs switched from nginx. As well as tumblr.com, basecamp.com — the home of the Basecamp web-based project management tool — ranks amongst the most visited sites to use OpenResty.

Tumblr's adoption of OpenResty has caused the web server to leap up the rankings to become the seventh largest web server vendor by websites, and fifth by active sites. This month, 87% of all OpenResty sites appear under the tumblr.com domain.

Although most OpenResty sites reside under the tumblr.com domain, the number of unique domains using OpenResty also increased noticeably this month.

Switching from nginx to OpenResty is not as big a shift as moving to, say, Apache or Microsoft IIS. The OpenResty web application platform is built around the standard nginx core, which offers some familiarity as well as allowing the use of third-party nginx modules. One of the key additional features provided by OpenResty is the integration of the LuaJIT compiler and many Lua libraries; this allows high-performance web applications to run entirely within the bundled nginx server, where developers can take advantage of non-blocking I/O.

Another web server that has gained prominence over the past year is Cowboy, a small and fast modular HTTP server written in Erlang. Optimised for low latency and low memory usage, it is currently the fifth most common web server software installed on web-facing computers that accept HTTP connections. Most of the computers used by Cowboy servers are powered by the Heroku Cloud Application Platform and hosted at Amazon Web Services.

Total number of websites

Web server market share

Developer | August 2016 | Percent | September 2016 | Percent | Change (pp)
----------|-------------|---------|----------------|---------|------------
Microsoft | 445,105,755 | 38.58% | 542,498,796 | 42.19% | +3.61
Apache | 300,028,832 | 26.01% | 316,042,289 | 24.58% | -1.43
nginx | 181,606,297 | 15.74% | 186,529,038 | 14.51% | -1.23
Google | 22,111,431 | 1.92% | 21,467,729 | 1.67% | -0.25

Most Reliable Hosting Company Sites in August 2016

Rank | Company | OS | Outage (hh:mm:ss) | Failed Req (%) | DNS (s) | Connect (s) | First byte (s) | Total (s)
-----|---------|----|-------------------|----------------|---------|-------------|----------------|----------
1 | Datapipe | Linux | 0:00:00 | 0.000 | 0.134 | 0.012 | 0.024 | 0.030
2 | EveryCity | SmartOS | 0:00:00 | 0.008 | 0.100 | 0.065 | 0.131 | 0.131
3 | Qube Managed Services | Linux | 0:00:00 | 0.013 | 0.129 | 0.059 | 0.119 | 0.119
4 | CWCS | Linux | 0:00:00 | 0.013 | 0.197 | 0.071 | 0.147 | 0.147
5 | Netcetera | Linux | 0:00:00 | 0.029 | 0.084 | 0.082 | 0.168 | 0.168
6 | Aruba | Windows Server 2012 | 0:00:00 | 0.034 | 0.186 | 0.083 | 0.175 | 0.175
7 | Lightcrest | unknown | 0:00:00 | 0.042 | 0.142 | 0.012 | 0.029 | 0.051
8 | Pair Networks | FreeBSD | 0:00:00 | 0.042 | 0.226 | 0.069 | 0.138 | 0.138
9 | ServerStack | Linux | 0:00:00 | 0.046 | 0.112 | 0.061 | 0.122 | 0.122
10 | XILO Communications Ltd. | Linux | 0:00:00 | 0.046 | 0.204 | 0.068 | 0.134 | 0.134

See full table

Datapipe returned to the top spot for the fourth time this year, without a single failed request and with an average connection time of just 0.012 seconds. Datapipe has featured in the top ten for seven of the eight months so far this year and has maintained a 100% uptime record over the past ten years. In July, Datapipe announced an expansion of its presence in Europe with the opening of a new data centre in Moscow, its 21st worldwide.

In second place with just two failed requests was EveryCity, ranking in the top ten for a third consecutive month and making its fourth appearance in the top ten this year. Based in London, EveryCity specialises in managed cloud hosting and has maintained 100% uptime over the last six months.

Third place this month goes to Qube Managed Services, which narrowly beat CWCS with the same number of failed requests but a faster average connection time. Placing in the top ten for the third month in a row, Qube has data centres in London, New York, and Zurich, offering cloud services, managed services, and colocation.

Linux continues to be the predominant operating system, powering six of the top ten company sites. However, for the first time since June 2015, one of the top ten was running on Windows.

Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen-minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24-hour period.

From a customer's point of view, the percentage of failed requests is more pertinent than the length of outages on a hosting company's own site, as it gives a better indication of the reliability of routing. This is why we choose to rank the table by fewest failed requests rather than by shortest period of outage. Where the number of failed requests is equal, sites are ranked by average connection time.

Information on the measurement process and current measurements is available.