Most Reliable Hosting Company Sites in June 2013

Rank  Company           OS       Outage     Failed Req (%)  DNS (s)  Connect (s)  First byte (s)  Total (s)
1     ServerStack       Linux    0:00:00    0.007           0.089    0.073        0.146           0.146
2     Codero            Linux    0:00:00    0.010           0.234    0.084        0.266           0.528
3     Swishmail         FreeBSD  0:00:00    0.014           0.133    0.070        0.137           0.184
4     Virtual Internet  Linux    0:00:00    0.014           0.162    0.074        0.329           0.502
5     Datapipe          FreeBSD  0:00:00    0.017           0.083    0.018        0.037           0.057
6     Bigstep           Linux    0:00:00    0.017           0.289    0.072        0.147           0.228
7     Midphase          Linux    0:00:00    0.017           0.246    0.111        0.225           0.380
8                       Linux    0:00:00    0.017           0.209    0.129        0.214           0.517
9     Memset            Linux    0:00:00    0.021           0.111    0.074        0.146           0.291
10    Iomart            Linux    0:00:00    0.021           0.115    0.088        0.181           0.339


ServerStack had the most reliable hosting company site in June, with only two failed requests. ServerStack provides managed dedicated hosting from data centres in New Jersey, San Jose, and Amsterdam, and counts amongst its clients high-traffic sites such as MTV and academic publisher Elsevier. Over the eight months Netcraft has been monitoring ServerStack's performance, it has appeared in the top 10 five times and been the most reliable hosting company site twice.

Codero and Swishmail took second and third place respectively. Both companies had 100% uptime, and just two failed requests separate the top three companies. Both Codero and Swishmail are based in the United States: Codero has a presence in Virginia, Illinois and Arizona, whilst Swishmail operates out of three New York data centres.

Bigstep, which focuses on providing hosting infrastructure for big data companies, started being monitored three months ago and has maintained a 100% uptime record thus far.

For the first time since May 2012, none of the companies in the top 10 Most Reliable Hosting Company Sites were running a version of Windows Server. ServerStack runs Linux, as do seven other hosting companies in the top 10. FreeBSD is used by the remaining two: Datapipe and Swishmail.

Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen-minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24-hour period.
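The averaging described above can be sketched as follows. This is an illustrative reconstruction, not Netcraft's actual code: the function name, sample format, and data are assumptions, but it shows the rolling 24-hour window (up to 96 fifteen-minute samples) over which each site's response times would be averaged.

```python
# Hypothetical sketch of a rolling 24-hour average over fifteen-minute samples.
# A full window holds at most 96 measurements (4 per hour x 24 hours).
from datetime import datetime, timedelta

def rolling_24h_average(samples, now):
    """samples: list of (timestamp, response_time_seconds) tuples (illustrative)."""
    cutoff = now - timedelta(hours=24)
    window = [rt for ts, rt in samples if ts >= cutoff]  # keep only the last 24 hours
    return sum(window) / len(window) if window else None

# Example: ten samples of 0.1s taken within the last 24 hours average to 0.1s.
samples = [(datetime(2013, 6, 1, h, 0), 0.1) for h in range(10)]
avg = rolling_24h_average(samples, datetime(2013, 6, 1, 12, 0))
```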

From a customer's point of view, the percentage of failed requests is more pertinent than outages on hosting companies' own sites, as it gives a pointer to the reliability of routing. This is why we choose to rank our table by fewest failed requests rather than by shortest periods of outage. When the number of failed requests is equal, sites are ranked by average connection time.
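The ranking rule described above amounts to a two-key sort. The sketch below applies it to the top four rows of the table; the dictionary field names are assumptions made for illustration.

```python
# Illustrative ranking: fewest failed requests first, ties broken by
# average connection time (values taken from the table above).
sites = [
    {"company": "Virtual Internet", "failed_pct": 0.014, "connect": 0.074},
    {"company": "Swishmail",        "failed_pct": 0.014, "connect": 0.070},
    {"company": "Codero",           "failed_pct": 0.010, "connect": 0.084},
    {"company": "ServerStack",      "failed_pct": 0.007, "connect": 0.073},
]

# Python's tuple comparison gives the primary key (failed requests) precedence,
# falling back to the secondary key (connection time) only on a tie.
ranked = sorted(sites, key=lambda s: (s["failed_pct"], s["connect"]))
order = [s["company"] for s in ranked]
```

Note how Swishmail and Virtual Internet tie on failed requests (0.014%), so Swishmail's faster connection time (0.070s vs 0.074s) places it ahead, matching the published table.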

Information on the measurement process and current measurements is available.