Most Reliable Hosting Company Sites in December 2012
1st January, 2013
| Rank | Company site | OS | Outage (hh:mm:ss) | Failed Req% | DNS (secs) | Connect (secs) | First byte (secs) | Total (secs) |
|---|---|---|---|---|---|---|---|---|
| 3 | New York Internet | FreeBSD | 0:00:00 | 0.006 | 0.078 | 0.025 | 0.677 | 0.774 |
| 4 | Server Intellect | Windows Server 2008 | 0:00:00 | 0.006 | 0.035 | 0.066 | 0.132 | 0.328 |
Serverstack had the most reliable hosting company site during December, responding to every request from our monitoring system. We have only been monitoring Serverstack for three months, but it has quickly established itself as one of the hosting company sites with the fewest failed requests over that period, despite being located in the area affected by Hurricane Sandy.
Swishmail (second), New York Internet (third), Datapipe (fifth) and Reliable Servers (tenth) are also hosted within the area in which Hurricane Sandy made landfall. The presence of five such affected companies in the top ten reinforces Datapipe founder Robb Allen's assertion that the US North East's recent history, including grid blackouts, Hurricane Irene and the 9/11 attacks, has helped improve the resilience of the region's internet connectivity and hosting industry.
December saw New York Internet (third) named NJBIZ's "Emerging Business of the Year" for 2012. NJBIZ profiled New York Internet's New Jersey datacentre in 2011, and praised the company's renovation and retrofitting of an older property in order to accommodate modern technology.
December's top ten list is dominated by FreeBSD and Linux; Windows specialist Server Intellect (fourth) has the only Windows-powered site. Server Intellect, which now offers Windows Server 2012 as standard on all dedicated and cloud servers, placed second last month and regularly features among the top ten most reliable hosting company sites.
During December we added a new performance measurement point hosted at Webair's datacentre, in Amsterdam, bringing the total number of measurement points to 11.
Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24 hour period.
From a customer's point of view, the percentage of failed requests is more pertinent than the length of outages on hosting companies' own sites, as it gives a better indication of routing reliability; this is why we rank the table by fewest failed requests rather than shortest outage. Where the number of failed requests is equal, sites are ranked by average connection time.
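The ranking rule described above can be sketched in a few lines of Python; the site names and figures below are purely illustrative, not taken from the actual table:

```python
# Sketch of the ranking rule: order sites by fewest failed requests,
# breaking ties by average connection time. Data here is hypothetical.
from typing import NamedTuple

class SiteStats(NamedTuple):
    name: str
    failed_pct: float    # percentage of failed requests
    connect_time: float  # average connection time, in seconds

def rank_sites(sites):
    # Primary key: fewest failed requests; secondary key: shortest connect time.
    return sorted(sites, key=lambda s: (s.failed_pct, s.connect_time))

sites = [
    SiteStats("example-host-a", 0.006, 0.066),
    SiteStats("example-host-b", 0.006, 0.025),
    SiteStats("example-host-c", 0.000, 0.101),
]
print([s.name for s in rank_sites(sites)])
# example-host-c comes first (no failed requests); a and b tie on failures,
# so the faster connection time of example-host-b breaks the tie.
```

Sorting on a tuple key applies the second element only when the first elements are equal, which matches the tie-break behaviour described above.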
Information on the measurement process and current measurements is available.