In the January 2016 survey we received responses from 906,616,188 sites and 5,753,264 web-facing computers, reflecting a modest increase of less than six million sites, but a significant gain of 174,000 computers.
Microsoft gained 22.5m sites (+9.40%), which has taken its market share up by 2.32 points. Meanwhile, Apache lost 16.4m sites, and nginx fell by 15.6m. Apache's market share is now less than 5 points ahead of Microsoft; this difference was more than twice as large just two months ago.
The web-facing computers metric is typically much more stable, but this month's overall gain of 174,000 computers is unusually large as a result of a 7.6% increase in the number of web-facing computers running Apache.
This large gain comprised nearly 195,000 Apache computers, the majority of which are Western Digital My Cloud personal storage devices. These consumer devices run web servers and can be accessed using public hostnames with a format similar to device1000000-a1b2c3d4.wd2go.com. Consumers can remotely access their files via the My Cloud web application, a mobile app, or via third-party applications that make use of the relatively new My Cloud OS 3 platform.
More than 240,000 of these wd2go.com hostnames point directly to a variety of consumer broadband connections, which is where the My Cloud devices are physically located.
Network Attached Storage (NAS) devices are rarely exposed to the internet on such a large scale, and so this provides some otherwise invisible insights into the usage of these particular devices. Although consumers do not have to enable the Cloud Access feature, the 240,000+ devices that are directly exposed to the internet are likely to be a fairly representative sample of all similar Western Digital devices.
Nearly half of the My Cloud devices that are exposed directly to the internet are located in the US, while the UK has the next largest share of 13%, and France follows with 6%. This suggests that nearly two-thirds of Western Digital's consumer NAS sales take place in these three countries alone.
As well as the My Cloud devices that are exposed directly to the internet, a further 273,000 wd2go.com hostnames resolve to fewer than 200 IP addresses hosted by Amazon AWS. These hostnames likely represent additional My Cloud devices that have been cloud-enabled using Relay mode. In this mode, requests bound for the device are relayed via the Amazon-hosted web service, which makes it possible for a consumer to gain remote access even when they are not able to set up port forwarding on their router.
However, whilst certainly convenient, exposing a My Cloud device to the internet (either directly or in relay mode) could undermine a consumer's security by revealing the device's internal IP address to the whole world. Each of the 500,000+ My Cloud devices that can be accessed via hostnames like device1070698-xxxxxxxx.wd2go.com also has a corresponding DNS entry that reveals its local IP address:
$ host device1070698-xxxxxxxx.wd2go.com
device1070698-xxxxxxxx.wd2go.com has address 78.72.xx.x
$ host device1070698-xxxxxxxx-local.wd2go.com
device1070698-xxxxxxxx-local.wd2go.com has address 192.168.1.65
These "-local" DNS entries allow a remote attacker to discover the local IP address of a consumer's My Cloud device (in this case, 192.168.1.65), which would make it easier to carry out CSRF attacks against it. Even if the consumer has taken the precaution of changing the device's name so that his browser cannot reach it via the default local address (http://wdmycloud), it could still be reached by browsing directly to its local IP address. Devices that have not been updated recently might still be vulnerable to remote code execution via CSRF attacks.
The local IP address of the My Cloud device can also be used to infer the address of the consumer's broadband router, which may well be vulnerable to similar types of attack. Knowing some likely IP addresses of the router makes CSRF attacks much more feasible – for example, if the My Cloud device has an IP address of 10.10.0.31, the attacker could deduce that the router's IP address is likely to be 10.10.0.1 or 10.10.0.254, rather than any of the other 17+ million IANA-reserved private network addresses. A successful exploit against a vulnerable router could give an attacker full control over the router's settings, which could ultimately lead to data theft or financial losses through pharming attacks.
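As an illustration of how little guesswork this inference involves, the short sketch below (a hypothetical helper, not part of any real attack tool) enumerates the most likely gateway addresses for a device's subnet, assuming the common home-network convention of a /24 subnet with the router at the first or last usable address:

```python
import ipaddress

def likely_gateways(device_ip: str, prefix: int = 24):
    """Guess likely router addresses for a device on a private network,
    assuming the router sits at the first or last host address of the
    device's subnet (a common default for home routers)."""
    net = ipaddress.ip_network(f"{device_ip}/{prefix}", strict=False)
    hosts = list(net.hosts())
    return [str(hosts[0]), str(hosts[-1])]

# A device at 10.10.0.31 on a /24 suggests a router at 10.10.0.1 or 10.10.0.254.
print(likely_gateways("10.10.0.31"))
```

Two candidate addresses out of 17+ million possibilities is a dramatic reduction in the search space for a CSRF attack.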
While the influx of these My Cloud devices has resulted in strong growth for Apache, nginx continued its steady progress by gaining a further 23,300 (+3.0%) web-facing computers. Apache's market share in terms of computers now stands at 47.9% (+2.0), while Microsoft lost 20,600 computers, contributing to its share falling to 27.1%. Despite maintaining the consistent growth it has demonstrated for several years, nginx also suffered a minor loss in share by virtue of Apache's exceptional growth.
[Table: web server market share by developer, December 2015 vs January 2016]
Posted in Web Server Survey
A Brazilian government website has been compromised for the third time in less than two months. Each compromise resulted in the site hosting fraudulent content that was used in phishing attacks. One of these attacks also attempted to install drive-by malware on victims' computers.
The first compromise took place in December, when the Prefeitura Municipal de Esperança website was used to host a phishing attack against Wells Fargo bank. The fraudulent content used in this first attack was subsequently removed, but the site was compromised again last week and used to host two more phishing attacks.
The second phishing attack, which kicked off last week, was aimed at PayPal customers. This was arguably the most dangerous attack: As well as stealing victims' PayPal credentials and bank details, the phishing kit used in this attack also attempted to inject drive-by malware via hidden iframes.
Fraudsters often use ready-made phishing kits when deploying phishing sites, as it generally makes the process quick and easy. Kits typically consist of a collection of lookalike web pages, scripts and images which simply have to be uploaded to the compromised web server to create a ready-to-go phishing site. In most cases, all the fraudster has to do is edit a simple configuration file to tell the phishing site which email address to send the stolen credentials to.
The third attack – which is currently still live – uses a phishing kit that is designed to steal webmail credentials. Many slight variations of this kit exist, but all display an error message regardless of the validity of the submitted credentials.
Unbeknownst to the victim, the stolen credentials are emailed to the fraudster who deployed the kit; but these webmail phishing kits also contain an additional surprise. The fraudster may not realise that the kit also sends a copy of these stolen credentials to another email address, which presumably belongs to the original author of the kit. This address has been sneakily embedded into the kit in such a way that its presence is unlikely to be spotted by the deploying fraudster.
Webmail credentials are a popular target for phishers, as they can be used to compromise further accounts held by each victim. For example, if the victim's email address has been used to sign up for other services, the attacker might be able to use password resets to gain unauthorised access to those services.
The .gov.br second-level domain used by the compromised website is reserved for government entities within Brazil, yet the content of the site is physically hosted by HostGator in Texas. It is not unusual for South American governments to host websites in external countries such as the U.S., especially when the sites do not store or process any sensitive data. The most obvious motivation in this case is that hosting costs in the U.S. are typically lower than those in Brazil.
The fact that the website has been repeatedly compromised suggests there is still a vulnerability that allows remote attackers to upload arbitrary content to the web server. One possible route of compromise is the "unsafe" version of WordPress used on www.prefeituradeesperanca.pb.gov.br. The Prefeitura Municipal de Esperança website uses WordPress 4.0.9 as its content management system, and although this version was released only a week ago (to address a cross-site scripting vulnerability), only the latest release in the 4.4.x series is officially maintained. The WordPress website explicitly points out that anything older than the current release (4.4.1) is not safe to use.
Another potential risk could be the site's reliance on a shared hosting platform: More than 70 other websites are served from the same IP address as that used by www.prefeituradeesperanca.pb.gov.br. Vulnerabilities exposed by any of these non-government sites could potentially be used to attack the government site. Also, in general, any web server that has previously been compromised could have had a backdoor installed by the attacker, making it trivial to gain unauthorised access at a later time.
The PayPal phishing kit
PayPal is one of the most common phishing targets, with many distinct phishing kits making it easy for even novices to carry out these types of attack. Last month alone, Netcraft blocked more than 60,000 phishing URLs that were designed to steal PayPal credentials.
The PayPal phishing kit used in last week's attack featured a few tricks that made it stand out from a typical kit. Although it exhibits a few tell-tale spelling mistakes, the designer of the phishing kit has been very careful in other respects. For example, the initial login page actually consists of a large background image, with two input fields and a submit button overlaid. This means the textual content of the page does not need to be written in the HTML document, which could in turn reduce the likelihood of the attack being spotted and blocked by certain internet security software.
However, this trick does not work too well in all browsers – if you look closely, you can see that the text fields do not quite line up with the placeholders in the background image.
The fact that the spelling mistakes are contained within images, rather than within an easily editable HTML document, could explain why subsequent users of this phishing kit have not corrected them.
Spelling mistakes aside, the developer has also implemented validation checks to prevent the login form being submitted with an invalid email address:
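Client-side checks of this kind are typically simple pattern tests. A minimal sketch of such a check (illustrative only; not the kit's actual code) might look like this:

```python
import re

# A simple pattern test of the kind phishing kits use to reject obviously
# invalid email addresses before the login form can be submitted.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(address: str) -> bool:
    """Return True if the string superficially resembles an email address."""
    return bool(EMAIL_RE.match(address))

print(looks_like_email("victim@example.com"))  # True
print(looks_like_email("not-an-email"))        # False
```

Such validation serves the fraudster, not the victim: by rejecting garbage input up front, the kit ensures that the credentials it forwards are more likely to be genuine.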
After stealing the victim's PayPal credentials, the phishing site takes the user through a three-stage "update" process. The first stage collates the victim's full address and date of birth, while the second gathers his payment card details, and the final stage steals his bank account numbers.
Each page validates the victim's input, and like the spoof login page, they also use background images in an attempt to evade detection.
But the nastiest feature is that each page in the phishing kit contains a set of hidden iframes that attempt to silently install malware on the victim's computer. This is a relatively unusual feature for a phishing kit, and was possibly included for the benefit of the phishing kit's author, rather than the subordinate fraudsters who deploy it.
However, the malware component of the attack does not work, as the domain used for the malware delivery has been sinkholed. If it had not already been sinkholed and was still serving drive-by malware, any victim visiting the phishing site could have had his computer compromised as soon as the login page was viewed. If the victim was cautious enough to not submit the login form, the malware might still have allowed the attacker to steal the victim's credentials in other ways, or allow for other monetization opportunities, such as making the victim's computer part of a botnet.
After the victim has submitted his bank account details, the PayPal phishing site indicates that the account has been successfully updated, and redirects the victim to the genuine PayPal login page. Being prompted to enter a username and password a second time could ring alarm bells, as the victim has, ostensibly, already logged in. The phishing site explains away this concern by saying the user must re-login to save the changes.
All three of these phishing attacks were added to Netcraft's Phishing Site Feed. This feed is used by all major web browsers and many leading anti-virus and content-filtering companies, so most users are already protected against the latest webmail phishing attack. The fraudulent content used in the first two attacks has been removed from the Prefeitura Municipal de Esperança website.
Posted by Paul Mutton in Security
Despite widespread concerns over the security of the SHA-1 hash algorithm, the US Department of Defense is still issuing SHA-1 signed certificates, and using them to secure connections to .mil websites.
Since 1 January 2016, the CA/Browser Forum's Baseline Requirements [pdf] have banned the issuance of new SHA-1 certificates. Publicly-trusted certificate authorities are expected to comply with these Baseline Requirements in order to remain trusted by browsers and operating systems.
However, the US DoD is not a publicly-trusted certificate authority per se, and therefore it does not have to abide by the CA/Browser Forum's rules. With the exception of Apple platforms, most browser software does not include the DoD's root certificates by default. This means any secure site that uses a certificate issued by the DoD is unlikely to be trusted by a browser running on Windows or Linux, unless the user has explicitly installed the DoD's root certificates.
Even though the DoD does not have to abide by the CA/Browser Forum's rules, it is arguably a bad idea not to: The SHA-1 algorithm is now thought to be sufficiently weak that a well-funded attacker might be able to find a SHA-1 hash collision and hence impersonate any HTTPS website. It is also particularly surprising to see the DoD still using SHA-1 today when the US National Institute of Standards and Technology banned its use more than two years ago. Since NIST made this decision, the projected cost of finding a SHA-1 hash collision has fallen significantly.
On 4 January 2016, the DoD issued a SHA-1 certificate to necportal.riley.army.mil [site report], which is a SharePoint portal hosted by the United States Army Information Systems Command. It can be accessed remotely by Common Access Card (CAC) holders. The certificate is marked as being valid until 8 September 2017.
The DoD is America's largest government agency, and is tasked with protecting the security of its country, which makes its continued reliance on SHA-1 particularly remarkable. Besides the well known security implications, this reliance could already prove problematic amongst the DoD's millions of employees. For instance, Mozilla Firefox 43 began rejecting all new SHA-1 certificates issued since 1 January 2016. When it encountered one of these certificates, the browser displayed an Untrusted Connection error, although this could be overridden. If DoD employees become accustomed to ignoring such errors, it could become much easier to carry out man-in-the-middle attacks against them.
However, the latest version of Firefox no longer rejects SHA-1 certificates issued after 1 January 2016. This change was made to cater for users of certain man-in-the-middle products, which generate freshly issued certificates on the fly. Consequently, users of Firefox 43.0.4 who have installed the appropriate DoD root certificates will currently not receive any errors, or even warnings, when browsing to the site.
Google intends to block all SHA-1 certificates issued from 1 January 2016 with the release of Chrome 48. In the meantime, Chrome 47 affirmatively distrusts the SHA-1 certificate used by necportal.riley.army.mil because it does not expire until 2017.
Firefox will ultimately distrust all SHA-1 certificates by 2017, regardless of when they were issued, but Mozilla considered advancing this deadline to as early as 1 July 2016 after the new cost projections came to light.
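Taken together, these policies amount to a simple decision rule based on a certificate's signature algorithm, issuance date, and expiry date. The sketch below (illustrative only; not actual browser source, and the function name is hypothetical) captures the behaviour described above:

```python
from datetime import date

def sha1_cert_status(sig_alg: str, not_before: date, not_after: date) -> str:
    """Sketch of the browser policies described in the text (not real
    browser code): SHA-1 certificates issued on or after 1 January 2016
    are rejected outright, and older ones that expire after 2016 are
    distrusted early."""
    if sig_alg != "sha1":
        return "ok"
    if not_before >= date(2016, 1, 1):
        return "rejected"    # banned by the Baseline Requirements
    if not_after > date(2016, 12, 31):
        return "distrusted"  # e.g. Chrome 47's treatment of long-lived SHA-1 certs
    return "warn"            # legacy SHA-1 cert expiring within 2016

# The necportal.riley.army.mil certificate: issued 4 Jan 2016, expires Sep 2017.
print(sha1_cert_status("sha1", date(2016, 1, 4), date(2017, 9, 8)))  # rejected
```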
More than 650,000 SSL certificates in use on the web are still using SHA-1, but this count has been rapidly falling since 2014. Nearly all of these certificates are due to expire by the end of 2016, in accordance with the Baseline Requirements; however, with most browser vendors contemplating an accelerated deprecation timeline, it is likely that many of these certificates will be replaced before the middle of the year.
With its PKI infrastructure seemingly still reliant on SHA-1, the US DoD could, by the end of 2017, account for a significant proportion of all SHA-1 certificates intended for use with modern browsers.
Posted by Paul Mutton in Security
The BBC's websites are now back to normal, four days after being taken down by an effective DDoS attack on New Year's Eve.
The BBC mitigated the attack within a few hours by moving its main website onto the Akamai content delivery network, which restored access to its millions of users. However, during this mitigation period, some of the BBC's other websites – which were still hosted at the BBC – remained mostly unreachable.
The BBC's DDoS mitigation was only temporary, and last night it moved its main website off Akamai, back onto a netblock owned by the BBC. This move resulted in another short outage on 4th January, followed by several hours of slightly slower response times within the UK. By 5th January, response times had settled down to be almost comparable with those seen while the site was using Akamai.
However, as expected, response times from other countries are no longer as fast as they were when the BBC's main website was hosted on the Akamai CDN. Response times from the US are notably slower, but currently no worse than they were before the DDoS attacks on New Year's Eve.
During the period in which the BBC's main website was hosted on the Akamai CDN, its legacy News website at news.bbc.co.uk remained hosted at the BBC, and was mostly unavailable, with most client connection attempts being reset.
This site's availability was restored to normal at the same time that the main BBC website moved off Akamai. This suggests that the connection resets were a deliberate attempt to mitigate basic DDoS attacks, rather than a direct side effect of a sustained DDoS attack. However, this approach was not ideal – while some browsers (such as Chrome) would automatically retry the connection attempt (often successfully), other browsers would give up at the first failure.
Since suffering a crippling DDoS attack on New Year's Eve, some BBC websites are still experiencing significant performance issues.
Around 07:00 UTC on 31 December 2015, the main BBC website at www.bbc.co.uk was knocked offline after being subjected to a distributed denial of service attack. For the following few hours, requests to the BBC website either eventually timed out, or were responded to with its 500 Internal Error test card page. A group called New World Hacking later claimed responsibility for the attack, which it carried out as a test of its capabilities.
The British Broadcasting Corporation is the public service broadcaster of the United Kingdom, and the outage had a significant impact on its user base: The BBC's news, sport, weather and iPlayer TV and radio catchup services are all delivered via www.bbc.co.uk.
At the time of the attack, www.bbc.co.uk was served from a netblock owned by the BBC. It seems that service was restored by migrating the site onto the Akamai content delivery network, after which there were no apparent outages.
[Table: OS, Server, Last seen, IP address, Netblock Owner for www.bbc.co.uk]
Moving www.bbc.co.uk onto the Akamai CDN also resulted in some significant performance benefits, particularly from locations outside of the UK. For example, prior to the attack, most requests from Netcraft's New York performance collector took around 0.4-0.6 seconds to receive a response, whereas after the site had migrated to Akamai, all requests were served in well under 0.1 seconds. These performance benefits are typical when using a globally distributed CDN, as cached content can be delivered from an edge server within the client's own country, rather than from a remote server that can only be reached via transatlantic cables.
However, not all of the BBC's websites have migrated to Akamai, and some of these are still exhibiting connectivity issues in the aftermath of the attack. For example, search.bbc.co.uk and news.bbc.co.uk are still hosted directly at the BBC, and these are still experiencing problems today.
The BBC's News service is currently found at www.bbc.co.uk/news, but up until a few years ago it was served from its own dedicated hostname, news.bbc.co.uk. This legacy hostname is still used by some webpages today, but mostly redirects visitors to the new site at www.bbc.co.uk/news. This conveniently collates all of the BBC's main online services under the same hostname, but at the expense of introducing a single point of failure. Had each service still been served from its own hostname on separate servers, this might have offered greater resilience to the initial attack.
As shown above, news.bbc.co.uk was also affected by the DDoS attack which took down the main BBC website, but eventually came back online later that day without having to relocate the website. However, the following morning (New Year's Day), it started to experience significant connectivity problems.
It is unclear whether this indicates a separate ongoing attack, or an attempt at mitigating such attacks, but nonetheless, it is likely to affect lots of users: Many old news articles are still served directly from news.bbc.co.uk, and some users habitually reach the news website by typing news.bbc.co.uk into their browsers. Some regularly updated pages also continue to be served from news.bbc.co.uk, such as horse racing results.
EveryCity had the most reliable hosting company site in December 2015. Despite moving into new offices, its website was the only one to respond to all of Netcraft's requests. EveryCity has maintained its 100% uptime record throughout 2015, and has made it into the top ten 11 times during the year. It also had the most reliable hosting company site in May.
In second place in December was Lightcrest, which also appeared in the top ten in November. It experienced only one failed request, with an impressively fast average connection time of 6 milliseconds. Lightcrest operates its cloud services using its own Kahu Compute Fabric infrastructure, without outsourcing any components to third-party cloud providers.
In third place – also with a single failed request, but with a slower average connection time – was One.com. Established in 2002, One.com now employs over 270 people and has companies registered in Denmark, India and Dubai.
Six of December's top ten hosting company sites ran on Linux operating systems, while Swishmail used FreeBSD, Codero used a Citrix Netscaler device, and EveryCity used SmartOS. The latter is a community fork of OpenSolaris, featuring the ZFS file system, DTrace dynamic tracing, kernel-based virtual machines and Solaris Zones operating system-level virtualisation.
Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24 hour period.
From a customer's point of view, the percentage of failed requests is more pertinent than the length of outages on hosting companies' own sites, as it gives a pointer to the reliability of routing; this is why we choose to rank our table by fewest failed requests, rather than shortest periods of outage. In the event that the numbers of failed requests are equal, sites are ranked by average connection time.
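The ranking rule above can be sketched as follows, using illustrative figures rather than Netcraft's actual measurements:

```python
# Each tuple: (company, failed_requests, avg_connect_time_ms) — example data only.
results = [
    ("One.com", 1, 12),
    ("EveryCity", 0, 50),
    ("Lightcrest", 1, 6),
]

# Rank by fewest failed requests first; break ties on average connection time.
ranked = sorted(results, key=lambda r: (r[1], r[2]))
print([name for name, *_ in ranked])  # ['EveryCity', 'Lightcrest', 'One.com']
```

Note how EveryCity ranks first despite a slower connection time, because it had no failed requests at all, while the tie between Lightcrest and One.com is broken by connection time.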
Information on the measurement process and current measurements is available.