WikiLeaks is currently under another distributed denial of service (DDoS) attack. This time the target appears to be cablegate.wikileaks.org – the website which hosts the leaked US embassy cables.
When the cablegate site was launched on Sunday, WikiLeaks' main website at www.wikileaks.org was subjected to a similar attack, causing it to go offline for several hours. The cablegate site itself was not affected by those attacks.
Today's attack is still ongoing and has caused noticeable downtime over the past couple of hours.
The cablegate hostname is still configured to use three different IP addresses on a round-robin basis, essentially acting as a load balancer, although this does not appear to have prevented the current attack from succeeding. The performance graph suggests the site may also have been attacked over shorter periods earlier in the week, even though it has so far made available only a small fraction of the 250,000 cable messages. The attacks are likely more symbolic than practical, as several large media groups have already been supplied with the full set of leaked messages.
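Round-robin DNS works by publishing several A records for a single hostname, so clients spread their requests across the addresses. A minimal sketch of inspecting such a configuration with Python's standard library (the hostname is the article's; any live result depends on current DNS):

```python
import socket

def resolve_all(hostname, port=80):
    """Return the distinct IPv4 addresses a hostname resolves to.

    With round-robin DNS, several A records are published for one
    name, and clients distribute their requests across them.
    """
    infos = socket.getaddrinfo(hostname, port,
                               socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Example using the article's hostname (results depend on live DNS):
# print(resolve_all("cablegate.wikileaks.org"))
print(resolve_all("localhost"))  # e.g. ['127.0.0.1']
```

Note that round-robin DNS only spreads load; it does not detect failed endpoints, which is why it offers limited protection against a large attack.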
A real-time performance graph for cablegate.wikileaks.org can be viewed here.
WikiLeaks experienced some website downtime last night, coinciding with its release of the US embassy cables at cablegate.wikileaks.org.
Just before the latest leak was released to the world via their new "cablegate" site last night, WikiLeaks tweeted that they were under a mass distributed denial of service attack, but defiantly stated that "El Pais, Le Monde, Speigel, Guardian & NYT will publish many US embassy cables tonight, even if WikiLeaks goes down".
Twitter user th3j35t3r claimed to be carrying out the denial of service attack against www.wikileaks.org, although in a tweet that has since been deleted, th3j35t3r stated that it was not a distributed attack. If WikiLeaks believed the attack to be distributed, it could suggest that other parties had also been carrying out separate attacks at the same time.
th3j35t3r's Twitter profile lists his location as "Everywhere", and he describes himself as a "Hacktivist for good. Obstructing the lines of communication for terrorists, sympathizers, fixers, facilitators, oppressive regimes and other general bad guys."
th3j35t3r's Twitter feed lists dozens of other sites that have also been taken down, mainly communicated through "TANGO DOWN" messages posted via the XerCeS Attack Platform. The "tango down" phrase is used by special forces and is often heard in FPS games such as Rainbow Six and Call of Duty, where it indicates that a terrorist has been eliminated.
Referring to the success of the attack, th3j35t3r also tweeted: "If I was a wikileaks 'source' right now I'd be getting a little twitchy, if they cant protect their own site, how can they protect a src?"
The main www.wikileaks.org site appeared to bear the brunt of the attack, suffering patchy or slow availability for several hours. Last night, the site was hosted from a single IP address, but has since been configured to distribute its traffic between two Amazon EC2 IP addresses on a round-robin basis. One of these instances is hosted in the US, while the other is in Ireland.
Meanwhile, cablegate.wikileaks.org has so far escaped any significant downtime. This site has used three IP addresses since its launch, probably in anticipation of being attacked or deluged with legitimate traffic. Two of these IP addresses are at Octopuce in France, which also hosts the single IP address now used by warlogs.wikileaks.org. Ironically, the third IP address being used to distribute secret US embassy cables is an Amazon EC2 instance hosted in – you guessed it – the US.
Performance graphs are available here:
The Iraq War Logs site run by WikiLeaks has been showing some choppy performance since last weekend, when its remaining Amazon EC2 instance stopped responding to HTTP requests.
Over the past week, the DNS configuration for warlogs.wikileaks.org had been directing traffic to two IP addresses on a round robin basis. One of these IP addresses was at Octopuce in France, and successfully handled half of the HTTP requests sent to http://warlogs.wikileaks.org; however, the remaining 50% were directed towards an Amazon EC2 IP address in Ireland, which stopped accepting connections to port 80 last weekend.
WikiLeaks appeared to fix the DNS problem today (Friday) – warlogs.wikileaks.org is now being served from just a single IP address in France. This is in contrast to the situation a few weeks earlier, when the site was being served from as many as five IP addresses, presumably to make the site more resilient to attack and high demand.
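The failure mode described above – one of two round-robin addresses no longer accepting connections on port 80, so half of all requests fail – is straightforward to check for. A hedged sketch (the addresses below are illustrative placeholders, not WikiLeaks'):

```python
import socket

def check_port(ip, port=80, timeout=3.0):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# With round-robin DNS, every published address must answer, or the
# corresponding share of requests simply fails. Placeholder addresses
# only -- 192.0.2.1 is a reserved TEST-NET address that never responds.
for ip in ["127.0.0.1", "192.0.2.1"]:
    status = "accepting connections" if check_port(ip, 80, timeout=1.0) else "down"
    print(ip, status)
```

Running such a probe against each A record would have flagged the dead Amazon EC2 endpoint as soon as it stopped answering.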
In the November 2010 survey we received responses from 249,461,227 sites.
Apache continues to gain market share, with an increase of 1.29 percentage points since last month. This is the result of 12.9M new Apache hostnames, mostly in the United States (8.1M) and the Netherlands (1.6M). As seen in previous months, other server vendors lost market share as a result, though all of the major vendors apart from Google actually gained hostnames this month.
nginx saw an overall increase of 927k hostnames, despite a loss of 135k at China Telecom, as the resulting loss in Asia was outweighed by large growth in both EMEA and North America. The most significant changes were 213k new hostnames at BurstNet and 207k new hostnames at ServePath, both in the United States. As a result, nginx overtakes Google in this metric, although nginx still trails in terms of active sites, where Google maintains a lead of more than 4M.
At the end of September, Microsoft announced the migration of Windows Live Spaces sites to WordPress.com, which will happen over the next few months. WordPress.com uses load-balanced hosting at Layered Technologies and Peer1, and this month both companies saw modest increases in the number of sites using nginx (60k and 48k hostnames respectively).

For the moment, Windows Live Spaces sites in the spaces.live.com domain whose blogs have been moved to WordPress.com remain online, redirecting users to their new location. For example, http://mikese.mobile.spaces.live.com still exists, served by Microsoft, but when accessed redirects to http://mikese.wordpress.com, which is running nginx. In contrast, blogs on their own domains will result in losses for Microsoft, as the DNS can simply be updated with no need for redirection. An example of a site in this category is http://ozzie.net, which switched over in the middle of October; at the time it was not clear whether this change from IIS on Windows to nginx on Linux was a deliberate move by Ray Ozzie as he prepared to step down as Microsoft's Chief Software Architect, though it now appears to be part of the wider Windows Live Spaces to WordPress.com migration.

Since WordPress.com is served by nginx, we expect to see a continued increase in sites using nginx as the migration takes place.
Despite the changes described above, Microsoft gained 3.1M hostnames this month, mostly in the United States. The largest increases were 942k hostnames at GoDaddy and 717k hostnames at Demand Media Inc.
Lighttpd gained 690k hostnames, making up for the large loss last month. The growth came as the result of a large number of new hostnames at SAVVIS Communications in Australia.

[Graph: Total Sites Across All Domains, August 1995 – November 2010]

[Graph: Market Share for Top Servers Across All Domains, August 1995 – November 2010]
Developer    October 2010    Percent    November 2010    Percent    Change
Apache       135,209,162     58.07%     148,085,963      59.36%      1.29
Microsoft     53,525,841     22.99%      56,637,980      22.70%     -0.28
nginx         14,130,907      6.07%      15,058,114       6.04%     -0.03
Google        14,971,028      6.43%      14,827,157       5.94%     -0.49
lighttpd       1,380,160      0.59%       2,070,300       0.83%      0.24
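A quick sanity check of the November percentage figures against the hostname counts and the survey total of 249,461,227 sites:

```python
# Hostname counts from the November 2010 survey. The Google figure is
# inferred from the nginx/Google comparison in the text above, since
# the developer name was missing from the flattened table.
total = 249_461_227
counts = {
    "Apache":    148_085_963,
    "Microsoft":  56_637_980,
    "nginx":      15_058_114,
    "Google":     14_827_157,
    "lighttpd":    2_070_300,
}

for vendor, hostnames in counts.items():
    share = round(hostnames / total * 100, 2)
    print(f"{vendor:10s} {share:6.2f}%")
# Apache 59.36%, Microsoft 22.70%, nginx 6.04%, Google 5.94%, lighttpd 0.83%
```

The computed shares match the table, including the narrow 6.04% vs 5.94% margin by which nginx overtakes Google in total hostnames.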
Earlier this morning, GitHub announced that it had changed its revision control website to use SSL only; however, a significant flaw in the implementation means that session cookies can still be captured by Firesheep and other network sniffing tools.
Firesheep brought session hijacking to the masses when it was released last month. Ironically, its own GitHub repository includes a github.js handler, which was designed to capture unencrypted session cookies from GitHub users. This allowed novice attackers to monitor shared network traffic (such as public WiFi) and hijack those sessions.
A day after its release, Firesheep's author stated that a basic expectation of privacy should not be a premium feature, referring to the fact that, at the time, you had to pay GitHub if you wanted to use full-session SSL. GitHub's move to SSL this morning should have eliminated the session hijacking vulnerability, rendering Firesheep useless; however, the session cookies used by the site are not always handled securely.
When a user logs in to GitHub, the server sets a _gh_sess session cookie in the client browser. This cookie is not marked with the Secure flag, which means it will be transmitted unencrypted if the user subsequently visits http://github.com, even though that page immediately redirects the user to https://github.com. This means the site's users may still be vulnerable to sniffing tools such as Firesheep.
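The standard fix is to set the Secure attribute on the session cookie, so the browser refuses to send it over plain HTTP at all. A minimal sketch using Python's standard library (the cookie name comes from the article; the token value and attribute choices are illustrative):

```python
from http.cookies import SimpleCookie

# Without the Secure flag, the browser also sends the cookie over
# plain HTTP -- which is exactly what Firesheep captures.
insecure = SimpleCookie()
insecure["_gh_sess"] = "opaque-session-token"
insecure["_gh_sess"]["httponly"] = True

# With Secure set, the browser only transmits the cookie over HTTPS,
# so a visit to http://github.com leaks nothing before the redirect.
secure = SimpleCookie()
secure["_gh_sess"] = "opaque-session-token"
secure["_gh_sess"]["httponly"] = True
secure["_gh_sess"]["secure"] = True

print(insecure.output())
print(secure.output())
```

The second Set-Cookie header carries the extra Secure attribute; everything else is unchanged, which is why the flag is cheap to deploy once a site is SSL-only.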
Netcraft successfully hijacked a session from the GitHub site by sniffing the cookies that were sent via unencrypted HTTP. Many legacy URLs will still point to the HTTP version of the site, so an attacker may not even need to entice a victim into visiting the HTTP site. Once a session has been hijacked, the attacker can freely create repositories, delete/add email addresses and change passwords, so it looks like the sidejack prevention that GitHub implemented a week ago (which did use a Secure cookie) has been undone.
Although GitHub's move to SSL has not yet been implemented securely, it is at least a step in the right direction for Firesheep's author, Eric Butler. When he released the tool on 24 October 2010, he said:
Websites have a responsibility to protect the people who depend on their services. They've been ignoring this responsibility for too long, and it's time for everyone to demand a more secure web. My hope is that Firesheep will help the users win.
GitHub announced the SSL-only change on Twitter this morning, and is expected to publish a blog post about it soon.
GitHub has since fixed the session cookie by marking it Secure. Now that it can only be transmitted over encrypted connections, the site is no longer vulnerable to Firesheep's session hijacking.
Rank  Company site          OS       Outage   Failed Req%   DNS     Connect   First byte   Total
 1    Virtual Internet      Linux    0:00:00     0.015      0.211    0.068      0.138      0.138
 2    New York Internet     FreeBSD  0:00:00     0.019      0.159    0.082      0.173      0.464
 3    INetU                 FreeBSD  0:00:00     0.022      0.157    0.082      0.186      0.493
 4    www.codero.com        Linux    0:00:00     0.030      0.157    0.065      0.351      0.642
 5    Datapipe              FreeBSD  0:00:00     0.034      0.069    0.010      0.021      0.026
 6    iWeb Technologies     Linux    0:00:00     0.041      0.112    0.087      0.174      0.174
 7    www.logicworks.net    Linux    0:00:00     0.041      0.192    0.099      0.384      0.563
 8    Swishmail             FreeBSD  0:00:00     0.049      0.316    0.070      0.140      0.363
 9    www.acens.com         Linux    0:00:00     0.049      0.659    0.074      0.313      0.570
10    Multacom              FreeBSD  0:00:00     0.056      0.172    0.137      0.275      0.752
Top of the rankings this month is Virtual Internet, whose site responded to all but four of Netcraft's requests. Virtual Internet focuses on availability and reliability, with a high capacity data centre network throughout Europe. Its UK data centres provide high connectivity as well as redundant power and cooling, multiple fault-tolerant distribution paths and strict access controls.
In second place this month is New York Internet. The company has consistently performed well in Netcraft's most reliable hosters rankings, having been in the top five every month for the last six months. NYI has a strong commitment to network availability, maintaining upstream connectivity to multiple top tier providers, as well as its own peering points with small to medium ISPs.
Third place goes to INetU, which failed to respond to only six of Netcraft's requests in the last month. INetU has also been a regular fixture in the most reliable hosters table recently, appearing in the top five eight times in the last year.
In terms of operating systems used by the most reliable hosters in October, the top ten are evenly split between Linux and FreeBSD.
Netcraft measures and makes available the response times of around forty leading hosting providers' sites. The performance measurements are made at fifteen-minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24-hour period.

From a customer's point of view, the percentage of failed requests is more pertinent than outages on hosting companies' own sites, as it gives a pointer to the reliability of routing; this is why we choose to rank our table by fewest failed requests rather than shortest periods of outage.
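As a rough illustration of the phases in the table – DNS lookup, TCP connect, time to first byte and total transfer time – the following sketch times a single HTTP request with Python's standard library. It is only an approximation of the idea; Netcraft's actual measurement infrastructure is not described here:

```python
import socket
import time

def measure(hostname, port=80, path="/", timeout=10.0):
    """Rough per-phase timings for one HTTP request, in seconds."""
    t0 = time.monotonic()
    ip = socket.gethostbyname(hostname)          # DNS lookup
    dns = time.monotonic() - t0

    t1 = time.monotonic()
    sock = socket.create_connection((ip, port), timeout=timeout)
    connect = time.monotonic() - t1              # TCP handshake

    t2 = time.monotonic()
    request = (f"GET {path} HTTP/1.1\r\nHost: {hostname}\r\n"
               "Connection: close\r\n\r\n")
    sock.sendall(request.encode("ascii"))
    sock.recv(1)                                 # first byte of the response
    first_byte = time.monotonic() - t2
    while sock.recv(4096):                       # drain the rest
        pass
    total = time.monotonic() - t2
    sock.close()
    return {"dns": dns, "connect": connect,
            "first_byte": first_byte, "total": total}
```

Repeating such a probe every fifteen minutes from several vantage points, and counting requests that fail outright, yields both the timing columns and the failed-request percentage used for the ranking.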
Information on the measurement process and current measurements is available.