Desperately seeking Web Search 2.0

It is a moot point whether the first Web era began with the announcement of the general availability of Tim Berners-Lee's initial code; with Mosaic, the first popular browser; or with Netscape Navigator, its commercial offspring and nemesis. But the Web only turned from an exciting technology into a mass medium once directories like Galaxy and Yahoo, and early search engines such as Lycos, the World Wide Web Worm and Webcrawler, provided ordinary users with something just as important as the browser, and complementing it: a way to find things.

Subsequent developments in the navigational field were largely a matter of scaling up. Those around at the time will probably remember the excitement in early 1996 when Digital's Altavista first appeared, offering an unprecedented full-text search of no fewer than 16 million Web pages. The culmination of what might be called Web Search 1.0 was, of course, Google. Forget about the fancy algorithms: what really counted was the fact that it was just so much bigger than anything that had gone before.

Today, though, sheer size is not enough. It has been claimed that Google employs 100,000 computers for its search platform - making it the biggest and highest-profile deployment of GNU/Linux in the world. But its store of 4 billion pages is only 20 times the current number on the upstart search engine Gigablast, which runs on just eight servers, and which ultimately aims to index 5 billion pages.

Moreover, alongside innovative minnows like Gigablast, there are well-endowed whales such as Microsoft. Even if it failed to come up with any new approach to online searching, it could still out-Google Google by creating a server farm made up of a few million machines - it needs to do something with the $50 billion of cash that it is sitting on. The first glimpses of Microsoft's approach in the search engine arena are not too exciting, but then neither was Windows, and look where that is today.

Also likely to be a major player in the search sector is Yahoo, which recently parted company with Google, and unveiled its own search engine. In its simplicity and overall design, Yahoo's front end looks remarkably like Google's, but the real challengers may be those that adopt quite different approaches.

Amazon is not a name that springs to mind in this context, and yet search has been a core technology for the company for some time. In 1999, it bought Alexa, one of the most innovative Web search companies. More recently, it has added a feature called Search Inside the Book to its main site, offering the ability to search through the texts of many of the books that it sells. Now it has launched its experimental search engine, which combines the latter capability with results provided by Google and other new features.

The arrival of Amazon on the search engine scene, with its full-text index of physical books, supports a prediction made by Microsoft Research's Jim Gray in a recent Netcraft interview: that alongside traditional search engines focusing on Web pages, other, more specialised sites will appear, including those that concentrate on kinds of online data currently inaccessible to crawlers - what is sometimes called the "Deep Web".

Another example of this trend is eBay. After all, its huge popularity depends crucially on the ability of visitors to find what they are looking for among the vast range of goods on offer - in effect, a search engine for the world of physical objects. Perhaps an example of Web Search 2.0 has been staring everyone in the face for years.

Glyn Moody welcomes your comments.