Quit the web 2.0/web 3.0 crap already!

Herko · During the meanwhile · 1 Comment

Sorry for the strong title of this post, but that’s how I currently feel about the whole Web 2.0/Web 3.0 hype. People who shout about it, proclaiming that something is very ‘Web 2.0’ (or worse: Web 3.0), make my skin crawl. Worse, it’s spreading like a virus: now everything needs to be ‘two point oh’. Stop it! You’re all acting silly, and you don’t know what you’re talking about!

Let me explain to you my personal position.

First came the Web 2.0

The term Web 2.0 was first coined at the 2004 Web 2.0 Conference by Tim O’Reilly, who defined it as:

Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. (This is what I’ve elsewhere called “harnessing collective intelligence.”) (source)

However, the term ‘Web 2.0’ has taken on a life of its own and become a hype. Web 2.0 means you have to spell your URL badly, like flickr or tumblr or toddlr or whatevr. It means you have to have users who generate content for your site, and those users have to be able to connect with each other, making it a social, sometimes collaborative network. Also, the site has to start with an invite-only beta period -with users begging for invites everywhere- followed by a long open-to-everyone public beta. In fact, there’s a good chance most Web 2.0 ventures will never make it out of beta at all.

And then there is the Web 3.0

And now the Next Best Thing Since Sliced Bread has been announced, and it’s called Web 3.0. The term is sketchy, as you can see in the Wikipedia definition:

Web 3.0 is one of the terms used to describe the evolutionary stage of the Web that follows Web 2.0. Given that technical and social possibilities identified in this latter term are yet to be fully realised the nature of defining Web 3.0 is highly speculative. In general it refers to aspects of the internet which, though potentially possible, are not technically or practically feasible at this time.

When people refer to Web 3.0, they usually think of things like the Semantic Web and artificial intelligence. Others have touted the Mobile Web as the next big step in internet evolution, but that movement seems to have faded.

So, what is my problem with these terms? I’ll tell you. To me, all of this is simply the fulfilment of the promise of Web 1.0.

A (very brief) lesson in Web History

When Tim Berners-Lee invented the World Wide Web back in 1989, he created a means to access the data stored in the CERN archives. In fact, he tells us:

“I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and — ta-da! — the World Wide Web.”

Up until that time, almost all our information was structured to be read by humans and distributed on paper. The archives were full of papers, research documents, notes, excerpts, etc. (Note that our current information-products vocabulary still uses terms from the hardcopy-only day and age: papers, documents…). What Berners-Lee did, however, was add a unique factor to it: the hyperlink.

The promise of hypertext

Let’s grab the Wikipedia entry for ‘hyperlink’ and look at its history:

The term “hyperlink” was coined in 1965 (or possibly 1964) by Ted Nelson at the start of Project Xanadu. Nelson had been inspired by “As We May Think,” a popular essay by Vannevar Bush. In the essay, Bush described a microfilm-based machine (the Memex) in which one could link any two pages of information into a “trail” of related information, and then scroll back and forth among pages in a trail as if they were on a single microfilm reel. The closest contemporary analogy would be to build a list of bookmarks to topically related Web pages and then allow the user to scroll forward and backward through the list.

In a series of books and articles published from 1964 through 1980, Nelson transposed Bush’s concept of automated cross-referencing into the computer context, made it applicable to specific text strings rather than whole pages, generalized it from a local desk-sized machine to a theoretical worldwide computer network, and advocated the creation of such a network. Meanwhile, working independently, a team led by Douglas Engelbart (with Jeff Rulifson as chief programmer) was the first to implement the hyperlink concept for scrolling within a single document (1966), and soon after for connecting between paragraphs within separate documents (1968).

The very first thing I notice is that the hyperlink was first mentioned way back in 1965 (or ’64). So this isn’t a completely new concept at all, and the early uses of and experiments with hyperlinks basically describe the modern World Wide Web.

The second thing that strikes me is that its purpose is to link data in a cognitive manner, based on the way we humans think. So basically it allows us to relate information (stored anywhere on the network), based on our own logic.

Following these conclusions: by adding hypertext to the World Wide Web, Berners-Lee created a means to access any piece of data or information stored on any machine connected to the network. Note the absence of the terms ‘pages’, ‘documents’ and ‘sites’ in that last sentence.
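To make that concrete: in HTML, a hyperlink is just an anchor pointing at any addressable resource on the network, whether that’s a data file, a fragment of a document, or anything else with a URL. A minimal sketch (both addresses below are hypothetical, purely for illustration):

```html
<!-- A hyperlink can point at any addressable resource, not just a 'page'. -->
<!-- Both URLs are made up for this example. -->
<p>
  The experiment’s raw readings are in
  <a href="https://example.cern.ch/data/run-42.csv">this dataset</a>,
  and the related discussion starts at
  <a href="https://example.cern.ch/notes/analysis.html#section-3">this note</a>.
</p>
```

The point is that nothing in the mechanism itself restricts the target to a ‘site’ or a ‘document’; the restriction came later, from how people chose to use it.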

User generated content

The very first modern web browser (WorldWideWeb, running on the NeXTSTEP system) was a browser and an editor in one. In fact, the web was meant to be editable by its users. What use would having access to all this information be if you couldn’t enrich it with human cognitive logic, linking data and making sure all other users could connect to the same network of links and data nodes you created? Exactly. Not much.

So the promise of the Web 1.0 was that it’d allow users to generate and enrich content, stored anywhere on the world wide network, using their own human logic to connect the nodes. Does that sound familiar at all?

This is where the masses intervened, and when it all went downhill.

The decline of the web

CERN wasn’t the only organisation connected to the internet that was interested in Berners-Lee’s application. There were several other research facilities, defence organisations and universities that wanted to use the same technology to publish their own archives and data. Hang on, publishing it? Isn’t publishing the old-fashioned way of distributing hardcopy works? Exactly, and that’s where it all went wrong. People started developing the World Wide Web as a publication platform: not to provide access to data, but to publish it. And publishing means that you create a storefront and try to attract customers to your store in order to get them to access your products (for free or for payment).

So you got sites, with documents that were little more than digitised versions of their hardcopy originals. The promise of hypertext was reduced to a means of navigating the potential user through the store and into the product catalogues, not through the actual content itself.

And from that point on, the web became a mass collection of sites and documents and pages. Because hypertext was so poorly used, the need for search engines became apparent, with WebCrawler and AltaVista and other early pioneers paving the way for the Googles, Yahoos and Live Searches of today.

But Web 1.0 was never designed to be like that.

So, Web 2.0/3.0, it’s all the same: web 1.0!

And you can see it in the way Web 2.0 is used now. Yes, Web 2.0 is a revolution, but the revolution is in the way we use the web itself. We have finally embraced part of its potential: what it was designed to do in the first place.

It’s like we pushed the car around for a few years, and only just found out what the silly metal keys in the ignition are for.

And as for Web 3.0: the Semantic Web isn’t Tim Berners-Lee’s pet project for nothing. The World Wide Web was designed to link data all across the network.

Fundamentally, no new technology has been applied to fulfil the Web 2.0 and Web 3.0 promises. The Web, as designed years ago, still functions as it did then; we have simply discovered how to use it properly. We finally let go of the rather silly notion that the World Wide Web is a mass library of books, where you need to browse through the catalogues to get the book you need, and realised that information isn’t connected unless we, human beings, connect it using our own logic, the same way our brains connect the dots.

We’ve finally learnt to appreciate and realise the promise of the Web 1.0.

And that is why I’m against the Web 2.0 and Web 3.0 crap. So next time you think it’s good to tell everyone that your idea is very ‘web 2.0’, please forgive me for scratching profusely and making vomiting noises in the back…

Comments 1

  1. Herko,

    This is one of those posts that, as a developer, I have to say, RIGHT ON!

    Given that I spend a lot of time in the social media circles, I’ve seen web 2.0 and 3.0 thrown around everywhere. I’ve seen it used to describe AJAX effects, I’ve seen it used to describe business models, I’ve even seen it used to describe web hosting. Web 2.0 web hosting? WTF?!

    It is truly refreshing to see that there are people out there who understand and appreciate what the WWW really is and what it is truly capable of.

    I too am truly fed up with all the “two point oh” jargon. I’m equally fed up with people pimping these new buzzword-labelled services and applications like they are something new. To date, I haven’t seen any real innovation come to the WWW in years. What I have seen is people taking existing technology, finally using it in a logical manner, and then slapping a buzzword badge on it.

    I better stop now before I go all out on my own rant. 😉 Great post! You are definitely going in my feed reader!
