Obviously, for the foreseeable future, “traditional” native applications will continue to have significant advantages over web applications. As much of a “fandroid”/fan of Google as I am, I find it hard to see how I could use a Chromebook (a laptop running Google’s ChromeOS) instead of a real PC today, given my heavy use of apps like Excel and the tools I need to code.
However, you can do some pretty cool things with web applications/HTML5 which give you a sense of what may one day be possible. Case in point: enter Chrome Remote Desktop (HT: Google Operating System), a beta extension for Google Chrome which basically allows you to take control of another computer running Chrome a la remote desktop/VNC. While this capability is nothing new (Windows has had Remote Desktop built in since at least as far back as Windows XP, and there are numerous VNC/remote desktop clients), what is pretty astonishing is that this app is built entirely using web technologies. Whereas traditional remote desktop tools use non-web communications and native graphics to render the other computer’s screen, Chrome Remote Desktop does all the graphics in the browser and all the communications using either the WebSocket standard from HTML5 or Google Talk’s chat protocol! (see below as I use my personal computer to remote-control my work laptop, where I am reading a PDF on microblogging in China and also showing my desktop background image, in which a Jedi Android slashes up an Apple Death Star)
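Chrome Remote Desktop’s internals aren’t spelled out here, but as a rough, hypothetical sketch of the WebSocket layer it relies on, here is how a client frames a text message per RFC 6455 (the function name and structure are my own; a real client would also handle the opening handshake, fragmentation, and control frames):

```python
import os
import struct

def frame_text_message(payload: str, mask: bool = True) -> bytes:
    """Build a single-frame WebSocket text message (RFC 6455).

    Client-to-server frames must be masked with a random 4-byte key;
    server-to-client frames are sent unmasked.
    """
    data = payload.encode("utf-8")
    header = bytes([0x81])  # FIN=1, opcode=0x1 (text frame)
    length = len(data)
    mask_bit = 0x80 if mask else 0x00
    # Payload length is encoded in 7 bits, or 7+16 bits, or 7+64 bits
    if length < 126:
        header += bytes([mask_bit | length])
    elif length < 2 ** 16:
        header += bytes([mask_bit | 126]) + struct.pack("!H", length)
    else:
        header += bytes([mask_bit | 127]) + struct.pack("!Q", length)
    if not mask:
        return header + data
    key = os.urandom(4)
    masked = bytes(b ^ key[i % 4] for i, b in enumerate(data))
    return header + key + masked
```

The point of the sketch is just how thin the wire format is: two bytes of header plus an optional masking key in front of the raw UTF-8 payload.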
How well does it work? The control is quite good – my mouse/keyboard movements registered immediately on the other computer – but the on-screen graphics/drawing speed was quite poor (par for the course for most sophisticated graphics-drawing apps in the browser, and forgivable in a beta extension). The means of controlling another desktop, while easy to use (especially if you are inviting someone to take a look at your machine), is very clumsy for some uses (e.g. a certain someone who wants to leave his computer in the office and use VNC/remote desktop to access it only when he needs to).
So, will this replace VNC/remote desktop anytime soon? No (nor, does it seem, were they the first to think up something like this), but that’s not the point. The point, at least to me, is that the browser is picking up more and more sophisticated capabilities and, while it may take a few more versions/years before we can actually use this as a replacement for VNC/remote desktop, the fact that we can even be contemplating that at all tells you how far browser technology has come and why the browser as a platform for applications will grow increasingly compelling.
I’ve written before about my love for the Economist. One of the reasons I gave was their irreverent titles/illustrations/covers. As a social media aficionado, I had to share this amazing cover. This will hopefully tickle you as much as it did me 🙂 [apologies for the poor resolution, this is the best quality cover I could find on the Economist website – the main blocks of text that you can’t see are, on the TV: “news breaketh”; on the wall, from left to right: “Pitt the Younger on Tumblr”, “Gratis Wye-Fye”, and “Marie Antoinette’s Blog: New Cake Recipe”; pamphlets towards the bottom right, from top to bottom: “Wikye Leakes Latest: Josephine Bonaparte’s emails”, “Tea Party Gazette: Bachmann Doth Rock”, and “Chronic Times”]
On a more substantive note, the special report on the future of news which inspired the cover was quite interesting, and I’d recommend that anyone who’s interested in the future of journalism and the news business take a look.
I just read a really interesting Economist article which, at first, I thought was very counter-intuitive.
In the early days of television, there was very little in the way of network selection for the average TV-viewer. There were only a handful of stations and, regardless of how bad the content on a given station was, those stations would stay in business because they were the only stations around. Heck, even if the station’s programming was completely awful, there would still be plenty of people watching it simply because it was one of the only things that was on.
Flash forward a few decades. We now have not only many television stations but also cable, satellite, and internet video. We have enough DVDs to last a lifetime. The web has made it so that anyone with a camera and an internet connection can broadcast to YouTube.
Given all this, what would you expect to happen to what people watch? If you’re anything like me, you would’ve concluded that the power of the internet to connect people with what they want and the abundance of new video content would have encouraged people to “spread out.” Why would you stick to a few “staple” networks, when you can now watch the SyFy channel for science fiction, CNN for the news, and YouTube if you want to keep LonelyGirl company?
Well, writer Chris Anderson seemed to agree, and he popularized this idea in a book (and “theory”) he called The Long Tail (book cover to the right). In it he describes exactly what I just laid out: because technology makes it easier for people to find the things which the majority of consumers aren’t interested in, the future of business would be less about selling a few popular items that “sort of” appeal to the “average person” and much more about selling a lot of “the long tail” (pictured graphically below) of things which really appeal to a few people apiece. Or, to use the TV analogy again, the idea behind the Long Tail is that it makes more sense to create a bunch of small TV stations which each focus on a few niches than to have one big station that tries to satisfy everyone at once. This is, after all, one of the big ideas behind eCommerce sites like eBay. Wal-mart or Target can and will stock lamps that sell millions of units, but because they can’t possibly stock every lamp, they won’t satisfy everyone. The Long Tail theory says that the real money is in selling the millions of items which will each only sell a few copies, but because those items are exactly what those buyers want, you can probably make a little extra profit off of each.
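The head-versus-tail split can be made concrete with a toy model. Assuming (purely for illustration; the numbers are mine, not Anderson’s) a catalog of 100,000 titles whose sales fall off Zipf-style with popularity rank, a few lines show how revenue divides between the top hits and everything else:

```python
def zipf_sales(n_titles: int, s: float = 1.0) -> list:
    """Toy model: sales of the k-th most popular title fall off as 1/k^s."""
    return [1.0 / (k ** s) for k in range(1, n_titles + 1)]

sales = zipf_sales(100_000)
total = sum(sales)

head_share = sum(sales[:10]) / total    # the top 10 "hits"
tail_share = sum(sales[1000:]) / total  # everything outside the top 1,000

print(f"top-10 share of sales: {head_share:.1%}")
print(f"long-tail share (rank > 1,000): {tail_share:.1%}")
```

Under these made-up parameters, both the tiny head and the enormous tail capture large chunks of total sales, while the vast middle gets squeezed – which is exactly the pattern the rest of this post discusses.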
After all, how many authors or singers have you absolutely loved, but knew they could never go “mainstream”?
As appealing as that idea was (especially to the snobs out there, myself included, who just wanted to assure themselves that the real money wasn’t in going mainstream but in going for the nobody’s-ever-heard-of-them CD/book/electronic/movie), reality has not played out quite that way.
While there is no doubt that the internet has expanded choice such that people now have access to the long tail, the “head” has not diminished; instead, the size of the biggest hits has increased dramatically. Take books as an example. From the Economist:
A study of the Australian market by Nielsen, a research firm, found that the number of titles bought each year (measured by ISBNs) has risen dramatically, from about 275,000 in 2004 to almost 450,000 in 2007. Niche titles selling fewer than 1,000 copies each accounted for nearly all the growth in variety. Yet their market share fell. In Britain, sales of the ten bestselling books increased from 3.4m to 6m between 1998 and 2008.
So, instead of seeing a migration from the “head” to the Long Tail, we’re seeing Goldilocks’s middle-of-the-road players getting crushed by blockbuster hits on the one side and the long tail on the other. This raises the question: why are hits doing so well when there’s so much else out there? Again, the Economist:
A lot of the people who read a bestselling novel, for example, do not read much other fiction. By contrast, the audience for an obscure novel is largely composed of people who read a lot. That means the least popular books are judged by people who have the highest standards, while the most popular are judged by people who literally do not know any better. An American who read just one book this year was disproportionately likely to have read “The Lost Symbol”, by Dan Brown. He almost certainly liked it.
Ironically, it turns out the technology which makes the long tail more accessible is even better at turning hits into even bigger hits. After all, the internet helps spread word-of-mouth for hits and long tail products alike. If anything, the fact that technology today makes it so easy to choose between different things is going to drive people to look for hits, if only so they have something to talk about with one another.
Unfortunately for content and product people, this makes business very tricky. It means you can take one of two routes to success: make a blockbuster hit, or sell a lot of niche products which each appeal a great deal to a few people. The former is tough because it’s hard to know what will be a hit. The latter is tough because you’ll need a very lean cost structure to profitably make a lot of products which each sell only a few units apiece. But the worst part is that trying to split the difference between the two is especially hard: the former requires a big budget for marketing and for getting the best writers/artists/coders, while the latter falls apart unless you can cheaply make many different things.
How things will ultimately shake out is anyone’s guess, but my perspective is that the smart companies out there will do three things:
Invest in strong PR and marketing muscle. If people seek hits because they want to be able to talk about things with each other, then the job of the product/content company is not just to make the best product possible; it’s also to get people to talk about it. This means the smart companies will invest heavily in either a strong internal public relations/marketing group or a partnership with someone particularly strong in that area. This will be especially critical for the largest product companies/studios, as a strong PR/marketing capability is an asset they can leverage across all their products, both those that need to be hits and those going for the long tail.
Find ways to cross-sell. The economics of a long tail business are grim because they involve keeping and developing a wide range of products that each sell only a few units. How do you do well with that type of strategy? The answer is that you do everything possible to turn flops and long tails into hits. One approach is to take a page out of Amazon.com’s playbook: recommendations. Amazon has found a way to encourage buyers to not just buy one thing, but to buy several, by using a sophisticated computer algorithm to find products which people tend to buy together. This lets a company use one product to sell many others: free marketing, in a sense.
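A minimal sketch of the co-purchase idea (this is not Amazon’s actual algorithm, which is far more sophisticated, and the baskets and product names below are made up): count how often each pair of products lands in the same basket, then recommend a product’s most frequent partners.

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(product, pairs, top_n=3):
    """Return the products most often bought alongside `product`."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

# Hypothetical purchase histories
baskets = [
    ["camera", "sd-card", "case"],
    ["camera", "sd-card", "tripod"],
    ["camera", "case"],
    ["laptop", "mouse"],
]
print(recommend("camera", co_purchase_counts(baskets)))
```

Even this crude version captures the core mechanic: a shopper looking at one long-tail product gets nudged toward several related ones, which is what turns a catalog of slow sellers into a viable business.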
Be a lean, mean product-making machine. The only way to survive hits turning into flops is to make sure the hits don’t cost too much to make. The only way to make the long tail profitable is to make sure you have a lean operation in place which quickly and cheaply cranks out high quality ideas. Take some of the large social gaming companies like Zynga, maker of the very popular Facebook game Farmville. Don’t tell me that Fishville, Petville, Yoville, and Cafe World are completely original ideas :-). This is not to bash on Zynga, as I think the idea is brilliant and the quality of the games lies in the execution not necessarily in the originality of the concept, but in a world of hits and long tails, the best strategy is to find some core engine which you can re-hash and improve upon again and again. And few can question Zynga’s winning formula in that arena.
About a year ago, I met up with Teresa Wu (of My Mom is a Fob and My Dad is a Fob fame). It was our first “Tweetup”, a word used by social media types to refer to meetups between people who had previously only been friends over Twitter. It was a very geeky conversation (what else would you expect from people who referred to their first face-to-face meeting as a Tweetup?), and at one point we turned to discussing our respective visions of “Web 3.0”, which we loosely defined as what would come after the current, also-loosely-defined “Web 2.0” wave of today’s social media websites.
On some level, trying to describe “Web 3.0” is as meaningless as applying the “Web 2.0” label to websites like Twitter and Facebook. It’s not an official title, and there are no set rules or standards on what makes something “Web 2.0”. But the fact that popular websites today share certain characteristics with each other, and not with their counterparts from only a few years ago, gives the “Web 2.0” moniker some credible intellectual weight; and the fact that there will be significant investment in a new generation of web companies lends commercial weight to the need for a good conception of “Web 2.0” and a good vision for what comes after (Web 3.0).
So, I thought I would get on my soapbox here and list out three drivers which I believe will define what “Web 3.0” will look like, and I’d love to hear if anyone else has any thoughts.
A flight to quality as users start struggling with ways to organize and process all the information the “Web 2.0” revolution provided.
The development of new web technologies/applications which can utilize the full power of the billions of internet-connected devices that will come online by 2015.
Continued browser improvements that enable new and more compelling web applications.
I. Quality over quantity
In my mind, the most striking change in the Web has been the evolution of its primary role. Whereas “Web 1.0” was oriented around providing information to users, generally speaking, “Web 2.0” has been centered around user empowerment, both in terms of content creation (blogs, YouTube) and information sharing (social networks). Now, you no longer have to be the editor of the New York Times to have a voice – you can edit a Wikipedia page or upload a YouTube video or post your thoughts on a blog. Similarly, you no longer have to be at the right cocktail parties to have a powerful network; you can find like-minded individuals over Twitter or LinkedIn or Facebook.
The result of this has been a massive explosion in the amount of information and content available for people and companies to use. While I believe this has generally been a good thing, it’s led to a situation where more and more users are being overwhelmed with information. As with the evolution of most markets, the first stage of the Web was simply about getting more – more information, more connections, more users, and more speed. This is all well and good when most companies/users are starving for information and connections, but as the demand for pure quantity dries up, the attention will eventually shift to quality.
While there will always be people trying to set up the next Facebook or the next Twitter (and a small percentage of them will be successful), I strongly believe the smart money will be on the folks who can take the flood of information now available and turn it into something more useful, whether for targeting ads or simply for helping people who feel they are “drinking from a fire hose”. There’s a reason Google and Facebook invest so many resources in building ads targeted at the user’s specific interests and needs. And I feel that the next wave of Web startups will do more than simply tack “social” and “online” onto an existing application; they will develop applications that can actually process the wide array of information into manageable and useful chunks.
II. Mo’ devices, mo’ money
A big difference between how the internet was used 10 years ago and how it is used today is the rise in the number of devices which can access the internet. This has been led by the rise of new smartphones, gaming consoles, and set-top boxes. Even cameras have been released with the ability to access the internet (as evidenced by Sony’s Cybershot G3). While those of us in the US think of the internet as mainly a computer-driven phenomenon, in much of the developing world and in places like Japan and Korea, computer access to the internet pales in comparison to access through mobile phones.
The result? Many of these interfaces to the internet are still somewhat clumsy, as they were built to mimic PC-type access on a device which is definitely not a PC. While work by folks at Apple and Google (with the iPhone and Android browsers) and at shops like Opera (with Opera Mini) and Skyfire has smoothed some of the rougher edges, there is only so far you can go in mimicking a computer experience on a device with less memory, less processing power, and a smaller screen than a PC.
This isn’t to say that I think the web browsing experience on an iPhone or some other smartphone is bad – I actually am incredibly impressed by how well the PC browsing experience transferred to the mobile phone and believe that web developers should not be forced to make completely separate web pages for separate devices. But, I do believe that the real potential of these new internet-ready devices lies in what makes those individual devices unique. Instead of more attempts to copy the desktop browsing experience, I’d like to see more websites use the iPhone’s GPS to give location-specific content, or use the accelerometer to control a web game. I want to see social networking sites use a gaming console’s owner’s latest scores or screenshots. I want to see cameras use the web to overlay the latest Flickr comments on the pictures you’ve taken or to do augmented reality. I want to see set-top boxes seamlessly mix television content with information from the web. To me, the true potential of having 15 billion internet-connected devices is not 15 billion PC-like devices, but 15 billion devices each with its own features and capabilities.
III. Browser power
Is it any wonder, then, that Google, which views web applications as a big piece of its quest for web domination, created a free browser (Chrome) and two web-capable operating systems (ChromeOS and Android), and is investigating ways for web applications to access the full processing power of the computer (Native Client)? The result of Google’s pushes, as well as the broader internet ecosystem’s efforts, has been a steady improvement in web browser capability and a strong push behind the new HTML5 standard.
So, what does this all mean for the shape of “Web 3.0”? It means that, over the next few years, we are going to see web applications dramatically improve in quality and functionality, making them more and more credible as disruptive innovations to the software industry. While it would be a mistake to interpret this trend, as some zealots do, as a sign that “web applications will replace all desktop software”, it does mean that we should expect to see a dramatic boost in the number and types of web applications, as well as the number of users.
I’ll admit – I kind of cheated. Instead of giving a single coherent vision of what the next wave of Web innovation will look like, I hedged my bets by outlining where I see major technology trends taking the industry. But, in the same way that “Web 2.0” wasn’t a monolithic entity (Facebook, WordPress, and Gmail have some commonalities, but you’d be hard pressed to say they’re just different variants of the same thing), I don’t think “Web 3.0” will be either. Or, maybe all the innovations will be mobile-phone-specific, context-sensitive, super powerful web applications…