
Why I Favor Google over Apple

Many of my good friends are big fans of Apple and its products. But not me. This good-natured difference in opinion leads us into never-ending mini-debates, over Twitter or in real life, about the relative merits of Apple’s products and those of its competitors.

I suspect many of them (respectfully) think I’m crazy. “Why would you want an inferior product?” “Why do you back a company that has all this information about you and follows you everywhere on the internet?”

I figured that one of these days, I should actually respond to them (fears of flamers/attacks on my judgment be damned!).

First things first: I’ll concede that, at least for now, Apple tends to build better products. Apple has a remarkable design and UI sense which I have yet to see matched by another company. Their hardware is of exceptionally high quality, and, as I mentioned before, they are masters at integrating their high-end hardware with their custom-built software to create a very solid user experience. They are also often pioneers in new hardware innovations (e.g., the accelerometer, multitouch, the “retina display”).

So, given this, why on earth would I call myself a Google Fanboi (and not an Apple one)? There are a couple of reasons, but most boil down to the nature of Google’s business model, which is focused on monetizing use rather than selling a particular piece of content/software/hardware. Google’s dominant source of profit is internet advertising – and Google can serve better ads (higher revenue per ad) and more ads (higher number of ads) by getting more people to use the internet, and to use it more. Contrast this with Apple, whose business model is (for the most part) built around selling a particular piece of software or hardware – to Apple, increased use is the justification for creating (and charging more for) better products. The consequence is that the two companies focus on different things:

  • Cheap(er) cost of access – Although Apple’s technology and design are quite complicated, Apple’s product philosophy is very simple: build the best product “solution” and sell it at a premium. This makes sense given Apple’s business model of selling the highest-quality products. But it does not make sense for Google, which just wants to see more internet usage. To achieve this, Google does two main things. First, Google offers many services and development platforms for little or no cost – Gmail, Google Reader, Google Docs, and Google Search, to name a few, are all free. Second, Google actively attacks pockets of control or profitability in the technology space which could impede internet use. Bad browsers reducing people’s willingness to use the internet? Release the very fast Google Chrome browser. Lack of smartphones? Release the now-very-popular Android operating system. Not enough internet-connected TV solutions? Release Google TV. Not enough people on high-speed broadband? Consider building a pilot high-speed fiber optic network for a lucky community. All of these efforts encourage greater Web usage in two ways: (a) they give people more reason to use the Web by providing high-value web services and “complements” to the web (like browsers and OSes) at no or low cost, and (b) they force other businesses to lower their own prices and/or offer better services. Granted, these moves oftentimes serve other purposes (weakening competitive threats on the horizon and/or providing new sources of revenue) and aren’t always successes (think OpenSocial or Google Buzz), but I think the Google MO (make the web cheaper and better) is better for all end-users than Apple’s.
  • Choice at the expense of quality – Given Apple’s interest in building the best product and charging for it, they’ve tended to make tradeoffs in their design philosophy to improve performance and usability. This has proven to be very effective for them, but it has its drawbacks. If you have followed recent mobile tech news, you’ll know Apple’s policies on mobile application submissions and restrictions on device functionality have not met with universal applause. This isn’t to say that Apple doesn’t have the right to do this (clearly they do) or that the tradeoffs they’ve made are bad ones (the number of iPhone/iPad/iPod Touch purchases clearly shows that many people are willing to “live with it”), but it is a philosophical choice, and it has implications for the ecosystem around Apple versus Google (which favors a different tradeoff). Apple’s philosophy provides great “out of the box” performance, but at the expense of being slower or less able to adopt potential innovations or content due to their own restrictions. Case in point: a startup called Swype has built a fascinating new way to use soft keyboards on touchscreens, but because Apple’s App Store does not allow an application that makes such a low-level change, the software is only available on Android phones. This doesn’t preclude Swype from eventually coming to the iPhone, but it’s an example of how Apple’s approach may impede innovation and consumer choice – something a recent panel of major mobile game developers expressed concern about – and it’s my two cents’ worth that the Google way of doing things is better in the long run.
  • Platforms vs. solutions – Apple’s hallmark is the vertically integrated model, going so far as to have their own semiconductor solution and content store (iTunes). This not only lets them maximize the amount of cash they can pull in from a customer (I don’t just sell you a device, I get a cut of the applications and music you use on it), it also lets them build a tightly integrated, high-quality product “solution”. Google, however, is not in the business of selling devices and has no interest in one tightly integrated solution: they’d rather get as many people on the internet as possible. So, instead of pursuing the “Jesus phone” approach, they pursue the platform approach, releasing “horizontal” software and services platforms to encourage more companies and more innovators to build on them. With Apple, you have only one supplier and a few product variants. With Google, you enable many suppliers (Samsung, HTC, and Motorola for starters in the high-end Android device world; Sony and Logitech in Google TV) to compete with one another and offer their own variations on hardware, software, services, and silicon. This allows companies like Cisco to use Android to create a tablet focused on enterprise needs (the Cius), something the more restrictive nature of Apple’s development platform makes impossible (unless Apple creates its own), or researchers at the MIT Media Lab to create an interesting telemedicine optometry solution. A fair response is that this can lead to platform fragmentation, but whether there is a destructive amount of it is an open question. Given Apple’s track record the last time it went solo versus platform (something even Steve Jobs admits they didn’t do so well at), I feel this is a major strength of Google’s model in the long run.
  • (More) open source/standards – Google is unique in the tech space for the extent of its support for open source and open standards. How they’ve handled it isn’t perfect, but if you take a quick glance at their Google Code page, you can see an impressive number of code snippets and projects which they’ve open sourced and contributed to the community. They’ve even gone so far as to provide free project hosting for open source projects. But, beyond just giving developers access to useful source code, Google has gone further than most companies in supporting open standards – going so far as to open up its WebM video codec (the rights to which it purchased for ~$100M) to provide an open HTML5 video standard, and to make it easy to access your data from a Google service however you choose (e.g., IMAP access to Gmail, open API access to Google Calendar and Google Docs). This is in keeping with Google’s desire to enable more web development and web use, and is a direct consequence of it not relying on selling individual products. Contrast this with an Apple-like model – the services and software are designed to fuel additional sales. As a result, they are well-designed, high-performance, and neatly integrated with the rest of the package, but much less likely to be open sourced (with a few notable exceptions) or to support easy mobility to other devices/platforms. This doesn’t mean Apple’s business model is wrong, but it leads to a different conclusion, one which I don’t think is as good for the end-user in the long run.
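To make the open-standards point concrete: because Gmail speaks plain IMAP (RFC 3501), any generic mail client can pull your messages out – no Google-specific API required. Here’s a minimal sketch using Python’s standard imaplib; the function name and credentials are hypothetical placeholders, and you’d need a real account to actually run the commented-out call.

```python
import imaplib
import email

# Gmail exposes standard IMAP over SSL at imap.gmail.com:993, so any
# RFC 3501 client -- not just Google's own apps -- can read your mail.
GMAIL_IMAP_HOST = "imap.gmail.com"
GMAIL_IMAP_PORT = 993

def latest_subjects(user, password, limit=5):
    """Return the subject lines of the newest messages in the inbox."""
    conn = imaplib.IMAP4_SSL(GMAIL_IMAP_HOST, GMAIL_IMAP_PORT)
    try:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)          # read-only: don't mark mail as seen
        _, data = conn.search(None, "ALL")
        ids = data[0].split()[-limit:]               # newest message IDs come last
        subjects = []
        for msg_id in reversed(ids):
            # PEEK fetches headers without touching message flags
            _, parts = conn.fetch(msg_id, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            msg = email.message_from_bytes(parts[0][1])
            subjects.append(msg.get("Subject", ""))
        return subjects
    finally:
        conn.logout()

# latest_subjects("you@gmail.com", "app-password")  # requires real credentials
```

The point isn’t the code itself – it’s that this works with zero Google-proprietary libraries, which is exactly the kind of data mobility the closed alternative discourages.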

These are, of course, broad, sweeping generalizations (and don’t capture all the significant differences, or the subtle ones, between the two companies). Apple, for instance, is one of the foremost contributors to the open source WebKit project which powers many of the internet’s web browsers, and is a pioneer behind the parallel computing standard OpenCL. On the flip side, Google’s openness and privacy policies are definitely far from perfect. But I think those are exceptions to the “broad strokes” I laid out.

While short-term design strength and solution quality may be the strengths of Apple’s current model, I believe that, in the long run, Google’s model is better for the end-customer because it is centered around more usage.



Web 3.0

About a year ago, I met up with Teresa Wu (of My Mom is a Fob and My Dad is a Fob fame). It was our first “Tweetup”, a word used by social media types for a meet-up between people who had previously been friends only over Twitter. It was a very geeky conversation (what else would you expect from people who referred to their first face-to-face meeting as a Tweetup?), and at one point it turned to our respective visions of “Web 3.0”, which we loosely defined as whatever comes after the current, also-loosely-defined “Web 2.0” wave of today’s social media websites.

On some level, trying to describe “Web 3.0” is as meaningless as applying the “Web 2.0” label to websites like Twitter and Facebook. It’s not an official title, and there are no set rules or standards on what makes something “Web 2.0”. But the fact that popular websites today share characteristics their counterparts from only a few years ago lacked gives the “Web 2.0” moniker some intellectual weight; and the fact that there will be significant investment in a new generation of web companies lends commercial weight to coming up with a good conception of “Web 2.0” – and a good vision for what comes after (“Web 3.0”).

So, I thought I would get on my soapbox here and list out three drivers which I believe will define what “Web 3.0” will look like, and I’d love to hear if anyone else has any thoughts.

  1. A flight to quality as users start struggling with ways to organize and process all the information the “Web 2.0” revolution provided.
  2. The development of new web technologies/applications which can utilize the full power of the billions of internet-connected devices that will come online by 2015.
  3. Continued browser improvement, enabling new and more compelling web applications.

I. Quality over quantity

In my mind, the most striking change in the Web has been the evolution of its primary role. Whereas “Web 1.0” was, generally speaking, oriented around providing information to users, “Web 2.0” has been centered around user empowerment, both in terms of content creation (blogs, YouTube) and information sharing (social networks). You no longer have to be the editor of the New York Times to have a voice – you can edit a Wikipedia page, upload a YouTube video, or post your thoughts on a blog. Similarly, you no longer have to be at the right cocktail parties to have a powerful network – you can find like-minded individuals over Twitter or LinkedIn or Facebook.

The result has been a massive explosion in the amount of information and content available for people and companies to use. While I believe this has generally been a good thing, it’s led to a situation where more and more users are being overwhelmed with information. As with the evolution of most markets, the first stage of the Web was simply about getting more – more information, more connections, more users, and more speed. This is all well and good while most companies/users are starving for information and connections, but as the demand for pure quantity dries up, attention will eventually shift to quality.
While there will always be people trying to set up the next Facebook or the next Twitter (and a small percentage of them will be successful), I strongly believe the smart money will be on the folks who can take the flood of information now available and turn it into something more useful, whether for targeting ads or simply for helping people who feel they are “drinking from a fire hose”. There’s a reason Google and Facebook invest so many resources in building ads targeted at the user’s specific interests and needs. And I feel that the next wave of Web startups will be about more than simply tacking “social” and “online” onto an existing application; it will require developing applications that can actually process the wide array of information into manageable and useful chunks.
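To make the “manageable and useful chunks” idea concrete, here is a deliberately toy sketch of the flight to quality: given a flood of items, surface only the few most relevant to a user’s stated interests. Everything here – the data, the word-overlap scoring, the function names – is hypothetical; real systems use far richer signals (social graph, click history, and so on).

```python
# Toy "flight to quality" filter: rank a flood of items by how many words
# they share with a user's interests, then keep only the top k.
def top_items(items, interests, k=3):
    def score(text):
        return len(set(text.lower().split()) & interests)
    # Python's sort is stable, so ties keep their original feed order
    return sorted(items, key=score, reverse=True)[:k]

feed = [
    "new browser benchmark results posted",
    "celebrity gossip roundup",
    "HTML5 video codec debate heats up",
    "weekend sports scores",
    "mobile browser share keeps climbing",
]
interests = {"browser", "html5", "mobile", "codec"}
print(top_items(feed, interests, k=2))  # surfaces the two web-tech items
```

Crude as it is, this is the shape of the problem: the value is no longer in producing the feed, but in deciding which two of the five items the user actually sees.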

II. Mo’ devices, mo’ money

A big difference between how the internet was used 10 years ago and how it is used today is the rise in the number of devices which can access it, led by new smartphones, gaming consoles, and set-top boxes. Even cameras have been released with the ability to access the internet (as evidenced by Sony’s Cybershot G3). While those of us in the US think of the internet as mainly a computer-driven phenomenon, in much of the developing world and in places like Japan and Korea, computer access to the internet pales in comparison to access through mobile phones.

The result? Many of these interfaces to the internet are still somewhat clumsy, as they were built to mimic PC-type access on devices which are definitely not PCs. While work by folks at Apple and Google (with the iPhone and Android browsers) and at shops like Opera (with Opera Mini) and Skyfire has smoothed some of the rougher edges, there is only so far you can go in mimicking a computer experience on a device that lacks the memory, processing power, and screen size of a larger PC.

This isn’t to say that I think the web browsing experience on an iPhone or some other smartphone is bad – I am actually incredibly impressed by how well the PC browsing experience transferred to the mobile phone, and I believe that web developers should not be forced to make completely separate web pages for separate devices. But I do believe that the real potential of these new internet-ready devices lies in what makes each device unique. Instead of more attempts to copy the desktop browsing experience, I’d like to see more websites use the iPhone’s GPS to give location-specific content, or use the accelerometer to control a web game. I want to see social networking sites use a gaming console owner’s latest scores or screenshots. I want to see cameras use the web to overlay the latest Flickr comments on the pictures you’ve taken, or to do augmented reality. I want to see set-top boxes seamlessly mix television content with information from the web. To me, the true potential of having 15 billion internet-connected devices is not 15 billion PC-like devices, but 15 billion devices, each with its own features and capabilities.

III. Browser power

While the Facebooks and Twitters of the world get (and deserve) a lot of credit for driving the Web 2.0 wave of innovation, a lot of that credit actually belongs to the web standards and browser development pioneers who made these innovations possible. Web applications like the office staples Gmail and Google Docs would have been impossible without techniques like AJAX and more powerful JavaScript engines like Chrome’s V8, WebKit’s JavaScriptCore, and Mozilla’s SpiderMonkey. Applications like YouTube, Picnik, and Photoshop.com depend greatly on Adobe’s Flash working well with browsers. In many ways, then, it is web browser technology that is the limiting factor in the development of new web applications.

Is it any wonder, then, that Google, which views web applications as a big piece of its quest for web domination, created a free browser (Chrome) and two web-capable operating systems (ChromeOS and Android), and is investigating ways for web applications to access the full processing power of the computer (Native Client)? The result of Google’s push, along with the broader internet ecosystem’s efforts, has been a steady improvement in web browser capability and strong momentum behind the new HTML5 standard.

So, what does this all mean for the shape of “Web 3.0”? It means that, over the next few years, we are going to see web applications dramatically improve in quality and functionality, making them more and more credible as disruptive innovations to the software industry. While it would be a mistake to interpret this trend, as some zealots do, as a sign that “web applications will replace all desktop software”, it does mean that we should expect to see a dramatic boost in the number and types of web applications, as well as the number of users.

Conclusion

I’ll admit – I kind of cheated. Instead of giving a single coherent vision of what the next wave of Web innovation will look like, I hedged my bets by outlining where I see major technology trends will take the industry. But, in the same way that “Web 2.0” wasn’t a monolithic entity (Facebook, WordPress, and Gmail have some commonalities, but you’d be hard pressed to say they’re just different variants of the same thing), I don’t think “Web 3.0” will be either. Or, maybe all the innovations will be mobile-phone-specific, context-sensitive, super powerful web applications…

