

Gaze Into Your Crystal Ball

In the Pipeline's Derek Lowe wrote a very thoughtful opinion piece for the ACS (American Chemical Society) journal Medicinal Chemistry Letters in which he does something I encourage all career-minded working people to do: hold up a mirror to his own industry (medicinal chemistry, obviously) and then gaze into his crystal ball to see where it might go in the future:

it is now the absolute worst time ever to be an ordinary medicinal chemist in a high-wage part of the world. The days when you could make a reliable living doing methyl–ethyl–butyl–futile work in the United States or Western Europe are gone, and what mechanism will ever be found to bring them back? There’s still a lot of that work that needs to be done, but it is getting done somewhere else, and as long as “somewhere else” operates more cheaply and reasonably on time, that situation will not change.

This means that the best advice is not to be ordinary. That is not easy, and it is no guarantee, either, but it is the only semisafe goal for which to aim. Medicinal chemists have to offer their employers something that cannot be had more cheaply in Shanghai or Bangalore. New techniques, proficiency with new equipment, ideas that have not become commodified yet: Those seem to be the only form of insurance, and even then, they are not always enough.

I may be slightly biased: much of my work has been in the technology industry, where large industry changes happen faster than elsewhere, so I'm particularly attuned to how those changes will impact companies. But I rarely notice people – in or out of the technology industry – give careful thought to how their industries will change over time, and I think that's a shame.

In the same way that the medicinal chemists from 5-10 years ago whom Derek Lowe is writing about were caught off-guard by the impact of globalization, people in the postal service are watching technologies like email and internet advertising change the foundation of their jobs, people in the healthcare industry are watching new laws and regulations slowly come down the pipeline, and people in the book publishing industry are watching eBooks and eReaders take off. I'm not claiming that these changes were obviously predictable – that's what makes my job in venture interesting! But changes in science & technology, in globalization, and in demographics have dramatically impacted, and will continue to dramatically impact, every aspect of life and business. Frankly speaking, it's the people who work in an industry (in the case of medicinal chemistry, people like Derek Lowe) who have the best shot at gazing into the crystal ball, predicting and understanding the changes coming down the pipeline, and then figuring out ways to get ahead of them (whether that means changing jobs, learning new skills, etc.).

So, do your 5-10-years-from-now self a favor – and gaze into your crystal ball.

(Image credit: PE2011 Facts)


A “Fandroid” Forced to Use an iPhone 4 for Two Weeks

I recently came back from a great two week trip to China and Japan. Because I needed an international phone plan/data access, I ended up giving up my beloved DROID2 (which lacks international roaming/data) for two weeks and using the iPhone 4 my company had given me.

Because much has changed in the year and a half since I wrote that first epic post comparing my DROID2 with an iPhone 4 – for starters, my iPhone 4 now runs the new iOS 5 operating system and my DROID2 now runs Android 2.3 Gingerbread — I thought I would revisit the comparison, having had over a year to use both devices in various capacities.

Long story short: I still prefer my DROID2 (although to a lesser extent than before).

So, what were my big observations after using the iPhone 4 for two weeks and then switching back to my DROID2?

  • Apple continues to blow me away with how good they are at:
    • UI slickness: There’s no way around it – with the possible exception of the 4.0 revision of Android Ice Cream Sandwich (which I now have and love on my Motorola Xoom!) – no Android operating system comes close to the iPhone/iPad’s remarkable user interface smoothness. iOS animations are perfectly fluid. Responsiveness is great. Stability is excellent (while rare, my DROID2 does force restart every now and then — my iPhone has only crashed a handful of times). It’s a very well-oiled machine and free of the frustrations I’ve had at times when I. just. wished. that. darn. app. would. scroll. smoothly.
    • Battery life: I was at or near zero battery at the end of every day when I was in Asia – so even the iPhone needs improvement in that category. But, there’s no doubt in my mind that my DROID2 would have given out earlier. I don’t know what it is about iOS which enables them to consistently deliver such impressive battery life, but I did notice a later onset of “battery anxiety” during the day while using the iPhone than I would have on my DROID2.
  • Apple’s soft keyboard is good – very good — but nothing beats a physical keyboard plus SwiftKey. Not having my beloved Android phone meant I had to learn how to use the iPhone soft keyboard to get around – and I have to say, much to my chagrin, I actually got the hang of it. Its amazingly responsive and has a good handle on what words to autocorrect, what to leave alone, and even on learning what words were just strange jargon/names but still legitimate. Even back in the US on my DROID2, I find myself trying to use the soft keyboard a lot more than I used to (and discovering, sadly, that its not as good as the iPhone’s). However:
    • You just can’t type as long as you can on a hard physical keyboard.
    • Every now and then the iPhone makes a stupid autocorrection and it’s a little awkward to override it (having to hit that tiny “x”).
    • The last time I did the iPhone/DROID comparison, I talked about how amazing Swype was. While I still think it's a great product, I've now graduated to SwiftKey (see video below) – not only because I have met and love the CEO Jonathan Reynolds, but because of its uncanny ability to compose my emails/messages for me. It learns from your typing history and from your blog/Facebook/Gmail/Twitter and feeds it all into an amazing text-prediction engine which not only predicts what words you are trying to type but also the next word after that! I have literally written emails where half of my words have been predicted by SwiftKey.
  • Notifications in iOS are terrible.
    • A huge issue for me: there is no notification light on an iPhone. That means the only way for me to know if something new has happened is if I hear the tone the phone makes when a new notification arrives (which I don't always, because it's in my pocket or because – you know – something else in life is happening at that moment) or if I happen to be looking at the screen at the moment the notification shows up (same problem). This means I have to repeatedly check the phone throughout the day, which can be a little obnoxious when you're with people or doing something else and just want to know if an email/text message has come in.
    • What was very surprising to me was that despite having had the opportunity to learn (and, dare I say, copy) from what Android and WebOS had done, Apple chose quite possibly the weakest approach possible. Not only are the notifications not visible from the home screen – requiring me to swipe downward from the top to see if anything's there – but it's impossible to dismiss notifications one at a time, it's really hard (or maybe I just have fat fingers?) to hit the clear button which dismisses blocks of them at a time, some notifications inexplicably remain even after I hit clear, and it is surprisingly easy to hit a notification accidentally (which forces you into a new application – which wouldn't be a big deal if iOS had a cross-application back button… which it doesn't). Maybe this is just someone too used to the Android way of doing things talking, but while this is way better than the old "in your face" iOS notifications, I found myself very frustrated here.
  • Cursor positioning feels more natural on Android. I didn't realize this would bug me until after using the iPhone for a few days. The setup: until Android's Gingerbread update, highlighting text and moving the caret (where your next letter comes out when you type) was terrible on Android. It was something I didn't notice in my initial comparison and something I came to envy about iOS: the magnifying glass that pops up when you want to move your cursor and the simple drag-and-drop highlighting of text. Thankfully, with the Gingerbread update, Android completely closes that gap (see image on the right) and improves upon it. Unlike in iOS, I don't need to long-press on the screen to enter some eerie parallel universe with a magnified view – in Android, you just tap once, drag the arrow to where you want the cursor to be, and you're good to go.
  • No widgets in iOS. There are no widgets in iOS. I can see the iOS fans thinking: "big deal, who cares? they're ugly and slow down the system!" Fair points – so why do I care? I care because widgets let me quickly turn WiFi/Bluetooth/GPS on or off from the homescreen in Android, whereas in iOS I would be forced to go through a bunch of menus. On Android, I can see my next few calendar events; in iOS, I would need to go into the calendar app. On Android, I can quickly create a new Evernote note and see my last few notes from the home screen; in iOS, I would need to open the app. On Android, I can check the weather from the homescreen; in iOS, I would need to open the weather app. On Android, I can quickly glance at a number of homescreens to see what's going on in Google Voice (my text messages), Google Reader, Facebook, Google+, and Twitter; on iOS, I need to open each of those apps separately. In short, I care about widgets because they are convenient and save me time.
  • Apps play together more nicely with Android. Android and iOS have a fundamentally different philosophy on how apps should behave with one another. Considering most of the main iOS apps are also on Android, what do I mean by this? Well, Android has two features which iOS does not have: a cross-application back button and a cross-application “intent” system. What this means is that apps are meant to push information/content to each other in Android:
    • If I want to "share" something, any app of mine that mediates that sharing – whether it's email, Facebook, Twitter, Path, Tumblr, etc. – is fair game (see image on the right). On iOS, I can only share things through services that the app I'm currently in supports. Want to post something to Tumblr or Facebook or over email from an app that only supports Twitter? Tough luck in iOS. Want to edit a photo/document in an app that isn't supported by the app you're in? Again, tough luck in iOS. With the exception of things like web links (where Apple has apps meant to handle them), you can only use the apps/services which are sanctioned by the app developer. In Android, apps are supposed to talk with one another, and Google goes the extra mile to make sure all apps that can handle an "action" are available for the user to choose from.
    • In iOS, navigating between different screens/features is usually done via a descriptive back button in the upper-left of the interface. This works exactly like the Android back button does, with one exception: these iOS back buttons only work within an application. There's no way to jump between applications. Granted, there's less of a need in iOS since there's less cross-app communication (see previous bullet point), but when you throw in the ability of iOS 5's new notification system to drop you into a new application altogether, and the situations where you want to hand off to another service, a cross-application back button becomes quite handy.
  • And, of course, a deluge of the he-said-she-said points I observed:
    • Free turn-by-turn navigation on Android is AWESOME and makes the purchase of the phone worth it on its own (mainly because my driving becomes 100x worse when I’m lost). Not having that in iOS was a pain, although thankfully, because I spent most of my time in Asia on foot, in a cab, or on public transit, it was not as big of a pain.
    • Google integration (Google Voice, Google Calendar, Gmail, Google Maps) is far better on Android — if you make as heavy use of Google services as I do, this becomes a big deal very quickly.
    • Chrome to Phone is awesome – being able to send links/pictures/locations from computer to phone is amazingly useful. I only wish someone made a simple Phone-to-Chrome capability where I could send information from my phone/tablet to a computer just as easily.
    • Adobe Flash performance is, for the record, not great, and for many sites it's simply a gateway for advertisements. But it's helpful to be able to open up terrible websites (especially those of restaurants) – and in Japan, many a restaurant had an annoying Flash website which my iPhone could not open.
    • Because of the growing popularity of Android, app availability between the two platforms is pretty equal for the biggest apps (with just a few noteworthy exceptions like Flipboard). To be fair, many of the Android ports are done haphazardly – leading to a more disappointing experience – but the flip side is that the more open nature of Android also means it's the only platform where you can use some pretty interesting services like AirDroid (an easy, over-WiFi way of syncing and managing your device), Google Listen (a Google Reader-linked over-the-air podcast manager), BitTorrent Remote (use your phone to remotely log in to your computer's BitTorrent client), etc.
    • I love that I can connect my Android phone to a PC and it will show up like a USB drive. iPhone? Not so much (which forced me to transfer my photos over Dropbox instead).
    • My ability to use the Android Market website to install apps over the air to any of my Android devices has made discovering and installing new apps much more convenient.
    • The iOS mail client (1) doesn't let you collapse/expand folders and (2) doesn't let you control which folders to sync, to what extent, or at what intervals – but the Android Exchange client does both. For someone who has as many folders as I do (one of which is a Getting Things Done-esque "TODO" folder), that's a HUGE plus in terms of ease of use.
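The cross-application "intent" system praised above is worth making concrete. Below is a toy model of intent resolution – written in JavaScript purely for illustration (Android's real mechanism is Java's Intent/IntentFilter machinery; the registry and function names here are my own invention, not Android's API):

```javascript
// Toy model of Android-style intent resolution: apps declare which
// "actions" they can handle, and dispatching an action returns every
// matching app so the user can pick one (the "share via..." chooser).

const registry = [];

function registerApp(name, actions) {
  registry.push({ name, actions });
}

function resolveIntent(action) {
  // The OS-level analogue filters installed apps by their declared
  // intent filters; every match is offered to the user.
  return registry
    .filter((app) => app.actions.includes(action))
    .map((app) => app.name);
}

registerApp("Gmail", ["SEND"]);
registerApp("Twitter", ["SEND"]);
registerApp("Gallery", ["VIEW_IMAGE"]);

// Both apps that declared they handle "SEND" show up in the chooser:
console.log(resolveIntent("SEND"));
```

The real chooser works the same way at heart: any newly installed app that declares a matching intent filter automatically shows up as a sharing target everywhere, with no per-app integration work.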

To be completely fair – I don't have the iPhone 4S (so I haven't played with Siri), I haven't really used iCloud at all, and the iPhone's advantages in UI quality and battery life are a big deal. So, unlike some of the extremists out there who can't understand why anyone would pick iOS over Android (or vice versa), I can see the appeal of "the other side." But after using the iPhone 4 for two weeks, and after seeing some of the improvements Ice Cream Sandwich brought to my Xoom, I can safely say that unless the iPhone 5 (or whatever comes after the 4S) brings with it a huge change, I will be buying another Android device next. If anything, I've noticed that with each generation, Android devices further close the gap on the main advantages iOS has (smoothness, stability, app selection/quality), while continuing to embrace the philosophy and innovations that keep me hooked.

(Image Credit – Android text selection: Android.com) (Image Credit – Android sharing: talkandroid.com)


Qualcomm Trying to Up its PR with Snapdragon Stadium

I’m very partial towards “enabling technologies” – the underlying technology that makes stuff tick. That’s one reason I’m so interested in semiconductors: much of the technology we see today has its origins in something that a chip or semiconductor product enabled. But, despite the key role they (and other enabling technologies) play in creating the products that we know and love, most people have no idea what “chips” or “semiconductors” are.

Part of that ignorance is deliberate – chip companies exist to help electronics/product companies, not steal the spotlight. The only exception to that rule that I can think of is Intel, which has spent a fair amount over the years on its "Intel Inside" branding and the numerous Intel Inside commercials that have popped up.

While NVIDIA has been good at generating buzz amongst enthusiasts, I would maintain that no other semiconductor company has quite succeeded at matching Intel in terms of getting public brand awareness – an awareness that probably has helped Intel command a higher price point because the public thinks (whether wrongly or rightly) that computers with “Intel inside” are better.

Well, it looks like Qualcomm wants to change that. Qualcomm makes chips that go into mobile phones and tablets and has benefitted greatly from the rise in smartphones and tablets over the past few years, getting to the point where some might say it has a shot at being a real rival for Intel in terms of importance and reach. But for years, the most your typical non-techy person might have heard about the company is the fact that it holds the naming rights to San Diego's Qualcomm Stadium – home of the San Diego Chargers and former home of the San Diego Padres.

Well, on December 16th, in what is probably a very interesting test of whether it can boost consumer awareness of the Snapdragon product line it's aiming at the next generation of mobile phones and tablets, Qualcomm announced it will rename Qualcomm Stadium to Snapdragon Stadium for 10 days (coinciding with the San Diego County Credit Union Poinsettia Bowl and Bridgepoint Education Holiday Bowl) – check out the pictures from the Qualcomm blog below!


Will this work? Well, if the goal is to get millions of people to buy phones with Snapdragon chips inside overnight – the answer is probably no. Running this sort of rebranding for only 10 days, for games that aren't the Super Bowl, just won't deliver the right PR boost. But as a test of whether its consumer branding efforts raise awareness of the chips that power our phones – and potentially demand for "those Snapdragon watchamacallits" in particular? This might be just what the doctor ordered.

I, for one, am hopeful that it does work – I’m a sucker for seeing enabling technologies and the companies behind them like Qualcomm and Intel get the credit they deserve for making our devices work better, and, frankly, having more people talk about the chips in their phones/tablets will push device manufacturers and chip companies to innovate faster.

(Image credit: Qualcomm blog)


Google Reader Blues

If it hasn’t been clear from posts on this blog or from my huge shared posts activity feed, I am a huge fan of Google Reader. My reliance/use of the RSS reader tool from Google is second only to my use of Gmail. Its my main primary source of information and analysis on the world and, because a group of my close friends are actively sharing and commenting on the service, it is my most important social network.

Yes, that’s right. I’d give up Facebook and Twitter before I’d give up Google Reader.

I’ve always been disappointed by Google’s lack of attention to the product, so you would think that after announcing that they would find a way to better integrate the product with Google+ that I would be jumping for joy.

However, I am not. And, I am not the only one. E. D. Kain from Forbes says it best when he writes:

[A]fter reading Sarah Perez and Austin Frakt and after thinking about just how much I use Google Reader every day, I’m beginning to revise my initial forecast. Stay calm is quickly shifting toward full-bore Panic Mode.

(bolding and underlining from me)

Now, for the record, I can definitely see the value of a well-done integration of Google+ and Google Reader. I think the key to doing that is finding a way to replace the not-really-used-at-all Sparks feature (which seems to have been replaced by a saved-searches feature) in Google+ with Google Reader, to make it easier to share high-quality blog posts/content. So why am I so anxious? Well, looking at the existing products, there are two big things:

  • Google+ is not designed to share posts/content – it's designed to share snippets. Yes, there are quite a few folks (e.g., Steve Yegge, who made the now-famous accidentally-public rant about Google's approach to platforms versus Amazon/Facebook/Apple's approach to products) who make very long posts on Google+, using it almost as a mini-blog platform. And, yes, one can share videos and photos on the site. However, what the platform has not proven able to share (and what is, fundamentally, one of the best uses/features of Google Reader) is a rich site with embedded video, photos, rich text, and links. This blog post that you're reading, for instance? I can't share it on Google+. All I can share is a text excerpt and an image – and that reduces the utility of the service as a reading/sharing/posting platform.
  • Google Reader is not just "another circle" for Google+, it's a different type of online social behavior. I gave Google props earlier this year for thinking through online social behavior when building their Circles and Hangouts features, but it slipped my mind then that my use of Google Reader was yet another mode of online social interaction that Google+ did not capture. What do I mean by that? Well, when you put friends in a circle, it means you have grouped that set of friends into one category and think of them as similar enough that you want to receive their updates/shared items together and send them updates/shared items together. Now, this feels more natural to me than the original Facebook concept (where every friend is equal) and the Twitter concept (where the idea is to broadcast everything to everybody), but it misses one dynamic: followers may have different levels of interest in different types of sharing. When I share an article on Google Reader, I want to do it publicly (hence the public share page), but only to people who are interested in what I am reading/thinking. If I wanted to share it with all of my friends, I would've long ago integrated Google Reader shares into Facebook and Twitter. On the flip side, whether or not I feel socially close to the people I follow on Google Reader is irrelevant: I follow them because I'm interested in their shares/comments. With Google+, this sort of "public, but only for folks who are interested" mode of sharing and reading is not present at all – and it strikes me as worrisome because the idea behind the Google Reader change is to replace its social dynamics with Google+'s.

Now, of course, Google could address these concerns by implementing additional features – and if that were the case, that would be great. But, putting my realist hat on and looking at the tone of the Google Reader blog post and the way Google+ has been developed, I am skeptical. Or, to sum it up in the words of Austin Frakt at the Incidental Economist (again, bolding/underlining is by me):

I will be entering next week with some trepidation. I’m a big fan of Google and its products, in general. (Love the Droid. Love the Gmail. Etc.) However, today, I’ve never been more frightened of the company. I sure hope they don’t blow this one!


Chrome Remote Desktop

A few weeks ago, I blogged about how the web was becoming the most important and prominent application distribution platform and about Google’s efforts to embrace that direction with initiatives like ChromeOS (Google’s operating system which is designed only to run a browser/use the internet), Native Client, and the Chrome Web Store.

Obviously, for the foreseeable future, "traditional" native applications will continue to have significant advantages over web applications. As much of a "fandroid"/fan of Google as I am, I find it hard to see how I could use a Chromebook (a laptop running Google's ChromeOS) instead of a real PC today, because of my heavy use of apps like Excel and of native tools whenever I code.

However, you can do some pretty cool things with web applications/HTML5 which give you a sense of what can one day be possible. Case in point: enter Chrome Remote Desktop (HT: Google Operating System), a beta extension for Google Chrome which basically allows you to take control of another computer running Chrome a la remote desktop/VNC. While this capability is nothing new (Windows has had Remote Desktop built in since at least Windows XP, and there are numerous VNC/remote desktop clients), what is pretty astonishing is that this app is built entirely using web technologies – whereas traditional remote desktops use non-web-based communications and native graphics to create the interface to the other computer, Chrome Remote Desktop does all the graphics in the browser and all the communications using either the WebSocket standard from HTML5 or Google Talk's chat protocol! (See below as I use my personal computer to remote-control my work laptop, where I am reading a PDF on microblogging in China and am also showing my desktop background image, in which a Jedi Android slashes up an Apple Death Star.)
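To make the architecture concrete, here's a hypothetical sketch of how such a client might serialize input events into JSON messages for a WebSocket channel. The message format and function names below are my own invention for illustration, not Chrome Remote Desktop's actual protocol:

```javascript
// Hypothetical wire format for a browser-based remote-control client:
// input events are serialized to JSON, sent over a WebSocket, and
// replayed on the host; screen updates flow back the other way as
// image tiles drawn onto a canvas.

function encodeInputEvent(type, payload) {
  return JSON.stringify({ v: 1, type, ...payload });
}

function decodeInputEvent(message) {
  const evt = JSON.parse(message);
  if (evt.v !== 1) throw new Error("unsupported protocol version");
  return evt;
}

// In a browser, the client side would wire this to a real WebSocket:
//   const ws = new WebSocket("wss://relay.example.com/session");
//   canvas.addEventListener("mousemove", (e) =>
//     ws.send(encodeInputEvent("mousemove", { x: e.offsetX, y: e.offsetY })));
```

The striking part is that every piece of this – the socket, the serialization, the canvas drawing – is plain web technology; nothing here requires a native client.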


How well does it work? The control is quite good – my mouse/keyboard movements registered immediately on the other computer – but the on-screen graphics/drawing speed was quite poor (par for the course for most sophisticated in-browser graphics apps, and for a beta extension). The means of controlling another desktop, while easy to use (especially if you are inviting someone to take a look at your machine), is very clumsy for some applications (e.g., a certain someone who wants to leave his computer in the office and use VNC/remote desktop to access it only when he needs to).

So, will this replace VNC/remote desktop anytime soon? No (nor, does it seem, were they the first to think up something like this), but that’s not the point. The point, at least to me, is that the browser is picking up more and more sophisticated capabilities and, while it may take a few more versions/years before we can actually use this as a replacement for VNC/remote desktop, the fact that we can even be contemplating that at all tells you how far browser technology has come and why the browser as a platform for applications will grow increasingly compelling.


Web vs native

When Steve Jobs first launched the iPhone in 2007, Apple's bet was that the smartphone application market would move in the direction of web applications. The reasons for this are obvious: people are familiar with how to build web pages and applications, and the web simplifies application delivery.

Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to native applications custom-built (by definition) for the iPhone's operating system and hardware. While I suspect part of the reason this was done was to lock in developers, the main reason was certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was primitive compared to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies which would get the sort of performance and functionality Apple wanted.

But that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere's obsession with hyping up a "native vs HTML5" rivalry, and it raises the question: will the future of application development be HTML5 applications or native?

There are a lot of "moving parts" in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance and the new capabilities that HTML5 brings – offline storage, a canvas for direct graphics manipulation, and tools to access the file system – mean, at least to this tech blogger, that "HTML5 applications" are not distinct from native applications at all; they are simply native applications that you access through the internet. It's not a different technology vector – it's just a different form of delivery.
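One way to see the "different form of delivery" point: a web app can feature-detect the native-style capabilities it needs at startup. A minimal sketch follows; the helper function is my own invention, though the capability names it checks (localStorage, HTMLCanvasElement, FileReader) are real browser APIs:

```javascript
// Feature-detect the HTML5 capabilities mentioned above. Passing the
// global object in as a parameter (rather than touching `window`
// directly) keeps the check exercisable outside a browser.

function html5Capabilities(globalObj) {
  return {
    offlineStorage: typeof globalObj.localStorage !== "undefined",
    canvas: typeof globalObj.HTMLCanvasElement !== "undefined",
    fileSystem: typeof globalObj.FileReader !== "undefined",
  };
}

// In a real page: const caps = html5Capabilities(window);
// The app can then fall back gracefully wherever a capability is missing.
```

An app that finds all of these available can behave much like an installed native app; the only real difference left is how it arrived on the device.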

Critics of this idea may cite that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus they will always be distinct. And, as of today, they are correct. However, this discounts a few things:

  • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
  • History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low-level machine language. But this was a nightmare – it was difficult to do simple things like show text on a screen, and the code only worked on specific chips, operating systems, and hardware configurations. Then came C, which helped to abstract a lot of that away, and, continuing the trend toward portability and abstraction, the mobile/web developers of today work with tools (Python, Objective-C, Ruby, Java, Javascript, etc.) which make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of "HTML5".
  • Huge platform economic advantages. There are three huge advantages today to HTML5 development over “traditional native app development”. The first is the ability to have essentially the same application run across any device which supports a browser. Granted, there are performance and user experience issues with this approach, but when you’re a startup or even a corporate project with limited resources, being able to get wide distribution for earlier products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android have where distribution is controlled and “taxed” (30% to Apple/Google for an app download, 30% cut of digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-based/cloud-based application allows even “dumb devices” to do amazing things by outsourcing some of that work to another system. The combination of these three makes it easier to build new applications and services and make money off of them – which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”

Given Google’s strategic interest in the web as an open development platform, its no small wonder that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let users achieve “native performance” with the browser, they’ve built an entire operating system centered entirely around the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store to help with application discovery.

While it remains to be seen if any of these initiatives will prove successful, this is definitely a compelling view of how the technology ecosystem evolves, and, putting on my forward-thinking cap, I would not be surprised if:

  1. The major operating systems became more Chrome OS-like over time. Mac OS X’s Dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
  2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, a ton of attention was focused on building Java accelerators in hardware/chips and on software platforms whose main function was to run Java. While Java did not take over the world the way its supporters had hoped, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance – maybe even HTML5-optimized chips/accelerators, additional Chrome OS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications.
  3. Web application discovery will become far more important. The one big weakness for HTML5 as it stands today is application discovery. It’s still far easier to discover a native mobile app using the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as the platform matures and the platform economics shift, new application stores/recommendation engines/syndication platforms will become increasingly critical.

I can’t wait :-).

(Image credit – iPhone SDK)

22 Comments

HP 2.0

The technology ecosystem just won’t give me a break – who would’ve thought that in the same week Google announced its bold acquisition of Motorola Mobility, HP would also announce a radical restructuring of its business?

For those of you not up to speed, last Friday, HP’s new CEO Léo Apotheker announced that HP would:

    • Spend over $10 billion to acquire British software company Autonomy Corp
    • Shut down its recently-acquired-from-Palm-for-$1-billion WebOS hardware business (no more tablets or phones)
    • Contemplate spinning out its PC business

hp

Radical change is not unheard of for long-standing technology stalwarts like HP. The “original Hewlett-Packard”, focused on test and measurement devices like oscilloscopes and precision electronic components, was spun out in 1999 as Agilent in one of the tech industry’s largest IPOs. HP acquired Compaq in 2001 to bolster its PC business for a whopping $25 billion. To build an IT services business, it acquired EDS in 2008 at a massive $14 billion valuation. To compete with Cisco in networking gear, it acquired 3Com for almost $3 billion. And, to compete in the enterprise storage space, it bought 3PAR after a furious bidding war with Dell for $2 billion. But, while this sort of change might not be unheard of, the billion-dollar question remains: is this a good thing for HP and its shareholders? My conclusion: in the long run, this is a good thing for HP. But how they announced it was very poor form.

Why good for the long-run?

    • HP needed focus. With the exception of the Agilent spinoff and the Compaq acquisition, all the “bold strategic changes” I mentioned happened in the span of less than three years (EDS: 2008, 3Com: 2009, Palm and 3PAR: 2010). Success in the technology industry requires you to disrupt existing spaces (and avoid being disrupted), play nicely with the ecosystem, and consistently overachieve. It’s hard to do that when you are simultaneously tackling a lot of difficult challenges. At the end of the day, for HP to continue to thrive, it needs to focus and not always chase the technology “flavor of the week.”
    • HP had a big hill to climb to be a leading consumer hardware play. Despite being a very slick product, WebOS was losing the war of the smartphone/tablet operating systems to Google’s Android and Apple’s iOS. Similarly, in its PC business, with the exception of channel reach and scale, HP had no real advantage over Apple, Dell, or rapidly growing low-cost Asian competitors. It’s fair to say that HP might have been able to change that with time. After all, HP had barely had time to announce one generation of new products since Palm was acquired, let alone had time for the core PC division to work together with the engineers and user experience folks at Palm to cook up something new. But, suffice to say, getting to mass market success would have required significant investment and time. Contrast that with…
    • HP as a leading enterprise IT play is a more natural fit. With its strong server and software businesses and recent acquisitions of EDS, 3Com, and 3PAR, HP already has a broad set of assets that it could combine and sell as “solutions” to enterprises. Granted, there is significant room for improvement in how HP does all of this: these products and services have not been integrated very well, and HP lacks the enormous success Dell has achieved in new cloud computing architectures and the services success IBM has, to name two uphill battles HP will have to face. But it feels, at least to me, like this is a challenge HP is already well-equipped to tackle with its existing employees, engineering, and assets.
    • Moreover, for better or for worse, HP’s board chose a former executive of enterprise software company SAP as CEO. What did they expect – that he would miraculously be able to turn HP’s consumer businesses around? I don’t know what happened behind closed doors, so I can’t say how seriously Apotheker considered pushing further down the consumer path, but I don’t think anyone should be surprised that he’s trying to build a complete enterprise IT stack akin to what IBM/Microsoft/Oracle are trying to do.

With all that said, I’m still quite appalled by how this was announced. First, after basically saying that HP didn’t have the resources to invest in its consumer hardware businesses, Apotheker turns around and pays a huge amount for Autonomy (at a valuation of ten times its sales – by most objective measures, a fairly high price). I don’t think HP’s investors or the employees and business partners of HP’s soon-to-be-cast-aside divisions will find the irony there particularly amusing.

Adding to this is the horrible manner in which Apotheker announced his plans. Usually, this sort of announcement only happens after the CEO has gone out of his way to boost the price he can command for the business units he intends to shed. In this case, not only are there no clear buyers lined up for the divisions HP plans to dump, but the prices those units could command will be hurt by the fact that their futures are in doubt. Rather than reassure employees, potential buyers, customers, and partners that existing business relationships and efforts will continue, Apotheker has left them with little reason to be confident. This is appalling behavior from someone whose main job is to be a steward of shareholder value, as he could’ve easily communicated the same information without tanking his ability to sell those businesses off at a good valuation.

In any event, as I said in my Googorola post, we definitely live in interesting times :-).

One Comment

Standards Have No Standards

Many forms of technology require standards to work. As a result, it is in the best interest of all parties in the technology ecosystem to participate in standards bodies to ensure interoperability.

The two main problems with getting standards to work can be summed up, as all good things in technology can be, in the form of webcomics. 🙂

Problem #1, from XKCD: people/companies/organizations keep creating more standards.

standards

The cartoon takes the more benevolent view of how standards proliferate; the more cynical view is that individuals/corporations recognize that control or influence over an industry standard can give them significant power in the technology ecosystem. I think both the benevolent and the cynical forces are always at play – but the result is the continual creation of “bigger and badder” standards which are meant to replace, but oftentimes fail to completely supplant, existing ones. Case in point: as someone who has spent a fair amount of time looking at technologies to enable greater intelligence/network connectivity in new types of devices (think TVs, smart meters, appliances, thermostats, etc.), I’m still puzzled as to why we have so many wireless communication standards and protocols for achieving it (Bluetooth, ZigBee, Z-Wave, WiFi, DASH7, 6LoWPAN, etc.).

Problem #2: standards aren’t purely technical undertakings – they’re heavily shaped by the preferences of the bodies and companies which participate in formulating them and, like the US’s “wonderful” legislative process, involve mashing together a large number of preferences, some of which might not be easily compatible with one another. This can turn quite political and generate standards/working papers which are too difficult to support well (e.g., DLNA). Or, as Dilbert sums it up, these meetings are full of people who are instructed to do this:

66480.strip

Or this:

129847.strip

Our one hope is that the industry has enough people/companies who are more vested in the future of the technology industry than in taking unnecessarily cheap shots at one another… It’s a wonder we have functioning standards at all, isn’t it?

Leave a Comment

The Prodigal Tablet Convert

lg-android-tablet

When the Wi-Fi-only version of the first Android Honeycomb tablet, the Motorola Xoom, became available for sale, I bought the device, partly because of my “Fandroid” love for Android devices but mostly because, as a tech enthusiast, I felt like I needed to own a tablet just to understand what everyone was talking about.

While I liked the device (especially after the Honeycomb 3.1 update), I felt a little weird because I didn’t really have a good idea of why I would ever need it. Tablets, while functional and cool, were not as large in screen size or as powerful as a laptop (some of which are also pretty portable: take my girlfriend’s recently purchased Lenovo X220 or the new MacBook Airs for instance), they weren’t as cheap/didn’t have as long of a battery life/didn’t have the amazing displays of dedicated eReaders like Amazon’s Kindle, and they weren’t as portable as a smartphone. I was frankly baffled: just when would you use an iPad/an Android tablet instead of a laptop, eReader, or a smartphone?

It didn’t help that many of my friends seemed to give waffling answers (and no, it didn’t really vary whether or not they had an iPad or an Android tablet – sorry Apple fanboi’s, you’re not that special :-)) or that one of the partners at my fund had misplaced his iPad and didn’t realize it for a month! To hopefully discover the “killer application” for these mysterious devices, I pushed myself to use the tablet more to see if I could find a “natural fit” and, except for gaming and for reading/browsing casually in bed, the whole experience felt very “well, I needed a bigger screen than my phone and was too lazy to turn on my laptop.” So for quite some time, I simply chalked up the latest demand as people wanting the latest gadget rather than anything particularly useful.

ASUS_EeePad_Transformer_-550x412

This changed recently when, on a whim, I decided to buy a carrying case and Bluetooth keyboard for my Xoom. And, upon receiving them, I was kind of blown away. Although it looked (and still does) a little funny to me – why use a tablet plus Bluetooth keyboard when you could just use a laptop? – that was enough to change my perspective on the utility of the device. It was no longer just a “bigger smartphone” – it became what the netbook category itself had aimed to be: an easy-to-use, cheap, consumer-grade laptop replacement that was not sucked into the “Wintel” dominion of Intel and Microsoft. It was that realization/newfound purpose for the device (as well as a nifty $100-off coupon) which also sucked my girlfriend, a long-time skeptic of why I had bought a tablet, into buying an ASUS Eee Pad Transformer and dock (see image to the left).

I know it’s not the most profound of epiphanies – after all, even in my first comment on the iPad speculations I had suggested the potential risk to Apple of letting the iPad be so good that it starts replacing lower-end MacBook Air/MacBook devices – but suddenly the ability to write longer emails and compose documents made my tablet the go-to device for everything but the most processor-intensive or intricate of tasks. That, combined with the abundance of tablets I’ve seen in Silicon Valley business settings, has convinced me that the “killer app” for the iPad, the Xoom, and the whole host of coming Android tablets will be as computer replacements.

So, to the (hardware and software) developers out there and folks who want to pursue something potentially very disruptive or who want the venture capital side of me to pay attention to you: find me killer new apps/services designed to help tablets replace computers (especially in the enterprise – I have become somewhat enchanted by that opportunity) and you’ll get it.

(Image credit – Tablet) (Image credit – ASUS Eee Pad Transformer)

4 Comments

Community

As someone who tried to build a fashion social network and is now an investor who sees his fair share of social networking startup ideas, I can attest to the difficulties in building a genuine community.

image

So, when people question why Friendfeed users like myself are so dedicated to the site and why we don’t switch over to the new Facebook Groups feature (which has integrated many of Friendfeed’s features), I find myself scratching my head as to why so many web experts seem to miss the obvious.

The point so many websites seem to misunderstand is that community is not a feature. If I got paid every time someone cited “we’ve added a ‘Post to Facebook’/‘bulletin board’/‘chat’/[insert other cliché “community” feature] feature” as evidence of a strong community, I would be a very wealthy man. To be fair, not having certain social features makes it harder to have a community, but having those features doesn’t guarantee you will have one. You don’t add community to a website the way you might add Google Analytics or a new banner ad.

Community is something which has to be built and nurtured. At its core, it’s about users experiencing a genuine connection with other people and wanting to engage more, both on and off the site.

Similarly, community is not just having a large number of users. Sure, Twitter/Facebook/LinkedIn have a ton of users, but that alone doesn’t make them communities. Walmart has a lot of employees too – I doubt an outsider would consider that a tight-knit community.

What matters is not so much the number of users but the number and quality of connections that they make. That’s one reason I actually consider the core group of Twitter users I engage with a closer community than my LinkedIn or Facebook circle (which is composed mostly of people I actually know and have interacted with “in the real world”!): I “talk with” (or tweet) that group on Twitter more than I engage with people on Facebook, and I get a lot more value out of those internet relationships (I learn about interesting things, keep up with the daily actions of people I know, and get comments on things I share/say) than I do through those other sites. It doesn’t mean I don’t find LinkedIn or Facebook valuable (I do, for other reasons), but it’s that community which keeps me coming back and more engaged with Twitter, and Friendfeed for that matter, than with LinkedIn or Facebook.

image

So, back to the original question – why do I stick with Friendfeed?

  • Bookmarklet: The FriendFeed bookmarklet is extremely powerful: it’s not only my primary means of sharing things on Twitter, it also lets me pull in additional content beyond Twitter’s 140-character limit. This convenience and pattern of use is difficult to break.
  • Feature set: There are practically zero features on Friendfeed which haven’t been replicated by someone else (esp. Facebook). However, I have yet to see the killer social feature which would convince me to replace Friendfeed with something else – simply put, it’s good enough for what I need, and, until it stops being good enough or I find something far better, I’ll be sticking around.
  • Quality of Community: The people I engage with (and people-watch) on Friendfeed and the sorts of conversations that are had are deeper and more satisfying than almost any online forum I’ve been on (with the noteworthy exception of the group of friends I interact with on Google Reader). That exclusivity and depth of engagement is something I have yet to see Facebook or any other social media site replicate and, until they do and until the community that I like engaging with on Friendfeed chooses to move elsewhere, I don’t plan on stopping.

(Image credit) (Image credit)

2 Comments

Disruptive ARMada

I’ve mentioned before that one of the greatest things about being in the technology space is how quickly the lines of competition rapidly change.

image

Take ARM, the upstart British chip company which licenses the chip technology powering virtually all mobile phones today. Although its designs have traditionally been relegated to “dumb” chips because of their low cost and low power consumption, ARM has been riding a wave of disruptive innovation to move beyond low-cost “dumb” featurephones into more expensive smartphones and, potentially, into new low-power/always-connected netbooks.

More interesting, though, is the recent revelation that ARM chips have been used in more than just low-power, consumer-oriented devices – they are also showing up in production-grade servers which can power websites, a domain which has traditionally belonged to more expensive chips from companies like AMD, Intel, and IBM.

And now, with:

  1. A large semiconductor company, Marvell, officially announcing that it will release a high-end ARM chip called the Armada 310 targeted at servers
  2. A new startup called Smooth Stone (it’s a David-vs-Goliath allusion) raising $48M (some of it from ARM itself!) to build ARM chips aimed at data center servers
  3. ARM announcing their Cortex A15 processor, a multicore beast with support for hardware virtualization and physical address extensions – features you generally would only see in a server product
  4. Dell (the leading supplier of servers for this new generation of webscale data centers/customers) revealing it has built test servers running on ARM chips as a proof-of-concept and looks forward to the next generation of ARM chips

It makes you wonder if we’re on the verge of another disruption in the high-end computer market. Is ARM about to repeat what Intel/AMD chips did to the bulkier chips from IBM, HP, and Sun/Oracle?

(Image credit)

Leave a Comment

Apple TV Disassembled

My good friend Joe and I spent a little time last week disassembling an Apple TV, both to take a look inside and to get a sense of what folks at companies like iSuppli and Portelligent do in their teardowns. I was also asked to take a look at how prominent/visible a chip company in my venture capital employer’s portfolio is inside the Apple TV.

I apologize for the poor photo quality (I wasn’t originally planning on posting these and I just wanted to document how we took it apart so that I knew how to put it back together). But, if you bear with me, here’s a picture of the Apple TV itself before we “conducted open heart surgery” (it fits in the palm of your hand!):

2010-10-26_20-39-46_5

Here is what it looks like after we pry off the cover (with a flathead screwdriver or a paint wedge) – notice how thick the edges of the device are. This is important: I have nothing but sympathy for the poor engineers who had to design the infrared sensor and “blaster” for the remote so that it was powerful enough to penetrate that wall (but cheap and energy-efficient enough for Apple to include it).

2010-10-26_20-43-55_652

I’m not exactly sure what the pink is, but my guess based on how “squishy” it was, is that it is some sort of shock absorber to help protect the device. Unscrewing the outermost a set of screws (two of which are hidden under the shock absorber), we finally get at the circuit board at the heart of the device:

2010-10-26_20-47-15_465

Using tweezers, we removed one of the connectors (I assumed it linked the chips on the board with the power supply) allowing us to detach the board from the enclosure:

2010-10-26_20-49-26_699

2010-10-26_20-54-17_591

We then had to remove the pesky electromagnetic shield (the metallic cover for most of the board), unveiling the chips inside (unfortunately, I didn’t take a picture of the opposite side of the board where you would actually see the Apple A4 chip):

2010-10-26_21-01-45_749

To give a little perspective on the size of those chips, here is the board next to the original device:

2010-10-26_21-02-13_296

Cool, isn’t it? (At least Joe and I thought so Smile)

As for reflections on the process:

  • It’s a lot simpler than you would expect. Granted, we didn’t tear down a mobile phone (which is sealed somewhat more securely – although Joe and I might for kicks someday Smile), but at the end of the day, much of the “magic” is not in the hardware packaging, but in software, in the chips, or in the specialized hardware (antenna, LCD).
  • With that said, it’d probably be pretty difficult to tear down a device without someone knowing. The magic may not be in the packaging, but ODMs/EMSs like Foxconn have built a solid business around precision placement and sealing, and human hands are unlikely to match that precision (or remove/replace an EMI shield without deforming it :-)).
  • Given how simple this is, I personally believe that no tech analyst worth his or her pay should go without either doing their own teardown or buying one from someone else. It’s a very simple way to gauge how chip companies will do (just look at the board to see if they are getting sales!), and it’s a great way to understand a device’s manufacturing cost, design process, and technological capabilities.


4 Comments

Linux: Go Custom or Go Home

In a post I wrote a few weeks ago about why I prefer the Google approach to Apple’s, I briefly touched on what I thought was one of the most powerful aspects of Android, and something I don’t think is covered enough when people discuss the iPhone vs Android battle:

With Google[’s open platform strategy], you enable many suppliers (Samsung, HTC, and Motorola for starters in the high-end Android device world, Sony and Logitech in Google TV) to compete with one another and offer their own variations on hardware, software, services, and silicon. This allows companies like Cisco to create a tablet focused on enterprise needs like the Cius using Android, something which the more restrictive nature of Apple’s development platform makes impossible (unless Apple creates its own), or researchers at the MIT Media lab to create an interesting telemedicine optometry solution.

image

To me, the most compelling reason to favor a Linux/Android approach is this customizability. Too often, I see people in the Linux/Android community focus on the lack of software licensing costs or emphasize a high-end feature or the ability to emulate some Windows/Mac OS/iOS feature.

But, while those things are important, the real power of Android/Linux is the ability to go where Microsoft and Apple cannot. As wealthy as Microsoft and Apple are, even they can’t possibly create solutions for every single device and use case. iOS may work well for a general-purpose phone/tablet like the iPhone and iPad, but what about phones targeted at the visually impaired? What about tablets which can do home automation? Windows might work great for a standard office computer, but what about the needs of scientists? Or students? The simple fact of the matter is that neither company has the resources to chase down every single use case and, even if they did, many of these use cases are too niche to ever justify the investment.

Linux/Android, on the other hand? Its open source nature allows for customization (which others can then borrow for still other forms of customization) to meet a market’s (or partner’s) needs, and the lack of software licensing costs lowers the sales threshold needed to justify an investment. Take some recent, relatively high-profile examples:

Now, none of these are silver bullets which will drive 100% Linux adoption – but they convey the power of the open platform approach. Which leads me to a potentially provocative conclusion: the real opportunity for Android/Linux (and the real chance to win) is not as a replacement for a generic Windows or Mac OS install, but as a path to highly customized applications.

Now, I can already hear the Apple/GNOME contingent disagreeing with me on the grounds of user experience. And, don’t get me wrong, user experience is important, and the community does need to work on it (I still marvel that the Android Google Maps application is slower than the iPhone’s, and at my inability to replace Excel/Powerpoint/other apps with OpenOffice/Wine), but I would say the war against the Microsoft/Apple user experience is better fought by focusing on use-case customization than by trying to beat a well-funded, centrally managed effort at its own game.

Consider:

  1. Would you use iOS as the software for industrial automation? Or to run a web server? No. As beautiful and easy to use as the iOS design is, because it’s not built as a real-time operating system or for web server duty, it won’t compete along those dimensions.
  2. How does Apple develop products with such high quality? It’s simple: focus on a few things. An Android/Linux setup should not try to be the same thing to all applications (although some of the underlying systems software can be). Instead, different Android/Linux vendors should focus on customizing their distributions for specific use cases. For example, a phone vendor should gut the operating system of anything that’s not needed for a phone and spend time building phone-specific capabilities.

The funny thing is the market has already proven this. Where is Linux currently the strongest? I believe its penetration is highest in three domains: smartphones, servers, and embedded systems. Setting aside smartphones (where Android’s leadership is a big win for Linux), which could be a special case, the other two applications are not particularly sexy or consumer-facing, but they are very instructive examples. In the case of servers, the Linux community’s (geeky) focus on high-end features made it a natural fit. Embedded systems have heavily used Linux because of the ability to customize the platform in exactly the way the silicon vendor or solution vendor wants.

image

Of course, high levels of customization can introduce fragmentation. This is a legitimate problem wherever software compatibility is important (think computers and smartphones), and, to some extent, the Android smartphone ecosystem is facing this as device counts and manufacturer customizations multiply (Samsung, HTC, and Motorola put out fairly different devices). But I think this is a risk that can be managed. First, a strong community and support for industry standards can help limit issues with fragmentation. Take the World Wide Web: the same website can work on Mac OS and Windows because HTML is a standard that browsers adhere to – and the strength of the web standards and development community helps to reduce unnecessary fragmentation and provides support for developers where such fragmentation exists. Second, the open source nature of Linux/Android projects means that customizations can be shared between development teams and that new projects can draft off of old ones. This doesn’t mean they become carbon copies of one another, but it helps good customizations spread farther, containing some of the fragmentation problem. Lastly, and this may be a cop-out answer, I believe universal compatibility between Linux-based products is unnecessary. Why does there have to be universal compatibility between a tablet, a server, and a low-end microcontroller? Or, for that matter, between a low-end feature phone and a high-end smartphone? So long as the customizations are purpose-driven, the incompatibilities should not jeopardize the quality of the final product and, in fact, may enhance it.
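As a toy sketch of what purpose-driven customization looks like in practice (every component name below is made up for illustration – this is not any real distro’s package list), the idea is a shared base plus per-use-case components rather than one universal image:

```python
# Hypothetical sketch: Linux builds that share a common base but include
# only what each use case needs. All names here are illustrative, not
# real package or distro names.

BASE = {"kernel", "libc", "init"}

PROFILES = {
    "phone":    BASE | {"telephony", "touch_ui", "power_mgmt"},
    "server":   BASE | {"sshd", "httpd", "raid_tools"},
    "embedded": BASE | {"rt_patches", "watchdog"},
}

def build_image(profile):
    """Return the sorted component list for a device profile."""
    return sorted(PROFILES[profile])

# A phone image carries no server daemons, and vice versa - the two
# builds overlap only where they share the base.
print(build_image("phone"))
print(PROFILES["phone"] & PROFILES["server"])  # just the shared base
```

The incompatibility between the phone and server builds is deliberate: each drops what its use case doesn’t need, which is exactly the kind of fragmentation I’d argue is harmless.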

Given all this, in my mind, the Android/Linux community needs to think of better ways to target customizations. I think it’s the best shot they have at beating out the larger and less nimble companies that make up their competition, and at living up to Linux’s full potential as the widely used open source operating system it can be.

(Comic credit – XKCD) (Image credit)

Leave a Comment

Nokia beta-ing classifieds service in India

image

A couple of weeks ago, I posted a long tract about what Nokia needs to do to turn its fortunes around. One of those things was focusing on the rapidly growing phone markets in emerging economies like Brazil, India, and China with smart featurephones and valuable services. Thankfully, someone at Nokia is thinking along the same lines, as this recent Register article calls out:

Nokia has started testing Nokia Listings in India – a service much like Craigslist, only without the internet.

Instead of expecting users to have computers, or even smartphones, Nokia Listings uses GPRS and runs on Series 40 handsets to provide information to the barely-connected. The service can even fall back on SMS connectivity when particularly network challenged.

Is Nokia Listings a sure-fire hit? No. For starters, there’s no clear profit stream here sizable enough to change Nokia’s short-term prospects. Furthermore, foreign companies oftentimes find their moves into new markets rapidly copied and beaten by local upstarts.
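The degrade-gracefully pattern the article describes – use the GPRS data channel when it’s available, fall back to SMS when it isn’t – can be sketched like this (the function names are hypothetical stand-ins, not real Nokia/Series 40 APIs):

```python
# Hypothetical sketch of Nokia Listings' reported fallback behavior:
# prefer the richer data channel, degrade to SMS when the network fails.
# send_via_gprs / send_via_sms are made-up stand-ins, not real APIs.

class NetworkUnavailable(Exception):
    """Raised when the data channel can't be reached."""

def send_via_gprs(payload):
    # Simulate a network-challenged area with no usable GPRS link.
    raise NetworkUnavailable("no GPRS signal")

def send_via_sms(payload):
    return f"SMS:{payload}"

def post_listing(payload):
    """Try GPRS first; fall back to SMS so the barely-connected still work."""
    try:
        return send_via_gprs(payload)
    except NetworkUnavailable:
        return send_via_sms(payload)

print(post_listing("bicycle for sale"))  # falls back to the SMS path here
```

The interesting design point is that the user never has to know which transport was used – the service quietly adapts to whatever connectivity the handset has.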

image image image

With that said, this is exactly the sort of interesting business approach which takes advantage of Nokia’s key assets (wide distribution of very good feature phones in emerging markets, a strong services foundation, and experience with simple J2ME apps) to penetrate a rapidly growing market with a clear use case and value for the end user.

(Images from Nokia Listings site)

Leave a Comment

fbPhone

image

This past weekend, a TechCrunch article caught the tech blogosphere off guard with an interesting claim:

Facebook is building a mobile phone, says a source who has knowledge of the project. Or rather, they’re building the software for the phone and working with a third party to actually build the hardware. Which is exactly what Apple and everyone else does, too.

The question is: does a Facebook phone platform (or fbPhone, to borrow the i/g prefix style of Apple and Google) make sense for Facebook to pursue?

On the one hand, Facebook is rapidly becoming an “operating system” of sorts for the web. According to Facebook’s statistics page, Facebook has over 550K active applications developed on it and over 1 million additional third-party websites which have integrated in some fashion with this monumental platform. But, beyond sheer numbers, Facebook’s platform passes what I consider to be the true “is it a real platform” test that Windows, Linux, and Mac OS have passed: it has the ability to sustain a large $100M+ software company like Zynga (which has been estimated to generate over $800 million in annual revenues, and which is now capable of spending enormous amounts on R&D and sales & marketing, and even of experimenting with its own rival gaming platform). This is something which, to my knowledge, the iPhone and Android ecosystems have yet to achieve.

Given its status as an “operating system” for web developers, there is certainly some value Facebook could gain from expanding into the mobile operating system sphere. It would make the Facebook experience stickier for users who, once they step away from their computers, can today interact with only the most basic Facebook features (pictures, notifications, news feeds), and it would make it easier for developers to truly treat Facebook (mobile and desktop) as one application platform.

image

On a strategic level, Facebook probably also sees potential dangers in Google and Apple's control of the underlying smartphone software platforms. That control could turn Apple's shoddily constructed music “social networking service” Ping, as well as Google's thus-far unsuccessful attempts to weaken Facebook's dominant position in the social web (as per its usual business strategy), into serious threats to Facebook's long-term position.

So, there are obvious benefits to Facebook in pursuing the platform route. However, I think there is an even more obvious downside: it's HARD to build a mobile phone operating system. The TechCrunch article points out that Facebook has hired a number of the top mobile/tablet OS developers in the industry – while this means it's not impossible for Facebook to build a phone platform, a handful of hires is a long way from a full-fledged operating system. Assuming Facebook wants to build a phone, it's unlikely to take the Apple route and build one monolithic phone. Like Google's, Facebook's business model is built around more user engagement, so a Facebook phone strategy would more likely be centered around getting as many users and phones as possible to plug into Facebook.

The path towards such a phone platform (rather than single phone) requires many complicated relationships with carriers, with middleware providers, with hardware manufacturers, and with regulatory bodies (who are not too keen on Facebook’s privacy policies right now), not to mention deep expertise around hardware/software integration. Compare the dates for when Google and its wide swath of partners first announced the Open Handset Alliance (November 2007) to when the first Android phone was available (October 2008). A full year of committed development from industry giants HTC (hardware), Qualcomm (silicon), T-Mobile (carrier), and Google – and that’s assuming the alliance got started on the day that the project was announced and that partners like Verizon/Motorola/Samsung/ARM/etc did absolutely nothing.

From my perspective, Facebook has three much more likely (albeit still difficult) paths forward given the benefits I mentioned above for having its own mobile phone platform:

    • Build another “Open Handset Alliance” with the ecosystem: This is the only route that I see for Facebook to take if it wants its own, strong foothold in the mobile platform space. The challenge here is that the industry is not only tired of new platforms, but is also not likely to want to cede as much control to Facebook as it did to Google and Apple (and potentially Microsoft when it rolls out its Windows Phone 7 OS). This makes the path forward for Facebook complicated at best; even if successful, Facebook would have to compete against the very well-established operating systems from Google & its partners and from Apple.
    • Pull an HTC/Motorola and build a layer on top of, or modify, an open OS like Android or MeeGo: This, to me, makes the most sense. It eliminates the need for Facebook to invest heavily in hardware/network/silicon capabilities for deep phone platform development, and it also allows Facebook to leverage the application and ecosystem support that Android and MeeGo command (provided it doesn't make too many modifications). Instead, Facebook can focus on building the tools and features that are most relevant to its own business goals. The downside, though, is that Facebook loses a fair amount of control over the final user experience and still has to play nice with the phone manufacturers, but these are things it would have to do no matter what strategy it picked.
    • Just build a more complex mobile app which can support Facebook apps: This is the path of least resistance, but it leaves Facebook at the greatest mercy of Apple and Google and forces Facebook to keep up with phone proliferation (iPhone 3G vs iPhone 3GS vs iPhone 4 vs DROID vs DROID 2 vs DROID X vs…)

Bottom-line: I don't know if Facebook is even thinking about a bold mobile platform strategy, but if it is, I doubt it comes in the form of a full-fledged fbPhone. To me, it makes a lot more sense to stay the course and build a more sophisticated app in the short-term and, if needed, figure out ways to integrate rich user interface/development tool layers on top of an open operating system like Android or MeeGo.



Why I Favor Google over Apple

Many of my good friends are big fans of Apple and its products. But not me. This good-natured difference in opinion leads us into never-ending mini-debates over Twitter or in real life over the relative merits of Apple's products and those of its competitors.

I suspect many of them (respectfully) think I’m crazy. “Why would you want an inferior product?” “Why do you back a company that has all this information about you and follows you everywhere on the internet?”

I figured that one of these days, I should actually respond to them (fears of flamers/attacks on my judgment be damned!).

First things first. I'll concede that, at least for now, Apple tends to build better products. Apple has remarkable design and UI sense which I have yet to see matched by another company. Their hardware is of exceptionally high quality, and, as I mentioned before, they are masters at integrating their high-end hardware with their custom-built software to create a very solid user experience. They are also often pioneers in new hardware innovations (e.g., accelerometer, multitouch, “retina display”, etc.).

So, given this, why on earth would I call myself a Google Fanboi (and not an Apple one)? There are a couple of reasons, but most of them basically boil down to the nature of Google's business model, which is focused on monetizing use rather than selling a particular piece of content/software/hardware. Google's dominant source of profit is internet advertising – and they are able to serve ads better (higher revenue per ad) and serve more ads (a higher number of ads) by getting more people to use the internet, and to use it more. Contrast this with Apple, whose business model is (for the most part) built around selling a particular piece of software or hardware – to them, increased use is the justification or rationale for creating (and charging more for) better products. The consequence of this is that the two companies focus on different things:

  • Cheap(er) cost of access – Although Apple technology and design is quite complicated, Apple's product philosophy is very simple: build the best product “solution” and sell it at a premium. This makes sense given Apple's business model focus on selling the highest-quality products. But it does not make sense for Google, which just wants to see more internet usage. To achieve this, Google does two main things. First, Google offers many services and development platforms for little or no cost. Gmail, Google Reader, Google Docs, and Google Search: all free, to name a few. Second, Google actively attacks pockets of control or profitability in the technology space which could impede internet use. Bad browsers reducing the willingness of people to use the internet? Release the very fast Google Chrome browser. Lack of smartphones? Release the now-very-popular Android operating system. Not enough internet-connected TV solutions? Release Google TV. Not enough people on high-speed broadband? Consider building a pilot high-speed fiber optic network for a lucky community. All of these efforts encourage greater Web usage in two ways: (a) they give people more reasons to use the Web by providing high-value web services and “complements” to the web (like browsers and OS's) at no or low cost, and (b) they force other businesses to lower their own prices and/or offer better services. Granted, these moves oftentimes serve other purposes (weakening competitive threats on the horizon and/or providing new sources of revenue) and aren't always successes (think OpenSocial or Google Buzz), but I think the Google MO (make the web cheaper and better) is better for all end-users than Apple's.
  • Choice at the expense of quality – Given Apple's interest in building the best product and charging for it, they've tended to make tradeoffs in their design philosophy to improve performance and usability. This has proven to be very effective for them, but it has its drawbacks. If you have followed recent mobile tech news, you'll know Apple's policies on mobile application submissions and restrictions on device functionality have not met with universal applause. This isn't to say that Apple doesn't have the right to do this (clearly they do) or that the tradeoffs they've made are bad ones (the number of iPhone/iPad/iPod Touch purchases clearly shows that many people are willing to “live with it”), but it is a philosophical choice. And this choice has implications for the ecosystem around Apple versus the one around Google (which favors a different tradeoff). Apple's philosophy provides great “out of the box” performance, but at the expense of being slower or less able to adopt potential innovations or content due to their own restrictions. Case in point: a startup called Swype has built a fascinating new way to use soft keyboards on touchscreens, but because Apple's App Store does not allow an application that makes such a low-level change, the software is only available on Android phones. Now, this doesn't preclude Swype from being on the iPhone eventually, but it's an example where Apple's approach may impede innovation and consumer choice – something which a recent panel of major mobile game developers expressed concern about – and it's my two cents' worth that the Google way of doing things is better in the long run.
  • Platforms vs solutions – Apple's hallmark is the vertically integrated model, going so far as to have their own semiconductor solution and content store (iTunes). This not only lets them maximize the amount of cash they can pull in from a customer (I don't just sell you a device, I get a cut of the applications and music you use on it), it also lets them build tightly integrated, high-quality product “solutions”. Google, however, is not in the business of selling devices and has no interest in one tightly integrated solution: they'd rather get as many people on the internet as possible. So, instead of pursuing the “Jesus phone” approach, they pursue the platform approach, releasing “horizontal” software and services platforms to encourage more companies and more innovators to work with them. With Apple, you only have one supplier and a few product variants. With Google, you enable many suppliers (Samsung, HTC, and Motorola for starters in the high-end Android device world, Sony and Logitech in Google TV) to compete with one another and offer their own variations on hardware, software, services, and silicon. This allows companies like Cisco to create a tablet focused on enterprise needs like the Cius using Android, something which the more restrictive nature of Apple's development platform makes impossible (unless Apple creates its own), or researchers at the MIT Media Lab to create an interesting telemedicine optometry solution. A fair response to this would be that it can lead to platform fragmentation, but whether or not there is a destructive amount of it is an open question. Given Apple's track record the last time it went solo versus platform (something even Steve Jobs admits they didn't do so well at), I feel this is a major strength of Google's model in the long run.
  • (More) open source/standards – Google is unique in the tech space for the extent of its support for open source and open standards. Now, how they've handled it isn't perfect, but if you take a quick glance at their Google Code page, you can see an impressive number of code snippets and projects which they've open sourced and contributed to the community. They've even gone so far as to provide free project hosting for open source projects. But, even beyond just giving developers access to useful source code, Google has gone further than most companies in supporting open standards, going so far as to open up its WebM video codec – which it purchased the rights to for ~$100M – to provide an open HTML5 video standard, and to make it easy to access your data from a Google service however you choose (i.e., IMAP access to Gmail, open API access to Google Calendar and Google Docs, etc.). This is in keeping with Google's desire to enable more web development and web use, and is a direct consequence of it not relying on selling individual products. Contrast this with an Apple-like model – the services and software are designed to fuel additional sales. As a result, they are well-designed, high-performance, and neatly integrated with the rest of the package, but are much less likely to be open sourced (with a few notable exceptions) or to support easy mobility to other devices/platforms. This doesn't mean Apple's business model is wrong, but it leads to a different conclusion, one which I don't think is as good for the end-user in the long run.

These are, of course, broad sweeping generalizations (and don’t capture all the significant differences or the subtle ones between the two companies). Apple, for instance, is at the forefront of contributors to the open source Webkit project which powers many of the internet’s web browsers and is a pioneer behind the multicore processing standard OpenCL. On the flip side, Google’s openness and privacy policies are definitely far from perfect. But, I think those are exceptions to the “broad strokes” I laid out.

While short-term design strength and solution quality are the strengths of Apple's current model, I believe that, in the long run, Google's model is better for the end-customer because it is centered around more usage.



Decade of Moore’s Law


I’ve mentioned Moore’s Law in passing a few times before. While many in the technology industry see the concept only on its most direct level – that of semiconductor scaling (the ability of the semiconductor industry, so far, to double transistor density every two or so years) – I believe this fails to capture its true essence. It’s not so much a law pertaining to a specific technology (which will eventually run out of steam when it hits a fundamental physical limit), but an “economic law” about an industry’s learning curve and R&D cycle relative to cost per “feature”.

Almost all industries experience a learning curve of some sort. Take the automotive industry – despite all of its inefficiencies, the cost of driving one mile has declined over the years because of improvements in engine technology, the building of the parts, general manufacturing efficiency, and supply chain management – but very few have a learning curve which operates on the same speed (how rapidly an industry improves its economic performance) and steepness (how much efficiency improves given a certain amount of “industry experience”) as the technology industry which can rely not only on learning curves but disruptive technological changes.

One of the best illustrations I’ve seen of this is a recent post on MacStories comparing a 2000 iMac and Apple’s new iPhone 4:

Spec        2000 iMac                                 2010 iPhone 4
Processor   500 MHz PowerPC G3 CPU                    1 GHz ARM A4 CPU
RAM         128 MB                                    512 MB
Graphics    ATI Rage 128 Pro (8 million triangles)    PowerVR SGX 535 (28 million triangles)
Storage     30 GB hard drive                          32 GB NAND flash
Weight      34.7 pounds                               4.8 ounces

Although the comparisons are not necessarily apples-to-apples, they give a sense of the speed at which Moore’s Law progresses. Amazing, no?
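To put rough numbers on that comparison: a strict Moore's Law doubling every two years predicts about a 32x improvement over a decade. A quick back-of-the-envelope sketch (using only the spec figures from the table above; treating them as directly comparable is itself a simplification):

```python
# Back-of-the-envelope check of the 2000 iMac vs. 2010 iPhone 4 comparison.
# Moore's Law (transistor density doubling every ~2 years) predicts
# 2 ** (10 / 2) = 32x improvement over a decade.

years = 10
doubling_period = 2
predicted_factor = 2 ** (years / doubling_period)  # 32.0

# (old spec, new spec) pairs from the table above
observed = {
    "CPU clock (MHz)": (500, 1000),   # PowerPC G3 -> ARM A4
    "RAM (MB)": (128, 512),
    "GPU (M triangles)": (8, 28),     # Rage 128 Pro -> PowerVR SGX 535
    "Storage (GB)": (30, 32),
}

print(f"Predicted factor over {years} years: {predicted_factor:.0f}x")
for spec, (old, new) in observed.items():
    print(f"{spec}: {new / old:.1f}x")
```

None of these individual specs tracks transistor density one-for-one (and the iPhone also shrank from 34.7 pounds to 4.8 ounces, a dimension the raw ratios miss entirely), which is exactly the "not apples-to-apples" caveat: Moore's Law shows up as some mix of speed, capacity, power, and size improvements rather than a single 32x number.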



Nokia Conducting Search for a New CEO

Very provocative headline for an interesting WSJ piece:

“They are serious about making a change,” one person familiar with the matter said. Nokia board members are “supposed to make a decision by the end of the month,” that person said.


They should be very serious about making a change – it's been disappointment after disappointment at the once-dominant Finnish phone giant (and its stock price, see above). But, this gives me a great chance to play $100-armchair CEO. So, what would I do if I were in the big chair at Nokia? I'd focus on three things:

  • Change the OS approach: With Nokia's next OS Symbian^3 delayed and widely perceived to be inadequate, you really need to question the ability of Nokia to keep up in the industry-shaking smartphone platform war. In particular, Nokia's challenge is that it's attempting to take a software platform built to enable carrier services and high reliability on lower-end phones that weren't meant to run rich software and somehow force it into achieving the same high-end software functionality that Apple's iOS and Google's Android provide. While there's nothing that says this is impossible, it is an order of magnitude more difficult than Apple/Google's initial problem of just creating a software platform without the burden of any legacy constraints/approaches, and, in an industry as fast-moving and disruptive as the smartphone space, that's two orders of magnitude too many, invites all sorts of risk with no clear reward, and discards Nokia's traditional strengths in wireless communications R&D and solid hardware design. What does that mean? Three things:
    • Re-tool Symbian for the low-end to be more like Qualcomm’s BREW (or heck, maybe even adopt BREW?): an operating system focused on enabling carrier/simple software services on the many featurephones out there. That category is Nokia’s (and Symbian’s) traditional strength, and that’s where Symbian can still add a lot of value and find a lot of support.
    • In the mid-market (high-end featurephone/low-end smartphone), I'd tell Nokia to bite the bullet and adopt Android. Not only is it free, but it immediately levels the software playing field between Nokia and the numerous OEMs who are itching to adopt Android, letting Nokia's traditional strength in hardware design win out.
    • In the high-end, Nokia should go all-in with Intel on their joint MeeGo platform. In that space, Nokia needs a killer platform to disrupt Google/Apple's hold on the market, and MeeGo is probably the only operating system left which might contest Android and iOS and drive the convergence of mobile devices with traditional computers that this category is pushing towards.
    • Double-down on Qt to make it easier for developers to “develop for Nokia”. A few years ago, Nokia bought Trolltech, which had created a programming framework called Qt (pronounced “cute”). Qt had gained significant traction with developers as it made it easier to build a graphical user interface which ran across multiple devices and operating systems. This is a key asset which Nokia has tried to use to make MeeGo and Symbian more attractive (and which is probably one of the main reasons both OS's still have reasonable levels of developer interest; although, interestingly, there has been an effort to bring Qt over to Android), but it needs to be emphasized even more if Nokia wants to stay in the game.
  • Pick your battles wisely: It is entirely possible that Nokia has lost the high-end smartphone battle in the US and Europe (even despite the operating system approach laid out above). But, even if Nokia were forced to completely cede that market, it's not the end of the war – it's simply the loss of a few (albeit important) battlegrounds. Nokia is still well-positioned to win out in a number of other markets:
    • The featurephone world: Many of us tech aficionados often forget that, despite all the buzz that the iPhone and the Droid devices generate, smartphones actually make up a very small share of the unit base. Featurephones are still the vast majority of the volume (for cost reasons) and, as devices like the iPhone continue to capture mindshare, there will be significant value in helping featurephones imitate some of the functionality that smartphones have. While it is true that Moore's Law makes it easier for high-end operating systems like iOS and Android to run on tomorrow's featurephones, the incentives of Apple and Google are probably better aligned with taking their mobile operating systems up-market (towards higher-end devices and computers) rather than down-market (towards feature phones) to chase higher margins and to continue to build highly optimized performance machines. So, given Nokia/Symbian's traditional strength in building good devices with good support for carrier services, it's natural for Nokia to solidify its ownership of the feature phone market and to emulate some of the functionality of higher-end devices.
    • Emerging markets: This is related to the previous bullet point, but much of the developing world is now seeing vast value in simply adopting basic services and software on their (by Western standards) very low-end phones. As banking systems and computer availability are extremely limited in Africa and parts of Asia, this represents an enormous opportunity for someone like Nokia who has spent years making their phones capable of mobile payment, geolocation, and carrier-enabled services. Couple this with the fact that there is enormous growth waiting to happen in markets like India, China, and Africa (where cell phone penetration is nowhere near as high as in the US), and you have the makings of a potential end-game strategy which could offset short-term setbacks in the US/European smartphone market.
    • Japan: While Europe and the US are eagerly adopting smartphones (as in phones with rich operating systems), Japan has been a laggard due to differences in the carrier/vendor/services environment. While it's been difficult for foreign companies to break into Japan, the recent technology deal between Japanese semiconductor company Renesas and Nokia might provide an interesting “foot in the door” for Nokia to enter a large market where its weakness in software is not so much of a hindrance and its strengths in hardware and willingness to play nice with carriers are big assets. This is in no way a slam-dunk, but it's definitely worth considering.
  • Figure out the key ecosystem player(s) to partner with: The previous two bullet points were mainly tactical suggestions – what to do in the short-run and how to do it. This last bullet point is aimed at the strategic level – or, in other words, how does Nokia influence the creation of a market environment which leads to its long-term success. To do this, it needs to figure out who it wants to be and what it wants the mobile phone industry to look like when all is said and done. I don’t have a clear answer/vision here, but I’d say Nokia should think about partnering with:
    • Carriers: Although Apple/Android have had to play nice with the carriers to get their devices out, the carriers probably see the writing on the wall. If smartphone platforms continue to gain traction, there is significant risk that the carriers themselves will simply become the “dumb pipes” that the platforms run on (in the same way that internet service providers like AOL rapidly became unimportant to the user experience and purchasing decision). Nokia has an opportunity to play against that and to help bring the carriers back to the table as a driving force by helping the carriers expose new revenue streams/services (which Nokia could take a cut of) and by building more carrier-friendly software/devices which help with coming bandwidth issues.
    • Retailers/Mobile commerce intermediaries: One of the emerging application cases which is particularly interesting is the use of mobile phones for the buying and selling of goods. This is something which is extremely nascent but has a huge opportunity, as mobile commerce can do something that traditional desktop-bound eCommerce can't: it can bridge the gap between pixels on the screen and actual real-world shopping. It can be used as a mobile coupon/payment platform. Its camera and GPS enable augmented reality functionality which can let shoppers look up information about a product without having to type in search strings. It can be used to provide stores with more information about a shopper, letting them tailor new ad campaigns and marketing efforts. I haven't run the math to build a forecast, but there's good reason to believe that this could be the application for mobile phones. While Nokia may have to cede application/ad revenue to Google/Apple, it may be able to eke out a nice chunk of profit (maybe even bigger than the one Google/Apple can get) from focusing on this particular use case instead.

Obviously, none of these are guaranteed home-runs, but if I were a Nokia shareholder, I’d hope that the next Nokia CEO does something along the lines of this. And, yes, I’d be willing to accept $100 (and “some” stock) to be Nokia’s CEO and implement this :-).

(Image credit – Business Insider) (Image credit – Android logo) (Image credit – MeeGo logo) (Image credit – feature phone montage) (Image credit – Japanese phones) (Image credit – Mobile coupon)


I know enough to get myself in trouble

One of the dangers of a consultant looking at tech is that he can get lost in jargon. A few weeks ago, I did a little research on some of the most cutting-edge software startups in the cloud computing space (the idea that you can use a computer feature/service without actually knowing anything about what sort of technology infrastructure was used to provide you with that feature/service – i.e., Gmail and Yahoo Mail on the consumer side, services like Amazon Web Services and Microsoft Azure on the business side). As a result, I’ve looked at the product offerings from guys like Nimbula, Cloudera, Clustrix, Appistry, Elastra, and MaxiScale, to name a few. And, while I know enough about cloud computing to understand, at a high level, what these companies do, the use of unclear terminology sometimes makes it very difficult to pierce the “fog of marketing” and really get a good understanding of the various product strengths and weaknesses.

Is it any wonder that, at times, I feel like the guy in this Dilbert cartoon?


Yes, it's all about that “integration layer” …

My take? A great product should not need to hide behind jargon.

(Link: Dilbert cartoon)


Replicating Taiwan’s Success

I’m always a fan of stories/articles highlighting the importance of Taiwan in the technology industry, so I was especially pleased that one of my favorite publications recently put out an article highlighting the very key Computex industry conference, the role of the Taiwanese government’s ITRI R&D organization in cultivating Taiwan’s technology sector, and the rise of Taiwan’s technology company stars (Acer, HTC, Mediatek, and TSMC).

Some of the more interesting insights are around two of the causes the article attributes to Taiwan’s disproportionate prominence in the global technology supply chain:

Much of the credit for the growth of Taiwan’s information technology (IT) industry goes to the state, notably the Industrial Technology Research Institute (ITRI). Founded in 1973, ITRI did not just import technology and invest in R&D, but also trained engineers and spawned start-ups: thus Taiwan Semiconductor Manufacturing Company (TSMC), now the world’s biggest chip “foundry”, was born. ITRI also developed prototypes of computers and handed the blueprints to private firms.

Taiwan’s history also helps make it the “best place in the world to turn ideas into physical form,” says Derek Lidow of iSuppli, a market-research firm. Japan colonised the island for half a century, leaving a good education system. Amid the turmoil of the Kuomintang’s retreat to Taiwan from mainland China, engineering was encouraged as a useful and politically uncontroversial discipline. Meanwhile, strong geopolitical ties with America helped foster educational and commercial links too. Western tech firms set up shop in Taiwan in the 1960s, increasing the pool of skilled workers and suppliers.

It also provides some interesting lessons for countries like Russia who are struggling to gain their own foothold in the lucrative technology industry:

  • Facilitate the building of industrial parks with strong ties to R&D centers of excellence. Taiwan's ITRI helped build the technical expertise Taiwan needed early on to gain ground in the highly competitive and sophisticated technology market by seeding it with resources and equipment. The government's cooperation in the creation of Hsinchu Science and Industrial Park near ITRI headquarters and two major universities helped build the community of technologists, engineers, and businessmen that's needed to achieve a self-sustaining Silicon Valley.
  • Make strategic bets on critical industries and segments of the value chain. Early on, ITRI recognized the strategic importance of the semiconductor industry and went out of its way to seed the creation of Taiwan’s foundries. This was uniquely far-sighted, as it not only allowed Taiwan to participate in a vital industry but it also helped create the “support network” that Taiwan needed for its own technology industry to flourish. While semiconductor giants like Intel and Samsung can afford the factories to build their own chips, small local companies are hard-pressed to (see my discussion of the foundry industry as a disruptive business model). Having foundries like TSMC nearby lets smaller local companies compete on a more even footing with larger companies, and these local companies in turn will not only grow but also provide the support basis for still other companies.
  • Build a culture which encourages talent (domestic and foreign) to participate in strategic industries. This is one example where it'd be best not to imitate Taiwan. But, as the Economist points out, the political turmoil in Taiwan until the mid-80s made politically neutral careers such as engineering more attractive. In the same way that “culture” drove a big boom in technology in Taiwan, an environment which fostered smart and entrepreneurial engineers helped bring about the rise of the Silicon Valley as a global technology center (with the defense industry playing a similar role to Taiwan's ITRI). Countries wishing to replicate this will need to go beyond just throwing money at speculative industries and find their own way to encourage workers to develop the right set of skills and talents and to openly make use of them in simultaneously collaborative and entrepreneurial/business-like ventures. No amount of government subsidies or industrial park development can replace that.
  • Learn as you go. To stay relevant, you need to be an old dog who learns new tricks. The Taiwanese technology industry, for example, is in a state of transition. Like Japan before it, it is learning to adapt to a world in which its cost position is not supreme and where its historical lack of focus on branding and intellectual property-backed R&D is a detriment rather than a cost-saving/customer-enticing play. But, the industry is not standing still. In conjunction with ITRI, the industry is learning to focus on design and IP and branding. ITRI itself has (rightfully) taken a less heavy-handed approach in shepherding its large and flourishing industry, now encouraging investment in the new strategic areas of wireless communications and LEDs.

Jury’s still out on lesson #5 (which is why I didn’t mention it) – have some sort of relation to me – after all, I was born in Taiwan and currently live in the Silicon Valley… 🙂
