
Tag: Tech

QR^2?

When I was in Japan a few years ago, I was astounded by the abundance of square blocks of black dots on advertising and in print, which I later found out were called QR codes. The concept is actually quite ingenious. A standard barcode can only store so much information in the thickness and positioning of its lines because it’s a one-dimensional code. A two-dimensional QR code, by contrast, can store a ton more data. This makes it possible to encode long web addresses, include error detection/correction methods, and even embed text in more sophisticated writing systems (like Japanese).
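For the curious, the geometry behind those square blocks is well defined: QR symbols come in versions 1 through 40, and each version adds 4 modules (dots) per side to a base 21×21 grid; the largest version can hold up to 7,089 numeric characters. A quick Python sketch of the published size formula:

```python
def qr_modules_per_side(version: int) -> int:
    """QR symbol versions 1-40 are square grids of (17 + 4 * version) modules."""
    if not 1 <= version <= 40:
        raise ValueError("QR versions run from 1 to 40")
    return 17 + 4 * version

print(qr_modules_per_side(1))   # 21  (the smallest symbol: 21x21)
print(qr_modules_per_side(40))  # 177 (the largest: 177x177)
```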


QR codes have slowly been gaining adoption in the US and Europe as phone camera/image recognition technology has improved to match that of their Japanese counterparts. But Microsoft decided to take the technology one step further: instead of just being black-and-white blobs, why not introduce greater customization, tracking ability, and a little color?

Behold, Microsoft Tag (HT: Register). The main visual differences you’ll see are the availability of color and custom designs:


Underneath the surface lies a bunch of other enhancements, including:

  • Support across most major phone brands
  • Tag manager to provide analytics information on how people are reading your tags
  • API to allow developers access to the tag manager
  • Allows you to change Tag behavior based on a user’s previous tag viewing history or even the user’s location
  • New error correction/color allow for smaller tags and better translation

The question is, will businesses use it? On a basic execution level, the Register brings up the potential problem of recognition. As ugly and clunky as “vanilla” QR codes are, they are very distinctive. Will it still be easy to identify Microsoft’s smaller, customized, full-color boxes as codes to scan?

On a business level, the biggest problem is that “vanilla” QR codes already do quite well in terms of functionality. Microsoft will need to provide significant value-add in their tag manager/API/customization features to get businesses to switch to a format that Redmond has control over. Given Microsoft’s strengths in software, I’m also astonished they didn’t build in more functionality to make it an easier sell (such as the ability to embed more sophisticated instructions in the codes, or to run specific software/pass specific information when used in a certain context) – a future enhancement, perhaps?

With that said, those who rely on advertising to make a living may find it pretty easy to hand over the reins to a well-put-together Microsoft project as a hedge on their increasing dependence on Google and Apple for their livelihoods. In any event, there’s probably no harm in downloading the reader on your phone or checking out the Microsoft Tag website.

(Image credits – Microsoft Tag website)


Platform perils

One of the most impressive developments in the web and the mobile phone space has been the emergence of new platforms for software developers to target. The developer’s repertoire is no longer just Windows, Mac OS, and Linux, but Android, iPhone OS, Windows Phone 7, Facebook, Twitter, and many more.

While these new platforms are big opportunities for developers, I always find it quite amusing to see the reaction of developers as they see the platform owners aggressively expand beyond their original domains, for example:

I’m always shocked at how up-in-arms developers can get about these moves. Why? Because this is nothing new in the software industry. Remember when Microsoft bundled Internet Explorer with their operating system and killed off Netscape? Or when Apple bundled iTunes into Mac OS and killed third-party MP3 player developers? Or IBM, widely considered a pioneer in open source, which bundles a full and very closed software stack with its UNIX servers and mainframes?

So, how does any developer succeed (seeing as most developers don’t control the platforms they develop for)? The key is to understand the economics from the platform owner’s vantage point:

  • Platform value and proliferation – When all is said and done, the business of the platform owner is to sell and proliferate its platform. So, foremost in the owner’s mind in rolling out a new feature which was once left to third party developers is whether or not that feature adds significant value to the platform. For Twitter, implementing a list feature (where formerly it was managed with custom apps like Tweetdeck) made a lot of sense as it not only helped users with organizing their Twitter usage but also helped to increase the social value of the service by helping users find other users to follow. Likewise, to me, the big surprise was not that Twitter acquired Atebits, but that it took them this long to buy/release official Twitter clients for iPhone, Android, and Blackberry.
  • New monetization – The full value of a platform extends far beyond the price tag on the platform and the applications being sold. It also includes advertising, virtual goods sales, content, and online transactions which take place. Is it any wonder, then, that Apple has expanded into mobile advertising with its iAd platform or content with its iTunes store? As before, the big surprise to me is that it took them this long to roll out iAd.
  • Impact of integration – There are many features where integration into the platform drives significant additional value. Whereas a cute game or widget doesn’t benefit much from being integrated into an operating system/web service, there is significant additional value to an operating system like Windows or Mac OS or Android to have an internet browser integrated, and there is a great deal of value in tying features related to security or virtual currency into a web platform like Facebook.
  • Impact on developer community – Despite what developers may believe, platform owners do care a great deal about the effect of their actions on their developer community. It doesn’t benefit a platform to have the owner unnecessarily alienate their developer base or make developers’ lives significantly harder. After all, a rich developer community makes platforms significantly more valuable – even giants like Microsoft, Apple, and Google can’t possibly create all the games, music, videos, and features which users may want, nor can they necessarily create better apps/content than specialized third party developers. This means that, by default, platform vendors are generally loath to aggressively push their own applications; in fact, it requires significant value creation from one or more of the reasons above to get an intelligent platform owner to “step on the toes” of their developer community.

Put them together, and you can draw a number of conclusions about where platform owners will make aggressive inroads into the domains of their developers:

  • The “cost of admission” – If there is a feature or application which is used by enough users that it needs to be integrated/bundled in order to get users “up and running” quickly, you can be pretty sure that the platform owner will build, acquire, or partner with a vendor of applications there. Examples: web browsers and multimedia players in operating systems, social features in social networks, mobile phone apps to access a popular web application/social network, common device drivers in operating systems
  • “Platform in a platform” – In war, the side which maintains control of the most important roads and resources will win. Similarly, in business, not only does disproportionate profit tend to flow to the businesses which control the key “gateways” to developers and the exchange of funds; control of those gateways also enables the business to better shape the consumer’s experience. In the past, this has primarily resulted in platform owners seeking greater control over the development of applications, but Apple has proven that advertising, transaction fees on application sales, and digital content delivery are also key gateways to have influence over. Examples: virtual goods/currency on social networks, advertising, development tools, digital content, application stores, runtime layers
  • “Plumbing” – To a platform owner, the platform’s inner workings are sacred. After all, a platform’s performance and ability to work with content/applications is heavily tied to its “plumbing”. In the same way that you aren’t likely to trust a random stranger to do open heart surgery on you, platform owners are unlikely to trust third party hacks/modifications to their platform’s inner workings, and are unhappy when third party developers clog their “pipes” with too many requests/garbage. It should be no surprise that platform owners often restrict access to and limit/prevent modifications of a platform’s inner workings. Similarly, because of the value of integrating enhancements to lower level processes into the platform itself, it is also likely that platform owners will make their own modifications when needed and heavily restrict access (if it’s granted at all) to those lower level processes. Examples: APIs which tap into hardware-level capabilities on operating systems, quantity limits on social network/web service API usage, device driver creation in operating systems
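To make the “clogged pipes” point concrete, here is a toy token-bucket limiter in Python: the standard mechanism behind most API quantity limits. The rate and capacity numbers are invented for illustration, not any particular platform’s policy:

```python
import time

class TokenBucket:
    """Toy rate limiter: refill `rate` tokens/second, allow bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity           # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                  # request passes
        return False                     # request throttled

bucket = TokenBucket(rate=5, capacity=10)        # 5 requests/s, bursts of 10
results = [bucket.allow() for _ in range(12)]    # 12 back-to-back requests
print(results.count(True))                       # the burst cap bounds how many pass
```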

So, what should you do if you’re a developer who doesn’t own your own platform? The following is a quick (and by no means comprehensive) list:

  1. Develop a plan for dealing with a platform owner’s ire: If you go into a business venture expecting everything to go your way, you are likely delusional. This is especially true if you’ve hit a modicum of success, as there is nothing which paints a bullseye on your back better than success. The recent Zynga/Facebook spat (although it’s recently reached a semi-amiable detente) is an example of this. Better to assume, at a relatively early point, that you will sooner or later earn the platform owner’s wrath and come up with ways to prevent/deal with it than to be caught with your pants down when it happens.
  2. Build the best app: There’s almost never a situation where building the best product isn’t a good strategy, but in this case it’s a very good one. Building the best product gives you a reputation among users who may put pressure on the platform owner in your favor. It also gives you a shield, especially if your app goes above and beyond “the cost of admission”, by making it harder for a platform owner to take market share from you (e.g., the strength of Oracle’s products has allowed it to maintain its lead position in databases despite attempts from IBM and Microsoft). It also gives you more options, as it gives the platform owner a reason to acquire/partner with you rather than with a competitor.
  3. Make your app flexible: Flexibility creates more options for a developer. It allows the developer to potentially work with additional platforms, thus creating a larger user base and an “exit strategy” if one platform becomes too hostile. It also allows a developer to more rapidly release new features or cope with platform changes. In the case where a platform owner is also considering acquisitions/partnerships as a route, the more flexible developer has a strong leg up in that he/she can more quickly integrate with the platform, as well as provide a more competitive opponent to take on.
  4. Ally yourself with other developers: I pointed out earlier that the reason a platform owner exists is to sell and improve the value of the platform. Because of this, and because the value of a platform is dependent on having a vibrant developer community, platform owners are loath to make aggressive moves which may alienate that community. To that end, aligning yourself with other developers can help amplify your protest when a platform owner makes an aggressive move encroaching on your turf.
  5. Create stickiness: There are many ways for developer “Davids” to tilt the battlefield in their favor against platform owner “Goliaths”. Building in social functionality (e.g., social games), so as to force users to give up connections with their friends if they switch to another vendor, is becoming an increasingly common tactic for developing stickiness. Linking your applications to other commonly used applications or services is another way (e.g., pulling in data from Google and Twitter). It may be an uphill battle, but it’s not a hopeless one.

It was great that there was a time when one could be a success just by building cute Twitter mobile applications that didn’t do anything more than access Twitter’s basic API, but such a strategy was never going to be sustainable. And the same thing is (or will be) true for a lot of the other new platforms.

(Image credit – Apps) (Image credit – Fish) (Image credit – Pipes)


Private concerns

One reason I love science fiction is that it challenges our morals and beliefs in a way that other art forms rarely do. It asks us difficult questions, like, what if we had the ability to visit other planets and encounter different cultures? What if we could genetically “design” our children? What if we could go back in time and change history?

Unsettling questions aren’t they? But, why are they unsettling? My personal belief is that they are unsettling because our intuitions, our values, our beliefs, our laws, and our institutions were not designed to handle those questions. If you assume that Western culture is heavily derived from Ancient Greek and Roman humanism, is it any wonder that society has trouble understanding what to do with our nuclear arsenals or with humankind’s new ability to genetically alter the people and animals around us? After all, the foundations of today’s laws and values predated when people could even conceive that humans would ever have to think about such things.

So, when people ask me what I think about all the press that privacy concerns about Google or privacy concerns about Facebook or any of the other myriad social networks have garnered, I view it as a manifestation of the fact that we now have technology which makes it super-easy to share information about ourselves and our location, but we have yet to develop the intuitions, values, and laws/institutions to handle it.

Let’s use myself as an example: I personally find auto-GPS-tagging my Tweets to be oversharing. However, I frequently Tweet the location I’m at and even the friends I’m with. Is this odd combination of preferences an example of irrationality? Probably (I was never the brightest kid). But I’d argue it’s more about my lack of intuition for the technology and the lack of clear cultural norms/values.

And I’m not the only one who is beginning to come to terms with the un-intuitiveness of our digital lives. My good friend, and prominent blogger, Serena Wu recently went through a social network consolidation/privacy overhaul as a result of understanding just what it was she was sharing and how it could be used. All across the internet, I believe users are beginning to understand the privacy consequences of their social network and search engine behavior.

Now, the easy, reflexive thing to do would be to cut out these social networks like one would a tumor. But I think that would be a dramatic over-reaction, akin to how the Luddites reacted to factory automation. It ignores the potential value of the technology: in the case of sharing information on social networks, this can come in the form of helping people advertise themselves to employers, assisting friends with keeping in contact with one another, and/or even delivering more valuable services over the internet. Now, that shouldn’t be construed as a blanket defense of everything Facebook or Twitter or Google does, but an understanding that there is a tradeoff to be made between privacy and service value is necessary to help the services, their users, society, and the government realize the appropriate changes in intuition, values, and rules to properly cope.

I’m not smart enough to predict what that tradeoff will look like or how our intuitions and values may change in the future, but I do think we can count on a few things happening:

  • Privacy will remain a big issue. Facebook and Twitter’s early years were marked by a very laissez-faire approach by both the users and the services on privacy. I believe that such an approach is unlikely to persist given the potential dangers and users’ growing appreciation for them. There is no doubt in my mind that, whether it be through laws, user demand, advocacy groups, or some combination of the above, data privacy and security will be a “must-have” feature of great significance for future web services built around sharing/accessing information.
  • Privacy policies and settings will become more standardized. I believe that the industry, in an attempt to become more transparent to their users and to avoid some of the un-intuitiveness that I described above, will build simpler and more standardized privacy controls. This isn’t to say that there won’t be room for extra innovation around privacy settings, but I think a “lexicon” of terms and settings will emerge which most services will have to support to gain user trust.
  • Data access APIs will become more restricted and/or use better authentication. The proliferation of web APIs has created a huge boom in new web services and mashups. However, many of these APIs use antiquated methods of authentication which don’t necessarily protect privacy. Consequently, I believe that the APIs that many new web services have grown to use will face new pressures to authenticate properly and frequently so as to avoid data privacy compromises.
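To make the authentication point concrete, here is a sketch (standard-library Python) of the HMAC request signing that better-authenticated APIs are built on. The secret, path, and message layout are hypothetical, not any particular service’s scheme:

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, timestamp: int) -> str:
    """Sign a request so the server can verify both the sender and the payload."""
    message = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

secret = b"shared-secret"    # hypothetical key exchanged out of band
ts = 1273000000              # fixed timestamp so the example is reproducible

sig = sign_request(secret, "GET", "/v1/messages", ts)

# The server recomputes the signature and compares in constant time; any
# tampering with the method, path, or timestamp changes the digest entirely.
expected = sign_request(secret, "GET", "/v1/messages", ts)
print(hmac.compare_digest(sig, expected))  # True
```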

In the meantime, the few tips I listed below will probably be relevant to users regardless of how our rules, values, and intuitions change:

  • Understand the privacy policy of the services you use.
  • Figure out what you are willing to share and with whom as well as what you are not willing to share. Only use services which allow you to set access restrictions to those limits.
  • Check with your web service regularly on what information is being stored and what information is being accessed by third parties (e.g., via the Google Dashboard or Twitter’s Connections)
  • Advocate for better forms of authentication and privacy controls

No matter what happens in the web service privacy area, we are definitely in for an interesting ride!

(Image credit – ethics) (Image credit – Big Facebook Brother)


My Google Voice story


The power of connectivity:

  • Today, a partner at the firm I work at wanted to call me, not realizing I was on a plane
  • He left me a voice message on my phone which included a cell phone number to reach him at
  • Thankfully, I was on a flight with WiFi
  • I also have Google Voice, which not only gives me an online control panel to access all my voicemail, but also transcribes each message and forwards it to my email
  • Because I have Google Voice, I can also read and send text messages as long as I have an internet connection, so I shot his cell number a text message telling him when I’d land
  • I added his cell phone number to my list of Google contacts
  • When I landed, Google Sync added the partner’s new contact information to my Blackberry contacts
  • I used the Google Voice app on my Blackberry to give the partner a call

So I got to help the partner out without breaking a sweat :-).

(Image credit – Google Voice logo)


Microsoft surprise attack!

If you’ve been following the tech news, you’ll know that iPhone-purveyor Apple has launched a patent infringement lawsuit against HTC, one of the flagship (Taiwanese) phone manufacturers partnered up with Google and Microsoft to push Android and Windows phones. While HTC may be the company listed on the lawsuit, it was fairly clear that this was a blow against all iPhone imitators and especially against Google’s Android mobile phone (which was recently reported to have generated more mobile web traffic in the US than the iPhone).

But, as I’ve pointed out before, the lines between enemy and friend are murky in the technology strategy space. It would seem that Microsoft may have just thrown HTC (and hence the Android platform and other would-be iPhone-killers) a surprise lifeline:

REDMOND, Wash. — April 27, 2010 — Microsoft Corp. and HTC Corp. have signed a patent agreement that provides broad coverage under Microsoft’s patent portfolio for HTC’s mobile phones running the Android mobile platform. Under the terms of the agreement, Microsoft will receive royalties from HTC.

The agreement expands HTC’s long-standing business relationship with Microsoft.

“HTC and Microsoft have a long history of technical and commercial collaboration, and today’s agreement is an example of how industry leaders can reach commercial arrangements that address intellectual property,” said Horacio Gutierrez, corporate vice president and deputy general counsel of Intellectual Property and Licensing at Microsoft. “We are pleased to continue our collaboration with HTC.”

Why? I’d conjecture it’s a combination of three things:

  • Sizable royalty stream: Microsoft is an intellectual property giant. But, given Microsoft’s tenuous and potentially weakening position in mobile phones, they have probably been unable to fully monetize their own intellectual property. Why not test the waters with a company that is already friendly (HTC is a leading supplier of Windows Mobile phones), desperately needs some intellectual property protection, and is churning out Android phones as if its life depended on it? And, if this works out, it opens the doorway for Microsoft to extract further royalties from other Android phone makers as well (it’s even been suggested, ominously, that perhaps Microsoft is using this as an intellectual property ploy against all Linux systems).
  • The enemy of my enemy is my friend: Apple is the Goliath that Windows, Blackberry, Symbian, WebOS, and Android need to slay. Given Microsoft’s unique advantage from being the leading PC operating system, one potentially feasible strategy would be to simply stall its competitors from building a similar position in the mobile phone space (like by helping Android take on Apple) and, when Microsoft is nice and ready, win in mobile phones by moving the PC “software stack” into the mobile phone world and creating better ties between computers (which run Microsoft’s own Windows operating system) and the phone.
  • HTC probably made some fairly significant concessions to Microsoft: I’m willing to bet that HTC has either coughed up some extremely favorable intellectual property royalty/licensing terms or has promised to support Microsoft’s Windows Phone 7 series in a very big way. Considering how quickly HTC embraced Android when it was formerly a Windows-Mobile-only shop, it’s probably not a stretch to believe that there were active discussions within HTC over whether or not to drop Microsoft’s faltering platform. An agreement from HTC to build a certain number of Windows phones or to align on roadmap would be a blessing for Microsoft, which likely needs all the friends it can get to claw back smartphone market share.

Obviously, I could be completely wrong here (it’s unclear if Microsoft can even provide HTC with sufficient legal “air cover” against Apple), but the one thing that nobody can deny is that tech strategy is never boring.


Net Neutrality 2.0

If there is one consensus in the very polarized technology world, it is that more and more people and more and more devices will be connecting to the internet. Consequently, the amount of internet traffic that will be delivered over these networks will explode – estimates by Cisco suggest that total internet traffic will grow on average 41% every year, resulting in an almost inconceivable 327 exabytes per year in 2012. To put that into context, that’s the equivalent of ~84 billion DVDs!
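As a back-of-the-envelope check on those figures (the 2007 baseline year and the 4.5 GB single-layer DVD are my assumptions, not Cisco’s):

```python
growth = 1.41  # Cisco's ~41% average annual traffic growth

# 327 exabytes/year in 2012 implies roughly this starting point five years earlier:
traffic_2012_eb = 327
traffic_2007_eb = traffic_2012_eb / growth ** 5
print(round(traffic_2007_eb))            # ~59 exabytes/year

# DVD equivalence (1 EB = 2**60 bytes here; a single-layer DVD holds ~4.5 GB):
dvds = traffic_2012_eb * 2**60 / (4.5 * 10**9)
print(f"{dvds / 1e9:.0f} billion DVDs")  # ~84 billion
```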

The growing size and importance of the internet has pushed regulators and activists to advocate for new rules and regulations to preserve the internet’s independence and neutrality from political and corporate interests. This movement has been called “Net Neutrality” and seeks to make it impossible for network owners to gain too much power over the content and information that a consumer can access.

Now, I am a firm believer in the aspirations of net neutrality – I am no fan of “walled gardens” and am even less a fan of Comcast/Verizon/AT&T throttling access to content they don’t approve of. I am also well aware that major network owners, at least those in the US, are granted public licenses by the FCC to operate, and thus have a moral and legal obligation to provide a public and valuable service. But aspirations and obligations without a realistic assessment of technological and economic realities are meaningless, and I suspect, based on many of the arguments I’ve heard on the internet, that many net neutrality proponents’ Eden-esque visions are missing some of the bigger picture needed to make their implementation more effective than naive.

The fundamental reality around the economics of network construction is, in short, that it is expensive as hell. There are enormous upfront costs that need to be recouped, and there are plenty of maintenance costs and challenges. This has two direct consequences:

  • Network providers tend to be large
  • The cost of an incremental byte of data transmitted is marginal (because the equipment is already there), but the cost of incremental capacity is extremely high (because the provider has to buy new equipment, contract new construction/digging, bring it online, etc.)

What does this mean for net neutrality? The cynical view would be that the network providers are primarily interested in minimizing new capacity investments/maintenance costs and in finding ways to “jack up” prices on data transferred, for example by only allowing content and devices where the content providers/device manufacturers pay the network owner a handsome reward. There is no doubt that network carriers can be guilty of this mindset, but the carriers’ current reaction to the smartphone revolution, opening up their formerly “closed gardens”, and the failure of walled gardens like AOL to dominate the internet service provider space suggest that something else is at play. In my mind, what will dominate the business priorities of the network providers as time goes on is one question: how will network carriers profitably provide access for the exabytes of content that will be traveling through their networks?

The best example of what may come is in the mobile phone space, where AT&T has been caught off guard by the enormous network demands of Apple’s iPhone. The general consensus seems to be that AT&T has been behind on its network infrastructure investments, but the fact of the matter is that it will be increasingly difficult for network providers to keep pace with the investments in capacity necessary to maintain service quality for their users. This is especially complicated in the mobile phone space because of three things:

    1. the amount of wireless spectrum available for use is being consumed faster than it is being made available (although this could be alleviated by new regulatory policies)
    2. the quality of network service is dependent on more than just the wireless spectrum, but also an extremely expensive-to-upgrade, rapidly saturating “backhaul” network
    3. the spectral efficiency (how much data you can cram down a particular wireless pipe) is reaching the theoretical limit defined by Shannon’s Law
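The Shannon limit mentioned in (3) is easy to compute for yourself. The channel width and signal-to-noise ratio below are illustrative numbers, not any carrier’s actual figures:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)               # a 20 dB SNR is a linear ratio of 100
c = shannon_capacity_bps(5e6, snr)  # a hypothetical 5 MHz channel
print(f"{c / 1e6:.1f} Mbit/s")      # ~33.3 Mbit/s: a hard ceiling, whatever the modulation
```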

I’ll admit I haven’t fully crunched the numbers (it’s been my experience that it’s pretty difficult for a casual observer to find good data on the costs of upgrading network capacity or on the ability of femtocells/network topology changes to help carriers cope with new data demands), but I suspect that network carriers will soon hit a real wall in terms of their ability to profitably expand their networks to meet new demands. If this is true, then there really are only four options open in the long run for profit-seeking network carriers:

  1. Dramatically increase the cost of network service – Ending “unlimited access” plans and increasing average monthly service costs and cost per byte transmitted could alleviate the profitability constraint in the short run; but, depending on the economics involved, if network service reached a high enough price, new users would be deterred from using the service, resulting in user attrition for the service provider and the loss of potentially valuable network access for society
  2. Introduce tiered traffic and practice heavy network management – Allowing the carrier to preference certain traffic (e.g., to users/businesses or content providers/application developers who have paid more, or to preference emergency/government network traffic, etc.) and practice heavier handed network management could alleviate the network traffic constraint without dramatically changing the cost of network access
  3. Some combination of (1) and (2)
  4. Government bailout
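A toy sketch of what option #2 might look like in code: packets tagged by traffic class and drained in priority order. The class names and priorities are invented for illustration, not any carrier’s actual policy:

```python
import heapq

# Lower number = higher priority; invented traffic classes for illustration.
PRIORITY = {"emergency": 0, "paid-tier": 1, "best-effort": 2}

def schedule(packets):
    """Return payloads in the order a priority-tiered scheduler would send them."""
    # The arrival index breaks ties so equal-priority traffic stays first-come, first-served.
    queue = [(PRIORITY[cls], i, payload) for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(queue)
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

arrivals = [("best-effort", "video chunk"),
            ("emergency", "E911 call"),
            ("paid-tier", "business VPN"),
            ("best-effort", "web page")]
print(schedule(arrivals))
# ['E911 call', 'business VPN', 'video chunk', 'web page']
```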

None of these options (least of all #4) is particularly attractive, but it is important for regulators and activists to keep these potential futures in mind when considering their proposals. I fear that the strict form of net neutrality espoused by many would drive carriers to adopt course #1: something which could restrict the full potential of the internet to only wealthy individuals and businesses. This blogger’s humble opinion is that the best outcome would be to use regulation to pursue a variant of #2, for the following reasons:

  • AOL 2 is unlikely to happen: To believe that network carriers, absent strongly worded neutrality regulations, will rapidly clamp down on access is to believe that the failure of AOL’s walled garden and Verizon’s allowing Skype and Android on its network are flukes. The value from the openness of the “full Internet” is not trivial and no longer something that major network providers can ignore. And, as Verizon’s recent promotion of the Droid by harping on the closedness of Apple’s iPhone App Store shows, it is something carriers have learned to use for their own purposes.
  • Regulations can be implemented just around content neutrality: Of all the concerns that I have about network neutrality, the one that matters the most to me is equivalent treatment of content. After all, the internet in the US would suffer a great deal if every ISP/carrier only provided content sanctioned by just a single political party. The remainder of the provisions (universal device access, universal platform access, ability of third party networks to piggyback off of the main network) are much less important, especially if content neutrality is mandated, and implementation of them may even weaken the ability of the industry to create high-quality integrated/tailored products (e.g., requiring AT&T to allow the iPhone to access Verizon’s network as well would have significantly delayed the iPhone’s release and probably would have worsened its battery-life and introduced additional problems). Furthermore, the cost of implementing content neutrality is likely minimal, whereas the cost of requiring networks to support every possible platform/device/third-party are likely to be significantly more prohibitive.
  • Enabled tiering/network management allows network providers to manage networks for higher quality experience: Strict network neutrality could limit the ability of network providers to manage the traffic on their networks, especially in times of peak demand, likely reducing quality of service for all parties. It also limits the ability of network providers to prioritize traffic that should be prioritized (e.g., emergency/government communications, traffic which businesses have network providers manage for them, or even packets related to keeping tabs on the network service quality or devices on the network) and potentially do basic things like block spam/DDoS attacks or manage the quality of internet experience by traffic type (the demands for web page viewing are very different from the demands of internet video and internet telephony).

I won’t claim to be the policy expert that knows what the right solution or sets of costs and tradeoffs are, but hopefully this gives a sense of some of the nuances behind the network neutrality debate.

(Image credit) (Chart credit – Qualcomm whitepaper)

3 Comments

Apple to buy Intrinsity?

I recently read an interesting rumor on the tech blog Ars Technica that Apple has acquired the small processor company Intrinsity – whose website is, as of this writing, down.

image Even among self-professed gadget fans reading the popular tech press, very few are aware of the nuances of the chip technology which powers their favorite devices. So, first question: what does Intrinsity do, and why would Apple be interested in them? Intrinsity is a chip design company known for its expertise in making existing processor designs faster and more efficient. They’ve been retained in the past by ATI (the graphics chip company which is now part of AMD) to enhance its GPU offering, by Applied Micro (formerly AMCC) to help speed up its embedded processors, and more recently by Samsung (and presumably Apple) to speed up the ARM processor technology which powers the applications on the iPhone and the iPad.

Second question: would Apple do it? Questions about Apple are very difficult to answer – in part because of the extreme amount of hype and rumor surrounding the company, but also because it tends to “think different” about business strategy. Normally, my intuition would say that this deal is unlikely to make much sense. I’ll admit I haven’t looked at the deal terms or Intrinsity’s finances, but my guess is Intrinsity has a flourishing business with other chip companies which would probably be jeopardized by an Apple acquisition (especially now that Apple is itself sort of a chip design company and would probably want to de-emphasize the rest of Intrinsity’s activities). An acquisition like this could also be risky, as Apple’s core strengths lie in building and designing a small number of well-integrated hardware/software products. While most analysts suspect that Apple contributed a huge amount to the design of the Samsung chip that’s currently in the iPhone, Apple is unlikely to have a culture or set of corporate processes that match Intrinsity’s, and I suspect nursing a chip technology group while also pushing the edge on product design and innovation at some point just becomes too difficult to do (which may partially explain the post-acquisition exodus of engineers from PA Semi, Apple’s other chip company purchase).

Of course, Apple is not your ordinary technology company, and there are definitely major benefits Apple could gain from this. The most obvious is that Apple can avoid paying licensing, royalty, and service fees to Intrinsity (which can be quite large if Apple continues to ship as many products as it does now) if it brings them in-house. Strategically, if Intrinsity is truly as good as they claim (I’ve read my fair share of rumors that the A4 processor in the iPad was a joint development effort from Samsung, Apple, and Intrinsity), then Apple may also want to take this valuable chess piece off the table for its competitors. It’s no secret that major chip vendors like Qualcomm, NVIDIA, Texas Instruments, and Intel see the mobile chip space as the next hot growth area – Apple could perceive leaving Intrinsity out there as a major risk to maintaining its own device performance against the very impressive Snapdragon, Tegra, and OMAP (and potentially Intel Atom) product lines. image This is a similar move to what Apple did with its equity stake in Imagination Technologies, the company that licenses the graphics technology that powers the iPhone, the Palm Pre, and Motorola’s Droid. It’s widely believed that, had Imagination been willing (and had Intel not also increased its stake in Imagination), Imagination would currently be an Apple division – highlighting Apple’s preference to bring technology in-house rather than license technology which could remain available to its competitors.

image

So, in the end, does an Apple-Intrinsity deal make sense? Or is this just a rumor to be dismissed? It’s hard to say for sure, but if Intrinsity has key talent or intellectual property that Apple needs for its new devices, then Apple’s extremely high volume (and thus large payments to Intrinsity) could be the basis for fairly sizable financial benefits from such a deal. More importantly, on a strategic level, Apple’s need to maintain a performance lead over new Android (and Symbian and Windows Phone 7) devices could be all the justification needed for swallowing this attractive asset (note: AnandTech’s preliminary review shows the iPad outperforming Google’s Nexus One on web rendering speed – although how much of this is due to the iPad’s bigger battery is up for debate). Without knowing how profitable Intrinsity is, how much of its business comes from Apple/Samsung, and what sort of price Apple could negotiate, nothing is certain – but there is definitely a lot of reason to do the deal.

(logo credit) (logo credit) (smartphone cartoon credit)

One Comment

Russia dreams of Silicon Valley sheep

image The Economist has an interesting article on the Kremlin’s latest push to modernize Russia’s economy and kick-start a wave of innovation which would supposedly lead to a “Russia with nuclear-powered spaceships and supercomputers.”

Far-fetched as this premise sounded, the article raised many thought-provoking questions on whether (and how) Russia could hope to build an innovation hub similar to the US’s Silicon Valley. One tidbit I found very interesting was that this isn’t the first time the Kremlin has tried something like this. Apparently, the Soviet Union attempted something similar in the past, with very interesting political ramifications:

In the 1930s leading Soviet engineers arrested by Stalin laboured in special prison laboratories within the gulag. After the war, when Stalin required an atomic bomb, a special secret town was established where nuclear physicists lived in relative comfort, but still surrounded by barbed wire. Subsequently hundreds of secret construction bureaus, research institutes and scientific towns were set up across the Soviet Union to serve the military-industrial complex. They also spawned a technical intelligentsia. In the 1980s it was this class of educated people—permitted more freedom and better food than the rest of the country, but still poorly paid and not allowed to go abroad—that became the support base of perestroika [former Soviet Leader Mikhail Gorbachev’s attempt to liberalize/open up the Soviet Union which ultimately resulted in its collapse].

Russia’s rulers, however, seem keen on breaking this link between political openness/democracy and innovation:

Yet the experience of Mr. Gorbachev’s perestroika—which started with talk of technological renewal but ended in the collapse of the Soviet system—has persuaded the Kremlin to define modernisation strictly within technological boundaries. Hence Mr Medvedev’s warning not to rush political reforms. His supporters argue that only authoritarian government is capable of bringing the country into the 21st century. “Consolidated state power is the only instrument of modernisation in Russia. And, let me assure you, it is the only one possible,” said Vladislav Surkov [the Kremlin’s “chief ideologist” who put forth the current plan]

Is Surkov right about the lack of importance of democracy and political freedom? It’s hard to say for sure, but the success of the Asian tigers (esp. Korea, Taiwan, Singapore, and China) in this arena suggests that, at first glance, Surkov is right. Innovation and rapid economic growth do not require democracy so much as:

  1. effective and (relatively) un-corrupt governments
  2. free market systems which allow for consumer/business choice and property rights protection
  3. government investment in “innovation hubs” (e.g., Silicon Valley) where companies/universities/individuals readily share insights and collaborate

Of course, the flip side of the argument is that it’s pretty rare for (1) and (2) to exist without democracy and without at least basic political systems in place around due process and respect for individual rights.

image A successful attempt at (3) is difficult regardless of the type of government (think of the countless failed attempts by cities, states, and countries to replicate Silicon Valley), but it is especially difficult for “command regimes” trying to encourage innovation. It’s much simpler for an authoritarian government to find ways to double steel production (a la the Soviet Union’s Five-Year Plans) than to encourage the trial & error, open exchange of ideas, and “disorganized” development necessary to drive innovative technology disruptions (which, by definition, can’t be “commanded”).

I’ve even heard it theorized that one reason the Soviet military elite allowed the perestroika which helped lead to its eventual collapse was their recognition that authoritarian regimes were not effective at encouraging the sort of innovation needed to build the computer technology which was giving (and still gives) the US its military advantage over the rest of the world.

But the harshest (and snarkiest) indictment of Russia’s short-sighted strategy here comes at the end of the Economist piece:

Mr Surkov is quite right when he argues that democracy would not stimulate technical innovation. The reason for this, however, is that under democracy a country with a declining population, a frighteningly high rate of birth defects, crumbling infrastructure and deteriorating schools might find a better use for taxpayers’ money than pouring it into Mr. Surkov’s Silicon Valley dreams.

Russia’s economy will likely grow quickly, regardless of the success of the Kremlin’s latest plans, by virtue of its resourceful population and economic convergence, but I suspect its future in terms of quality of life and innovation depends on whether it ever gets around to its much-needed political reforms.

(Image credit) (Image credit)

One Comment

Keep your enemies closer

One of the most interesting things about technology strategy is that the lines of competition between different businesses are always blurry. Don’t believe me? Ask yourself this: would anyone 10 years ago have predicted that:

I’m betting not too many people saw these coming. Well, a short while ago, the New York Times Tech Blog decided to chart some of this out, highlighting how the boundaries between some of the big tech giants out there (Google, Microsoft, Apple, and Yahoo) are blurring:

image

It’s an oversimplification of the complexity and the economics of each of these business moves, but it’s still a very useful depiction of how tech companies wage war: they keep their enemies so close that they eventually imitate their business models.

(Chart credit)

3 Comments

Why smartphones are a big deal (Part 2)

[This is a continuation of my post on Why Smartphones are a Big Deal (Part 1)]

Last time, I laid out four reasons why smartphones are a lot more than just phones for rich snobs:

  1. It’s the software, stupid
  2. Look ma, no <insert other device here>
  3. Putting the carriers in their place
  4. Contextuality

My last post focused on #1 and #2, mainly that (#1) software opens up a whole new world of money and possibility for smartphones that “regular” phones can’t replicate and (#2) that the combination of smartphones being able to do the things that many other devices can and phones being something that you carry around with you all day spells bad news for GPS makers, MP3 player companies, digital camera companies, and a lot of other device categories.

This time, I’ll focus on #3 and #4.

III. Putting the carriers in their place

Throughout most of the history of the phone industry, the carriers were the dominant power. Sure, enormous phone companies like Nokia, Samsung, and Motorola had some clout, but at the end of the day, especially in the US, everybody felt the crushing influence of the major wireless carriers.

In the US, the carriers regulated access to phones with subsidies. They controlled which functions were allowed. They controlled how many texts and phone calls you were able to make. When they did let you access the internet, they exerted strong influence over which websites you had access to and which ringtones/wallpapers/music you could download. In short, they managed the business to minimize costs and risks, and they could do so because their government-granted monopolies (over the right to use wireless spectrum) and already-built networks made it impossible for a new entrant to break into the market.

image But this sorry state of affairs has already started to change with the advent of the smartphone. RIM’s Blackberry had started to shift the balance of power, but Apple’s iPhone really shook things up – precisely because users started demanding more than just a wireless service plan. They wanted a particular operating system with a particular internet experience and a particular set of applications – and, oh, it’s on AT&T? That’s not important, tell me more about the Apple part of it!

What’s more, the iPhone’s commercial success accelerated the change in consumer appetites. Smartphone users were now picking a wireless service provider not because of coverage or the cost of service or the special carrier-branded applications – all of that was now secondary to the availability of the phone they wanted and what sort of applications and internet experience they could get on that phone. And much to the carriers’ dismay, the wireless carrier was becoming less like the gatekeeper who could charge crazy prices because he controlled the keys to the walled garden and more like a dumb pipe that people used to connect their iPhones to the web.

Now, it would be an exaggeration to say that the carriers will necessarily turn into the “dumb pipes” that today’s internet service providers are (remember when everyone in the US used AOL?), as these large carriers are still largely immune to competitors. But there are signs that the carriers are adapting to their new role. The once ultra-closed Verizon now allows Palm WebOS and Google Android devices to roam free on its network as a consequence of AT&T and T-Mobile offering devices from Apple and Google’s partners, respectively, and has even agreed to allow VOIP applications like Skype access to its network – something which jeopardizes its core voice revenue stream.

As for the carriers, as they begin to see their influence slip over basic phone experience considerations, they will likely shift their focus to finding ways to better monetize all the traffic that is pouring through their networks. Whether this means finding a way to get a cut of the ad/virtual good/eCommerce revenue that’s flowing through or shifting how they charge for network access away from unlimited/“all you can eat” plans is unclear, but it will be interesting to see how this ecosystem evolves.

IV. Contextuality

There is no better price than the amazingly low price of free. And, in my humble opinion, it is that amazingly low price of free which has enabled web services to have such a high rate of adoption. Ask yourself, would services like Facebook and Google have grown nearly as fast without being free to use?

How does one provide compelling value to users for free? Before the age of the internet, the answer to that age-old question was simple: you either got a nice government subsidy, or you just didn’t provide the service. Thankfully, the advent of the internet allowed for an entirely new business model: providing services for free and still making a decent profit by using ads. While over-hyping of this business model led to the dot-com crash in 2001, as countless websites found it pretty difficult to monetize their sites purely with ads, services like Google survived because they found they could actually increase the value of the advertising on their pages – not only because they had a ton of traffic, but because they could use the content on the page to find ads which visitors had a significantly higher probability of caring about.

image The idea that context could be used to increase ad conversion rates (the percent of people who see an ad and actually end up buying) has spawned a whole new world of web startups and technologies which aim to find new ways to mine context to provide better ad targeting. Facebook is one such example of the use of social context (who your friends are, what your interests are, what your friends’ interests are) to serve more targeted ads.
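To see why better conversion rates matter so much, consider some back-of-the-envelope arithmetic. The conversion rates and sale value below are made-up numbers, purely for illustration: revenue per impression is roughly the conversion rate times the value of a sale, so even a modest lift in conversion multiplies what a page of traffic is worth.

```python
def revenue_per_thousand_impressions(conversion_rate, value_per_sale):
    """Expected advertiser revenue from 1,000 ad impressions."""
    return 1000 * conversion_rate * value_per_sale

# Hypothetical numbers: a $50 item, with context lifting conversion from 0.1% to 0.4%
untargeted = revenue_per_thousand_impressions(0.001, 50.0)   # -> 50.0
contextual = revenue_per_thousand_impressions(0.004, 50.0)   # -> 200.0
print(f"untargeted: ${untargeted:.2f}, contextual: ${contextual:.2f}")
```

A 4x lift like the one assumed here is invented for the example, but the multiplicative structure is why advertisers pay a premium for context.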

So, where do smartphones fit in? There are two ways in which smartphones completely change the context-to-advertising dynamic:

  • Location-based services: Your phone is a device which not only has a processor that can run software, but also likely has GPS built in, and it is something you carry on your person at all hours of the day. What this means is that the phone not only knows what apps/websites you’re using, it also knows where you are and whether you’re in a vehicle (based on how fast you’re moving) when you’re using them. If that doesn’t let a merchant figure out a way to send you a very relevant ad, I don’t know what will. The Yowza iPhone application is an example of how this might shake out in the future, letting you search for mobile coupons for local stores right on your phone.
  • image Augmented reality: In the same way that the GPS lets mobile applications do location-based services, the camera, compass, and GPS in a mobile phone lets mobile applications do something called augmented reality. The concept behind augmented reality (AR) is that, in the real world, you and I are only limited by what our five senses can perceive. If I see an ad for a book, I can only perceive what is on the advertisement. I don’t necessarily know much about how much it costs on Amazon.com or what my friends on Facebook have said about it. Of course, with a mobile phone, I could look up those things on the internet, but AR takes this a step further. Instead of merely looking something up on the internet, AR will actually overlay content and information on top of what you are seeing on your phone screen. One example of this is the ShopSavvy application for Android which allows you to scan product barcodes to find product review information and even information on pricing from online and other local stores! Google has taken this a step further with Google Goggles which can recognize pictures of landmarks, books, and even bottles of wine! For an advertiser or a store, the ability to embed additional content through AR technology is the ultimate in providing context but only to those people who want it. Forget finding the right balance between putting too much or too little information on an ad, use AR so that only the people who are interested will get the extra information.
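As a sketch of how the location piece might work – entirely hypothetical, to be clear: the thresholds, store data, and suppression logic below are my own invention, not how Yowza or any real ad platform actually operates – a service could estimate speed from two GPS fixes and hold back storefront coupons when the user appears to be in a moving vehicle:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_coupon(prev_fix, curr_fix, seconds_between, stores, walking_kmh=6.0):
    """Return the nearest store's coupon, or None if the user seems to be driving."""
    speed = haversine_km(*prev_fix, *curr_fix) / (seconds_between / 3600.0)
    if speed > walking_kmh:
        return None  # probably in a vehicle; a storefront coupon is less relevant
    nearest = min(stores, key=lambda s: haversine_km(*curr_fix, s["lat"], s["lon"]))
    return nearest["coupon"]
```

The two signals used here – position and inferred mode of travel – are exactly the kind of context a desktop browser never had.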

The result of all four of these factors? If you assume that a phone is only a calling device, you’re flat out wrong. And if you think a phone is just another device for accessing the internet and playing goofy little games, you’re also wrong. The smartphone will, in this blogger’s humble opinion, dramatically change the technology landscape, and the smart money is on the companies and startups and venture capitalists who recognize that and act on it.

(Image credit) (Image credit) (Image credit)

3 Comments

Why smartphones are a big deal (Part 1)

image A cab driver the other day went off on me with a rant about how new smartphone users were all smug, arrogant gadget snobs for using phones that did more than just make phone calls. “Why you gotta need more than just the phone?”, he asked.

While he was probably right on the money with the “smug”, “arrogant”, and “snob” part of the description of smartphone users (at least it accurately describes yours truly), I do think he’s ignoring a lot of the important changes which the smartphone revolution has made in the technology industry and, consequently, why so many of the industry’s venture capitalists and technology companies are investing so heavily in this direction. This post will be the first of two posts looking at what I think are the four big impacts of smartphones like the Blackberry and the iPhone on the broader technology landscape:

  1. It’s the software, stupid
  2. Look ma, no <insert other device here>
  3. Putting the carriers in their place
  4. Contextuality

I. It’s the software, stupid!

You can find possibly the greatest impact of the smartphone revolution in the very definition of smartphone: phones which can run rich operating systems and actual applications. As my belligerent cab-driver pointed out, the cellular phone revolution was originally about being able to talk to other people on the go. People bought phones based on network coverage, call quality, the weight of a phone, and other concerns primarily motivated by call usability.

Smartphones, however, change that. Instead of just making phone calls, they also do plenty of other things. While a lot of consumers focus their attention on how their phones now have touchscreens, built-in cameras, GPS, and motion-sensors, the magic change that I see is the ability to actually run programs.

Why do I say this software thing is more significant than the other features which have made their way onto the phone? There are a number of reasons, but the big idea is that the ability to run software makes smartphones look like mobile computers. We have seen this play out in a number of ways:

  • The potential uses for a mobile phone have exploded overnight. Whereas previously they were pretty much limited to making phone calls, sending text messages/emails, playing music, and taking pictures, now they can be used to play games, look up information, and even help doctors treat and diagnose patients. In the same way that software extends a computer’s usefulness beyond what a manufacturer like Dell or HP or Apple has built into the hardware, software opens up new possibilities for mobile phones in ways which we are only beginning to see.
  • Phones can now be “updated”. Before, phones were simply replaced when they became outdated. Now, some users expect that a phone that they buy will be maintained even after new models are released. Case in point: Users threw a fit when Samsung decided not to allow users to update their Samsung Galaxy’s operating system to a new version of the Android operating system. Can you imagine 10 years ago users getting up in arms if Samsung didn’t ship a new 2 MP mini-camera to anyone who owned an earlier version of the phone which only had a 1 MP camera?
  • An entire new software industry has emerged with its own standards and idiosyncrasies. About four decades ago, the rise of the computer created a brand new industry almost out of thin air. After all, think of all the wealth and enabled productivity that companies like Oracle, Microsoft, and Adobe have created over the past thirty years. There are early signs that a similar revolution is happening because of the rise of the smartphone. Entire fortunes have been created “out of thin air” as enterprising individuals and companies move to capture the potential software profits from creating software for the legions of iPhones and Android phones out there. What remains to be seen is whether or not the mobile software industry will end up looking more like the PC software industry, or whether or not the new operating systems and screen sizes and technologies will create something that looks more like a distant cousin of the first software revolution.

II. Look ma, no <insert other device here>

image One of the most amazing consequences of Moore’s Law is that devices can quickly take on a heckuva lot more functionality than they used to. The smartphone is a perfect example of this Swiss-army-knife mentality. The typical high-end smartphone today can:

  • take pictures
  • use GPS
  • play movies
  • play songs
  • read articles/books
  • find what direction it’s being pointed in
  • sense motion
  • record sounds
  • run software

… not to mention receive and make phone calls and texts like a phone.

But, unlike cameras, GPS devices, portable media players, eReaders, compasses, Wii-motes, tape recorders, and computers, the phone is something you are likely to keep with you all day long. And, if you have a smartphone which can double as a camera, GPS, portable media player, eReader, compass, Wii-mote, tape recorder, and computer all at once – why would you hold on to those other devices?

That is, of course, a dramatic oversimplification. After all, I have yet to see a phone which can match a dedicated camera’s image quality or a computer’s speed, screen size, and range of software, so there are definitely reasons you’d pick one of these devices over a smartphone. The point, however, isn’t that smartphones will make these other devices irrelevant, it is that they will disrupt these markets in exactly the way that Clayton Christensen described in his book The Innovator’s Dilemma, making business a whole lot harder for companies who are heavily invested in these other device categories. And make no mistake: we’re already seeing this happen as GPS companies are seeing lower prices and demand as smartphones take on more and more sophisticated functionality (heck, GPS makers like Garmin are even trying to get into the mobile phone business!). I wouldn’t be surprised if we soon see similar declines in the market growth rates and profitability for all sorts of other devices.

(to be continued in Part 2)

(Image credit) (Image credit)

4 Comments

Don’t count your markets before they hatch

I was reading a very insightful analysis of the supercomputing industry over the past decade on scalability.org, when I stumbled on a chart which illustrates not only a pattern I see very often, but also a reason why you should always sandbag your forecasts if you’re betting on a new technology: your forecasts are almost always too optimistic.

Take Intel’s and HP’s huge gamble to push Itanium as the processor technology which would eventually replace all the other major processor technologies (i.e., SPARC, PowerPC, even Intel’s very own x86). Countless technology analysts and Intel/HP market researchers said Itanium would become the only game in town in computer processors – and with good reason: the technology Itanium represented would, in theory, have completely changed the processor game.

Yet, if we look at the progression of Itanium sales versus Itanium forecasts, we see a very different picture:

image

If anything, Intel/HP have now distanced themselves from Itanium – preferring to ship products based on Intel’s homegrown x86 technology rather than the technology that analysts had expected to storm the market.

Kind of embarrassing, isn’t it? The point isn’t to call out HP and Intel’s folly here (however much they deserve it), but to point out that new technologies are notoriously hard to predict, and to whatever extent is possible, companies everywhere should (a) never bet the farm on them and (b) watch what forecasts they’re making. They may come back to haunt them.

(Image credit)

Leave a Comment

Does an iTablet exist?

If you follow the technology industry gossip, you’ll have heard the rumors that Apple will release a next-generation tablet PC at the end of January (kind of like Moses bringing tablets with the word of God?)

Industry gossip, especially gossip about Apple, is notoriously bad as the many analysts out there oftentimes fail to understand Apple’s business and misread the things that they hear.

However, given the very precise supply chain reports out there (as well as the announcement by the #2 exec at European telecom firm Orange that Orange would be a partner with Apple), I am leaning towards believing this device exists.

Granted this is all speculation (and there’s a significant chance the industry is getting excited over nothing), and my good (and very intelligent) buddy Eric disagrees with me completely (for good reasons), but my thinking on the subject stems from three things:

  • A rapidly growing device category exists – When I first heard of the netbook category, I scoffed. After all, what is the difference between a netbook and a very cheap, underpowered notebook or an extremely powerful smartphone? However, as time went by, I was forced to eat my own words. There seemed to be an enormous appetite for such a device (as judged by the rapid growth rate of the netbook category) which didn’t seem to cut too deeply into notebook sales at all. Intel has even gone on record as saying that netbooks are rarely, if ever, bought to replace notebooks! Whereas mobile phones are likely to replace portable media players (like iPods), it seemed that people were drawn to the idea of something in between a workhorse laptop and a smartphone to be used primarily to access the internet. This is also borne out by the booming growth in eReaders like Amazon’s Kindle, which provide interfaces, like special touchscreens and displays, tailored for casual internet browsing and reading. If there is a place for Apple to continue its rapid growth trajectory, a device category with specific technical needs and potential for rapid growth like this in-between-smartphone-and-notebook eReader/tablet/netbook device would be it.
  • Clear room for user interface innovation – The current generations of netbooks and eReaders could use some significant improvement. Most netbooks don’t (yet) support a touchscreen interface and rarely sport a user interface that really wows. eReaders today predominantly depend on current generations of black-and-white-only e-Ink displays which suffer from a very slow page-change rate. The potential for someone with the hardware and UI design chops that Apple has to implement a new generation of display technology and provide a much needed refresh in the control scheme for these devices is enormous, and it fits with Apple’s history of changing how the industry and the consumer thought of products like the smartphone and portable media player (iPhone and iPod).
  • A vertical model fits – Apple’s standard strategy is to build strong end-to-end solutions that encompass hardware, software, and services in a neatly packaged product. This helps Apple maintain the quality of product experience, as well as extract extra profit by creating  a powerful “walled garden” which prevents other companies from seizing control of Apple’s key features and sources of revenue. Take the iPhone for instance – Apple has built the phone, designed the operating system, created an application and music store, and negotiated the proper service contract with a wireless carrier. It doesn’t get more “all in one/vertical” than that! Similarly, if a tablet emerges primarily as a means to get on the internet and read books/publications/blogs, there are a number of clear ways for Apple to “go vertical” – including adding an eBook store to iTunes, charging publishers a fee to distribute their products to “iTablet” owners, building a subscription model for content access, etc. This wouldn’t be an easy battle, but given Apple’s success with mobile phone applications and digital music, there’s plenty of precedent for seeing Apple expand its “content empire” to other forms of digital content.

Of course, there are a number of good reasons why this prediction might not wind up being true:

  • Apple doesn’t believe that the market opportunity is large enough. Is the growth in netbooks sustainable? Or just a product of the global recession pushing people to buy very cheap electronics? If Apple suspects it’s the latter, that would be a great reason not to distract management from more important tasks like maintaining or increasing its desktop/notebook market share or defending the iPhone’s market share against a growing Android threat (and potentially resurgent Blackberry and Windows Mobile 7 threats).
  • Apple fears cannibalization. While Intel might view netbooks as a chance to sell more chips without interfering with its higher-end chips, Apple may fear that Apple notebook users, many of whom don’t need all the processing power that’s in their machines as they merely use them to surf the internet or watch movies/listen to music, will simply “trade down” and be tempted completely away from buying Apple’s higher end notebook models, jeopardizing Apple’s long-term growth and profitability.
  • Technological solutions to current problems aren’t mature enough. While the iPhone pushed the mobile phone industry, overnight, to adopt touchscreens, what is oftentimes not understood is that the touchscreen technology used by Apple had been around for quite some time (and was probably approved for use by Apple because it was mature). Many of the new display technologies to replace and/or improve on e-Ink are much less mature. If Apple has studied the problems facing current generations of tablets/netbooks/eReaders and concluded that compelling solutions to them are still a year or two away, then I believe that Apple would wait until they did come out to really storm the market.
  • Apple doesn’t think it can compete with a successful vertical model. If Apple felt it couldn’t use its standard playbook of providing services/content along with software and hardware, then that would be a big reason for Apple to not consider this move. This could be because of the presence of large book-sellers like Barnes & Noble and Amazon in the eBook space (who do not allow non-Amazon/non-Barnes & Noble approved devices to access their digital libraries) or because of a powerful third party like Google which is already pushing one universal access platform for all eBooks. In my mind, this would be one of the biggest, if not the biggest, reason for Apple to think twice about entering the tablet/eReader space.

I’m glad my personal financial well-being doesn’t depend on me making the right call on this one :-), but if push comes to shove, given the pretty-specific-supply-chain checks and the fact that I believe the threat of cannibalization and small market opportunity to be unlikely, I believe Apple will make this plunge, and I’m eager to see how it will shape this new emerging device category.

(Image credit) (Image credit)

What it will take to get me to switch to Chrome

It’s no secret that I’m a big fan of Firefox. But, given Firefox’s slow start-time and Google’s Chrome browser’s recently announced support for extensions, I did a recent re-evaluation of my browser choice. Although I’ve chosen to stick with Firefox, the comparison of the two browsers is now much closer than it’s ever been, to the point where I think, if the pace of Chrome development continues, I could actually switch within a few months. What I would need are:

  • Full browser synch – Mozilla Weave is probably the most important extension in my Firefox install. Weave provides a secure and fast method for me to have the same set of bookmarks, browser history, passwords, and preferences between every copy of Firefox that I run (i.e. on my work computer vs. on my personal computer). This has made it easier for me to not only continue research between browser sessions, but also to quickly get up to productivity on any computer with a working Firefox installation. While Chrome now supports bookmark synchronization, the lack of a history or a secure password synch makes it harder for me to have the same degree of flexibility that I have with Firefox. What’s ironic, though, is that a few years ago, I was very reliant on Google’s Browser Synch Firefox extension to do the same thing, and found Firefox to be a lot less flexible when Google stopped updating it. But, this historical precedent means I’m relatively confident it should be easy for Google to introduce a similar feature for Chrome.
  • A Firebug-like web development tool – Chrome has a lot of useful web development tools but, up until now, I have yet to see a platform built into Chrome (or any other browser) which has the same level of sophistication and feature set as Firefox’s Firebug extension. For most people, this isn’t that relevant, but as someone who’s done a fair amount of web development in the past and expects to continue to do so in the future, the lack of something as versatile and easy-to-use as Firebug is a big downside to me. With the opening up of Chrome to extension developers, I’m hopeful that it will only be a matter of time until something comparable to Firebug is developed for Chrome.
  • Extensions to replicate the Greasemonkey hacks I use – Another Firefox extension which I’ve come to rely heavily on is Greasemonkey. It’s a bit difficult to explain how Greasemonkey works to someone who’s never used it, but what it basically does is allow you to install little scripts which can add extra functions to your Firefox browsing experience. These scripts can be found on repositories like Userscripts. Some scripts I’ve become attached to include Google Image Relinker (which lets me go straight to an image from Google Images and skip the intermediary site), LongURL Mobile Expander (which lets me see where shortened URLs, like those from TinyURL or Bit.ly, are actually pointing), and Friendfeed Force Word Wrap (which forces word wrap on improperly formatted Friendfeed entries). Because most of these are pretty minor browser modifications, I am hopeful that these functions will emerge when Chrome’s extension developer community gets large enough.
  • Advanced web standard support – I think it’s pretty odd that, despite being a major proponent of the HTML5 standard and new rich browser technologies like WebGL and Native Client, Chrome has yet to truly distance itself from its browser peers in terms of support for these new standards. True, the technologies themselves are still under development and very few websites exist which support them, but a differentiated level of support for these new technologies would give me a whole set of reasons to pick Chrome over its browser peers, especially given the direction I expect the rich web to move.
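To make the Greasemonkey bullet above concrete, here’s roughly what one of those userscripts looks like – a minimal sketch in the spirit of Google Image Relinker. The `@include` pattern and the `imgurl` parameter name are my assumptions about how Google Images structured its result links at the time, not something pulled from the actual script:

```javascript
// ==UserScript==
// @name     Image Relinker (sketch)
// @include  http://images.google.com/imgres*
// ==/UserScript==

// Pull the direct image URL out of a Google Images result link.
// The "imgurl" query parameter name is an assumption about the markup.
function directImageUrl(href) {
  var match = /[?&]imgurl=([^&]+)/.exec(href);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the browser, rewrite every result link to skip the intermediary page.
if (typeof document !== "undefined") {
  var links = document.getElementsByTagName("a");
  for (var i = 0; i < links.length; i++) {
    var direct = directImageUrl(links[i].href);
    if (direct) links[i].href = direct;
  }
}
```

Greasemonkey runs a script like this automatically on every page matching its `@include` rules, which is what makes these little per-site tweaks so cheap to build and share.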

Now, in the off chance someone from Mozilla is reading this, what could Mozilla do to keep me firmly in the Firefox camp?

  • Faster release cycle – It’s difficult to maintain a constant technological edge when your software is open source, but a faster release cycle will help prolong the advantages that the Mozilla ecosystem currently has, like a strong extension and theme developer community, a large user base, and a rich set of experimental projects (like Weave and JetPack and Ubiquity).
  • Faster startup time – I appreciate that my startup speed issues with Firefox may be entirely due to the fact that I have hefty extensions like Greasemonkey and Weave installed, but given that my current build of Chrome has some 16 extensions (including the Chrome version of AdBlock and Google Gears) and still loads much faster than Firefox, I believe that significant opportunity for memory management and start-time improvement still exists within the Firefox code base.
  • Better web app integration – The Chrome browser was clearly designed to run web applications. It makes it easy to load individual applications in their own windows and to set up web applications as default handlers for specific file types and events. While Firefox has come a long way in terms of its advanced web technology support, I don’t feel that enough attention has been dedicated to making the web application experience nearly as seamless. Whether this means an overhaul of the Prism project or a new way of handling browser events, I’m not sure, but this is a direction where the gap between Chrome and Firefox can and should be closed.
  • Firefox everywhere – I have been painfully disappointed in the slow roll-out of the Fennec mobile Firefox project. In a world where Safari, Opera, and Internet Explorer all have fully functioning mobile browsers, there’s no reason Firefox should be behind in this arena. Fennec also makes the Firefox value proposition more compelling with Weave as a means of synchronizing settings and bookmarks between the two.
  • More progress on experimental UI – I have been an enormous fan of the innovations in browser use which I consider to have been pioneered by Mozilla – tabbed browsing, extensions, browser skinning, the “awesome bar”, etc. One way for Mozilla to stay ahead of the curve, even if it is only “on par” with its peers along other dimensions, is to continue to push on progress in the Mozilla Labs research projects like Ubiquity and JetPack, or a smarter way to integrate Yahoo! Pipes, or something akin to Cooliris’s technology (to throw out a few random ideas).
  • Advanced web technology support – Ditto as with the Google Chrome comment above.

With all of this said, I’m actually fairly happy that there are so many aggressive development efforts underway by the browser makers of our era. It looks like the future of the web will be an interesting place!

(Image credit) (Image credit – Greasemonkey) (Image credit – Fennec)

What is with Microsoft’s consumer electronics strategy?

Regardless of how you feel about Microsoft’s products, you have to appreciate the brilliance of their strategic “playbook”:

  1. Use the fact that Microsoft’s operating system/productivity software is used by almost everyone to identify key customer/partner needs
  2. Build a product which is usually only a second/third-best follower product but make sure it’s tied back to Microsoft’s products
  3. Take advantage of the time and market share that Microsoft’s channel influence, developer community, and product integration buys to invest in the new product with Microsoft’s massive budget until it achieves leadership
  4. If steps 1-3 fail to give Microsoft a dominant position, either exit (because the market is no longer important) or buy out a competitor
  5. Repeat

While the quality of Microsoft’s execution of each step can be called into question, I’d be hard pressed to find a better approach than this one, and I’m sure much of their success can be attributed to finding good ways to repeatedly follow this formula.

It’s for that reason that I’m completely bewildered by Microsoft’s consumer electronics business strategy. Instead of finding good ways to integrate the Zune, XBox, and Windows Mobile franchises together or with the Microsoft operating system “mothership” – the way Microsoft integrated its enterprise software with Office and Internet Explorer with Windows – these three businesses largely stand apart from Microsoft’s home field (PC software) and even from each other.

This is problematic for two big reasons. First, because non-PC devices are outside of Microsoft’s usual playground, it’s not a surprise that Microsoft finds it difficult to expand into new territory. For Microsoft to succeed here, it needs to pull out all the stops and it’s shocking to me that a company with a stake in the ground in four key device areas (PCs, mobile phones, game consoles, and portable media players) would choose not to use one of the few advantages it has over its competitors.

The second and most obvious reason (to consumers at least) is that Apple has not made this mistake. Apple’s iPhone and iPod Touch product lines are clear evolutions of their popular iPod MP3 players which integrate well with Apple’s iTunes computer software and iTunes online store. The entire Apple line-up, although each product is a unique entity, has a similar look and feel. The Safari browser that powers the Apple computer internet experience is, basically, the same one that powers the iPhone and iPod Touch. Similarly, the same online store and software (iTunes) which lets iPods load themselves with music lets iPod Touches/iPhones load themselves with applications.

That neat little integrated package not only makes it easier for Apple consumers to use a product, but the coherent experience across the different devices gives customers even more of a reason to use and/or buy other Apple products.

Contrast that approach with Microsoft’s. Not only are the user interfaces and product designs for the Zune, XBox, and Windows Mobile completely different from one another, they don’t play well together at all. Applications that run on one device (be it on the Zune HD, a Windows PC, an XBox, or Windows Mobile) are unlikely to be able to run on any other. While one might be able to forgive this if it was just PC applications which had trouble being “ported” to Microsoft’s other devices (after all, apps that run on an Apple computer don’t work on the iPhone and vice versa), the devices that one would expect this to work well with (i.e. the Zune HD and the XBox because they’re both billed as gaming platforms, or the Zune HD and Windows Mobile because they’re both portable products) don’t. Their application development processes don’t line up well. And, as far as I’m aware, the devices have completely separate application and content stores!

While recreating the Windows PC experience on three other devices is definitely overkill, I think, were I in Ballmer’s shoes, I would make a few simple recommendations which I think would dramatically benefit all of Microsoft’s product lines (and I promise they aren’t the standard Apple/Linux fanboy’s “build something prettier” or “go open source”):

  1. Centralize all application/content “marketplaces” – Apple is no internet genius. Yet, they figured out how to do this. I fail to see why Microsoft can’t do the same.
  2. Invest in building a common application runtime across all the devices – Nobody’s expecting a low-end Windows Mobile phone or a Zune HD to run Microsoft Excel, but to expect that little widgets or games should be able to work across all of Microsoft’s devices is not unreasonable. It would go a long way towards encouraging developers to develop for Microsoft’s new device platforms (if a program can run on just the Zune HD, there’s only so much revenue that a developer can take in, but if it can also run on the XBox and all Windows Mobile phones, then the revenue potential becomes much greater) and towards encouraging consumers to buy more Microsoft gear.
  3. Find better ways to link Windows to each device – This can be as simple as building something like iTunes to simplify device management and content streaming, but I have yet to meet anyone with a Microsoft device who hasn’t complained about how poorly the devices work with PCs.

(Image credit – Ballmer) (Image credit – Zune HD) (Image credit – Apple store)

Web 3.0

About a year ago, I met up with Teresa Wu (of My Mom is a Fob and My Dad is a Fob fame). It was our first “Tweetup”, a word used by social media types to refer to meet-ups between people who had only previously been friends over Twitter. It was a very geeky conversation (and what else would you expect from people who referred to their first face-to-face meeting as a Tweetup?), and at one point the conversation turned to discuss our respective visions of “Web 3.0”, which we loosely defined as what would come after the current also-loosely-defined “Web 2.0” wave of today’s social media websites.

On some level, trying to describe “Web 3.0” is as meaningless as applying the “Web 2.0” label to websites like Twitter and Facebook. It’s not an official title, and there are no set rules or standards on what makes something “Web 2.0”. But, the fact that there are certain shared characteristics between popular websites today versus their counterparts from only a few years ago gives the “Web 2.0” moniker some credible intellectual weight; and the fact that there will be significant investment in a new generation of web companies lends commercial weight to the task of coming up with a good conception of “Web 2.0” and a good vision for what comes after it (Web 3.0).

So, I thought I would get on my soapbox here and list out three drivers which I believe will define what “Web 3.0” will look like, and I’d love to hear if anyone else has any thoughts.

  1. A flight to quality as users start struggling with ways to organize and process all the information the “Web 2.0” revolution provided.
  2. The development of new web technologies/applications which can utilize the full power of the billions of internet-connected devices that will come online by 2015.
  3. Continued browser improvements that enable new and more compelling web applications.

I. Quality over quantity

In my mind, the most striking change in the Web has been the evolution of its primary role. Whereas “Web 1.0” was oriented around providing information to users, generally speaking, “Web 2.0” has been centered around user empowerment, both in terms of content creation (blogs, YouTube) and information sharing (social networks). Now, you no longer have to be the editor of the New York Times to have a voice – you can edit a Wikipedia page or upload a YouTube video or post up your thoughts on a blog. Similarly, you no longer have to be at the right cocktail parties to have a powerful network; you can find like-minded individuals over Twitter or LinkedIn or Facebook.

The result of this has been a massive explosion of the amount of information and content available for people and companies to use. While I believe this has generally been a good thing, it’s led to a situation where more and more users are being overwhelmed with information. As with the evolution of most markets, the first stage of the Web was simply about getting more – more information, more connections, more users, and more speed. This is all well and good when most companies/users are starving for information and connections, but as the demand for pure quantity dries up, the attention will eventually focus on quality.
While there will always be people trying to set up the next Facebook or the next Twitter (and a small percentage of them will be successful), I strongly believe the smart money will be on the folks who can take the flood of information now available and milk it for something more useful, whether it be for targeting ads or simply for helping people who feel they are “drinking from a fire hose”. There’s a reason Google and Facebook invest so many resources in building ads which are targeted at the user’s specific interests and needs. And, I feel that the next wave of Web startups will be more than simply tacking on “social” and “online” to an existing application. It will require developing applications that can actually process the wide array of information into manageable and useful chunks.

II. Mo’ devices, mo’ money

A big difference between how the internet was used 10 years ago and how it is used today is the rise in the number of devices which can access the internet. This has been led by the rise of new smartphones, gaming consoles, and set-top-boxes. Even cameras have been released with the ability to access the internet (as evidenced by Sony’s Cybershot G3). While those of us in the US think of the internet as mainly a computer-driven phenomenon, in much of the developing world and in places like Japan and Korea, computer access to the internet pales in comparison to access through mobile phones.

The result? Many of these interfaces to the internet are still somewhat clumsy, as they were built to mimic PC-type access on a device which is definitely not a PC. While work by folks at Apple and at Google (with the iPhone and Android browsers) and at shops like Opera (with Opera Mini) and Skyfire have smoothed some of the rougher edges, there is only so far you can go in mimicking a computer experience on a device with far less memory, processing power, and screen real estate than a larger PC.

This isn’t to say that I think the web browsing experience on an iPhone or some other smartphone is bad – I actually am incredibly impressed by how well the PC browsing experience transferred to the mobile phone and believe that web developers should not be forced to make completely separate web pages for separate devices. But, I do believe that the real potential of these new internet-ready devices lies in what makes those individual devices unique. Instead of more attempts to copy the desktop browsing experience, I’d like to see more websites use the iPhone’s GPS to give location-specific content, or use the accelerometer to control a web game. I want to see social networking sites use a gaming console owner’s latest scores or screenshots. I want to see cameras use the web to overlay the latest Flickr comments on the pictures you’ve taken or to do augmented reality. I want to see set-top boxes seamlessly mix television content with information from the web. To me, the true potential of having 15 billion internet-connected devices is not 15 billion PC-like devices, but 15 billion devices each with its own features and capabilities.
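As a sketch of the GPS idea above: a page can ask the browser for the user’s position through the W3C Geolocation API (which the iPhone’s browser exposes) and then request content for that spot. The `/nearby` endpoint here is a made-up placeholder, not a real service:

```javascript
// Build the query URL for location-specific content.
// The "/nearby" endpoint is hypothetical.
function nearbyUrl(lat, lon) {
  return "/nearby?lat=" + lat + "&lon=" + lon;
}

// In the browser, wire it up to the W3C Geolocation API.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function (position) {
    // A real page would fetch this URL and render the results.
    console.log(nearbyUrl(position.coords.latitude,
                          position.coords.longitude));
  });
}
```

The interesting part is how little code it takes – the hard work is on the server side, deciding what “content near here” should even mean.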

III. Browser power

While the Facebooks and Twitters of the world get (and deserve) a lot of credit for driving the Web 2.0 wave of innovation, a lot of that credit actually belongs to the web standards/browser development pioneers who made these innovations possible. Web applications like the office staples Gmail and Google Docs would have been impossible without new browser technologies like AJAX and more powerful Javascript engines like Chrome’s V8, Webkit’s JavascriptCore, and Mozilla’s SpiderMonkey. Applications like YouTube and Picnik and Photoshop.com depend greatly on Adobe’s Flash product working well with browsers, and so, in many ways, it is web browser technology that is the limiting factor in the development of new web applications.
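For readers who haven’t run into it, the AJAX pattern mentioned above boils down to making a request in the background and updating the page without a full reload – a minimal sketch, with `/messages` as a made-up example URL:

```javascript
// Minimal AJAX sketch: fetch JSON in the background, then hand it to a
// callback that updates the page. No full page reload required.
function fetchJson(url, onData) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onData(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}
```

An app like Gmail chains dozens of these small background requests together, which is why it can feel like a desktop program rather than a series of document loads.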

Is it any wonder, then, that Google, who views web applications as a big piece of its quest for web domination, created a free browser (Chrome) and two web-capable operating systems (ChromeOS and Android), and is investigating ways for web applications to access the full processing power of the computer (Native Client)? The result of Google’s pushes as well as the internet ecosystem’s efforts has been a steady improvement in web browser capability and a strong push on the new HTML5 standard.

So, what does this all mean for the shape of “Web 3.0”? It means that, over the next few years, we are going to see web applications dramatically improve in quality and functionality, making them more and more credible as disruptive innovations to the software industry. While it would be a mistake to interpret this trend, as some zealots do, as a sign that “web applications will replace all desktop software”, it does mean that we should expect to see a dramatic boost in the number and types of web applications, as well as the number of users.

Conclusion

I’ll admit – I kind of cheated. Instead of giving a single coherent vision of what the next wave of Web innovation will look like, I hedged my bets by outlining where I see major technology trends will take the industry. But, in the same way that “Web 2.0” wasn’t a monolithic entity (Facebook, WordPress, and Gmail have some commonalities, but you’d be hard pressed to say they’re just different variants of the same thing), I don’t think “Web 3.0” will be either. Or, maybe all the innovations will be mobile-phone-specific, context-sensitive, super powerful web applications…

(Image credit) (Image credit – PhD comics) (Image credit – mobile phone) (Image credit – Browser wars)

Look ma, no battery!

While Moore’s Law may make it harder to be a tech company, its steady march makes it great to be an energy-conscious consumer, as one of its effects is to drive down power consumption in generation after generation of product. Take the example of smartphones like Apple’s iPhone or Motorola’s new Droid: Moore’s Law has made it possible to take computing power that used to need a large battery or power source (like in a laptop or a desktop) and put it in a mobile device that has a tiny rechargeable battery!

Some folks at NEC and Soundpower took advantage of this in a very cool way (HT: TechOn via Anthony). By combining NEC’s specialty in extremely low-power chips with Soundpower’s expertise at creating vibration-based power generators, the two companies were able to produce a battery-less remote control powered only by users pressing the buttons!

It makes me wonder where else this type of extremely low-power circuitry and simple energy generation setup could be useful: sensor networks? watches? LEDs? personal-area-networks?

And at the end of the day, that’s one of the things that makes the technology industry so interesting (and challenging to understand). Every new device could enable/develop a whole new set of applications and uses.

(Image credit)

Abacus 2.0

I’ve blogged before about the power of Wolfram Alpha, Mathematica creator Wolfram Research’s powerful online “knowledge engine” which is capable of, among other things, balancing chemical equations, looking up star charts, doing math, and even looking up medical information.

But it’s good to know that, despite the sophisticated computational engine which underlies it, Wolfram Alpha hasn’t forgotten its “ancestor” the abacus, a tool used by many cultures before the dawn of the electronics age.


Like a respectful child, Wolfram Alpha pays respects to its ancestors with a feature which allows you to see how any number would be represented in abacus form. Case in point, I entered the search string “abacus 24” into the Wolfram Alpha engine and got:

[Wolfram Alpha’s abacus rendering of 24]

Abacus 2.0?
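For the curious, the number-to-bead mapping behind a rendering like this is simple enough to sketch. This assumes a soroban-style abacus – one “heaven” bead worth five and four “earth” beads worth one on each rod – which is one common convention; it mirrors the kind of picture Wolfram Alpha draws, not its actual code:

```javascript
// Map a number onto soroban-style abacus rods: one rod per decimal
// digit, each rod holding a count of heaven beads (worth 5) and earth
// beads (worth 1) pushed toward the bar.
function abacusRods(n) {
  return String(n).split("").map(function (ch) {
    var digit = Number(ch);
    return { heaven: digit >= 5 ? 1 : 0, earth: digit % 5 };
  });
}
```

So “abacus 24” comes out as two rods – two earth beads on the tens rod and four on the ones rod.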

(Image credit – abacus)(results from Wolfram Alpha engine)

Innovator’s Business Model

A few weeks back, I wrote a quick overview of Clayton Christensen’s explanation for how new technologies/products can “disrupt” existing products and technologies. In a nutshell, Christensen explains that new “disruptive innovations” succeed not because they win in a head-to-head comparison with existing products (i.e. laptops versus desktops), but because they have three things:

  1. Good enough performance in one area for a certain segment of users (i.e. laptops were generally good enough to run simple productivity applications)
  2. Very strong performance on an unrelated feature which eventually will become very important for more than one small niche (i.e. laptops were portable, desktops were not, and that became very important as consumers everywhere started demanding laptops)
  3. Have the potential to improve by leveraging their industry learning curve to the point where they can compete head-to-head with an existing product (i.e. laptops now can be as fast if not faster than most desktops)

But, while most people think of Christensen’s findings as applied to product and technology shifts, this model of how innovations overtake one another can be just as easily applied to business models.

A great example of this lies in the semiconductor industry. For years, the dominant business model for semiconductor companies was the Integrated Device Manufacturer (IDM) model – a business model whereby semiconductor companies both designed and manufactured their own product. The primary benefit of this was tighter integration of design and manufacturing. Semiconductor manufacturing is highly sophisticated, requiring all sorts of specialized processes and chemicals and equipment, and there are a great many intricacies between one’s designs and one’s manufacturing process. Having both design and manufacturing under one roof allowed IDMs to create better products more quickly as they were able to exploit the interplays between design and manufacturing and more readily correct problems as they arose. IDMs were also able to tweak their manufacturing processes to push specific features, letting IDMs differentiate their products from their peers.

But, a new semiconductor model emerged in the early 1990s – the fabless model. Unlike the IDM model, fabless companies don’t own their own semiconductor factories (called fabs – hence the name “fabless”) and outsource their manufacturing to either IDMs with spare manufacturing capacity or dedicated contract manufacturers called foundries (the two largest of which are based in Taiwan).

At first, the industry scoffed at the fabless model. After all, these companies could not tightly link their designs to manufacturing, had to rely on the spare capacity of IDMs (who would readily take it away if they needed it) or on foundries in Taiwan, China, and Singapore which lagged the leading IDMs in manufacturing capability by several years.

But, the key to Christensen’s disruptive innovation model is not that the “new” is necessarily better than the “old,” but that it is good enough on one dimension and great on other, more important dimensions. So, while fabless companies were at first unable to keep up in terms of bleeding edge manufacturing technology with the dominant IDMs, the fabless model had a significant cost advantage (due to fabless companies not needing to build and operate expensive fabs) and strategic advantage, as their management could focus their resources and attention on building the best designs rather than also worrying about running a smooth manufacturing setup.

The result? Fabless companies like Xilinx, NVIDIA, Qualcomm, and Broadcom took the semiconductor industry by storm, growing rapidly and bringing their allies, the foundries, along with them to achieve technological parity with the leading IDMs. This model has been so successful that, today, much of the semiconductor space is either fabless or pursuing a fab-lite model (where they outsource significant volumes to foundries, while holding on to a few fabs only for certain products), and TSMC, the world’s largest foundry, is considered to be on par in manufacturing technology with the last few leading IDMs (i.e. Intel and Samsung). This gap has been closed so impressively, in fact, that former IDM-technology leaders like Texas Instruments and Fujitsu have now decided to rely on TSMC for their most advanced manufacturing technology.

To use Christensen’s logic: the fabless model was “good enough” on manufacturing technology for a niche of semiconductor companies, but great in terms of cost. This cost advantage helped the fabless companies and their allies, the foundries, to quickly move up the learning curve and advance in technological capability to the point where they disrupted the old IDM business model.

This type of disruptive business model innovation is not limited to the semiconductor industry. A couple of weeks ago The Economist ran a great series of articles on the mobile phone “ecosystem” in emerging markets. The entire time while I was reading it, I was struck by the numerous ways in which the rise of the mobile phone in emerging markets was creating disruptive business models. One in particular caught my eye as something which was very similar to the fabless semiconductor model story: the so-called “Indian model” of managing a mobile phone network.

Traditional Western/Japanese mobile phone carriers like AT&T and Verizon set up very expensive networks using equipment that they purchase from telecommunications equipment providers like Nokia-Siemens, Alcatel-Lucent, and Ericsson. (In theory,) the carriers are able to invest heavily in their own networks to roll out new services and new coverage because they own their own networks and because they are able to charge customers, on average, ~$50/month. These investments (in theory) produce better networks and services which reinforce their ability to charge premium dollar on a per customer basis.

In emerging markets, this is much harder to pull off since customers don’t have enough money to pay $50/month. The “Indian model”, which began in emerging countries like India, is a way for carriers in low-cost countries to adapt to the cost constraints imposed by the inability of customers to pay high $50/month bills, and is generally thought to consist of two pieces. The first involves having multiple carriers share large swaths of network infrastructure, something which many Western carriers shied away from due to intellectual property fears and questions of who would pay for maintenance/traffic/etc. The second is to outsource network management to equipment providers (Ericsson helped to pioneer this model, in much the same way that the foundries helped the first fabless companies take off) – again, something traditional carriers shied away from given the lack of control a firm would have over its own infrastructure and services.

Just as in the fabless semiconductor company case, this low-cost network management business model has many risks, but it has enabled carriers in India, Africa, and Latin America to focus on getting and retaining customers, rather than building expensive networks. The result? We’re starting to see some Western carriers adopt “Indian model” style innovations. One of the most prominent examples of this is Sprint’s deal to outsource its day-to-day network operations to Ericsson! Is this a sign that the “Indian model” might disrupt the traditional carrier model? Only time will tell, but I wouldn’t be surprised.

(Image credit) (Image credit – Foundry market share) (Image credit – mobile users via Economist)


USB H4x0rz

Back when I was still posting on Xhibiting, I was especially fond of interesting USB gadgets. Well, my good friend Anthony pointed me to this interesting gadget that he found out about through Engadget which takes my USB fascination to a whole new level:

image

The product is from Thumbs Up! and, after being plugged into someone’s computer, it will erratically toggle caps lock, type out random text, and make random mouse movements. Better still:

“Handily, the Prankster features a time delay setting, so that after installing it, you can make your getaway safely before it starts misbehaving.”

Glad to see they were thinking ahead. Thankfully, this is meant to be more of a nuisance than a security risk, as it’s designed not to hit “Enter” or open/close files:

“The Prankster is highly annoying, but it’ll never activate the ‘enter’ key or close or save documents, so it’s mostly mischievous, not super-dangerous.”
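The device’s logic amounts to a delayed loop that draws random mischief from a deliberately limited menu. A hypothetical software model of that logic (stdlib only, no actual keyboard or mouse injection, with all names invented for illustration) might look like:

```python
import random
import time

# Hypothetical model of the Prankster's event selection: after a getaway
# delay, pick random pranks drawn only from an "annoying but safe" set,
# mirroring the device's stated constraint of never pressing Enter or
# closing/saving documents. No real input injection happens here.

SAFE_EVENTS = ["toggle_caps_lock", "type_random_text", "jiggle_mouse"]
FORBIDDEN = {"press_enter", "close_document", "save_document"}

def next_prank_event(rng=random):
    """Choose the next prank from the safe set only."""
    event = rng.choice(SAFE_EVENTS)
    assert event not in FORBIDDEN  # mischievous, not super-dangerous
    return event

def run_pranks(count, start_delay=0.0, rng=random):
    """Wait out the getaway delay, then emit `count` prank events."""
    time.sleep(start_delay)
    return [next_prank_event(rng) for _ in range(count)]

print(run_pranks(5))
```

The interesting design choice is the hard-coded exclusion list: by construction, the event generator simply cannot emit anything destructive, rather than relying on a filter after the fact.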

Even so, to cover themselves morally (and possibly legally?), they note:

“However, it probably shouldn’t be used on computers that control nuclear reactors, security systems for genetically recreated dinosaur parks and/or zombie experimentation units, captured alien spacecraft or freezers packed with delicious ice cream.”

And all for only 20 British pounds!

(Image source – Thumbs Up)
