
Homing Stem Cell Missile Treatments

Another month, another paper

This month’s paper is about stem cells: those unique cells within the body which have the capacity to assume different roles. While people have talked at length about the potential for stem cells to function as therapies, one thing holding them back (with the main exception being bone marrow cells) is that it’s very difficult to get stem cells to exactly where they need to be.

With bone marrow transplants, hematopoietic stem cells naturally “home” (like a missile) to where they need to be (the blood-making areas of the body). But with other types of stem cells, that is not so readily true, making it difficult or impossible to use the bloodstream as a means of administering stem cell therapies. Of course, you could try to inject, say, heart muscle stem cells directly into the heart, but that’s not only risky/difficult, it’s also artificial enough that you’re not necessarily providing the heart muscle stem cells with the right triggers/indicators to push them towards becoming normal, functioning heart tissue.

Researchers at Brigham & Women’s Hospital and Mass General Hospital published an interesting approach to this problem in the journal Blood (yes, that’s the real name). They used a unique feature of white blood cells that I blogged about very briefly before called leukocyte extravasation, which lets white blood cells leave the bloodstream towards areas of inflammation.

[Image: leukocyte extravasation]

The process is described in the image above, but it basically involves the sugars on the white blood cell’s surface, called Sialyl Lewis X (SLeX), sticking to the walls of blood vessels near sites of tissue damage. This causes the white blood cell to start rolling (rather than flowing through the blood), which then triggers other chemical and physical changes that ultimately lead to the white blood cell sticking to the blood vessel wall and moving through it.

The researchers “borrowed” this ability of white blood cells for their mesenchymal stem cells. They took mesenchymal stem cells from a donor mouse and chemically coated them with SLeX – the hope being that the stem cells would start rolling anytime they were in the bloodstream and near a site of inflammation/tissue damage. After verifying that these coated cells still functioned (they could still become different types of cells, etc.), they injected them into mice (which received injections of a substance called LPS in their ears to simulate inflammation) and used video microscopes to measure the speed of different mesenchymal stem cells in the bloodstream. In Figures 2A and 2B, the mesenchymal stem cell coated in SLeX is shown in green and a control mesenchymal stem cell is shown in red. What you’re seeing is the same spot in the ear of a mouse under inflammation with the camera rolling at 30 frames per second. As you can see, the red cell (the untreated one) moves much faster than the green – in the same number of frames, it’s already left the vessel area! That, and a number of other measurements, led the researchers to conclude that their SLeX coat actually got the mesenchymal stem cells to slow down near points of inflammation.

But does this slowdown correspond with the mesenchymal stem cells exiting the bloodstream? Unfortunately, the researchers didn’t provide any good pictures, but they did count the number of different types of cells that they observed in the tissue. When it came to ears with inflammation (what Figure 4A below refers to as the “LPS ear”), the researchers saw an average of 48 SLeX-coated mesenchymal stem cells versus 31 uncoated mesenchymal stem cells within their microscopic field of view (~55% higher). When it came to the control (the “saline ear”), the researchers saw 31 SLeX-coated mesenchymal stem cells versus 29 uncoated (~7% higher). Conclusion: yes, coating mesenchymal stem cells with SLeX and introducing them into the bloodstream lets them “home” to areas of tissue damage/inflammation.

[Figure 4A from paper]

As you can imagine, this is pretty cool – a simple chemical treatment could help us turn non-bone-marrow stem cells into treatments you might receive via IV someday!

But, despite the cool finding, there are a number of improvements this paper needs. Granted, I received it pre-print (so I’m sure there are more edits to come), but my main concerns are around the quality of the figures presented. Without any clear time indicators or pictures, it’s hard to know what exactly the researchers are seeing. Furthermore, it’s difficult to see for sure whether or not the treatment did anything to the underlying stem cell function. The supplemental figures of the paper are only the first step in what, to me, needs to be a long and deep investigation into whether or not those cells do what they’re supposed to – otherwise, this method of administering stem cell therapies is dead in the water.

(Figures from paper) (Image credit: Leukocyte Extravasation)

Paper: Sarkar et al., “Engineered Cell Homing.” Blood. 27 Oct 2011 (published online ahead of print). doi:10.1182/blood-2010-10-311464


The Monster

I was asked recently by a friend about my thoughts on the “Occupy Wall Street” movement. While people a heck of a lot smarter and more articulate than I am have weighed in, most of it has been focused on finger-pointing (who’s to blame) and judgment (do they actually stand for anything? “it’s the Tea Party of the Left”).

As corny as it sounds, my first thought after hearing about “Occupy Wall Street” wasn’t about right or wrong or even really about politics: it was about John Steinbeck and his book The Grapes of Wrath. It’s a book I read long ago in high school, but it was one which left a very deep impression on me. While I can’t even remember the main plot (other than that it dealt with a family of Great Depression and Dust Bowl-afflicted farmers who were forced to flee Oklahoma towards California), what I do remember was a very tragic description of the utter confusion and helplessness that gripped the people of that era (from Chapter 5):

“It’s not us, it’s the bank. A bank isn’t like a man. Or an owner with fifty thousand acres, he isn’t like a man either. That’s the monster.”

“Sure,” cried the tenant men, “but it’s our land. We measured it and broke it up. We were born on it, and we got killed on it, died on it. Even if it’s no good, it’s still ours. That’s what makes it ours—being born on it, working it, dying on it. That makes ownership, not a paper with numbers on it.”

“We’re sorry. It’s not us. It’s the monster. The bank isn’t like a man.”

“Yes, but the bank is only made of men.”

“No, you’re wrong there—quite wrong there. The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”

And therein lies the best description of the tragedy of the Great Depression – and of every economic crisis – that I have ever read. The many un- and under-employed people in the US are clearly under a lot of stress. And, as with the farmers in Steinbeck’s novel, it’s completely understandable that they want to blame somebody. And so they point to the most obvious culprits: “the 1%”, the bankers and financiers who work on “Wall Street”.

[Image: Occupy Wall Street poster]

But I think Steinbeck understood that this is not really about the individuals. Obviously, there was a lot of wrongdoing on the part of the banks which led to our current economic “malaise.” But I think, for the most part, the “1%” aren’t interested in seeing their fellow citizens unemployed and on the street. Even if you don’t believe in their compassion, their greed alone guarantees that they’d prefer to see the whole economy growing with everyone employed and productive, and their desire to avoid harassment alone guarantees they’d love to find a solution which ends the protests and the finger-pointing. They may not be suffering as much as those in the “99%”, but I’m pretty sure they are just as confused and just as hopeful that a solution comes about.

The real problem – Steinbeck’s “monster” – is the political and economic system people have created but can’t control. Our lives are driven so much by economic forces and institutions which are intertwined with one another on a global level that people can’t understand why they or their friends and family are unemployed, why food and gas prices are so expensive, why the national debt is so high, etc.

Now, a complicated system that we don’t have control of is not always a bad thing. After all, what is a democracy supposed to be but a political system that nobody can control? What is the point of a strong judiciary but to be a legal authority that legislators/executives cannot overthrow? Furthermore, it’s important for anyone who wants to change the system for the better to remember that the same global economic system which is causing so much grief today is more responsible than any other force for creating many of the scientific and technological advancements which make our lives better and for lifting (and continuing to lift) millions out of poverty in countries like China and India.

But it’s hard not to sympathize with the idea that the system has failed on its promise. What else am I (or anyone else) supposed to think in a world where corporate profits can go up while unemployment stays stubbornly near 10%, where bankers can get paid bonuses only a short while after their industry was bailed out with taxpayer money, and where the government seems completely unable to do more than bicker about an artificial debt ceiling?

But anyone with even a small understanding of economics knows this is not about a person or even a group of people. To use Steinbeck’s words, the problem is more than a man; it really is a monster. While we may not be able to kill it, letting it rampage is not a viable option either — the “Occupy Wall Street” protests are a testament to that. The protesters’ frustration is real and legitimate, and until politicians across both sides of the aisle and individuals across both ends of the income spectrum come together to find a way to “tame the monster’s rampage”, we’re going to see a lot more finger-pointing and anger.

(Image credit – Wikipedia)


Antibody-omics

I’m pretty late for my September paper of the month, so here we go.

“Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore – that’s so last century – you study proteomics now. And who studies metabolism? It’s all about metabolomics. There’s even a (pretty nifty) blog covering this with the semi-irreverent name “Omics! Omics!”.

It’s in the spirit of “Omics” that I chose a Science paper from researchers at the NIH, because it was the first time I had ever encountered the term “antibodyome”. For those of you who don’t know, antibodies are the “smart missiles” of your immune system – they are built to recognize and attack only one specific target (e.g. a particular protein on a bacterium or virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies to make research tools (e.g. using fluorescent antibodies to label specific things) and therapies (e.g. using antibodies against proteins associated with cancer as anti-cancer drugs).

How the immune system does this is a fascinating story in and of itself. In a process called V(D)J recombination, your immune system’s B-cells mix, match, and scramble certain pieces of your genetic code to try to produce a wide range of antibodies to hit potentially every structure they could conceivably see. And, once they see something which “kind of sticks”, they undergo a process called affinity maturation to introduce all sorts of mutations in the hopes of creating an even better antibody.

Which brings us to the paper I picked – the researchers analyzed a couple of particularly effective antibodies targeted at HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the HIV virus, but when they took a closer look at the 3D structures and the B-cell genetic code which made them, they found that the antibodies were quite different from one another (see Figure 3C below).

[Figure 3C from paper]

What’s more, not only were they fairly distinct from one another, they each showed *significant* affinity maturation – while a typical antibody has 5-15% of its underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient – in effect, the “antibodyome”, in the same way that the patient’s genome would be all of his/her genes – and, along with data from other patients, they were able to construct a “family tree” of these antibodies (see Figure 6C below).

[Figure 6C from paper]

The analysis shows that many of the antibodies were derived from the same initial genetic V(D)J “mix-and-match” but that, afterwards, quite a number of changes were made to that code to get to the situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

While I wish the paper had probed deeper with actual experimentation to take this analysis further (e.g. artificially using this method to create other antibodies with similar behavior), it goes a long way towards establishing an early picture of what “antibodyomics” is. Rather than study the total impact of an immune response or just the immune capabilities of one particular B-cell/antibody, this sort of genetic approach lets researchers get a very detailed and comprehensive look at where the body’s antibodies are coming from. Hopefully, in the longer term, this also turns into a way for researchers to make better vaccines.

(Figures 3 and 6 from paper)

Paper:  Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532


Google Reader Blues

If it hasn’t been clear from posts on this blog or from my huge shared-posts activity feed, I am a huge fan of Google Reader. My reliance on/use of the RSS reader tool from Google is second only to my use of Gmail. It’s my primary source of information and analysis on the world and, because a group of my close friends are actively sharing and commenting on the service, it is my most important social network.

Yes, that’s right. I’d give up Facebook and Twitter before I’d give up Google Reader.

I’ve always been disappointed by Google’s lack of attention to the product, so you would think that, after Google announced that they would find a way to better integrate the product with Google+, I would be jumping for joy.

However, I am not. And, I am not the only one. E. D. Kain from Forbes says it best when he writes:

[A]fter reading Sarah Perez and Austin Frakt and after thinking about just how much I use Google Reader every day, I’m beginning to revise my initial forecast. Stay calm is quickly shifting toward full-bore Panic Mode.

(bolding and underlining from me)

Now, for the record, I can definitely see the value of integrating Google+ with Google Reader well. I think the key is finding a way to replace Google+’s not-really-used-at-all Sparks feature (which seems to have been replaced by a saved-searches feature) with Google Reader, to make it easier to share high-quality blog posts/content. So why am I so anxious? Well, looking at the existing products, there are two big things:

  • Google+ is not designed to share posts/content – it’s designed to share snippets. Yes, there are quite a few folks (e.g. Steve Yegge, who made the now-famous, accidentally-public rant about Google’s approach to platforms versus Amazon/Facebook/Apple’s approach to products) who make very long posts on Google+, using it almost as a mini-blog platform. And, yes, one can share videos and photos on the site. However, what the platform has not proven able to share (and what is, fundamentally, one of the best uses/features of Google Reader) is a rich site with embedded video, photos, rich text, and links. This blog post that you’re reading, for instance? I can’t share it on Google+. All I can share is a text excerpt and an image – and that reduces the utility of the service as a reading/sharing/posting platform.
  • Google Reader is not just “another circle” for Google+; it’s a different type of online social behavior. I gave Google props earlier this year for thinking through online social behavior when building their Circles and Hangouts features, but it slipped my mind then that my use of Google Reader was yet another mode of online social interaction that Google+ did not capture. What do I mean by that? Well, when you put friends in a circle, it means you have grouped that set of friends into one category and think of them as similar enough that you want to receive their updates/shared items together and to send them updates/shared items together. Now, this feels more natural to me than the original Facebook concept (where every friend is equal) or the Twitter concept (where the idea is to just broadcast everything to everybody), but it misses one dynamic: followers may have different levels of interest in different types of sharing. When I share an article on Google Reader, I want to do it publicly (hence the public share page), but only to people who are interested in what I am reading/thinking. If I wanted to share it with all of my friends, I would’ve long ago integrated Google Reader shares into Facebook and Twitter. On the flip side, whether or not I feel socially close to the people I follow on Google Reader is irrelevant: I follow them on Google Reader because I’m interested in their shares/comments. With Google+, this sort of “public, but only for folks who are interested” mode of sharing and reading is not present at all – and it strikes me as worrisome because the idea behind the Google Reader change is to replace its social dynamics with Google+’s.

Now, of course, Google could address these concerns by implementing additional features – and if they do, that would be great. But, putting my realist hat on and looking at the tone of the Google Reader blog post and the way that Google+ has been developed, I am skeptical. Or, to sum it up in the words of Austin Frakt at the Incidental Economist (again, bolding/underlining by me):

I will be entering next week with some trepidation. I’m a big fan of Google and its products, in general. (Love the Droid. Love the Gmail. Etc.) However, today, I’ve never been more frightened of the company. I sure hope they don’t blow this one!


Chrome Remote Desktop

A few weeks ago, I blogged about how the web was becoming the most important and prominent application distribution platform and about Google’s efforts to embrace that direction with initiatives like ChromeOS (Google’s operating system which is designed only to run a browser/use the internet), Native Client, and the Chrome Web Store.

Obviously, for the foreseeable future, “traditional” native applications will continue to have significant advantages over web applications. As much of a “fandroid”/fan of Google as I am, I find it hard to see how I could use a Chromebook (a laptop running Google’s ChromeOS) instead of a real PC today, because of my heavy use of apps like Excel and the tools I use whenever I code.

However, you can do some pretty cool things with web applications/HTML5 which give you a sense of what may one day be possible. Case in point: enter Chrome Remote Desktop (HT: Google Operating System), a beta extension for Google Chrome which basically allows you to take control of another computer running Chrome, a la remote desktop/VNC. While this capability is nothing new (Windows has had “remote desktop” built in since, at latest, Windows XP, and there are numerous VNC/remote desktop clients), what is pretty astonishing is that this app is built entirely using web technologies – whereas traditional remote desktops use non-web-based communications and native graphics to create the interface to the other computer, Chrome Remote Desktop is doing all the graphics in the browser and all the communications using either the WebSocket standard from HTML5 or Google Talk’s chat protocol! (See below as I use my personal computer to remote-control my work laptop, where I am reading a PDF on microblogging in China and am also showing my desktop background image, in which a Jedi Android slashes up an Apple Death Star.)

[Screenshot: Chrome Remote Desktop session]
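
For the curious, here’s a minimal sketch of the browser-only pattern described above: a canvas for drawing the remote screen, and a WebSocket for shuttling frames and input events. To be clear, this is not Chrome Remote Desktop’s actual code – the relay server URL and the JSON message shapes are hypothetical stand-ins for whatever the real extension uses:

```typescript
// Hypothetical sketch of a browser-based remote desktop client -- NOT the
// extension's real code. The relay URL and message formats are invented.
type FrameMessage = { type: "frame"; png: string }; // base64-encoded screen capture
type InputMessage = { type: "input"; x: number; y: number; button: number };

const canvas = document.getElementById("screen") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const socket = new WebSocket("wss://relay.example.com/session"); // hypothetical relay

// Draw each incoming frame of the remote screen onto the local canvas.
socket.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data) as FrameMessage;
  if (msg.type === "frame") {
    const img = new Image();
    img.onload = () => ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
    img.src = "data:image/png;base64," + msg.png;
  }
};

// Forward local mouse clicks to the remote machine as input events.
canvas.addEventListener("mousedown", (e: MouseEvent) => {
  const input: InputMessage = { type: "input", x: e.offsetX, y: e.offsetY, button: e.button };
  socket.send(JSON.stringify(input));
});
```

The striking part is that every piece of this – the drawing, the event capture, the transport – is a plain web API; there is no native code anywhere in the loop.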

How well does it work? The control is quite good – my mouse/keyboard movements registered immediately on the other computer – but the on-screen graphics/drawing speed was quite poor (par for the course for most sophisticated graphics-drawing apps in the browser, and for a beta extension). The means of controlling another desktop, while easy to use (especially if you are inviting someone to take a look at your machine), is very clumsy for some applications (e.g. a certain someone who wants to leave his computer in the office and use VNC/remote desktop to access it only when he needs to).

So, will this replace VNC/remote desktop anytime soon? No (nor, it seems, were they the first to think up something like this), but that’s not the point. The point, at least to me, is that the browser is picking up more and more sophisticated capabilities and, while it may take a few more versions/years before we can actually use this as a replacement for VNC/remote desktop, the fact that we can even contemplate that at all tells you how far browser technology has come and why the browser as a platform for applications will grow increasingly compelling.


AGIS Visual Field Score Tool

One of the things I regret the most about my background is that I lack good knowledge of/experience with programming. While I have dabbled (e.g. mathematical modeling exercises in college, Xhibitr, and projects with my younger brother), I am generally more “tell” than “show” when it comes to creating software (except when it comes to writing a random Excel macro/function).

So, when I found out that my girlfriend needed some help with her glaucoma research and that writing software was the ticket, I decided to go out on a limb and help her out (link to my portfolio page).

The basic challenge is that the ophthalmology research world uses an arcane and very difficult-to-do-by-hand scoring system for taking the data on a glaucoma patient’s vision (see the image below for the type of measurements that might be collected in a visual field test) and turning it into a score (the AGIS visual field score) of how bad the patient’s glaucoma is (as described in a paper from 1994 that is so old I couldn’t find a digital copy of it!).

[Image: example visual field measurements]

So, I started by creating a program in the C programming language which would take this data in the form of a CSV (comma-separated values) file and spit out scores.

While I was pleasantly surprised that I still retained enough programming know-how to do this after a few weekends, the program was a text-based monstrosity which required the awkward step of converting two-dimensional visual field data into a flat CSV file. The desire to improve on that, and the hope that my software might help others doing similar research (and might get others to build on it/let me know if I’ve made any errors), pushed me to turn the tool into a web application which I’ve posted on my site. I hope you’ll take a look! Instructions are pretty basic:

  • Sorry, it only works with modern browsers (Internet Explorer 9, Firefox 7, Chrome, Safari, etc.) – this simplified my life, as now I don’t need to worry about Internet Explorer 6 and 7’s horrific standards support
  • Enter the visual field depression data (in decibels) from the visual field test into the appropriate boxes (the shaded entries correspond to the eye’s blind spot).
    • You can click on “Flip Orientation” to switch from left-eye to right-eye view if that is helpful in data entry.
    • You can also click on “Clear” to wipe out all the data entered and start from scratch. An error will be triggered if non-numeric data is entered or if not all of the values have been filled out.
    • Note: the software can accept depression values as negative or positive; the important thing is to stay consistent throughout each entry, as the software makes a guess about the sign of the depression values based on all the numbers entered (a sketch of how such a guess might work follows this list).
  • Click “Calculate” when you’re done to get the score.
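
For the curious, the sign-guessing behavior mentioned in the note above might look something like the following sketch. This is not the tool’s actual source (and the AGIS scoring criteria themselves come from the 1994 paper, which isn’t reproduced here); the function name and the strictness of the error handling are my own assumptions:

```typescript
// Hypothetical sketch of the sign normalization described in the note above --
// not the web tool's real code. It guesses the sign convention from all the
// entered numbers and returns depressions as positive magnitudes for scoring.
function normalizeDepressions(values: number[]): number[] {
  if (values.some((v) => Number.isNaN(v))) {
    throw new Error("All entries must be numeric."); // mirrors the tool's error behavior
  }
  const positives = values.filter((v) => v > 0).length;
  const negatives = values.filter((v) => v < 0).length;
  // Mixed signs mean the user broke the "stay consistent" rule, so the guess
  // is unreliable (an assumption -- the real tool may handle this differently).
  if (positives > 0 && negatives > 0) {
    throw new Error("Use a single sign convention for every entry.");
  }
  // If the user entered depressions as negatives, flip them so that a deeper
  // depression is always a larger positive number.
  return negatives > 0 ? values.map((v) => -v) : values;
}

// e.g. normalizeDepressions([-5, -12, 0, -3]) returns [5, 12, 0, 3]
```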

Hope this is helpful to the ophthalmology researchers out there!

(Image credit – example visual field) (Image credit – C Programming Language)


Two More Things

[Image: Steve Jobs]

A few weeks ago, I did a little farewell tribute to Apple CEO and tech visionary Steve Jobs after he left the CEO position at Apple. While most observers probably recognized that the cause for his departure was his poor health, few probably guessed that he would die so shortly after stepping down. The tech press has done a great job of covering his impressive legacy and the numerous anecdotes/lessons he imparted to the broader industry, but a few things stand out to me as deserving a little additional coverage:

  • Much has been said about Jobs’s 2005 Stanford graduation speech: it was moving the first time I read it (back in 2005), and I could probably dedicate a number of blog posts to it, but one of the biggest things I took from it, which I haven’t seen covered as much lately, was its lesson of resilience in the face of setbacks. Despite losing his spot at the company he built, Jobs pushed on to create NeXT and Pixar. And, while we all know Pixar today as the powerhouse behind movies such as Toy Story and Ratatouille, and most Apple followers recognize Apple’s acquisition of NeXT as integral to bringing Jobs back into the Apple fold, what very few observers realize is that, for a long time, NeXT and Pixar were, by most objective measures, failures. Despite Steve Jobs’s impressive vision and NeXT’s role in pioneering new technologies, NeXT struggled and only made its first profit almost 10 years after its founding – and only a measly $1 million despite taking many tens of millions of dollars from investors! If Wikipedia is to be believed, NeXT’s “sister” Pixar was doing so poorly that Jobs even considered selling Pixar to – gasp – Microsoft as late as 1994, just one year before Toy Story would turn things around. The point of all of this is not to knock Jobs, but to point out that Jobs was pretty familiar with setbacks. Where he stands out, however, is in his ability and willingness to push onward. He didn’t just wallow in self-pity after getting fired at Apple, or after NeXT/Pixar were forced to give up their hardware businesses – he found a way forward, making tough calls which helped guide both companies to success. And that resilience, I think, is something which I truly hope to emulate.
  • One thing which has stuck with me was a quote from Jobs on why he was opening up to his biographer, Walter Isaacson, after so famously guarding his own privacy: “I wanted my kids to know me … I wasn’t always there for them, and I wanted them to know why and to understand what I did.” It strikes me that at the close of his life, Jobs, one of the most successful corporate executives in history, was preoccupied not with his personal privacy, his fortune, his company’s market share, or even how the world views him, but with how his kids perceive him. If there’s one thing that Steve Jobs can teach us all, it’s that no amount of success in one’s career can replace success in one’s personal life.

(Image credit)


Solyndra and the Role of VCs and Government in Cleantech

Because of the subject matter here, I’ll re-emphasize the disclaimer that you can read on my About page: The views expressed in this blog are mine and mine alone and do not necessarily reflect the views of my current (or past) employers, their employees, partners, clients, and portfolio companies.

[Image: Solyndra logo]

If you’ve been following either the cleantech world or politics, you’ll have heard about the recent collapse of Solyndra, the solar company the Obama administration touted as a shining example of cleantech innovation in America. Solyndra, like a few other “lucky” cleantech companies, received loan guarantees from the Department of Energy (like having Uncle Sam co-sign its loans), and is now embroiled in a political controversy over whether or not the administration acted improperly and whether or not the government should be involved in providing such support for cleantech companies.

Given my vantage point from the venture capital space evaluating cleantech companies, I thought I would weigh in with a few thoughts:

  • The failure of one solar company is hardly a reason to doubt cleantech as an enterprise. In every entrepreneurial industry where lots of bold, unproven ideas are being tested, you will see high failure rates. And, therein lies one of the beauties of a market economy – what the famous economist Joseph Schumpeter called “creative destruction.” That a large solar company like Solyndra failed is not a failing of the industry – if anything it’s a good thing. It means that one unproven idea/business model (Solyndra’s) was pushed out in favor of something better (in this case, more advanced crystalline silicon technologies and new thin film solar technologies) which means the employees/customers of Solyndra can now move on to more productive pastures (possibly another cleantech company which has a better shot at success).
  • The failure of Solyndra is hardly a reason to doubt the importance of government support for the cleantech industry. I believe that a strong “cleantech” industry is a good thing for the world and for the United States. It’s good for the world in that it represents new, more efficient methods of harnessing, moving, and using energy and is a non-political (and, so, less controversial to implement) approach to addressing the problems of man-made climate change. It’s good for the United States in that it represents a major new driver of market demand – one the US is particularly well-suited to addressing because of its leadership in technology & innovation – at a time when the US is struggling with job loss/economic decline/competition abroad. Or, to put it in a more economic way, what makes cleantech a worthy sector for government support is its strategic importance in the future growth of the global economy (potentially like a new semiconductor/software industry which drove much of the technology sector over the past two decades), the likelihood that the private sector will underinvest due to not correctly valuing the positive externalities (social good), and the fact that…
  • Private sector investors cannot do it all when it comes to supporting cleantech. One of the criticisms I’ve heard following the Solyndra debacle is that the government should have just left the support of industries like cleantech to the private sector. While I’m sympathetic to that argument, my experience in the venture investing world is that many private investors are not well equipped to provide all the levels of support that the industry needs. Private investors, for instance, are very bad at (and tend to shy away from) providing basic sciences R&D support – that’s research which is not directly linked to the bottom line (and so is outside of what a private company is good at managing) and, in fact, should be conducted by academics who collaborate openly across the research community. Venture capital investors are also not that well-suited to supporting cleantech pilots/deployments – those checks are very large and difficult to finance. These are two large examples of areas where private investors are unlikely to be able to provide all the support that the industry will need to advance, and areas where there is a strong role for the government to play.
  • With all that said, I think there are far better ways for the government to go about supporting its domestic cleantech industry. Knowing a certain industry is strategic and difficult for the private sector to support completely is one thing – effectively supporting it is another. In this case, I have major qualms about how the Department of Energy is choosing to spend its time. The loan guarantee program not only puts taxpayer dollars at risk directly, it also picks winners and losers – something that industrial policy should try very hard not to do. Anytime you have the ability to pick winners and losers, you create situations where the selection could be motivated by cronyism/favoritism. It also exposes the government to a very real criticism: shouldn’t a private sector investor like a venture capitalist do the picking? It’s one thing when these are small prize grants for projects – it’s another when large sums of taxpayer dollars are at risk. Better, in my humble opinion, to find other ways to support the industry, like:
    • Sponsoring basic R&D to help the industry with the research it needs to break past the next hurdles
    • Facilitating more dialogue between research and industry: the government is in a unique position to encourage more collaboration between researchers, between industry, between researchers AND industry, and across borders. Helping to set up more “meetings of the minds” is a great, relatively low-cost way of helping push an industry forward.
    • Issuing H1B visas for smart immigrants who want to stay and create/work for the next cleantech startup: I remain flabbergasted that there are countless intelligent individuals who want to do research/work/start companies in the US that we don’t let in.
    • Subsidizing cleantech project/manufacturing line finance: It may be possible for the government to use tax law or direct subsidies to help companies lower their effective interest payments on financing pilot line/project buildouts. Obviously, doing this would be difficult as we would want to avoid supporting the financing of companies which could fail, but it strikes me that this would be easier to “get right” than putting large swaths of taxpayer money at risk in loan guarantees.
    • Taxing carbon/water/pollution: If there’s one thing the government can do to drive research and demand for “green products”, it’s to issue a tax which makes the consequences of inefficiency obvious. Economists call this a Pigovian tax, and the idea is that there is no better way to get people to save energy/water and embrace cleaner energy than by making them pay for the alternative. (Note: for those of you worried about higher taxes, the tax can be balanced out by tax cuts/rebates so as to not raise the total tax burden on the US, only shift that burden towards things like pollution/excess energy consumption.)

    This is not a complete list (nor is it intended to be one), but it’s definitely a set of options which are supportive of the cleantech industry, avoid the pitfall of picking winners and losers in a situation where the market should be doing that, and, except for the last one, should not be super-controversial to implement.

Sadly, despite the abundance of interesting ideas and the steady pace of technological and business model innovation, Solyndra seems to have turned investors and the public more sour towards solar and cleantech more broadly. Hopefully, we get past this rough patch soon and find a way to more effectively channel the government’s energies and funds to bolstering the cleantech industry in its quest for clean energy and greater efficiency.

(Image credit)


The Bad Math of Comics Companies

A few months ago I posted on DC Comics’ publicized reboot of their entire comic book franchise and argued this sort of bold action could be a good thing for the industry. Well, the reboot happened, and what’s the verdict? While there have been some very promising new books (I was particularly pleased with Grant Morrison’s Action Comics and Scott Snyder’s Batman), there were a few which, in my humble opinion, were changed for the worse.

But, while my comics fanboi rage might have been quelled had the editorial decisions been made in a way that pulled in new readers, such was not the case in at least one notable book, which butchered some of my favorite characters, as the following webcomic from Shortpacked illustrates:

[Webcomic from Shortpacked]

Seriously, DC. Even discounting the fact that I’m a big Teen Titans fan (it was one of the first comic book series I actually read!) and that your reboot butchered a great female character who already had a great degree of sensuality, turning her into some mindless, preening nymphomaniac – how did it ever occur to you to take a character who might have been a nice “gateway comic” for new fans and turn her into something unrecognizable and unlovable? Great one, DC. I hope the next reboot works better…

(There’s a great io9 post which further illustrates the stupidity of DC here)

(Image credit – Shortpacked)


The Marketing Glory of NVIDIA’s Codenames

This is an old tidbit, but nevertheless a good one that has (somehow) never made it to my blog. I’ve mentioned before the private equity consulting world’s penchant for silly project names, and while code names are not rare in the corporate world, more often than not the names tend to be dull and not of much use to a company. NVIDIA’s code names, however, are pure marketing glory.

Take NVIDIA’s high performance computing product roadmap (below) – these are products that take the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers (check out entries from the Bench Press blog for an idea of what researchers have been able to do with them). How does NVIDIA describe its future roadmap? It uses the names of famous scientists: Tesla (the great electrical engineer who helped bring us AC power), Fermi (the physicist who built the first nuclear reactor), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena were all linked).

[Image: NVIDIA CUDA GPU roadmap]

Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 🙂

But what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, it uses, well, comic book characters (now you know why I love them, right?) :-). For release at the end of this year? Kal-El – for the uninitiated, that’s the Kryptonian name for Superman. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the name of the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

[Image: NVIDIA Tegra roadmap]

Everybody wants a little Iron Man in their tablet :-).

And, now I know what I’ll name my future secret projects!

(Image credit – CUDA GPU Roadmap) (Image credit – Tegra Roadmap)


Android in Kenya

I mentioned before, when discussing DCM’s Android Fund, that Android is a truly global opportunity. While Nokia is probably praying that this is untrue, the recent success of Huawei in Kenya with its IDEOS phone illustrates that Android isn’t just doing well in the First World; its particular approach makes it well-suited to tackle the broader global market (HT: MIT Technology Review):

Smart phones surged in popularity in February after Safaricom, Kenya’s dominant telecom, began offering the cheapest smart phone yet on the market—an Android model called Ideos from the Chinese maker Huawei, which has been making inroads in the developing world. In Kenya, the price, approximately $80, was low enough to win more than 350,000 buyers to date.

That’s an impressive number for a region most in the developed world would probably write off as far too “developing” to be interesting. Now, Huawei’s IDEOS line is not going to blow anyone away – it’s small, has a fairly low-quality camera, and is pretty paltry on RAM. But the fact that this device can hit the price point needed to make the market real is a big advantage for the global Android ecosystem:

  • This is 350,000 additional potential Android users – not an earth-shattering number, but it’s always good to have more folks buying devices and using them for new apps/services
  • It’s enticing new developers into the Android community, both from within Kenya and from outside it. As the MIT Technology Review article further points out:

    Over the past year, Hersman has been developing iHub, an organization devoted to bringing together innovators and investors in Nairobi. Earlier this month, a mobile-app event arranged by iHub fielded 100 entrants and 25 finalists for a $25,000 prize for best mobile app. The winner, Medkenya, developed by two entrepreneurs, offers health advice and connects patients with doctors. Its developers have also formed a partnership with the Kenyan health ministry, with a goal of making health-care information affordable and accessible to Kenyans…

    Some other popular apps are in e-commerce, education, and agriculture. In the last group, one organization riding the smart-phone wave is Biovision, a Swiss nonprofit that educates farmers in East Africa about organic farming techniques. Biovision is developing an Android app for its 200 extension field workers in Kenya and other East African countries.

  • Given the carrier-subsidy model and the high price and bulkiness of computers, this means that there could be an entire generation of individuals whose main experience of the internet is from using Android devices, not from a traditional Windows/MacOS/Linux PC!

This ability to go ultra-low end and experiment with new partners/business models/approaches is an advantage of the fact that Android is a more open horizontal platform that can be adopted by more device manufacturers and partners. I wouldn’t be surprised to see further efforts by other Asian firms to expand into untapped markets like Africa, the Middle East, and Southeast Asia with other interesting go-to-market strategies like low-cost, pre-paid Android devices.


Web vs native

When Steve Jobs first launched the iPhone in 2007, Apple’s perception of where the smartphone application market would move was in the direction of web applications. The reasons for this are obvious: people are familiar with how to build web pages and applications, and it simplifies application delivery.

Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to native applications custom-built (by definition) for the iPhone’s operating system and hardware. While I suspect part of the reason this was done was to lock in developers, the main reason was certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was primitive relative to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies which would deliver the sort of performance and functionality Apple wanted.

But that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere’s obsession with hyping up a rivalry between “native vs HTML5 app development”, and it raises the question: will the future of application development be HTML5 applications or native?

There are a lot of “moving parts” in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance and the new capabilities that HTML5 will bring – like offline storage, a canvas for direct graphic manipulation, and tools to access the file system – mean, at least to this tech blogger, that “HTML5 applications” are not distinct from native applications at all; they are simply native applications that you access through the internet. It’s not a different technology vector – it’s just a different form of delivery.
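
To make that concrete, here’s a small sketch of two of the capabilities mentioned above – offline storage and direct graphic manipulation via canvas – written against the standard browser APIs. The page-visit counter itself is just an invented example, not something from any particular app:

```typescript
// Two of the HTML5 capabilities named above, via standard browser APIs:
// offline storage (localStorage) and direct drawing (canvas). The visit
// counter is an invented example for illustration.
const visits = Number(localStorage.getItem("visits") ?? "0") + 1;
localStorage.setItem("visits", String(visits)); // persists after the browser closes

const canvas = document.createElement("canvas");
canvas.width = 300;
canvas.height = 50;
document.body.appendChild(canvas);

// Paint directly to pixels rather than building DOM elements.
const ctx = canvas.getContext("2d")!;
ctx.font = "16px sans-serif";
ctx.fillText(`You have loaded this page ${visits} time(s)`, 10, 30);
```

Trivial as it is, a native app circa 2007 would have needed platform-specific storage and drawing APIs to do those same two things.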

Critics of this idea may counter that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus the two will always be distinct. And, as of today, they are correct. However, this discounts a few things:

  • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
  • History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low-level machine language. But this was a nightmare – it was difficult to do simple things like showing text on a screen, and the code written only worked on specific chips and operating systems and hardware configurations. Languages like C helped to abstract a lot of that away, and, keeping with the trend of moving towards more portability and abstraction, the mobile/web developers of today work with tools (Python, Objective-C, Ruby, Java, Javascript, etc.) which make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of “HTML5”.
  • Huge platform economic advantages. There are three huge advantages today to HTML5 development over “traditional” native app development. The first is the ability to have essentially the same application run on any device which supports a browser. Granted, there are performance and user experience issues with this approach, but when you’re a startup, or even a corporate project with limited resources, being able to get wide distribution for early products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android have, where distribution is controlled and “taxed” (a 30% cut to Apple/Google on app downloads and digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-based/cloud-based application allows even “dumb devices” to do amazing things by outsourcing some of the work to another system. The combination of these three makes it easier to build new applications and services and make money off of them – which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”

Given Google’s strategic interest in the web as an open development platform, it’s no small wonder that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let developers achieve “native performance” in the browser, they’ve built an entire operating system centered on the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store, to help with application discovery.

While it remains to be seen if any of these initiatives will end up successful, this is definitely a compelling view of how the technology ecosystem could evolve, and, putting my forward-thinking cap on, I would not be surprised if:

  1. The major operating systems became more ChromeOS-like over time. Mac OS’s dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
  2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, there was a ton of attention focused on building Java accelerators in hardware/chips and software platforms whose main function was to run Java. While Java did not take over the world the way its supporters had thought, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance – maybe even HTML5-optimized chips/accelerators, additional ChromeOS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications.
  3. Web application discovery will become far more important. The one big weakness for HTML5 as it stands today is application discovery. It’s still far easier to discover a native mobile app using the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as the platform matures and the platform economics shift, new application stores/recommendation engines/syndication platforms will become increasingly critical.

I can’t wait :-).

(Image credit – iPhone SDK)


Atlantic Cod Are Not Your Average Fish

Another month, another paper – and like last month’s, this one is a genetics paper, this time covering an interesting quirk of immunology.

[Image: Atlantic cod]

This month’s paper, from Nature, covers a species of fish that has made it to the dinner plates of many: the Atlantic cod (Gadus morhua). The researchers applied shotgun sequencing techniques to look at the DNA of the Atlantic cod. What they found about the Atlantic cod’s immune system was very puzzling: vertebrates (that includes fish, birds, reptiles, and mammals – including humans!) tend to rely on proteins called the Major Histocompatibility Complex (MHC) to trigger their adaptive immune systems. There tend to be two kinds of MHC proteins, conveniently called MHC I and MHC II:

    • MHC I is found on almost every cell in the body – they act like a snapshot X-ray of sorts for your cells, revealing what’s going on inside. If a cell has been infected by an intracellular pathogen like a virus, the MHC I complexes on the cell will reveal abnormal proteins (an abnormal snapshot X-ray), triggering an immune response to destroy the cell.
    • MHC II is found only on special cells called antigen-presenting cells. These cells are like advance scouts for your immune system – they roam your body searching for signs of infection. When they find it, they reveal these telltale abnormal proteins to the immune system, triggering an immune response to clear the infection.

The genome of the Atlantic cod, however, seemed to be completely lacking in genes for MHC II! In fact, when the researchers used computational methods to see how the Atlantic cod’s genome aligned with that of another fish species, the stickleback (Gasterosteus aculeatus), it looked as if someone had simply cut the MHC II genes (highlighted in yellow) out! (see Supplemental Figure 17 below)

[Supplemental Figure 17 from paper]

Yet, despite not having MHC II, Atlantic cod do not appear to suffer any serious susceptibility to disease. How could this be if they’re lacking one entire arm of their disease detection system? One possible answer: they seem to have compensated for their lack of MHC II by beefing up on MHC I! By looking at the RNA (the “working copy” of the DNA that is edited and used to create proteins) from Atlantic cod, the researchers were able to see a diverse range of MHC I complexes, which you can see in how wide the “family tree” of MHCs in Atlantic cod is relative to other species (see Figure 3B below).

[Figure 3B from paper]

Of course, that’s just a working theory – the researchers also found evidence of other adaptations on the part of the Atlantic cod. The key question the authors don’t answer, presumably because they are fish genetics guys rather than fish immunologists, is how these adaptations work. Is it really an increase in MHC I diversity that helps the Atlantic cod compensate for the lack of MHC II? That sort of functional analysis, rather than a purely genetic one, would be very interesting to see.

The paper is definitely a testament to the interesting sorts of questions and investigations that genetic analysis can enable, and it gives a nice, tantalizing clue as to how alternative immune systems might work.

(Image credit – Atlantic Cod) (All figures from paper)

Paper: Star et al., “The Genome Sequence of Atlantic Cod Reveals a Unique Immune System.” Nature (Aug 2011). doi:10.1038/nature10342


Singapore to Combat Dengue with Social Media

(Cross posted to Bench Press)

Singapore is a fascinating country – despite the lack of what most in the West would recognize as democratic freedom, it consistently ranks well in terms of lack of corruption and a high and growing standard of living for its people.

It is also one of the boldest when it comes to instituting policies and reforms: it was the first country to implement a congestion tax to help manage traffic. Unlike most countries, Singapore is open to competition and investment from foreigners in strategic areas like telecommunications, power generation, and financial services. Singapore has also been extremely active in attempting to build up its capabilities as a center for life sciences excellence.

So it shouldn’t surprise me that it is among the first countries to actively utilize social media applications like Facebook and Twitter to help deal with a public health risk like dengue fever (from The Jakarta Globe):

The city-state’s National Environment Agency (NEA) plans to roll out … providing information on the latest dengue clusters or areas that have been earmarked as high-risk – on these new media platforms within the next three months … Through Facebook and Twitter, the public will also be able to post feedback or provide tip-offs. For example, if Singaporeans notice an increase in the number of mosquitoes in your neighbourhood or find potential breeding sites, they can alert NEA officers by posting on the agency’s Facebook page or tweeting the NEA account. “We need to put more information out in the public space, so more people can be informed and take action,” said Derek Ho, director of the environmental health department at NEA. “Leveraging on new media channels such as Facebook and Twitter is a good way to do that.”

A refreshing understanding of the uses of social media by a government agency – even more interesting, though, is the work Singapore’s NEA is doing to build image recognition capabilities into smartphone apps like the NEA’s iPhone app, to help field workers (and potentially the public) track and identify mosquitoes and mosquito larvae:

The NEA is also in the process of developing a mosquito-recognition program that can identify the species of mosquito from a photograph of its pupae or larvae. With such software, and with the help of a mini microscope that attaches to the camera on a personal digital assistant or cellphone, NEA officers will be able to take photographs of larvae or pupae found in mosquito-breeding sites and instantly find out if they belong to the Aedes species, which spreads dengue … When it is ready, the agency hopes to be able to integrate it with the NEA iPhone application, so that the public or grassroots members conducting checks around the neighbourhood can use the technology as well.
Early identification will allow the NEA to act more swiftly to curb the spread of dengue in potential high-risk zones.

Very cool demonstration of the power of smartphones and of a government that is motivated to try out new technologies to tackle serious problems.


Farewell, Mr. Jobs

Google acquiring Motorola and HP dropping its PC/tablet hardware businesses not enough news for you? Late last Thursday, another jawdropper hit the tech industry when Apple announced that visionary CEO Steve Jobs was stepping down.

The tech industry is now awash with commentary about Jobs’ legendary leadership, which was not only instrumental in the creation of Apple as a company, but took it from a distant laggard in the computing space to the pioneering technology powerhouse it is today. This is particularly impressive given the degree to which Apple’s leadership structure (warning: full article behind paywall, but well worth a read if you are interested in how corporate organizations work) concentrates authority in the hands of the CEO – meaning, yeah, Apple’s success really *is* because of Steve, whereas in a lot of other companies it’s only partially due to the CEO.

While I’ve definitely picked a side in the Google vs Apple war, even this “fandroid” has to admit a certain sadness that Jobs is leaving. A very small part of it comes from the fact that I’m an Apple shareholder and find it near impossible to imagine anyone with the same vision and execution skills replacing him. A much larger part comes from the fact that Jobs played a huge role in shaping the technology industry for the better.

In any event, I salute you, Mr. Jobs, for a remarkable career and an incredible legacy.

DISCLAIMER: I own Apple shares

(Image credit)


HP 2.0

The technology ecosystem just won’t give me a break – who would’ve thought that in the same week Google announced its bold acquisition of Motorola Mobility, HP would also announce a radical restructuring of its business?

For those of you not up to speed, last Friday, HP’s new CEO Leo Apotheker announced that HP would:

    • Spend over $10 billion to acquire British software company Autonomy Corp
    • Shut down its recently-acquired-from-Palm-for-$1-billion WebOS hardware business (no more tablets or phones)
    • Contemplate spinning out its PC business

Radical change is not unheard of for long-standing technology stalwarts like HP. The “original Hewlett-Packard”, focused on test and measurement devices like oscilloscopes and precision electronic components, was spun out in 1999 as Agilent in one of the tech industry’s largest IPOs. HP acquired Compaq in 2001 for a whopping $25 billion to bolster its PC business. To build an IT services business, it acquired EDS in 2008 at a massive $14 billion valuation. To compete with Cisco in networking gear, it acquired 3Com for almost $3 billion. And, to compete in the enterprise storage space, it bought 3PAR for $2 billion after a furious bidding war with Dell. But, while this sort of change might not be unheard of, the billion-dollar question remains: is this a good thing for HP and its shareholders? My conclusion: in the long run, this is a good thing for HP. But how they announced it was very poor form.

Why good for the long run?

    • HP needed focus. With the exception of the Agilent spinoff and the Compaq acquisition, all the “bold strategic changes” that I mentioned happened in the span of less than 3 years (EDS: 2008, 3Com: 2009, Palm and 3PAR: 2010). Success in the technology industry requires you to disrupt existing spaces (and avoid being disrupted), play nicely with the ecosystem, and consistently overachieve. It’s hard to do that when you are simultaneously tackling a lot of difficult challenges. At the end of the day, for HP to continue to thrive, it needs to focus and not always chase the technology “flavor of the week.”
    • HP had a big hill to climb to be a leading consumer hardware play. Despite being a very slick product, WebOS was losing the smartphone/tablet operating system war to Google’s Android and Apple’s iOS. Similarly, in its PC business, with the exception of channel reach and scale, HP had no real advantage over Apple, Dell, or rapidly growing low-cost Asian competitors. It’s fair to say that HP might have been able to change that with time. After all, HP had barely had time to announce one generation of new products since Palm was acquired, let alone time for the core PC division to work with the engineers and user experience folks at Palm to cook up something new. But, suffice to say, getting to mass-market success would have required significant investment and time. Contrast that with…
    • HP as a leading enterprise IT play is a more natural fit. With its strong server and software businesses and recent acquisitions of EDS, 3Com, and 3PAR, HP already has a broad set of assets that it could combine and sell as “solutions” to enterprises. Granted, there is significant room for improvement in how HP does all of this – these products and services have not been integrated very well, and HP lacks both the enormous success that Dell has achieved in new cloud computing architectures and the services success that IBM has, to name two uphill battles HP will have to face – but it feels, at least to me, like a challenge that HP is already well-equipped to tackle with its existing employees, engineering, and assets.
    • Moreover, for better or for worse, HP’s board chose a former executive of enterprise software company SAP to be CEO. What did they expect – that he would miraculously turn HP’s consumer businesses around? I don’t know what happened behind closed doors, so I can’t say how seriously Apotheker considered pushing further down the consumer line, but I don’t think anyone should be surprised that he’s trying to build a complete enterprise IT stack akin to what IBM/Microsoft/Oracle are trying to do.

With all that said, I’m still quite appalled by how this was announced. First, after basically saying that HP didn’t have the resources to invest in its consumer hardware businesses, Apotheker turns around and pays a huge amount for Autonomy (at a valuation of ten times its sales – by most objective measures, a fairly high price). I don’t think HP’s investors, or the employees and business partners of HP’s soon-to-be-cast-aside divisions, will find the irony there particularly amusing.

Adding to this is the horrible manner in which Apotheker announced his plans. Usually, this sort of announcement only happens after the CEO has gone out of his way to boost the price he can command for the business units he intends to get rid of. In this case, not only are there no clear buyers lined up for the divisions HP plans to dump, but the prices those units could command will be hurt by the fact that their futures are in doubt. Rather than reassure employees, potential buyers, customers, and partners that existing business relationships and efforts will be continued, Apotheker has left them with little reason to be confident. This is appalling behavior from someone whose main job is to be a steward of shareholder value, as he could’ve easily communicated the same information without tanking his ability to sell those businesses off at a good valuation.

In any event, as I said in my Googorola post, we definitely live in interesting times :-).


Googorola

I would lose my tech commentator license if I didn’t weigh in on the news of Google’s acquisition of Motorola Mobility. So, without further ado, four quick thoughts on “Googorola”:

(Googorola logo)

  • This is a refreshingly bold move by Google. Frankly, I had expected Google to continue its fairly whiny, defensive path on this for some time as they and the rest of the Android ecosystem cobbled together a solution to the horrendous intellectual property situation they found themselves in. After all, while Android was strategically important to Google as a means of preventing another operating system (like Windows or iOS) from weakening their great influence on the mobile internet, one could argue that most of that strategic value came from just making Android available and keeping it updated. It wasn’t immediately obvious to me that it would make dollars-and-cents sense for Google to spend a lot of cash fighting battles that, frankly, Samsung, HTC, LG, and the others should have been prepared to fight on their own. That Google did this at all sends a powerful message to the ecosystem that the success of Android is critical to Google and that it will even go so far as to engage in “unnatural acts” (Google getting into the hardware business!?) to make it so.
  • It will be interesting to observe Google’s IP strategy going forward. Although it’s not perfect, Google has taken a fairly pro-open-source stance when it comes to intellectual property. Case in point: after spending over $100M on video codec maker On2, Google moved to make On2’s VP8/WebM codec freely available for others to integrate as an alternative to the license-laden H.264 codec. Sadly, because of the importance of building up a patent armory in this business, I doubt Google will do something similar here – instead, Google will likely hold on to its patent arsenal and either use it as a legal deterrent to Microsoft/Apple/Nokia or find a smart way to license the patents to key partners to help bolster their legal cases. It will be interesting to see how Google changes its intellectual property practices and strategy now that it’s gone through this. I suspect we will see a shift away from the openness that so many of us loved about Google.
  • I don’t put much stock in speculation that Motorola’s hardware business will just be spun out again. This is true for a number of reasons:
    1. I’m unaware of any such precedent where a large company acquires another large one, strips it of its valuable intellectual property, and then spins it out. Not only do I think regulators/antitrust guys would not look too kindly on such a deal, but I think Google would have a miserable time trying to convince new investors/buyers that a company stripped of its most valuable assets could stand on its own.
    2. Having the Motorola business gives Google additional tools to build and influence the ecosystem. Other than the Google-designed Nexus devices and the requirements Google imposes on its manufacturing partners to support the Android Market, Google actually has fairly little influence over the ecosystem and the specific product decisions that OEMs like Samsung and HTC make. Otherwise, we wouldn’t see so many custom UI layers and so much bloatware bundled on new Android phones. Having Motorola in-house gives Google valuable hardware chops that it probably did not have before (useful for building out new phones/tablets, new use cases like the Atrix’s not-very-successful-but-still-promising webtop, its accessory development kit strategy, and Android@Home), and lets it always have a “backup option” to release a new service/feature if the other OEMs are not being cooperative.
    3. Motorola’s strong set-top box business is not to be underestimated. It’s pretty commonly known that GoogleTV did not go the way Google had hoped. While it was a bold vision and a true technical feat, I think this is another case of Google not focusing on the product management side of things. Post-acquisition, however, Google might be able to leverage Motorola’s expertise in working with cable companies and content providers to create a GoogleTV that is more attuned to the interests/needs of both consumers and the cable/content guys. And, even if that is not in the cards, Motorola may be a powerful ally in helping to bring more internet video content, like the kind found on YouTube, to more TVs and devices.
  • There is a huge risk of Google mismanaging the ecosystem with this move. Although some of Google’s biggest partners have been quoted as being supportive of this deal, that could simply be politeness talking, or relief that someone will be able to protect them from Apple/Microsoft. Google has intelligently come out publicly to state that it intends to run Motorola as a separate business and doesn’t plan on making any changes to its Nexus phone strategy. But, while Google may believe that going into this (and I think they do), and while I believe that Android’s success will come from building a true horizontal platform rather than imitating Apple’s vertical model, the reality of the situation is that you can’t really maintain something as an independent business completely free of influence, and the temptation will always be there to play favorites. My hope is that Google institutes some very real firewalls and processes to maintain that independence. As a “fandroid” and as someone who is a big believer in the big opportunities enabled by Android, I think the real potential lies in going beyond just what one company can do, even if it’s Google.

Regardless of what happens, we definitely live in interesting times :-).

(Image credits)


Standards Have No Standards

Many forms of technology require standards to work. As a result, it is in the best interest of all parties in the technology ecosystem to participate in standards bodies to ensure interoperability.

The two main problems with getting standards to work can be summed up, as all good things in technology can be, in the form of webcomics. 🙂

Problem #1, from XKCD: people/companies/organizations keep creating more standards.

(XKCD: “Standards”)

The cartoon takes the more benevolent view of how standards proliferate; the more cynical view is that individuals/corporations recognize that control or influence over an industry standard can give them significant power in the technology ecosystem. I think both the benevolent and the cynical views are always at play – and the result is the continual creation of “bigger and badder” standards which are meant to replace, but oftentimes fail to completely supplant, existing ones. Case in point: as someone who has spent a fair amount of time looking at technologies to enable greater intelligence/network connectivity in new types of devices (think TVs, smart meters, appliances, thermostats, etc.), I’m still puzzled as to why we have so many wireless communication standards and protocols for achieving it (Bluetooth, Zigbee, Z-Wave, WiFi, DASH7, 6LoWPAN, etc.).

Problem #2: standards aren’t purely technical undertakings – they’re heavily shaped by the preferences of the bodies and companies which participate in formulating them and, like the US’s “wonderful” legislative process, involve mashing together a large number of preferences, some of which are not easily compatible with one another. This can turn quite political and generate standards/working papers which are too difficult to support well (e.g., DLNA). Or, as Dilbert sums it up, these meetings are full of people who are instructed to do this:

(Dilbert strip)

Or this:

(Dilbert strip)

Our one hope is that the industry has enough people/companies who are more invested in the future of the technology industry than in taking unnecessarily cheap shots at one another… It’s a wonder we have functioning standards at all, isn’t it?


It’s not just SNPs

Another month, another paper (although this one is almost two weeks overdue – sorry!)

In my life in venture capital, I’ve started looking more seriously at new bioinformatics technologies, so I decided to dig into a topic that is right up that alley. This month’s paper, from Nature Biotechnology, covers the use of next-generation DNA sequencing to look into something that had previously been extremely difficult to study with older sequencing technologies.

As the vast majority of human DNA is the same from person to person, one would expect that the areas of our genetic code which vary the most from person to person – locations commonly known as Single Nucleotide Polymorphisms, or SNPs – would be the biggest driver of the variation we see in the human race (at least the variation that we can attribute to genes). This paper from researchers at the Beijing Genomics Institute (now the world’s largest sequencing facility – yes, it’s in China) adds another dimension to this: it’s not just SNPs that make us different from one another. Humans also appear to have a wide range of individual-level variation in the “structure” of our DNA, what are called Structural Variations, or SVs.

Whereas SNPs represent changes at the individual DNA letter level (for instance, turning a C into a T), SVs are cases where DNA is moved (e.g., between chromosomes), repeated, inverted (i.e., large stretches of DNA reversed in sequence), or subject to deletions/insertions (i.e., where a stretch of DNA is removed from or inserted into the original code). Yes, at the end of the day, these are changes to the underlying genetic code, but because of their nature, they are more difficult to detect with “old school” sequencing technologies, which rely on starting at one position in the DNA and “reading” a stretch of DNA from that point onward. Take the example of a stretch of DNA that is moved: unless you start your “reading” right at one of the boundaries of where the DNA was moved to or from, you’d never know, as the DNA would read normally everywhere else, including in the middle of the moved fragment.
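
To see why a moved stretch of DNA is nearly invisible to read-based sequencing, here’s a minimal sketch (my own toy illustration, not from the paper): we move a segment within a made-up “genome” and then check which fixed-length reads from the variant still appear in the reference. Only the handful of reads spanning the new junctions fail to match.

```python
# Toy illustration (not from the paper): why short reads miss a moved segment.
reference = "ATGCGTACGTTAGCCGATCGGATCCTAGGCTAACGGTACGATCGATCGTAGGCTAAC"

# Simulate a structural variation: cut out a 10-base segment and reinsert it elsewhere.
segment = reference[10:20]
variant = reference[:10] + reference[20:40] + segment + reference[40:]

READ_LENGTH = 8
unmatched = []
for start in range(len(variant) - READ_LENGTH + 1):
    read = variant[start:start + READ_LENGTH]
    if read not in reference:  # naive stand-in for aligning a read to the reference
        unmatched.append((start, read))

# Reads falling entirely inside the moved segment match the reference perfectly;
# only the few reads spanning the new junctions ("breakpoints") betray the move.
print(f"{len(unmatched)} breakpoint-spanning reads out of {len(variant) - READ_LENGTH + 1}")
```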

What the researchers figured out is that new sequencing technologies let you tackle the problem of detecting SVs in a very different way. Instead of approaching each SV separately (trying to structure your reading strategy to catch each kind of modification), why not use the fact that so-called “next-generation sequencing” is far faster and cheaper, read and assemble an individual’s entire genome, and then look at the overall structure that way?
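
Here’s a hedged sketch of that assembly-comparison idea (again my own toy, not the authors’ pipeline – their method involves real genome assembly and alignment, which this glosses over entirely): once you have an assembled sequence, insertions and deletions simply fall out of a whole-sequence comparison against the reference. Python’s difflib stands in for a real whole-genome aligner.

```python
from difflib import SequenceMatcher

# Toy stand-in for the assembly-vs-reference comparison (not the authors' actual tools).
reference = "ATGCGTACGTTAGCCGATCGGATCCTAGGCTAACGGTACGATCG"
assembled = "ATGCGTACGTCGATCGGATCCTAGGCTAACGGTTTTTACGATCG"  # "TAGC" deleted, "TTTT" inserted

matcher = SequenceMatcher(None, reference, assembled, autojunk=False)
for op, r1, r2, a1, a2 in matcher.get_opcodes():
    if op == "delete":
        print(f"deletion: reference[{r1}:{r2}] = {reference[r1:r2]!r} missing from assembly")
    elif op == "insert":
        print(f"insertion: {assembled[a1:a2]!r} at reference position {r1}")
    elif op == "replace":
        print(f"possible rearrangement at reference[{r1}:{r2}]")
```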

(Figures 1b and 1c from the paper)

And that’s exactly what they did (see Figures 1b and 1c above). They applied their sequencing technologies to the genomes of an African individual (1c) and an Asian individual (1b) and compared them to some of the genomes we have on file. The circles above map out each individual’s chromosomes on the outermost ring. On the inside, the lines show spots where DNA was moved or copied from place to place. The blue histogram shows where all the insertions are located, and the red histogram does the same for deletions. All in all, there looks to be a ton of structural variation between individuals: the two individuals had 80,000–90,000 insertions, 50,000–60,000 deletions, 20–30 inversions, and 500–800 copies/moves.

The key question that the authors don’t answer (mainly because the paper was about explaining how they did this approach – which I heavily glossed over here, partly because I’m no expert – and how they know this approach is a valid one) is: what sort of effect do these structural variations have on us biologically? The authors did a little hand-waving to show, with the limited data that they have, that humans seem to have more rare structural variations than rare SNPs – in other words, that you and I are more likely to differ in our SVs than in our SNPs: a weak but intriguing argument that structural variations drive a lot of the gene-related individual variation between people. But that remains to be validated.

Suffice to say, this was an interesting technique with a very cool “million dollar figure” and I’m looking forward to seeing further research in this field as well as new uses that researchers and doctors dig up for the new DNA sequencing technology that is coming our way.

(All figures from paper)

Paper: Li et al., “Structural Variation in Two Human Genomes Mapped at Single-Nucleotide Resolution by Whole Genome Assembly.” Nature Biotechnology 29 (Jul 2011) — doi:10.1038/nbt.1904


Searching for a Narrative

I love the webcomic XKCD. Not only is it incredibly nerdy, it’s surprisingly on-point in its take on reality. I found the comic below to be a good example of this:

(XKCD: “Sports”)

But whereas I find sports commentary somewhat plausible (because it’s about a specific person or small group of people you might be able to interrogate and make inferences about), I think the comic’s point applies especially to press coverage of the stock market.

Take the recent massive market downturn which occurred on Thursday, Aug 4th. Almost immediately, every press outlet had to have an explanation – people talked about fears of a Eurozone crisis, fears that the US and Chinese stimulus which had propped up global demand would vanish, fears that the US would be downgraded, and even talk that this was the media’s fault or the work of greedy banks using flawed computer systems.

The question that you never hear the press answer, but which may be more relevant than all these narratives: is it even possible to know? You can’t ask the market what it’s thinking the way you might ask a sports player or even a sports team, and it’s hard to run controlled experiments the way a scientist might. And, while the psychology of buyers and sellers certainly plays a big role, I think the simple truth is this: there is no real way to know, and it’s not only pointless but possibly counterproductive to try to explain the market’s movements. We’re all hardwired to want a reason that is insightful and reveals something – but trying to find reasons that aren’t necessarily there, or even possible to validate, pushes people into investing time and energy trying to control or understand things they can’t.

In my mind, it’s far better to take Warren Buffett’s approach: don’t waste your time on things you can’t predict, control, or understand; take what you can get (the price of a stock or an asset) and make a decision based on that. Who cares why someone is offering to sell you something for $100 that is worth $200 – just make the right choice.
