• Boa Constrictors Listen to Your Heart So They Know When You’re Dead

    Source: Paul Whitten

    For January I decided to blog a paper I heard about on the excellent Nature podcast, covering a deliciously simple and elegant experiment that tests a basic question: given how much time and effort boa constrictors (like the one above, photo by Paul Whitten) need to kill prey by squeezing them to death, how do they know when to stop squeezing?

    Hypothesizing that boa constrictors could sense the heartbeat of their prey, some enterprising researchers from Dickinson College decided to test the hypothesis by fitting dead rats with bulbs connected to water pumps (so that the researchers could simulate a heartbeat) and tracking how long and hard the boas would squeeze for:

    • rats without a “heartbeat” (white)
    • rats with a “heartbeat” for 10 min (gray)
    • rats with a continuous “heartbeat” (black)
    Source: Figure 2, Boback et al.

    The results are shown in figure 2 (to the right). The different colored bars show the different experimental groups (white: no heartbeat; gray: heartbeat for 10 min before stopping; black: continuous heartbeat). Figure 2a (top) shows how long the boas squeezed, whereas Figure 2b (bottom) shows the total “effort” exerted by the boas. As the chart makes obvious, the longer the simulated heartbeat went, the longer and harder the boas squeezed.

    Conclusion? I’ll let the paper speak for itself: “snakes use the heartbeat in their prey as a cue to modulate constriction effort and to decide when to release their prey.”

    Interestingly, the paper goes a step further for those of us who aren’t ecology experts and notes that attentiveness to heartbeat would probably be pretty irrelevant in the wild for small mammals (which, ironically, includes rats) and birds, which die pretty quickly once constricted. Where this kind of attentiveness to heart rate is useful is with reptilian prey (crocodiles, lizards, other snakes, etc.), which can survive on reduced oxygen for longer. From that observation, the researchers concluded that listening for a heartbeat probably evolved early in evolutionary history, at a time when the main prey for snakes were other reptiles rather than mammals and birds.

    In terms of where I’d go next after this, my main point of curiosity is whether boa constrictors are listening/feeling for any other signs of life (e.g. movement or breathing). Obviously, they’re sensitive to heart rate, but if an animal were given simulated breathing or movement, would that change their constricting behavior as well? After all, I’m sure the creative folks who made an artificial water-pump heart can find ways to build an artificial diaphragm and limb muscles… right?

    Paper: Boback et al., “Snake modulates constriction in response to prey’s heartbeat.” Biol Lett. 19 Dec 2011. doi: 10.1098/rsbl.2011.1105

    Check out my other academic paper walkthroughs/summaries

  • Mosquitoes are Drawn to Your Skin Bacteria

    This month’s paper (from open access journal PLoS ONE) is yet again about the impact on our health of the bacteria which have decided to call our bodies home. But, instead of the bacteria living in our gut, this month is about the bacteria which live on our skin.

    It’s been known that the bacteria living on our skin help give us our particular odors. So, the researchers wondered whether the mosquitoes responsible for passing along malaria (Anopheles) were more or less drawn to different individuals based on the scent that our skin-borne bacteria impart on us. (Also, for the record, before you freak out about bacteria on your skin: like the bacteria in your gut, the bacteria on your skin are natural and play a key role in keeping your skin healthy.)

    Looking at 48 individuals, they noticed a huge variation in attractiveness to Anopheles mosquitoes (measured by seeing how strongly mosquitoes prefer to fly toward a chamber with a particular individual’s skin extract versus a control), which they were able to trace to two things. The first is the amount of bacteria on your skin. As shown in Figure 2 below, the more bacteria you have on your skin (the higher your “log bacterial density”), the more attractive you seem to be to mosquitoes (the higher your “mean relative attractiveness”).

    Source: Figure 2, Verhulst et al

    The second thing they noticed was that the type of bacteria also seemed to be correlated with attractiveness to mosquitoes. Using DNA sequencing technology, they were able to get a mini-census of the sorts of bacteria present on the skin of the different subjects. Sadly, they didn’t show any pretty figures for the analysis they conducted on two common types of bacteria (Staphylococcus and Pseudomonas), but, to quote from the paper:

    The abundance of Staphylococcus spp. was 2.62 times higher in the HA [Highly Attractive to mosquitoes] group than in the PA [Poorly Attractive to mosquitoes] group and the abundance of Pseudomonas spp. 3.11 times higher in the PA group than in the HA group.

    Using further genetic analyses, they were also able to identify a number of other types of bacteria whose abundance correlated with one group or the other.

    So, what did I think? While there’s a lot of interesting data here, I think the story could’ve been tighter. First and foremost, for obvious reasons, correlation does not mean causation. This was not a true controlled experiment – we don’t know for a fact whether more/specific types of bacteria cause mosquitoes to be drawn to someone, or whether something else explains both the amount/type of bacteria and the attractiveness of an individual’s skin scent to a mosquito. Secondly, Figure 2 leaves much to be desired in terms of establishing a strong trendline. Yes, if I squint (and ignore their very leading trendline) I can see a positive correlation – but truth be told, the scatterplot looks like a giant mess, especially if you include the red squares that go with “Not HA or PA”. For a future study, it’d be great if they could get around this and show stronger causation with direct experimentation (i.e. extracting the odorants from Staphylococcus and/or Pseudomonas and adding them to a “clean” skin sample, etc.).

    With that said, I have to applaud the researchers for tackling a fascinating topic from a very different angle. Coverage of malaria is usually focused on how to directly kill or impede the parasite (Plasmodium falciparum). This is the first treatment I’ve seen of the “ecology” of malaria – specifically the ecology of the bacteria on your skin! While the authors don’t promise a “cure for malaria”, you can tell they are excited about what they’ve found and about the potential to find ways other than killing parasites/mosquitoes to help deal with malaria, and I look forward to seeing the other ways our skin bacteria impact our lives.

    Paper: Verhulst et al. “Composition of Human Skin Microbiota Affects Attractiveness to Malaria Mosquitoes.” PLoS ONE 6(12). 17 Nov 2011. doi:10.1371/journal.pone.0028991

    Check out my other academic paper walkthroughs/summaries

  • Fat Flora

    Source: Healthy Soul

    November’s paper was published in Nature in 2006, and covers a topic I’ve become increasingly interested in: the impact of the bacteria that have colonized our bodies on our health (something I’ve blogged about here and here).

    The idea that our bodies are, in some ways, more bacteria than human (there are 10x more gut bacteria, or flora, than human cells in our bodies) and that those bacteria can play a key role in our health is not only mind-blowing, it opens up another potential area for medical/life sciences research and future medicines/treatments.

    In the paper, a genetics team from Washington University in St. Louis explored a very basic question: are the gut bacteria from obese individuals different from those of non-obese individuals? To study the question, they performed two types of analyses on a set of mice with a genetic defect leading to an inability to “feel full” (and hence likely to become obese) and genetically similar mice lacking that defect (the so-called “wild type” control).

    The first was a series of genetic experiments comparing the bacteria found within the gut of obese mice with those from the gut of “wild-type” mice (this sort of comparison is something the field calls metagenomics). In doing so, the researchers noticed a number of key differences in the “genetic fingerprint” of the two sets of gut bacteria, especially in the genes involved in metabolism.

    Source: Figure 3, Turnbaugh et al.

    But, what did that mean to the overall health of the animal? To answer that question, the researchers did a number of experiments, two of which I will talk about below. First, they did a very simple chemical analysis (see figure 3b to the left) comparing the “leftover energy” in the waste (aka poop) of the obese mice to the waste of wild-type mice (and, yes, all of this was controlled for the amount of waste/poop). Lo and behold, the obese mice (the white bar) seemed to have gut bacteria which were significantly better at pulling calories out of the food, leaving less “leftover energy”.

    Source: Figure 3, Turnbaugh et al.

    While an interesting result, especially when thinking about some of the causes and effects of obesity, a skeptic might look at that data and say that it’s inconclusive about the role of gut bacteria in obesity – after all, obese mice could have all sorts of other changes which make them more efficient at pulling energy out of food. To address that, the researchers did a very elegant experiment involving fecal transplants: that’s right, colonizing one mouse with the bacteria from another (by transferring poop). The figure to the right (figure 3c) shows the results. After two weeks, despite starting out at about the same weight and eating similar amounts of the same food, wild-type mice that received bacteria from other wild-type mice showed an increase in body fat of about 27%, whereas wild-type mice that received bacteria from the obese mice showed an increase of about 47%! Clearly, the gut bacteria in obese mice are playing a key role in calorie uptake!

    In terms of areas of improvement, my main complaint about this study is just that it doesn’t go far enough. The paper never gets too deep into what exactly the bacteria in each sample were, and we never really get a sense of the real variation: how much do the bacteria vary from mouse to mouse? Are they completely different bacteria? The same bacteria but in different numbers? The same bacteria, each functioning differently? Do two obese mice have the same bacteria? What about a mouse that isn’t quite obese but not quite wild-type either? Furthermore, the paper doesn’t show us what happens if an obese mouse has its bacteria replaced with the bacteria from a wild-type mouse. These are all interesting questions that would really help researchers and doctors understand what is happening.

    But, despite all of that, this was a very interesting finding and has major implications for doctors and researchers in thinking about how our complicated flora impact and are impacted by our health.

    Paper: Turnbaugh et al., “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature (444). 21/28 Dec 2006. doi:10.1038/nature05414

    Check out my other academic paper walkthroughs/summaries

  • Antibody-omics

    I’m pretty late with the paper of the month, so here we go.

    “Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore – that’s so last century – you study proteomics now. And who studies metabolism? It’s all about metabolomics. There’s even a blog covering this with the semi-irreverent name “Omics! Omics!”.

    This month’s paper, from researchers at the NIH and published in Science, caught my eye because it was the first time I had ever encountered the term “antibodyome”. As some of you know, antibodies are the “smart missiles” of your immune system – they are built to recognize and attack only one specific target (i.e. a particular protein on a bacterium/virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies as research tools (i.e. using fluorescent antibodies to label specific things) and as therapies (i.e. using antibodies against proteins associated with cancer as anti-cancer drugs).

    How the immune system does this is a fascinating story in and of itself. The basic idea is that, in a process called V(D)J recombination, your immune system’s B cells mix, match, and scramble certain pieces of your genetic code to produce a wide range of antibodies capable of hitting potentially every structure they could conceivably see. And, once they find something which “kind of sticks”, they undergo a process called affinity maturation, introducing all sorts of mutations in the hopes of creating an even better antibody.
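    To get a feel for the scale of that mix-and-match, here’s a rough back-of-the-envelope calculation (the segment counts below are approximate textbook figures for human antibody genes, not numbers from this paper):

```python
# Rough combinatorial sketch of V(D)J diversity. Segment counts are
# approximate textbook figures, used purely for illustration.

HEAVY_V, HEAVY_D, HEAVY_J = 40, 25, 6   # approx. functional heavy-chain segments
LIGHT_V, LIGHT_J = 70, 9                # approx. light-chain segments (kappa + lambda)

# Each antibody picks one segment of each type, so the baseline
# diversity is just the product of the segment counts:
heavy_combos = HEAVY_V * HEAVY_D * HEAVY_J
light_combos = LIGHT_V * LIGHT_J
pairings = heavy_combos * light_combos  # each heavy chain can pair with each light chain

print(f"Heavy-chain combinations: {heavy_combos:,}")
print(f"Light-chain combinations: {light_combos:,}")
print(f"Heavy/light pairings:     {pairings:,}")
```

    Even before the junctional insertions/deletions and the affinity maturation described above, the simple combinatorics already yield millions of distinct antibodies; those extra processes push the real repertoire orders of magnitude beyond this floor.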

    Which brings us to the paper I picked – the researchers analyzed a couple of particularly effective antibodies targeted at HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the virus, but when they took a closer look at the 3D structures and the B-cell genetic code which produced them, they found that the antibodies were quite different from one another (see Figure 3C below).

    Source: Figure 3C, Wu et al.

    What’s more, not only were they fairly distinct from one another, they each showed *significant* affinity maturation – while a typical antibody has 5-15% of its underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient – their “antibodyome” (in the same way that a patient’s genome would be all of their genes) – and, along with data from other patients, they were able to construct a genetic “family tree” for these antibodies (see Figure 6C below).

    Source: Figure 6, Wu et al.

    The analysis shows that many of the antibodies were derived from the same initial genetic VDJ “mix-and-match”, but that afterwards quite a number of changes were made to that code, arriving at a situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

    While I wish the paper had probed deeper with actual experimentation to take this analysis further (i.e. artificially using this method to create other antibodies with similar behavior), it goes a long way toward establishing an early picture of what “antibodyomics” is. Rather than study the total impact of an immune response, or just the immune capabilities of one particular B cell/antibody, this sort of genetic approach lets researchers get a detailed yet comprehensive look at where the body’s antibodies are coming from. Hopefully, longer term, this also turns into a way for researchers to make better vaccines.

    Paper: Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532

    Check out my other academic paper walkthroughs/summaries

  • The Marketing Glory of NVIDIA’s Codenames

    While code names are not rare in the corporate world, more often than not, the names tend to be unimaginative. NVIDIA’s code names, however, are pure marketing glory.

    Take NVIDIA’s high performance computing product roadmap (below) – these are products that use the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers. How does NVIDIA describe its future roadmap? It uses the names of famous scientists: Tesla (the great Serbian-American electrical engineer who helped bring us AC power), Fermi (the physicist who built the first nuclear reactor), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena are all linked).

    Source: Rage3D

    Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 

    But, what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, NVIDIA uses, well, comic book characters. For release at the end of this year? Kal-El – or, for the uninitiated, that’s the Kryptonian name for Superman. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the name of the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

    Source: NVIDIA

    Everybody wants a little Iron Man in their tablet.

  • Web vs Native

    When Steve Jobs first launched the iPhone in 2007, Apple’s bet on where the smartphone application market would move was in the direction of web applications. The reasons for this are obvious: people already know how to build web pages and applications, and the web simplifies application delivery.

    Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to native applications custom-built (by definition) for the iPhone’s operating system and hardware. While I suspect part of the reason was to lock in developers, the main reason was almost certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was primitive compared to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies which would deliver the sort of performance and functionality Apple wanted.

    But, that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere’s obsession with hyping up a rivalry between “native vs HTML5 app development”, and it raises the question: will the future of application development be HTML5 applications or native ones?

    There are a lot of “moving parts” in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance and the new capabilities that HTML5 brings – like offline storage, a canvas for direct graphics manipulation, and tools to access the file system – mean, at least to this tech blogger, that “HTML5 applications” are not distinct from native applications at all; they are simply native applications that you access through the internet. It’s not a different technology vector – it’s just a different form of delivery.

    Critics of this idea may point out that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus the two will always be distinct. And, as of today, they are correct. However, this discounts a few things:

    • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
    • History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low-level machine language. But this was a nightmare – it was difficult to do simple things like showing text on a screen, and the code only worked on specific chips, operating systems, and hardware configurations. Languages like C helped abstract a lot of that away, and, continuing the trend toward portability and abstraction, the mobile/web developers of today work with tools (Python, Objective-C, Ruby, Java, Javascript, etc.) which make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of “HTML5”.
    • Huge platform economic advantages. There are three big advantages today to HTML5 development over “traditional” native app development. The first is the ability to have essentially the same application run on any device with a browser. Granted, there are performance and user-experience issues with this approach, but when you’re a startup, or even a corporate project with limited resources, being able to get wide distribution for early products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android carry, where distribution is controlled and “taxed” (30% to Apple/Google on an app download, and a 30% cut of digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-/cloud-based application allows even “dumb devices” to do amazing things by outsourcing some of the work to another system. The combination of these three makes it easier to build new applications and services and make money from them – which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”
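    To put rough numbers on that 30% “tax”, here’s a quick back-of-the-envelope comparison. Every figure below (price, unit count, the ~3% payment-processing fee for direct web sales) is an illustrative assumption, not data from any real app:

```python
# Back-of-the-envelope comparison of developer net revenue per sale:
# a native app store taking a 30% cut vs. a web app paying only an
# assumed ~3% payment-processing fee. All numbers are illustrative.

def net_revenue(price, fee_rate, units):
    """Revenue kept by the developer after the platform's cut."""
    return price * (1 - fee_rate) * units

PRICE = 4.99      # hypothetical app price
UNITS = 100_000   # hypothetical units sold

store = net_revenue(PRICE, 0.30, UNITS)  # 30% app-store cut
web = net_revenue(PRICE, 0.03, UNITS)    # ~3% card-processing fee

print(f"App store net: ${store:,.0f}")
print(f"Web net:       ${web:,.0f}")
print(f"Difference:    ${web - store:,.0f}")
```

    Under these made-up but plausible assumptions, the same 100,000 sales net well over $100K more on the web, which is the kind of gap that makes a move like Amazon’s HTML5 Kindle play easy to understand.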

    Given Google’s strategic interest in the web as an open development platform, it’s no surprise that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let developers achieve “native performance” within the browser, they’ve built an entire operating system centered on the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store, to help with application discovery.

    While it remains to be seen whether any of these initiatives will end up successful, this is definitely a compelling view of how the technology ecosystem might evolve, and, putting on my forward-thinking cap, I would not be surprised if:

    1. The major operating systems become more Chrome OS-like over time. Mac OS’s Dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
    2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, a ton of attention was focused on building Java accelerators in hardware/chips and software platforms whose main function was to run Java. While Java did not take over the world the way its supporters had hoped, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance – maybe even HTML5-optimized chips/accelerators, additional Chrome OS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications.
    3. Web application discovery will become far more important. The one big weakness of HTML5 as it stands today is application discovery. It’s still far easier to discover a native mobile app using the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as the platform matures and the platform economics shift, new application stores/recommendation engines/syndication platforms will become increasingly critical.

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • Standards Have No Standards

    Many forms of technology require standards to work. As a result, it is in the best interest of all parties in the technology ecosystem to participate in standards bodies to ensure interoperability.

    The two main problems with getting standards working can be summed up, as all good things in technology can be, in the form of webcomics.

    Problem #1, from XKCD: people/companies/organizations keep creating more standards.

    Source: XKCD

    The cartoon takes the more benevolent view of how standards proliferate; the more cynical view is that individuals/corporations recognize that control or influence over an industry standard can give them significant power in the technology ecosystem. I think both the benevolent and the cynical forces are always at play – but the result is the continual creation of “bigger and badder” standards which are meant to replace, but oftentimes fail to completely supplant, existing ones. Case in point: as someone who has spent a fair amount of time looking at technologies to enable greater intelligence/network connectivity in new types of devices (think TVs, smart meters, appliances, thermostats, etc.), I’m still puzzled as to why we have so many wireless communication standards and protocols for achieving it (Bluetooth, Zigbee, Z-Wave, WiFi, DASH7, 6LoWPAN, etc.).

    Problem #2: standards aren’t purely technical undertakings – they’re heavily shaped by the preferences of the bodies and companies which participate in formulating them, and, like the US’s “wonderful” legislative process, the result comes from mashing together a large number of preferences, some of which aren’t necessarily compatible with one another. This can turn quite political and generate standards/working papers which are too difficult to support well (i.e. like DLNA). Or, as Dilbert sums it up, these meetings are full of people who are instructed to do this:

    Source: Dilbert

    Or this:

    Source: Dilbert

    Our one hope is that the industry has enough people/companies who are more vested in the future of the technology industry than in taking unnecessarily cheap shots at one another… It’s a wonder we have functioning standards at all, isn’t it?

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • The “Strangest Biotech Company of All” Issues Their Annual Report as a Comic Book

    This seems almost made for me: I’m into comic books. I do my own “corporate style” annual and quarterly reports to track how my finances and goals are going. And, I follow the biopharma industry.

    Source: United Therapeutics 2010 Annual Report

    So, when I found out that a biotech company issued its latest annual report in the form of a comic book, I knew I had to talk about it!

    The art style is not bad at all, and the bulk of the comic is told from the first-person perspective of Martin Auster, head of business development at the company (that’s Doctor Auster to you, pal!). We get an interesting look at Auster’s life: how he was a medical student who didn’t really want to do a residency, and how and why he ultimately joined the company.

    Source: United Therapeutics 2010 Annual Report
    Source: United Therapeutics 2010 Annual Report
    Source: United Therapeutics 2010 Annual Report

    And, of course, what annual report would be complete without some financial charts – and yes, this particular chart was intended to be read with 3D glasses (which were apparently shipped with paper copies of the report):

    Source: United Therapeutics 2010 Annual Report

    Interestingly, the company in question – United Therapeutics – is not a tiny company either: it’s worth roughly $3 billion (as of this writing) and is also somewhat renowned for its more unusual practices (meetings have occurred in the virtual world Second Life, and employees are all called “Unitherians”) as well as for its brilliant and eccentric founder, Dr. Martine Rothblatt. Rothblatt is a very accomplished modern-day polymath:

    • She was an early pioneer in communication satellite law
    • She helped launch a number of communication satellite technologies and companies
    • She founded and was CEO of Geostar Corporation, an early GPS satellite company
    • She founded and was CEO of Sirius Satellite Radio
    • She led the International Bar Association’s efforts to draft a Universal Declaration on the Human Genome and Human Rights
    • She is a pre-eminent proponent for xenotransplantation
    • She is also one of the most vocal advocates of transgenderism and transgender rights, having been born as Martin Rothblatt (Howard Stern even referred to her as the “Martine Luther Queen” of the movement)
    • She is a major proponent of the interesting philosophy that one might achieve technological immortality by digitizing oneself (and has created an interesting robot version of her wife, Bina)
    • She started United Therapeutics because her daughter was diagnosed with pulmonary arterial hypertension, a fatal condition for which, at the time of diagnosis, there was no effective treatment

    You’ve got to have a lot of love and respect for a company that has not only delivered an impressive financial outcome ($600 million in sales a year and a $3 billion market cap is not bad!) but also maintains what looks like a very fun and unique culture (in no small part, I’m sure, because of its CEO).

  • The Goal is Not Profitability

    I’ve blogged before about how the economics of the venture industry affect how venture capitalists evaluate potential investments, the main conclusion of which is that VCs are really only interested in companies that could potentially IPO or sell for at least several hundred million dollars.

    One variation on that line of logic which I think startups/entrepreneurs oftentimes fail to grasp is that profitability is not the number one goal.

    Now, don’t get me wrong. The reason for any business to exist is to ultimately make profit. And, all things being equal, investors certainly prefer more profitable companies to less/unprofitable ones. But, the truth of the matter is that things are rarely all equal and, at the end of the day, your venture capital investors aren’t necessarily looking for profit, they are looking for a large outcome.

    Before I get accused of being supportive of bubble companies (I’m not), let me explain what this seemingly crazy concept means in practice. First of all, short-term profitability can conflict with rapid growth. This will sound counter-intuitive, but it’s the very premise of venture capital investment. Think about it: Facebook could’ve tried much harder to make a profit in its early years by cutting salaries and not investing in R&D, but that would’ve killed its ability to grow quickly. Instead, it raised venture capital and ignored short-term profitability to build out the product and market aggressively. This might seem simplistic, but I oftentimes receive pitches/plans from entrepreneurs who boast that they can achieve profitability quickly, or that they don’t need to raise another round of investment because they will be making a profit soon, without giving any thought to what might happen to their growth rate if they ignored profitability for another quarter or year.

    Secondly, the promise of growth and future profitability can drive large outcomes. Pandora, Groupon, Enphase, Tesla, A123, and Solazyme are among some of the hottest venture-backed IPOs in recent memory, and do you know what else they all happen to share? They are very unprofitable and, to the best of my knowledge, have not yet had a single profitable year. However, the investment community has strong faith in the ability of these businesses to continue to grow rapidly and, eventually, deliver profitability. Whether or not that faith is well placed is another question (and I have my doubts about some of the companies on that list), but as these examples illustrate, you don’t necessarily need to be profitable to get a large venture-sized outcome.

    Of course, it’d be a mistake to take this logic and assume that you never need to achieve or think about profitability. After all, a company that is bleeding cash unnecessarily is not a good company by any definition, regardless of whether or not the person evaluating it is in venture capital. Furthermore, while the public market may forgive Pandora’s and Groupon’s money-losing today, there’s no guarantee that it will be so forgiving of another company’s, or even of Pandora’s/Groupon’s, a few months from now.

    But what I am saying is that entrepreneurs need to be more thoughtful when approaching a venture investor with a plan to achieve profitability quickly or to stop raising money, because the goal of that investor is not necessarily short-term profits.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • Our Job is Not to Make Money

    Let’s say you pitch a VC and you’ve got a coherent business plan and some thoughtful perspectives on how your business scales. Does that mean you get the venture capital investment that you so desire?

    Not necessarily. There could be many reasons for a rejection, but one that crops up a great deal is not anything intrinsically wrong with a particular idea or team, but something which is an intrinsic issue with the venture capital model.

    One of our partners put it best when he pointed out, “Our job is not to make money, it’s to make a lot of money.”

    What that means is that venture capitalists are not just looking for a business that can make money. They are really looking for businesses which have the potential to be acquired or go public (sell stock on the NYSE/NASDAQ/etc) for hundreds of millions, if not billions, of dollars.

    Why? It has to do with the way that venture capital funds work.

    • Venture capitalists raise large $100M+ funds. This is a lot of money to work with, but it’s also a burden in that the venture capital firm has to deliver a large return on that large initial amount. If you start with a $100M fund, it’s not unheard of for investors in that fund to expect $300-400M back – and you just can’t get to those kinds of returns unless you bet on companies that sell for/list on a public market for a lot of money.
    • Although most investments fail, big outcomes can be *really* big. For every Facebook, there are dozens of wannabe copycats that fall flat – so there is a very high risk that a venture investment will not pan out as one hopes. But the flip side is that Facebook will likely be an outcome dozens upon dozens of times larger than its copycats. The combination of very high risk and very high reward drives venture capitalists to chase only those companies which have a shot at becoming a *really* big outcome – doing anything else basically guarantees that the firm will not be able to deliver a large enough return to its investors.
    • Partners are busy people. A typical venture capital fund is a partnership, consisting of a number of general partners who operate the fund. A typical general partner will, in addition to looking for new deals, be responsible for/advise several companies at once. This is a fair amount of work for each company, as it involves helping companies recruit, develop their strategy, connect with key customers/partners/influencers, deal with operational/legal issues, and raise money. As a result, while the amount of work can vary quite a bit, this basically limits the number of companies that a partner can commit to (and, hence, invest in). This limit encourages partners to favor companies which could end up with a larger outcome over smaller ones, because below a certain size, the firm’s return profile and the limits on a partner’s time just don’t justify having a partner get too involved.
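    The fund math in the first bullet can be made concrete with a toy calculation. All of the numbers below are made up for illustration (the post only gives the $100M fund size and $300-400M return expectation); the portfolio distribution is a hypothetical sketch, not data:

```python
# Toy illustration (all numbers made up) of why venture funds need big outcomes.
fund_size = 100e6                          # a $100M fund
num_investments = 20
check_size = fund_size / num_investments   # $5M per company
target_return = 3 * fund_size              # investors expect ~$300M back

# A hypothetical portfolio: 13 total losses, 5 that merely return capital,
# one modest 5x win... leaving one slot for the hoped-for home run.
multiples = [0] * 13 + [1] * 5 + [5]
base_proceeds = sum(m * check_size for m in multiples)

# How big must that one remaining winner be to hit the target?
needed = target_return - base_proceeds
print(f"The winner must return ${needed / 1e6:.0f}M on a "
      f"${check_size / 1e6:.0f}M check (~{needed / check_size:.0f}x)")
```

    Under these assumptions, the single winner has to return $250M on a $5M check – a 50x outcome – which is exactly why a merely profitable small business can’t carry a fund.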

    The result? Venture capitalists have to turn down many pitches, not because they don’t like the idea or the team, and not even necessarily because they don’t think the company will make money in a reasonably short time, but because they don’t think the idea has a good shot at being something as big and game-changing as Google, Genentech, or VMWare. And, in fact, the not-often-heard truth is that a lot of the endings which entrepreneurs think of as great, and which are frequently featured on tech blogs like VentureBeat and TechCrunch (i.e., selling your company to Google for $10M), are actually quite small (and possibly even a failure) in the eyes of a large venture capital firm.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • Do You Have the Guts for Nori?

    Source: Precision Nutrition

    The paper I will talk about this month is from April of this year and highlights the diversity of our “gut flora” (a pleasant way to describe the many bacteria which live in our digestive tract and help us digest the food we eat). Specifically, this paper highlights how a particular bacterium in the digestive tracts of some Japanese individuals has picked up a unique ability to digest certain sugars which are common in marine plants (e.g., Porphyra, the seaweed used to make sushi) but not in terrestrial plants.

    Interestingly, the researchers weren’t originally focused on how gut flora function at all, but on understanding how marine bacteria digest marine plants. They started by studying a particular marine bacterium, Zobellia galactanivorans, which was known for its ability to digest certain types of algae. Scanning the genome of Zobellia, the researchers were able to identify a few genes which were similar to known sugar-digesting enzymes but didn’t seem to have the ability to act on the “usual plant sugars”.

    Two of the identified genes, which they called PorA and PorB, were found to be very selective in the type of plant sugar they digested. In the chart below (from Figure 1), 3 different plants are characterized along a spectrum showing whether they have more LA (4-linked 3,6-anhydro-a-L-galactopyranose) chemical groups (red) or more L6S (4-linked a-L-galactopyranose-6-sulphate) groups (yellow). Panel b (on the right) shows the ¹H-NMR spectra associated with these different sugar mixes; NMR is a chemical technique used to verify which sugar groups are present.

    Source: Figure 1, Hehemann et al.

    These mixes were subjected to PorA and PorB as well as AgaA (a sugar-digesting enzyme which works mainly on LA-type sugars like agarose). The bar charts in the middle show how active the respective enzymes were (as indicated by the amount of plant sugar digested).

    As you can see, PorA and PorB are only effective on L6S-type sugar groups, not LA-type sugar groups. The researchers wondered if they had discovered the key class of enzyme responsible for allowing marine life to digest marine plant sugars, and scanned other genomes for enzymes similar to PorA and PorB. What they found was very interesting…

    Source: Figure 3, Hehemann et al.

    What you see above is an evolutionary family tree for PorA/PorB-like genes. The red and blue boxes represent PorA/PorB-like genes which target “usual plant sugars”, but the yellow boxes show the enzymes which specifically target the sugars found in nori (Porphyra, hence the enzymes are called porphyranases). All the enzymes marked with solid diamonds are actually found in Z. galactanivorans (and were hence dubbed PorC, PorD, and PorE – clearly not the most imaginative naming convention). The other identified genes, however, all belonged to marine bacteria… with the notable exception of Bacteroides plebeius, marked with an open circle. And Bacteroides plebeius (at least to the knowledge of the researchers at the time of this publication) has only been found in the guts of certain Japanese people!

    The researchers scanned the Bacteroides plebeius genome and found that the bacteria actually had a sizable chunk of genetic material which was a much better match for marine bacteria than for other similar Bacteroides strains. The researchers concluded that the best explanation is that Bacteroides plebeius picked up its unique ability to digest marine plants not on its own, but from marine bacteria (in a process called Horizontal Gene Transfer, or HGT), most probably from bacteria that were present on dietary seaweed. Or, to put it more simply: our gut bacteria can “steal” genes/abilities from bacteria on the food we eat!

    Cool! While this is a conclusion which we can probably never truly prove (it’s an informed hypothesis based on genetic evidence), this finding does make you wonder if a similar genetic screening process could identify if our gut flora have picked up any other genes from “dietary bacteria.”

    Paper: Hehemann et al. “Transfer of carbohydrate-active enzymes from marine bacteria to Japanese gut microbiota.” Nature 464: 908-912 (Apr 2010) – doi:10.1038/nature08937

    Check out my other academic paper walkthroughs/summaries

  • How You Might Cure Asian Glow

    The paper I read is something that is very near and dear to my heart. As is commonly known, individuals of Asian ancestry are more likely to experience dizziness and flushed skin after drinking alcohol. This is due to the prevalence of a genetic defect in the Asian population which affects an enzyme called Aldehyde Dehydrogenase 2 (ALDH2). ALDH2 processes one of the by-products of alcohol consumption (acetaldehyde).

    In people with the genetic defect, ALDH2 works very poorly. So, people with the ALDH2 defect build up higher levels of acetaldehyde, which leads them to get drunker (and thus hung-over/sick/etc) quicker. This is a problem for someone like me, who needs to drink a (comically) large amount of water to be able to properly process wine/beer/liquor. Interestingly, the anti-drinking drug Disulfiram (sold as “Antabuse” and “Antabus”) helps alcoholics keep off alcohol by basically shutting down a person’s ALDH2, effectively giving them “Asian alcohol-induced flushing syndrome” and making them get drunk and sick very quickly.

    Source: Wikipedia

    So, what can you do? At this point, nothing really (except, either avoid alcohol or drink a ton of water when you do drink). But, I look forward to the day when there may actually be a solution. A group at Stanford recently identified a small molecule, Alda-1 (chemical structure above), which not only increases the effectiveness of normal ALDH2, but can help “rescue” defective ALDH2!

    Have we found the molecule which I have been searching for ever since I started drinking? Jury’s still out, but the same group at Stanford partnered with structural biologists at Indiana University to conduct some experiments on Alda-1 to try to find out how it works.

    Source: Figure 4, Perez-Miller et al

    To do this (and this is why the paper was published in Nature Structural and Molecular Biology rather than another journal), they used a technique called X-ray crystallography to “see” if (and how) Alda-1 interacts with ALDH2. Some of the results of these experiments are shown above. On the left, Panel B (on top) shows a 3D structure of the “defective” version of ALDH2. If you’re new to structural biology papers, this will take some getting used to, but if you look carefully, you can see that ALDH2 is a tetramer: there are 4 identical pieces (in the top-left, top-right, bottom-left, bottom-right) which are attached together in the middle.

    It’s not clear from this picture, but the defective version of the enzyme differs from the normal one in that it is unable to maintain the 3D structure needed to link up with a coenzyme (a chemical needed by enzymes that do this sort of chemical reaction to work properly) called NAD+, or even to carry out the reaction at all (the “active site”, the part of the enzyme which actually carries out the reaction, is “disrupted” in the mutant).

    So what does Alda-1 do, then? In the bottom (Panel C), you can see where the Alda-1 molecules (colored in yellow) sit when they interact with ALDH2. While the yellow molecules have a number of impacts on ALDH2’s 3D structure, the most obvious changes are highlighted in pink (those have no clear counterpart in Panel B). This is the secret of Alda-1: it actually changes the shape of ALDH2, (partially) restoring the enzyme’s ability to bind with NAD+ and carry out the chemical reactions needed to process acetaldehyde, all without directly touching the active site (something you can’t see in the panel I shared above, but can make out from other X-ray crystallography models in the paper).

    The result? If you look at the chart below (Panel A), you’ll see two relationships at play. First, the greater the amount of the co-enzyme NAD+ (on the horizontal axis), the faster the reaction speed (on the vertical axis). But, if you increase the amount of Alda-1 from 0 uM (the bottom-most curve) to 30 uM (the top-most curve), you see a dramatic increase in the enzyme’s reaction speed for the same amount of NAD+. So, does Alda-1 activate ALDH2? Judging from this chart, it definitely does.

    Source: Figure 4, Perez-Miller et al
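    The shape of those curves (rate rising with NAD+ and saturating) is the classic Michaelis-Menten pattern from enzyme kinetics, and an activator shows up as a higher Vmax and/or lower Km. The sketch below illustrates that idea only – every parameter value is made up, not taken from the paper:

```python
# Illustrative sketch (made-up parameters, not from Perez-Miller et al.):
# how an activator that raises Vmax and/or lowers Km increases the observed
# reaction rate at a fixed co-enzyme (NAD+) concentration.

def michaelis_menten(s, vmax, km):
    """Reaction rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

nad = 200.0  # hypothetical NAD+ concentration (uM)

# Hypothetical kinetics for defective ALDH2, without and with Alda-1:
v_without = michaelis_menten(nad, vmax=1.0, km=500.0)
v_with = michaelis_menten(nad, vmax=3.0, km=200.0)

print(f"rate without Alda-1: {v_without:.2f}")
print(f"rate with Alda-1:    {v_with:.2f} ({v_with / v_without:.1f}x faster)")
```

    With these toy numbers, the same amount of NAD+ yields a roughly fivefold faster reaction – the same qualitative jump the figure shows between the 0 uM and 30 uM Alda-1 curves.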

    Alda-1 is particularly interesting because most of the chemicals/drugs we are able to develop work by breaking, de-activating, or inhibiting something. Have a cold? Break the chemical pathways which lead to runny noses. Suffering from depression? De-activate the process which quickly cleans up serotonin (a “happiness” chemical in the brain). After all, it’s much easier to break something than it is to fix/create something. But Alda-1 is actually an activator (rather than a de-activator), which the authors of the study leave as a tantalizing opportunity for medical science:

    This work suggests that it may be possible to rationally design similar molecular chaperones for other mutant enzymes by exploiting the binding of compounds to sites adjacent to the structurally disrupted regions, thus avoiding the possibility of enzymatic inhibition entirely independent of the conditions in which the enzyme operates.

    If only it were that easy (it’s not)…

    Where should we go from here? Frankly, while the paper tackled a very interesting topic in a pretty rigorous fashion, I felt that a lot of the conclusions being drawn were not clear from the presented experimental results (which is why this post is a bit on the vague side on some of those details).

    I certainly understand the difficulty when the study is on phenomena which are molecular in nature (does the enzyme work? are the amino acids in the right location?). But I personally felt a significant part of the paper was more conjecture than evidence, and while I’m sure the folks making the hypotheses are very experienced, I would like to see more experimental data to back up their theories. A well-designed set of site-directed mutagenesis experiments (mutating specific parts of ALDH2 in the lab to test the hypotheses that the group put out), paired with further rounds of X-ray crystallography, could help shed a little more light on their fascinating idea.

    Paper: Perez-Miller et al. “Alda-1 is an agonist and chemical chaperone for the common human aldehyde dehydrogenase 2 variant.” Nature Structural and Molecular Biology 17:2 (Feb 2010) – doi:10.1038/nsmb.1737

    Check out my other academic paper walkthroughs/summaries

  • I Know Enough to Get Myself in Trouble

    One of the dangers of a consultant looking at tech is that he can get lost in jargon. A few weeks ago, I did a little research on some of the most cutting-edge software startups in the cloud computing space (the idea that you can use a computer feature/service without actually knowing anything about what sort of technology infrastructure was used to provide it – i.e., Gmail and Yahoo Mail on the consumer side, and services like Amazon Web Services and Microsoft Azure on the business side). As a result, I’ve looked at the product offerings from guys like Nimbula, Cloudera, Clustrix, Appistry, Elastra, and MaxiScale, to name a few. And, while I know enough about cloud computing to understand, at a high level, what these companies do, the use of unclear terminology sometimes makes it very difficult to pierce the “fog of marketing” and really get a good understanding of the various products’ strengths and weaknesses.

    Is it any wonder that, at times, I feel like this:

    Source: Dilbert

    Yes, it’s all about that “integration layer” … My take? A great product should not need to hide behind jargon.

  • Diet Coke + Mentos = Paper

    Unless you just discovered YouTube yesterday, you’ve probably seen countless videos of (and maybe even have tried?) the infamous Diet Coke + Mentos reaction… which brings us to the subject of this month’s paper.

    An enterprising physics professor from Appalachian State University decided to have her sophomore physics class take a fairly rigorous look at what drives the Diet Coke + Mentos reaction and what factors might influence its strength and speed. They were not only able to publish their results in the American Journal of Physics, but the students were also given an opportunity to present their findings in a poster session (Professor Coffey reflected on the experience in a presentation she gave). In my humble opinion, this is science education at its finest: instead of having students re-hash boring experiments which they already know the results of, this allowed them to do fairly original research in a field which they probably had more interest in than in the typical science lab course.

    So, what did they find?

    The first thing they found is that it’s not an acid-base reaction. A lot of people, myself included, believed the Diet Coke + Mentos reaction was the same as the baking soda + vinegar “volcano” reactions that we all did as kids. Apparently, we were dead wrong, as the paper points out:

    The pH of the diet Coke prior to the reaction was 3.0, and the pH of the diet Coke after the mint Mentos reaction was also 3.0. The lack of change in the pH supports the conclusion that the Mint Mentos–Diet Coke reaction is not an acid-base reaction. This conclusion is also supported by the ingredients in the Mentos, none of which are basic: sugar, glucose, syrup, hydrogenated coconut oil, gelatin, dextrin, natural flavor, corn starch, and gum arabic … An impressive acid-base reaction can be generated by adding baking soda to Diet Coke. The pH of the Diet Coke after the baking soda reaction was 6.1, indicating that much of the acid present in the Diet Coke was neutralized by the reaction.

    Source: Table 1, Coffey, American Journal of Physics

    Secondly, the “reaction” is not chemical (no new compounds are created), but a physical response: the Mentos makes bubbles easier to form. The Mentos triggers bubble formation because its surface is extremely rough, which allows bubbles to aggregate (much like adding a string or popsicle stick to a supersaturated mixture of sugar and water to make rock candy). But that doesn’t explain why the Mentos + Diet Coke reaction works so well. The logic blew my mind but, in retrospect, is pretty simple. Certain liquids are more “bubbly” by nature – think soapy water vs. regular water. Why? Because the energy needed to form a bubble is lower than the energy available from the environment (e.g., thermal energy). So, the question is, what makes a liquid more “bubbly”? One way is to heat the liquid (heating up Coke makes it more bubbly because the carbon dioxide inside the soda has more thermal energy to draw upon), which the students were able to confirm when they measured how much mass was lost during a Mentos + Diet Coke reaction at three different temperatures (Table 3 below):

    Source: Table 3, Coffey, American Journal of Physics

    What else? It turns out that the other chemicals dissolved in a liquid can change the ease with which bubbles are made. Physicists/chemists will recognize this “ease” as surface tension (how tightly the surface of a liquid pulls on itself), which you can see visually as a change in the contact angle (the angle that the bubble forms against a flat surface, see below):

    Source: Contact angle description from presentation

    The larger the angle, the stronger the surface tension (the more tightly the liquid tries to pull in on itself to become a sphere). So, what happens when we add the artificial sweetener aspartame and the preservative potassium benzoate (both ingredients in Diet Coke) to water? As you can see in Figure 4 below, the contact angles in (b) [aspartame] and (c) [potassium benzoate] are smaller than in (a) [pure water]. Translation: adding aspartame and/or potassium benzoate to water reduces the amount of work the solution must do to create a bubble. Table 4 (below it) shows the contact angles of the various solutions the students tested, along with the work needed to create a bubble relative to pure water:

    Source: Figure 4, Coffey, American Journal of Physics
    Source: Table 4, Coffey, American Journal of Physics

    This table also shows why you use Diet Coke rather than regular Coke (basically sugar-water) for the Mentos trick – regular Coke has a higher contact angle (and needs ~20% more energy to make a bubble).
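    To get a feel for why surface tension matters so much, here is an illustrative back-of-the-envelope sketch using the classical (homogeneous) nucleation barrier, which scales with the cube of the surface tension. This is a general physics relationship, not the paper’s own calculation, and the pressure and surface-tension values below are hypothetical placeholders:

```python
import math

# Classical nucleation theory: the energy barrier to spontaneously form a
# bubble is dG* = 16 * pi * sigma^3 / (3 * dP^2), i.e. it scales with the
# CUBE of the surface tension sigma. So even a modest drop in surface
# tension (from dissolved solutes) sharply lowers the work to make a bubble.

def nucleation_barrier(sigma, delta_p):
    return 16 * math.pi * sigma**3 / (3 * delta_p**2)

delta_p = 1.0e5       # Pa, hypothetical excess pressure of the dissolved CO2
sigma_water = 0.072   # N/m, roughly pure water at room temperature
sigma_lower = 0.060   # N/m, hypothetical value with surfactant-like solutes

ratio = (nucleation_barrier(sigma_lower, delta_p)
         / nucleation_barrier(sigma_water, delta_p))
print(f"barrier with lower surface tension: {ratio:.0%} of pure water's")
```

    With these toy numbers, a ~17% drop in surface tension cuts the bubble-formation barrier to roughly 58% of pure water’s – a cubic relationship, which is why small solute effects translate into visibly more violent fountains.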

    Another factor which the paper considers is how long it takes the dropped Mentos to sink to the bottom. The faster a Mentos falls to the bottom, the longer the “average distance” that a bubble needs to travel to get to the surface. As bubbles themselves attract more bubbles, this means that the Mentos which fall to the bottom the fastest will have the strongest explosions. As the paper points out:

    The speed with which the sample falls through the liquid is also a major factor. We used a video camera to measure the time it took for Mentos, rock salt, Wint-o-Green Lifesavers, and playground sand to fall through water from the top of the water line to the bottom of a clear 2 l bottle. The average times were 0.7 s for the Mentos, 1.0 s for the rock salt and the Lifesavers, and 1.5 s for the sand … If the growth of carbon dioxide bubbles on the sample takes place at the bottom of the bottle, then the bubbles formed will detach from the sample and rise up the bottle. The bubbles then act as growth sites, where the carbon dioxide still dissolved in the solution moves into the rising bubbles, causing even more liberation of carbon dioxide from the bottle. If the bubbles must travel farther through the liquid, the reaction will be more explosive.

    So, in conclusion, what makes a Diet Coke + Mentos reaction stronger?

    • Temperature (hotter = stronger)
    • Adding substances which reduce the surface tension/contact angle
    • Increasing the speed at which the Mentos sink to the bottom (faster = stronger)

    I wish I had done something like this when I was in college! The paper itself also goes into a lot of other things, like the use of an atomic force microscope and scanning electron microscopes to measure the “roughness” of the surface of the Mentos, so if you’re interested in additional things which can affect the strength of the reaction (or if you’re a science teacher interested in coming up with a cool project for your students), I’d strongly encourage taking a look at the paper!

    Paper: Coffey, T. “Diet Coke and Mentos: What is really behind this physical reaction?”. American Journal of Physics 76:6 (Jun 2008) – doi: 10.1119/1.2888546

    Check out my other academic paper walkthroughs/summaries

  • United States of Amoeba

    Most people know that viruses are notoriously tricky disease-causing pathogens to tackle. Unlike bacteria, which are completely separate organisms, viruses are parasites which use a host cell’s own DNA-and-RNA-and-protein-producing machinery to reproduce. As a result, most viruses are extremely small (they need to find a way into a cell to hijack that machinery) and, in fact, are oftentimes too small for light microscopes to see, as beams of light have wavelengths too large to resolve them.

    However, just because most viruses are small doesn’t mean all viruses are. In fact, giant Mimiviruses, Mamaviruses, and Marseillesviruses have been found which are larger than many bacteria. The Mimivirus (pictured below), for instance, is so large it was actually misidentified as a bacterium at first glance!

    Source: Wikipedia

    Little concrete detail is known about these giant viruses, and there has been some debate about whether or not these viruses constitute a new “kingdom” of life (the way that bacteria and archaebacteria are), but one thing these megaviruses have in common is that they are all found within amoeba!

    This month’s paper (HT: Anthony) looks into the genome of the Marseillesvirus to try to get a better understanding of the genetic origins of these giant viruses. The left-hand panel of the picture below is an electron micrograph of an amoeba phagocytosing Marseillesvirus (amoebas, in their search for food, will engulf almost anything smaller than they are), and the right-hand panel shows the virus creating viral factories (“VF”, the very dark dots) within the amoeba’s cytoplasm. If you were to zoom in even further, you’d be able to see viral particles in different stages of viral assembly!

    Source: Figure 1, Boyer et al.

    Ok, so we can see them. But just what makes them so big? What the heck is inside? Well, because you asked so nicely:

    • ~368,000 base pairs of DNA
      • This constitutes an estimated 457 genes
      • This is much larger than the ~5,000 base pair genome of SV40, a popular lab virus, the ~10,000 base pairs in HIV, and the ~49,000 in lambda phage (another scientifically famous lab virus), but is comparable to the genome sizes of some of the smaller bacteria
      • This is smaller than the ~1 million base pair genome of the Mimivirus, the ~4.6 million of E. coli, and the ~3.2 billion in humans
    • 49 proteins were identified in the viral particles, including:
      • Structural proteins
      • Transcription factors (helps regulate gene activity)
      • Protein kinases (primarily found in eukaryotic cells because they play a major role in cellular signaling networks)
      • Glutaredoxins and thioredoxins (usually only found in plant and bacterial cells to help fight off chemical stressors)
      • Ubiquitin system proteins (primarily in eukaryotic cells as they control which proteins are sent to a cell’s “garbage collector”)
      • Histone-like proteins (primarily in eukaryotic cells to pack a cell’s DNA into the nucelus)

    As you can see, there are a whole lot of proteins which you would only expect to see in a “full-fledged” cell, not a virus. This raises the question: why do these giant viruses have so many extra genes and proteins that you wouldn’t have expected?

    To answer this, the researchers ran a genetic analysis on the Marseillesvirus’s DNA, trying to identify not only which proteins were encoded in the DNA but also where those protein-encoding genes seem to come from (by identifying which species has the most similar gene structure). A high-level overview of the results of the analysis is shown in the circular map below:

    The outermost orange bands in the circle correspond to the proteins that were identified in the virus itself using mass spectrometry. The second row of red and blue bands represents protein-coding genes that are predicted to exist but have yet to be detected in the virus (it’s possible they aren’t part of the virus’s “body” and are only made while inside the amoeba, or even that they are not expressed at all). The gray ring with colored bands represents the researchers’ best guess as to what each predicted protein-coding gene codes for (based on whether the gene sequence is similar to other known proteins; the legend is below-right), whereas the colored bands just outside of the central pie chart represent a computer’s best determination of what species each gene seems to have come from (based on whether the gene sequence is similar to/the same as another species’).

    Of the 188 genes that a computational database identified as matching a previously characterized gene (~40% of all the predicted protein-coding genes), at least 108 come from sources outside of the giant viruses’ “evolutionary family”. The sources of these “misplaced” genes include bacteria, bacteria-infecting viruses called bacteriophages, amoeba, and even other eukaryotes! In other words, these giant viruses are genetic chimeras, mixed with DNA from all sorts of creatures in a way that you’d normally only expect in a genetically modified organism.
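    The proportions above are worth double-checking, since they’re striking: over half of the identifiable genes have their best match outside the virus’s own lineage. The gene counts below are the ones reported in the paper; the snippet just recomputes the percentages:

```python
# Quick arithmetic check on the proportions reported above (gene counts are
# from the paper; this only recomputes the percentages).
predicted_genes = 457    # predicted protein-coding genes in Marseillesvirus
matched = 188            # matched a previously characterized gene
foreign = 108            # best match lies outside the giant viruses' lineage

print(f"matched: {matched / predicted_genes:.0%} of predicted genes")
print(f"foreign: {foreign / matched:.0%} of the matched genes")
```

    That works out to roughly 41% of predicted genes being identifiable, and about 57% of those looking “foreign” – consistent with the ~40% figure the paper cites.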

    As many viruses are known to be able to “borrow” DNA from their hosts and from other viruses (a process called horizontal gene transfer), the researchers concluded that, like the immigrant’s conception of the United States of America, amoebas are giant genetic melting pots where genetic “immigrants” like bacteria and viruses commingle and share DNA (pictured below). In the case of the ancestors of the giant viruses, this resulted in viruses which kept gaining more and more genetic material from their amoeboid hosts and from the abundance of bacterial and viral parasites living within them.

    Source: Figure 5, Boyer et al.

    This finding is very interesting, as it suggests that amoeba may have played a crucial role in the early evolution of life. In the same way that a cultural “melting pot” like the US allows the combination of ideas from different cultures and walks of life, early amoeba “melting pots” may have helped kickstart evolutionary jumps by letting eukaryotes, bacteria, and viruses co-exist and share DNA far more rapidly than “regular” natural selection would allow.

    Of course, the flip side of this is that amoeba could also very well be allowing super-viruses and super-bacteria to breed…

    Paper: Boyer, Mickael et al. “Giant Marseillevirus highlights the role of amoebae as a melting pot in emergence of chimeric microorganisms.” PNAS 106, 21848-21853 (22 Dec 2009) – doi:10.1073/pnas.0911354106

    Check out my other academic paper walkthroughs/summaries

  • Collateral Damage by Mitochondria

    This month, I read a paper (HT: my ex-college roommate Eric) by a group from Beth Israel about systemic inflammatory response syndrome (SIRS) following serious injury. SIRS, which is more commonly encountered as sepsis, happens when the entire body is on high “immune alert.” In the case of sepsis, this is usually due to an infection of some sort. While an immune response may be needed to control an internal infection, SIRS is dangerous because the immune system can cause a great deal of collateral damage, potentially resulting in organ failure and death.

    Whereas an infection has a clear link to sepsis, the logic for why injury would cause a similar immune response was less clear. In fact, for years, the best hypothesis from the medical community was that injury would somehow cause the bacteria which naturally live in your gut to appear where they’re not supposed to be. But this explanation was not especially convincing, especially in light of injuries like burns which could still lead to SIRS but which didn’t seem to directly affect gut bacteria.

    Source: The Brain From Top to Bottom

    Zhang et al, instead of assuming that some type of endogenous bacteria was being released following injury, came up with an interesting hypothesis: it’s not bacteria which trigger SIRS, but mitochondria. A first-year cell biology student will be able to tell you that mitochondria are the parts of eukaryotic cells (sophisticated cells with nuclei) which are responsible for keeping the cell supplied with energy. A long-standing theory in the life science community (pictured above) is that mitochondria, billions of years ago, were originally bacteria which other, larger bacteria swallowed whole. Over countless rounds of evolution, these smaller bacteria became symbiotic with their “neighbor” and eventually adapted to servicing the larger cell’s energy needs. Despite this evolution, mitochondria have not lost all of their (theorized) bacterial ancestry, and in fact still retain bacteria-like DNA and structures. Zhang et al’s guess was that serious injuries could expose the mitochondria’s hidden bacterial nature to the immune system and cause the body to trigger SIRS as a response.

    Interesting idea, but how do you prove it? The researchers were able to show that 15 major trauma patients with no open wounds or injuries to the gut had thousands of times more mitochondrial DNA in their bloodstream than non-trauma victims. The researchers were then able to show that this mitochondrial DNA was capable of activating polymorphonuclear neutrophils, some of the body’s key “soldier” cells responsible for causing SIRS.

    Source: Figure 3, Zhang et al.

    The figure above shows the result of an experiment illustrating this effect, looking at the levels of a protein called p38 MAPK which gets chemically modified into “p-p38” when neutrophils are activated. As you can see in the p-p38 row, adding more mitochondrial DNA (mtDNA, “-” columns) to a sample of neutrophils increases levels of p-p38 (bigger, darker splotch), but adding special DNA which blocks the neutrophils’ mtDNA “detectors” (ODN, “+” columns) seems to lower it again. Comparing this with the control p38 row right underneath shows that the increase in p-p38 is likely due to neutrophil activation from the cells detecting mitochondrial DNA, and not just because the sample had more neutrophils/more p38 (the splotches in the second row are all roughly the same).

    Cool, but does this mean that mitochondrial DNA actually causes a strong immune response outside of a test tube environment? To test this, the researchers injected mitochondrial DNA into rats and ran a full set of screens on them. While the paper showed numerous charts pointing out how the injected rats had strong immune response across multiple organs, the most striking are the pictures below which show a cross-section of a rat’s lungs comparing rats injected with a buffer solution (panel a, “Sham”) and rats injected with mitochondrial DNA (panel b, MTD). The cross-sections are stained with hematoxylin and eosin which highlight the presence of cells. The darker and “thicker” color on the right shows that there are many more cells in the lungs of rats injected with mitochondrial DNA – most likely from neutrophils and other “soldier cells” which have rushed in looking for bacteria to fight.

    Source: Figure 4, Zhang et al.

    Amazing, isn’t it? Not only did they provide part of the solution to the puzzle of injury-mediated SIRS (what they used to call “sterile SIRS”), but they also lent some support to the endosymbiont hypothesis!

    Paper: Zhang, Qin et al. “Circulating Mitochondrial DAMPs Cause Inflammatory Responses to Injury.” Nature 464, 104-108 (4 March 2010) – doi:10.1038/nature08780

    Check out my other academic paper walkthroughs/summaries

  • Slime Takes a Stroll

    The paper I read for this month brought up an interesting question I’ve always had but never really dug into: how do individual cells find things they can’t “see”? After all, there are lots of microbes out there who can’t always see where their next meal is coming from. How do they go about looking?

    A group of scientists at Princeton University took a stab at the problem by studying the motion of individual slime mold amoeba (from the genus Dictyostelium) and published their findings in the (open access) journal PLoS ONE.

    As one would imagine, if you have no idea where something is, your path to finding it will be somewhat random. What this paper sought to discover is what kind of random motion do amoeboid-like cells use? To those of you without the pleasure of training in biophysics or stochastic processes, that may sound like utter nonsense, but suffice to say physicists and mathematicians have created mathematically precise definitions for different kinds of “random motion”.

    Now, if the idea of different kinds of randomness makes zero sense to you, then the following figure (from Figure 1 in the paper) might be able to help:

    Source: Figure 1, Li et al.

    Panel A describes a “traditional” random walk, where each “step” that a random walker takes is completely random (unpredictable and independent of the motion before it). As you can see, the path doesn’t really cover a lot of ground. After all, if you are moving in random directions, you’re just as likely to move to the left as you are to move to the right. The result of this chaos is that you’re likely not to move very far at all (but likely to search a small area very thoroughly). As a result, this sort of randomness is probably not very useful for an amoeba hunting for food, unless for some reason it is counting on food to magically rain down on its lazy butt.

    Panels B and C describe two other kinds of randomness which are better suited to covering more ground. Although the motion described in Panel B (the “Levy walk”) looks very different from the “random walk” in Panel A, it is actually very similar on a mathematical/physical level. In fact, the only difference between the “Levy walk” and the “random walk” is that, in a “normal” random walk, the size of each step is constant, whereas the size of each “step” in a “Levy walk” can vary and is, sometimes, extremely long. This lets the path taken cover a whole lot more ground.

    A different way of using randomness to cover a lot of ground is shown in Panel C where, instead of taking big steps, the random path actually takes on two different types of motion. In one mode, the steps are exactly like the random walk in Panel A, where the path doesn’t go very far, but “searches” a local area very thoroughly. In another mode, the path bolts in a straight line for a significant distance before settling back into a random walk. This alternation between the different modes defines the “two-state motion” and is another way for randomness to cover more ground than a random walk.
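    To make that difference concrete, here is a toy simulation (in Python; the step-size distributions and parameters are my own illustrative choices, not the paper’s) comparing how far a simple random walk and a Levy walk wander from their starting point:

```python
import math
import random

def random_walk(n_steps, rng):
    # Fixed step length (1 unit), uniformly random direction each step.
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)  # distance from the starting point

def levy_walk(n_steps, rng, alpha=1.5):
    # Same random directions, but step lengths drawn from a heavy-tailed
    # Pareto distribution, so occasional steps are extremely long.
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        step = rng.paretovariate(alpha)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
    return math.hypot(x, y)

rng = random.Random(0)
trials = 200
rw = sum(random_walk(1000, rng) for _ in range(trials)) / trials
lv = sum(levy_walk(1000, rng) for _ in range(trials)) / trials
print(f"average distance from start: random walk {rw:.1f}, Levy walk {lv:.1f}")
```

    Because the Levy walk’s heavy-tailed step lengths occasionally produce very long jumps, it ends up much farther from the start on average, which is exactly why it covers more ground than the thorough-but-local random walk.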

    And what do amoeba use? Panel D gives a glimpse of it. Unlike the nice theoretical paths from Panels A-C, built from random walks with different step sizes or modes of motion, the researchers found that slime mold amoeba like to zig-zag around a general direction which seems to change randomly over the course of ~10 min. Panel A of Figure 2 (shown below) gives a look at three such random paths taken over 10 hours.

    Source: Figure 2, Li et al.

    The reason for this zig-zagging, or at least the best hypothesis at the time of publication, is that, unlike theoretical particles, amoeba can’t just move in completely random directions with random “step” sizes. They move by “oozing” out pseudopods (picture below), and this physical reality of amoeba motion basically makes the type of motion the researchers discussed more likely and efficient for a cell trying to make its way through uncharted territory.

    Source: 7B Science Online Labs
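    A similarly hedged sketch (toy parameters of my own choosing, not fitted to the paper’s data) shows why this kind of persistence in heading, i.e. zig-zagging around a slowly drifting direction, covers far more ground than a memoryless walk:

```python
import math
import random

def persistent_walk(n_steps, turn_sd, rng):
    # The heading persists from step to step, wobbling by a small random
    # angle each time. A small turn_sd gives long, nearly straight runs
    # (zig-zagging around a slowly drifting direction); a huge turn_sd
    # erases the memory, recovering an ordinary random walk.
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)
    for _ in range(n_steps):
        theta += rng.gauss(0.0, turn_sd)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)  # distance from the starting point

rng = random.Random(1)
trials = 200
persistent = sum(persistent_walk(1000, 0.1, rng) for _ in range(trials)) / trials
memoryless = sum(persistent_walk(1000, math.pi, rng) for _ in range(trials)) / trials
print(f"average distance from start: persistent {persistent:.1f}, memoryless {memoryless:.1f}")
```

    The only difference between the two calls is how quickly the direction decorrelates, yet the persistent walker ends up far from its start while the memoryless one stays close to home.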

    The majority of the paper actually covers the mathematical detail involved in characterizing the precise nature of the randomness of amoeboid motion, and is, frankly, an overly-intimidating way to explain what I just described above. In all fairness, that extra detail gives a more useful and precise picture of how amoeba move and a better sense of the underlying biochemistry and biophysics of why they move that way. But what I found most impressive was that the paper took a very basic and straightforward experiment (tracking the motion of single cells) and applied a rigorous mathematical and physical analysis to what they saw to understand the underlying properties.
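    For the curious, the workhorse of this kind of track analysis is typically the mean-squared displacement (MSD): ordinary diffusive random walks give an MSD that grows linearly with the time lag, while persistent (“superdiffusive”) motion grows faster. Here is a minimal sketch of computing MSD from a recorded track (the straight-line track below is synthetic, purely to show the calculation, and is not the paper’s data):

```python
def mean_squared_displacement(track, lag):
    # Average squared distance between track points `lag` samples apart.
    pairs = [(track[i], track[i + lag]) for i in range(len(track) - lag)]
    return sum((x2 - x1) ** 2 + (y2 - y1) ** 2
               for (x1, y1), (x2, y2) in pairs) / len(pairs)

# A synthetic straight-line ("ballistic") track: its MSD grows like lag**2,
# the signature of directed/persistent motion, rather than linearly in lag,
# the signature of ordinary diffusion.
track = [(float(t), 0.0) for t in range(100)]
msd1 = mean_squared_displacement(track, 1)
msd10 = mean_squared_displacement(track, 10)
print(msd1, msd10)  # 1.0 100.0 for straight-line motion
```

    Fitting how the MSD scales with lag is one standard way to classify which flavor of randomness a real track exhibits.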

    The paper was from May 2008 and, according to the PLoS One website, there have been five papers which have cited it (which I have yet to read). But, I’d like to think that the next steps for the researchers involved would be to:

    1. See how much of this type of zig-zag motion applies to other cell types (i.e., white blood cells from our immune system), and why these differences might have emerged (different cell motion mechanisms? the need to have different types of random search strategies?)
    2. Better understand what controls how quickly these cells change direction (and understand if there are drugs that can be used to modulate how our white blood cells find/identify pathogens or how pathogens find food)

    Paper: Li, Liang et al. “Persistent Cell Motion in the Absence of External Signals: a Search Strategy for Eukaryotic Cells.” PLoS ONE 3(5): e2093 (May 2008) – doi:10.1371/journal.pone.0002093

    Check out my other academic paper walkthroughs/summaries

  • Why Smartphones are a Big Deal

    A cab driver the other day went off on me with a rant about how new smartphone users were all smug, arrogant gadget snobs for using phones that did more than just make phone calls. “Why you gotta need more than just the phone?”, he asked.

    While he was probably right on the money with the “smug”, “arrogant”, and “snob” part of the description of smartphone users (at least it accurately describes yours truly), I do think he’s ignoring a lot of the important changes which the smartphone revolution has made in the technology industry and, consequently, why so many of the industry’s venture capitalists and technology companies are investing so heavily in this direction. This post will be the first of two posts looking at what I think are the four big impacts of smartphones like the Blackberry and the iPhone on the broader technology landscape:

    1. It’s the software, stupid
    2. Look ma, no <insert other device here>
    3. Putting the carriers in their place
    4. Contextuality

    I. It’s the software, stupid!

    You can find possibly the greatest impact of the smartphone revolution in the very definition of smartphone: phones which can run rich operating systems and actual applications. As my belligerent cab-driver pointed out, the cellular phone revolution was originally about being able to talk to other people on the go. People bought phones based on network coverage, call quality, the weight of a phone, and other concerns primarily motivated by call usability.

    Smartphones, however, change that. Instead of just making phone calls, they also do plenty of other things. While a lot of consumers focus their attention on how their phones now have touchscreens, built-in cameras, GPS, and motion-sensors, the magic change that I see is the ability to actually run programs.

    Why do I say this software thing is more significant than the other features which have made their way onto the phone? There are a number of reasons for this, but the big idea is that the ability to run software makes smartphones look like mobile computers. We have seen this pan out in a number of ways:

    • The potential uses for a mobile phone have exploded overnight. Whereas previously, they were pretty much limited to making phone calls, sending text messages/emails, playing music, and taking pictures, now they can be used to do things like play games, look up information, and even be used by doctors to help treat and diagnose patients. In the same way that a computer’s usefulness extends beyond what a manufacturer like Dell or HP or Apple have built into the hardware because of software, software opens up new possibilities for mobile phones in ways which we are only beginning to see.
    • Phones can now be “updated”. Before, phones were simply replaced when they became outdated. Now, some users expect that a phone that they buy will be maintained even after new models are released. Case in point: Users threw a fit when Samsung decided not to allow users to update their Samsung Galaxy’s operating system to a new version of the Android operating system. Can you imagine 10 years ago users getting up in arms if Samsung didn’t ship a new 2 MP mini-camera to anyone who owned an earlier version of the phone which only had a 1 MP camera?
    • An entire new software industry has emerged with its own standards and idiosyncrasies. About four decades ago, the rise of the computer created a brand new industry almost out of thin air. After all, think of all the wealth and enabled productivity that companies like Oracle, Microsoft, and Adobe have created over the past thirty years. There are early signs that a similar revolution is happening because of the rise of the smartphone. Entire fortunes have been created “out of thin air” as enterprising individuals and companies move to capture the potential software profits from creating software for the legions of iPhones and Android phones out there. What remains to be seen is whether or not the mobile software industry will end up looking more like the PC software industry, or whether or not the new operating systems and screen sizes and technologies will create something that looks more like a distant cousin of the first software revolution.

    II. Look ma, no <insert other device here>

    One of the most amazing consequences of Moore’s Law is that devices can quickly take on a heckuva lot more functionality than they used to. The smartphone is a perfect example of this Swiss-army knife mentality. The typical high-end smartphone today can:

    • take pictures
    • use GPS
    • play movies
    • play songs
    • read articles/books
    • find what direction it’s being pointed in
    • sense motion
    • record sounds
    • run software

    … not to mention receive and make phone calls and texts like a phone.

    But, unlike cameras, GPS devices, portable media players, eReaders, compasses, Wii-motes, tape recorders, and computers, the phone is something you are likely to keep with you all day long. And, if you have a smartphone which can double as a camera, GPS, portable media player, eReader, compass, Wii-mote, tape recorder, and computer all at once – tell me why you’re going to hold on to those other devices?

    That is, of course, a dramatic oversimplification. After all, I have yet to see a phone which can match a dedicated camera’s image quality or a computer’s speed, screen size, and range of software, so there are definitely reasons you’d pick one of these devices over a smartphone. The point, however, isn’t that smartphones will make these other devices irrelevant, it is that they will disrupt these markets in exactly the way that Clayton Christensen described in his book The Innovator’s Dilemma, making business a whole lot harder for companies who are heavily invested in these other device categories. And make no mistake: we’re already seeing this happen as GPS companies are seeing lower prices and demand as smartphones take on more and more sophisticated functionality (heck, GPS makers like Garmin are even trying to get into the mobile phone business!). I wouldn’t be surprised if we soon see similar declines in the market growth rates and profitability for all sorts of other devices.

    III. Putting the carriers in their place

    Throughout most of the history of the phone industry, the carriers were the dominant power. Sure, enormous phone companies like Nokia, Samsung, and Motorola had some clout, but at the end of the day, especially in the US, everybody felt the crushing influence of the major wireless carriers.

    In the US, the carriers regulated access to phones with subsidies. They controlled which functions were allowed. They controlled how many texts and phone calls you were able to make. When they did let you access the internet, they exerted strong influence on which websites you had access to and which ringtones/wallpapers/music you could download. In short, they managed the business to minimize costs and risks, and they did it because their government-granted monopolies (over the right to use wireless spectrum) and already-built networks made it impossible for a new guy to enter the market.

    But this sorry state of affairs has already started to change with the advent of the smartphone. RIM’s Blackberry had started to affect the balance of power, but Apple’s iPhone really shook things up – precisely because users started demanding more than just a wireless service plan – they wanted a particular operating system with a particular internet experience and a particular set of applications – and, oh, it’s on AT&T? That’s not important, tell me more about the Apple part of it!

    What’s more, the iPhone’s commercial success accelerated the change in consumer appetites. Smartphone users were now picking a wireless service provider not because of coverage or the cost of service or the special carrier-branded applications – that was all now secondary to the availability of the phone they wanted and what sort of applications and internet experience they could get over that phone. And much to the carriers’ dismay, the wireless carrier was becoming less like the gatekeeper who got to charge crazy prices because he/she controlled the keys to the walled garden and more like the dumb pipe that people connected to the web on their iPhone with.

    Now, it would be an exaggeration to say that the carriers will necessarily turn into the “dumb pipes” that today’s internet service providers are (remember when everyone in the US used AOL?) as these large carriers are still largely immune to competitors. But, there are signs that the carriers are adapting to their new role. The once ultra-closed Verizon now allows Palm WebOS and Google Android devices to roam free on its network as a consequence of AT&T and T-Mobile offering devices from Apple and Google’s partners, respectively, and has even agreed to allow VOIP applications like Skype access to its network, something which jeopardizes their former core voice revenue stream.

    As for the carriers, as they begin to see their influence slip over basic phone experience considerations, they will likely shift their focus to finding ways to better monetize all the traffic that is pouring through their networks. Whether this means finding a way to get a cut of the ad/virtual good/eCommerce revenue that’s flowing through or shifting how they charge for network access away from unlimited/“all you can eat” plans is unclear, but it will be interesting to see how this ecosystem evolves.

    IV. Contextuality

    There is no better price than the amazingly low price of free. And, in my humble opinion, it is that amazingly low price of free which has enabled web services to have such a high rate of adoption. Ask yourself, would services like Facebook and Google have grown nearly as fast without being free to use?

    How does one provide compelling value to users for free? Before the age of the internet, the answer to that age-old question was simple: you either got a nice government subsidy, or you just didn’t. Thankfully, the advent of the internet allowed for an entirely new business model: providing services for free and still making a decent profit by using ads. While over-hyping of this business model led to the dot com crash in 2001 as countless websites found it pretty difficult to monetize their sites purely with ads, services like Google survived because they found that they could actually increase the value of the advertising on their pages not only because they had a ton of traffic, but because they could use the content on the page to find ads which visitors had a significantly higher probability of caring about.

    The idea that context could be used to increase ad conversion rates (the percent of people who see an ad and actually end up buying) has spawned a whole new world of web startups and technologies which aim to find new ways to mine context to provide better ad targeting. Facebook is one such example of the use of social context (who your friends are, what your interests are, what your friends’ interests are) to serve more targeted ads.

    So, where do smartphones fit in? There are two ways in which smartphones completely change the context-to-advertising dynamic:

    • Location-based services: Your phone is a device which not only has a processor which can run software, but is also likely to have GPS built-in, and is something which you carry on your person at all hours of the day. What this means is that the phone not only knows what apps/websites you’re using, it also knows where you are and whether you’re in a vehicle (based on how fast you are moving) when you’re using them. If that doesn’t let a merchant figure out a way to send you a very relevant ad, I don’t know what will. The Yowza iPhone application is an example of how this might shake out in the future, where you can search for mobile coupons for local stores all on your phone.
    • Augmented reality: In the same way that the GPS lets mobile applications do location-based services, the camera, compass, and GPS in a mobile phone lets mobile applications do something called augmented reality. The concept behind augmented reality (AR) is that, in the real world, you and I are only limited by what our five senses can perceive. If I see an ad for a book, I can only perceive what is on the advertisement. I don’t necessarily know much about how much it costs on Amazon.com or what my friends on Facebook have said about it. Of course, with a mobile phone, I could look up those things on the internet, but AR takes this a step further. Instead of merely looking something up on the internet, AR will actually overlay content and information on top of what you are seeing on your phone screen. One example of this is the ShopSavvy application for Android which allows you to scan product barcodes to find product review information and even information on pricing from online and other local stores! Google has taken this a step further with Google Goggles which can recognize pictures of landmarks, books, and even bottles of wine! For an advertiser or a store, the ability to embed additional content through AR technology is the ultimate in providing context but only to those people who want it. Forget finding the right balance between putting too much or too little information on an ad, use AR so that only the people who are interested will get the extra information.
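    As a rough illustration of the “are you in a vehicle?” inference mentioned above (the function and the speed threshold here are my own toy assumptions, not any real app’s logic), two timestamped GPS fixes are enough to estimate speed:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon fixes, in meters.
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def likely_in_vehicle(fix_a, fix_b, threshold_mps=8.0):
    # Each fix is (latitude, longitude, unix_seconds). The 8 m/s (~18 mph)
    # cutoff is an arbitrary illustrative threshold, well above walking pace.
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    speed = haversine_m(lat1, lon1, lat2, lon2) / max(t2 - t1, 1e-9)
    return speed > threshold_mps

# Two fixes 10 seconds apart, roughly 220 m apart -> about 22 m/s: driving.
print(likely_in_vehicle((37.7749, -122.4194, 0), (37.7769, -122.4194, 10)))
```

    A real service would smooth over many fixes and handle GPS noise, but the basic signal, distance over time, is this simple.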

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • How to Properly Define a Company’s Culture

    Company culture is a concept which, while incredibly difficult to explain or measure, is very important to a company’s well-being and employee morale. Too often, it comes in the form of vaguely written out “corporate mission statements” or never-ending lists of feel-good, mean-nothing “company values”. Oh joy, you value “teamwork” and “making money” – that was so insightful…

    It was thus very refreshing for me to read the Netflix company culture document (sadly no longer embed-able, but you can find it at this Slideshare link).

    Slidumentation aside, I think the Netflix presentation does three things extremely well:

    1. It’s not a list of feel-good words, but actual values and statements which can guide the company in its day-to-day hiring and evaluation. Most company culture statements are nothing but long lists of virtues and things non-sociopaths respect. “Teamwork” and “honesty”, for example, are usually among them. But, as the Netflix presentation points out, even Enron had a list of “values” and that wound up not amounting to much of anything. Instead, Netflix has a clear list of the things they look for in their employees, each with a clear explanation of what it actually means. For “Curiosity”, Netflix has listed four supporting statements:
      • You learn rapidly and eagerly
      • You seek to understand our strategy, markets, subscribers, and suppliers.
      • You are broadly knowledgeable about business, technology, and entertainment.
      • You contribute effectively outside of your specialty
      Admittedly, there is nothing particularly remarkable about these four statements. But what is remarkable is that it is immediately clear to the reader what “curiosity” means, in the context of Netflix’s culture, and how Netflix employees should be judged and evaluated. It’s oftentimes astounding to me how few companies get to this bare minimum in terms of culture documents.
    2. Netflix actually gives clear value judgments. I’ve already lamented the extent to which company culture statements are nothing more than laundry lists of “feel good” words. Netflix admirably cuts through that by not only explaining what the values mean, but also by spelling out what should happen when different “good words” conflict. And, best of all, they do it with brutal honesty. For instance, here is Netflix on how they won’t play the “benefits race” that other companies play:
      “A great work place is stunning colleagues. Great workplace is not day-care, espresso, health benefits, sushi lunches, nice offices, or big compensation, and we only do those that are efficient at attracting stunning colleagues.”

      Netflix on teamwork versus individual performance:

      “Brilliant jerks: some companies tolerate them, [but] for us, the cost to teamwork is too high.”

      Netflix on its annual compensation review policy:

      “Lots of people have the title “Major League Pitcher” but they are not all equally effective. Similarly, all people with the title “Senior Marketing Manager” and “Director of Engineering” are not equally effective … So, essentially, [we are] rehiring each employee each year (and re-evaluating them based on their performance) for the purposes of compensation.”

      Within each of the three examples, Netflix has done two amazing things: they’ve made a bold value judgment, which most companies fail to do, explaining just how the values should be lived, especially when they conflict (“we don’t care how smart you are, if you don’t work well with the team, you have to go”), and they’ve even given a reason (“teamwork is more important to delivering impact for our customers than one smart guy”).
    3. They explain what makes their culture different from other companies and why. Most people who like their jobs will give “culture” as a reason they think their company is unique. Yet, if you read the countless mission statements and “our values” documents out there, you’d never be able to see that difference. Granted, the main issue may just be that management has chosen not to live up to the lofty ideals espoused in their list of virtues, but explaining how and why the company’s culture is different from another’s would help with that and make it clearer to employees what makes a particular workplace special. The Netflix presentation spends many slides explaining the tradeoffs between too many rules and too few, and why they ultimately sided with having very few rules, whereas a manufacturing company or a medical company would have very many of them. They never go so far as to say that one is better than the other, only that they are different because they are in different industries with different needs and dynamics. And, as a result, they have implemented changes, like a simpler expense policy (“Act in Netflix’s best interests”) and a revolutionary vacation policy (“There is no policy or tracking”) [with an awesome explanation: “There is also no clothing policy at Netflix, but no one has come to work naked lately”].

    Pay attention, other companies. You would do well to learn from Netflix’s example.

  • What is with Microsoft’s consumer electronics strategy?

    Genius? Source: Softpedia

    Regardless of how you feel about Microsoft’s products, you have to appreciate the brilliance of their strategic “playbook”:

    1. Use the fact that Microsoft’s operating system/productivity software is used by almost everyone to identify key customer/partner needs
    2. Build a product which is usually only a second/third-best follower product but make sure it’s tied back to Microsoft’s products
    3. Take advantage of the time and market share that Microsoft’s channel influence, developer community, and product integration buys to invest in the new product with Microsoft’s massive budget until it achieves leadership
    4. If steps 1-3 fail to give Microsoft a dominant position, either exit (because the market is no longer important) or buy out a competitor
    5. Repeat

    While the quality of Microsoft’s execution of each step can be called into question, I’d be hard pressed to find a better approach than this one, and I’m sure much of their success can be attributed to finding good ways to repeatedly follow this formula.

    It’s for that reason that I’m completely bewildered by Microsoft’s consumer electronics business strategy. Instead of finding good ways to integrate the Zune, XBox, and Windows Mobile franchises together or with the Microsoft operating system “mothership” the way Microsoft did by integrating its enterprise software with Office or Internet Explorer with Windows, these three businesses largely stand apart from Microsoft’s home field (PC software) and even from each other.

    This is problematic for two big reasons. First, because non-PC devices are outside of Microsoft’s usual playground, it’s not a surprise that Microsoft finds it difficult to expand into new territory. For Microsoft to succeed here, it needs to pull out all the stops and it’s shocking to me that a company with a stake in the ground in four key device areas (PCs, mobile phones, game consoles, and portable media players) would choose not to use one of the few advantages it has over its competitors.

    The second and most obvious reason (to consumers at least) is that Apple has not made this mistake. Apple’s iPhone and iPod Touch product lines are clear evolutions of their popular iPod MP3 players which integrate well with Apple’s iTunes computer software and iTunes online store. The entire Apple line-up, although each product is a unique entity, has a similar look and feel. The Safari browser that powers the Apple computer internet experience is, basically, the same one that powers the iPhone and iPod Touch. Similarly, the same online store and software (iTunes) which lets iPods load themselves with music lets iPod Touches/iPhones load themselves with applications.

    That neat little integrated package not only makes it easier for Apple consumers to use a product, but the coherent experience across the different devices gives customers even more of a reason to use and/or buy other Apple products.

    Contrast that approach with Microsoft’s. Not only are the user interfaces and product designs for the Zune, XBox, and Windows Mobile completely different from one another, they don’t play well together at all. Applications that run on one device (be it on the Zune HD, on a Windows PC, on an XBox, or on Windows Mobile) are unlikely to be able to run on any other. While one might be able to forgive this if it was just PC applications which had trouble being “ported” to Microsoft’s other devices (after all, apps that run on an Apple computer don’t work on the iPhone and vice versa), the devices that one would expect this to work well with (i.e. the Zune HD and the XBox because they’re both billed as gaming platforms, or the Zune HD and Windows Mobile because they’re both portable products) don’t. Their application development processes don’t line up well with one another. And, as far as I’m aware, the devices have completely separate application and content stores!

    While recreating the Windows PC experience on three other devices is definitely overkill, were I in Ballmer’s shoes, I would make a few simple recommendations which I think would dramatically benefit all of Microsoft’s product lines (and I promise they aren’t the standard Apple/Linux fanboy’s “build something prettier” or “go open source”):

    1. Centralize all application/content “marketplaces” – Apple is no internet genius. Yet, they figured out how to do this. I fail to see why Microsoft can’t do the same.
    2. Invest in building a common application runtime across all the devices – Nobody’s expecting a low-end Windows Mobile phone or a Zune HD to run Microsoft Excel, but to expect that little widgets or games should be able to work across all of Microsoft’s devices is not unreasonable, and would go a long way towards encouraging developers to develop for Microsoft’s new device platforms (if a program can run on just the Zune HD, there’s only so much revenue that a developer can take in, but if it can also run on the XBox and all Windows Mobile phones, then the revenue potential becomes much greater) and towards encouraging consumers to buy more Microsoft gear
    3. Find better ways to link Windows to each device – This can be as simple as building something like iTunes to simplify device management and content streaming, but I have yet to meet anyone with a Microsoft device who hasn’t complained about how poorly the devices work with PCs.

    Thought this was interesting? Check out some of my other pieces on Tech industry