• Dr. Machine Learning

    How to realize the promise of applying machine learning to healthcare

    Not going to happen anytime soon, sadly: the Doctor from Star Trek: Voyager; Source: TrekCore

    Despite the hype, it’ll likely be quite some time before human physicians will be replaced with machines (sorry, Star Trek: Voyager fans).

    While “smart” technology like IBM’s Watson and Alphabet’s AlphaGo can solve incredibly complex problems, they are probably not quite ready to handle the messiness of qualitative unstructured information from patients and caretakers (“it kind of hurts sometimes”) that sometimes lie (“I swear I’m still a virgin!”) or withhold information (“what does me smoking pot have to do with this?”) or have their own agendas and concerns (“I just need some painkillers and this will all go away”).

    Instead, machine learning startups and entrepreneurs interested in medicine should focus on areas where they can augment the efforts of physicians rather than replace them.

    One great example of this is in diagnostic interpretation. Today, doctors manually process countless X-rays, pathology slides, drug adherence records, and other feeds of data (EKGs, blood chemistries, etc) to find clues as to what ails their patients. What gets me excited is that these tasks are exactly the type of well-defined “pattern recognition” problems that are tractable for an AI / machine learning approach.

    If done right, software can not only handle basic diagnostic tasks, but to dramatically improve accuracy and speed. This would let healthcare systems see more patients, make more money, improve the quality of care, and let medical professionals focus on managing other messier data and on treating patients.

    As an investor, I’m very excited about the new businesses that can be built here and put together the following “wish list” of what companies setting out to apply machine learning to healthcare should strive for:

    • Excellent training data and data pipeline: Having access to large, well-annotated datasets today and the infrastructure and processes in place to build and annotate larger datasets tomorrow is probably the main defining . While its tempting for startups to cut corners here, that would be short-sighted as the long-term success of any machine learning company ultimately depends on this being a core competency.
    • Low (ideally zero) clinical tradeoffs: Medical professionals tend to be very skeptical of new technologies. While its possible to have great product-market fit with a technology being much better on just one dimension, in practice, to get over the innate skepticism of the field, the best companies will be able to show great data that makes few clinical compromises (if any). For a diagnostic company, that means having better sensitivty and selectivity at the same stage in disease progression (ideally prospectively and not just retrospectively).
    • Not a pure black box: AI-based approaches too often work like a black box: you have no idea why it gave a certain answer. While this is perfectly acceptable when it comes to recommending a book to buy or a video to watch, it is less so in medicine where expensive, potentially life-altering decisions are being made. The best companies will figure out how to make aspects of their algorithms more transparent to practitioners, calling out, for example, the critical features or data points that led the algorithm to make its call. This will let physicians build confidence in their ability to weigh the algorithm against other messier factors and diagnostic explanations.
    • Solve a burning need for the market as it is today: Companies don’t earn the right to change or disrupt anything until they’ve established a foothold into an existing market. This can be extremely frustrating, especially in medicine given how conservative the field is and the drive in many entrepreneurs to shake up a healthcare system that has many flaws. But, the practical reality is that all the participants in the system (payers, physicians, administrators, etc) are too busy with their own issues (i.e. patient care, finding a way to get everything paid for) to just embrace a new technology, no matter how awesome it is. To succeed, machine diagnostic technologies should start, not by upending everything with a radical solution, but by solving a clear pain point (that hopefully has a lot of big dollar signs attached to it!) for a clear customer in mind.

    Its reasons like this that I eagerly follow the development of companies with initiatives in applying machine learning to healthcare like Google’s DeepMind, Zebra Medical, and many more.

  • Why VR Could be as Big as the Smartphone Revolution

    Technology in the 1990s and early 2000s marched to the beat of an Intel-and-Microsoft-led drum.

    Source: IT Portal

    Intel would release new chips at a regular cadence: each cheaper, faster, and more energy efficient than the last. This would let Microsoft push out new, more performance-hungry software, which would, in turn, get customers to want Intel’s next, more awesome chip. Couple that virtuous cycle with the fact that millions of households were buying their first PCs and getting onto the Internet for the first time — and great opportunities were created to build businesses and products across software and hardware.

    But, over time, that cycle broke down. By the mid-2000s, Intel’s technological progress bumped into the limits of what physics would allow with regards to chip performance and cost. Complacency from its enviable market share coupled with software bloat from its Windows and Office franchises had a similar effect on Microsoft. The result was that the Intel and Microsoft drum stopped beating as they became unable to give the mass market a compelling reason to upgrade to each subsequent generation of devices.

    The result was a hollowing out of the hardware and semiconductor industries tied to the PC market that was only masked by the innovation stemming from the rise of the Internet and the dawn of a new technology cycle in the late 2000s in the form of Apple’s iPhone and its Android competitors: the smartphone.

    Source: Mashable

    A new, but eerily familiar cycle began: like clockwork, Qualcomm, Samsung, and Apple (playing the part of Intel) would devise new, more awesome chips which would feed the creation of new performance-hungry software from Google and Apple (playing the part of Microsoft) which led to demand for the next generation of hardware. Just as with the PC cycle, new and lucrative software, hardware, and service businesses flourished.

    But, just as with the PC cycle, the smartphone cycle is starting to show signs of maturity. Apple’s recent slower than expected growth has already been blamed on smartphone market saturation. Users are beginning to see each new generation of smartphone as marginal improvements. There are also eery parallels between the growing complaints over Apple software quality from even Apple fans and the position Microsoft was in near the end of the PC cycle.

    While its too early to call the end for Apple and Google, history suggests that we will eventually enter a similar phase with smartphones that the PC industry experienced. This begs the question: what’s next? Many of the traditional answers to this question — connected cars, the “Internet of Things”, Wearables, Digital TVs — have not yet proven themselves to be truly mass market, nor have they shown the virtuous technology upgrade cycle that characterized the PC and smartphone industries.

    This brings us to Virtual Reality. With VR, we have a new technology paradigm that can (potentially) appeal to the mass market (new types of games, new ways of doing work, new ways of experiencing the world, etc.). It also has a high bar for hardware performance that will benefit dramatically from advances in technology, not dissimilar from what we saw with the PC and smartphone.

    Source: Forbes

    The ultimate proof will be whether or not a compelling ecosystem of VR software and services emerges to make this technology more of a mainstream “must-have” (something that, admittedly, the high price of the first generation Facebook/OculusHTC/Valve, and Microsoft products may hinder).

    As a tech enthusiast, its easy to get excited. Not only is VR just frickin’ cool (it is!), its probably the first thing since the smartphone with the mass appeal and virtuous upgrade cycle that can bring about the huge flourishing of products and companies that makes tech so dynamic to be involved with.

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • Laszlo Bock on Building Google’s Culture

    Much has been written about what makes Google work so well: their ridiculously profitable advertising business model, the technology behind their search engine and data centers, and the amazing pay and perks they offer.

    Source: the book

    My experiences investing in and working with startups, however, has taught me that building a great company is usually less about a specific technical or business model innovation than about building a culture of continuous improvement and innovation. To try to get some insight into how Google does things, I picked up Google SVP of People Operations Laszlo Bock’s book Work Rules!

    Bock describes a Google culture rooted in principles that came from founders Larry Page and Sergey Brin when they started the company: get the best people to work for you, make them want to stay and contribute, and remove barriers to their creativity. What’s great (to those interested in company building) is that Bock goes on to detail the practices Google has put in place to try to live up to these principles even as their headcount has expanded.

    The core of Google’s culture boils down to four basic principles and much of the book is focused on how companies should act if they want to live up to them:

    1. Presume trust: Many of Google’s cultural norms stem from a view that people are well-intentioned and trustworthy. While that may not seem so radical, this manifested at Google as a level of transparency with employees and a bias to say yes to employee suggestions that most companies are uncomfortable with. It raises interesting questions about why companies that say their talent is the most important thing treat them in ways that suggest a lack of trust.
    2. Recruit the best: Many an exec pays lip service to this, but what Google has done is institute policies that run counter to standard recruiting practices to try to actually achieve this at scale: templatized interviews / forms (to make the review process more objective and standardized), hiring decisions made by cross-org committees (to insure a consistently high bar is set), and heavy use of data to track the effectiveness of different interviewers and interview tactics. While there’s room to disagree if these are the best policies (I can imagine hating this as a hiring manager trying to staff up a team quickly), what I admired is that they set a goal (to hire the best at scale) and have actually thought through the recruiting practices they need to do so.
    3. Pay fairly [means pay unequally]: While many executives would agree with the notion that superstar employees can be 2-10x more productive, few companies actually compensate their superstars 2-10x more. While its unclear to me how effective Google is at rewarding superstars, the fact that they’ve tried to align their pay policies with their beliefs on how people perform is another great example of deviating from the norm (this time in terms of compensation) to follow through on their desire to pay fairly.
    4. Be data-driven: Another “in vogue” platitude amongst executives, but one that very few companies live up to, is around being data-driven. In reading Bock’s book, I was constantly drawing parallels between the experimentation, data collection, and analyses his People Operations team carried out and the types of experiments, data collection, and analyses you would expect a consumer internet/mobile company to do with their users. Case in point: Bock’s team experimented with different performance review approaches and even cafeteria food offerings in the same way you would expect Facebook to experiment with different news feed algorithms and notification strategies. It underscores the principle that, if you’re truly data-driven, you don’t just selectively apply it to how you conduct business, you apply it everywhere.

    Of course, not every company is Google, and not every company should have the same set of guiding principles or will come to same conclusions. Some of the processes that Google practices are impractical (i.e., experimentation is harder to set up / draw conclusions from with much smaller companies, not all professions have such wide variations in output as to drive such wide variations in pay, etc).

    What Bock’s book highlights, though, is that companies should be thoughtful about what sort of cultural principles they want to follow and what policies and actions that translates into if they truly believe them. I’d highly recommend the book!

  • What Happens After the Tech Bubble Pops

    In recent years, it’s been the opposite of controversial to say that the tech industry is in a bubble. The terrible recent stock market performance of once high-flying startups across virtually every industry (see table below) and the turmoil in the stock market stemming from low oil prices and concerns about the economies of countries like China and Brazil have raised fears that the bubble is beginning to pop.

    While history will judge when this bubble “officially” bursts, the purpose of this post is to try to make some predictions about what will happen during/after this “correction” and pull together some advice for people in / wanting to get into the tech industry. Starting with the immediate consequences, one can reasonably expect that:

    • Exit pipeline will dry up: When startup valuations are higher than what the company could reasonably get in the stock market, management teams (who need to keep their investors and employees happy) become less willing to go public. And, if public markets are less excited about startups, the price acquirers need to pay to convince a management team to sell goes down. The result is fewer exits and less cash back to investors and employees for the exits that do happen.
    • VCs become less willing to invest: VCs invest in startups on the promise that future IPOs and acquisitions will make them even more money. When the exit pipeline dries up, VCs get cold feet because the ability to get a nice exit seems to fade away. The result is that VCs become a lot more price-sensitive when it comes to investing in later stage companies (where the dried up exit pipeline hurts the most).
    • Later stage companies start cutting costs: Companies in an environment where they can’t sell themselves or easily raise money have no choice but to cut costs. Since the vast majority of later-stage startups run at a loss to increase growth, they will find themselves in the uncomfortable position of slowing down hiring and potentially laying employees off, cutting back on perks, and focusing a lot more on getting their financials in order.

    The result of all of this will be interesting for folks used to a tech industry (and a Bay Area) flush with cash and boundlessly optimistic:

    1. Job hopping should slow: “Easy money” to help companies figure out what works or to get an “acquihire” as a soft landing will be harder to get in a challenged financing and exit environment. The result is that the rapid job hopping endemic in the tech industry should slow as potential founders find it harder to raise money for their ideas and as it becomes harder for new startups to get the capital they need to pay top dollar.
    2. Strong companies are here to stay: While there is broad agreement that there are too many startups with higher valuations than reasonable, what’s also become clear is there are a number of mature tech companies that are doing exceptionally well (i.e. Facebook, Amazon, Netflix, and Google) and a number of “hotshots” which have demonstrated enough growth and strong enough unit economics and market position to survive a challenged environment (i.e. Uber, Airbnb). This will let them continue to hire and invest in ways that weaker peers will be unable to match.
    3. Tech “luxury money” will slow but not disappear: Anyone who lives in the Bay Area has a story of the ridiculousness of “tech money” (sky-high rents, gourmet toast,“its like Uber but for X”, etc). This has been fueled by cash from the startup world as well as free flowing VC money subsidizing many of these new services . However, in a world where companies need to cut costs, where exits are harder to come by, and where VCs are less willing to subsidize random on-demand services, a lot of this will diminish. That some of these services are fundamentally better than what came before (i.e. Uber) and that stronger companies will continue to pay top dollar for top talent will prevent all of this from collapsing (and lets not forget San Francisco’s irrational housing supply policies). As a result, people expecting a reversal of gentrification and the excesses of tech wealth will likely be disappointed, but its reasonable to expect a dramatic rationalization of the price and quantity of many “luxuries” that Bay Area inhabitants have become accustomed to soon.

    So, what to do if you’re in / trying to get in to / wanting to invest in the tech industry?

    • Understand the business before you get in: Its a shame that market sentiment drives fundraising and exits, because good financial performance is generally a pretty good indicator of the long-term prospects of a business. In an environment where its harder to exit and raise cash, its absolutely critical to make sure there is a solid business footing so the company can keep going or raise money / exit on good terms.
    • Be concerned about companies which have a lot of startup exposure: Even if a company has solid financial performance, if much of that comes from selling to startups (especially services around accounting, recruiting, or sales), then they’re dependent on VCs opening up their own wallets to make money.
    • Have a much higher bar for large, later-stage companies: The companies that will feel the most “pain” the earliest will be those with with high valuations and high costs. Raising money at unicorn valuations can make a sexy press release but it doesn’t amount to anything if you can’t exit or raise money at an even higher valuation.
    • Rationalize exposure to “luxury”: Don’t expect that “Uber but for X” service that you love to stick around (at least not at current prices)…
    • Early stage companies can still be attractive: Companies that are several years from an exit & raising large amounts of cash will be insulated in the near-term from the pain in the later stage, especially if they are committed to staying frugal and building a disruptive business. Since they are already relatively low in valuation and since investors know they are discounting off a valuation in the future (potentially after any current market softness), the downward pressures on valuation are potentially lighter as well.

    Thought this was interesting or helpful? Check out some of my other pieces on investing / finance.

  • An “Unbiased Opinion”

    I recently read a short column by gadget reviewer Vlad Savov in The Verge provocatively titled “My reviews are biased — that’s why you should trust them” which made me think. In it, Vlad addresses the accusation he hears often that he’s biased:

    Of course I’m biased, that’s the whole point… subjectivity is an inherent — and I would argue necessary — part of making these reviews meaningful. Giving each new device a decontextualized blank slate to be reviewed against and only asserting the bare facts of its existence is neither engaging nor particularly useful. You want me to complain about the chronically bloopy Samsung TouchWiz interface while celebrating the size perfection of last year’s Moto X. Those are my preferences, my biased opinions, and it’s only by applying them to the pristine new phone or tablet that I can be of any use to readers. To be perfectly impartial would negate the value of having a human conduct the review at all. Just feed the new thing into a 3D scanner and run a few algorithms over the resulting data to determine a numerical score. Job done.”

    [emphasis mine]

    As Vlad points out, in an expert you’re asking for advice from, bias is a good thing. Now whether or not Vlad has unhelpful biases or is someone who’s opinion you value is a separate question entirely, but if there’s one thing I’ve learned — an unbiased opinion is oftentimes an uneducated one and tend to come from panderers who fit one of three criteria:

    1. they think you don’t want them to express an opinion and are trying to respect your wishes
    2. they don’t know anything
    3. they are trying to sell you something, not mutually exclusive with (2)

    The individuals who are the most knowledgeable and thoughtful about a topic almost certainly have a bias and that’s a bias that you want to hear.

  • 3D Printing as Disruptive Innovation

    Last week, I attended a MIT/Stanford VLAB event on 3D printing technologies. While I had previously been aware of 3D printing (which works basically the way it sounds) as a way of helping companies and startups do quick prototypes or letting geeks of the “maker” persuasion make random knickknacks, it was at the event that I started to recognize the technology’s disruptive potential in manufacturing. While the conference itself was actually more about personal use for 3D printing, when I thought about the applications in the industrial/business world, it was literally like seeing the first part/introduction of a new chapter or case study from Clayton Christensen, author of The Innovator’s Dilemma (and inspiration for one of the more popular blog posts here :-)) play out right in front of me:

    • Like many other disruptive innovations when they began, 3D printing today is unable to serve the broader manufacturing “market”. Generally speaking, the time needed per unit output, the poor “print resolution”, the upfront capital costs, and some of the limitations in terms of materials are among the reasons that the technology as it stands today is uncompetitive with traditional mass manufacturing.
    • Even if 3D printing were competitive today, there are big internal and external stumbling blocks which would probably make it very difficult for existing large companies to embrace it. Today’s heavyweight manufacturers are organized and incentivized internally along the lines of traditional assembly line manufacturing. They also lack the partners, channels, and supply chain relationships (among others) externally that they would need to succeed.
    • While 3D printing today is very disadvantaged relative to traditional manufacturing technologies (most notably in speed and upfront cost), it is extremely good at certain things which make it a phenomenal technology for certain use cases:
      • Rapid design to production: Unlike traditional manufacturing techniques which take significant initial tooling and setup, once you have a 3D printer and an idea, all you need to do is print the darn thing! At the conference, one of the panelists gave a great example: a designer bought an Apple iPad on a Friday, decided he wanted to make his own iPad case, and despite not getting any help from Apple or prior knowledge of the specs, was able by Monday to be producing and selling the case he had designed that weekend. Idea to production in three days. Is it any wonder that so many of the new hardware startups are using 3D printing to do quick prototyping?
      • Short runs/lots of customizationChances are most of the things you use in your life are not one of a kind (i.e. pencils, clothes, utensils, dishware, furniture, cars, etc). The reason for this is that mass production make it extremely cheap to produce many copies of the same thing. The flip side of this is that short production runs (where you’re not producing thousands or millions of the same thing) and production where each item has a fair amount of customization or uniqueness is really expensive. With 3D printing, however, because each item being produced is produced in the same way (by the printer), you can produce one item at close to the same per unit price as producing a million – this makes 3D printing a very interesting technology for markets where customization & short runs are extremely valuable.
      • Shapes/structures that injection molding and machining find difficult: There are many shapes where traditional machining (taking a big block of material and whittling it down to the desired shape) and injection molding (building a mold and then filling it with molten material to get the desired shape) are not ideal: things like producing precision products that go into airplanes and racecars or printing the scaffolds with which bioengineers hope to build artificial organs are uniquely addressable by 3D printing technologies.
      • Low laborThe printer takes care of all of it – thus letting companies cut costs in manufacturing and/or refocus their people to steps in the process which do require direct human intervention.
    • And, of course, with the new markets which are opening up for 3D printing, its certainly helpful that the size, cost, and performance of 3D printers has improved dramatically and is continuing to improve – to the point where the panelists were very serious when they articulated a vision of the future where 3D printers could be as widespread as typical inkjet/laser printers!

    Ok, so why do we care? While its difficult to predict precisely what this technology could bring (it is disruptive after all!), I think there are a few tantalizing possibilities of how the manufacturing game might change to consider:

    • The ability to do rapid design to productionmeans you could dofast fashion for everything – in the same way that companies like Zara can produce thousands of different products in a season (and quickly change them to meet new trends/styles), broader adoption of 3D printing could lead to the rise of new companies where design/operational flexibility and speed are king, as the companies best able to fit their products to the flavor-of-the-month gain more traction.
    • The ability to do customization means you can manufacture custom parts/products cost-effectively and without holding as much inventory; production only needs to begin after an order is on hand (no reason to hold extra “copies” of something that may go out of fashion/go bad in storage when you can print stuff on the fly) and the lack of retooling means companies can be a lot more flexible in terms of using customization to get more customers.
    • I’m not sure how all the second/third-order effects play out, but this could also put a damper on outsourced manufacturing to countries like China/India – who cares about cheaper manufacturing labor overseas when 3D printing makes it possible to manufacture locally without much labor and avoid import duties, shipping delays, and the need to hold on to parts/inventory?

    I think there’s a ton of potential for the technology itself and its applications, and the possible consequences for how manufacturing will evolve are staggering. Yes, we are probably a long way off from seeing this, but I think we are on the verge of seeing a disruptive innovation take place, and if you’re anything like me, you’re excited to see it play out.

  • Boa Constrictors Listen to Your Heart So They Know When You’re Dead

    Source: Paul Whitten

    For January I decided to blog a paper I heard about on the excellent Nature podcast about a deliciously simple and elegant experiment to test a very simple question: given how much time and effort boa constrictors (like the one on above, photo taken by Paul Whitten) need to kill prey by squeezing them to death, how do they know when to stop squeezing?

    Hypothesizing that boa constrictors could sense the heartbeat of their prey, some enterprising researchers from Dickinson College decided to test the hypothesis by fitting dead rats with bulbs connected to water pumps (so that the researchers could simulate a heartbeat) and tracking how long and hard the boas would squeeze for:

    • rats without a “heartbeat” (white)
    • rats with a “heartbeat” for 10 min (gray)
    • rats with a continuous “heartbeat” (black)
    Source: Figure 2, Boback. et al

    The results are shown in figure 2 (to the right). The different color bars show the different experimental groups (white: no heartbeat, gray: heartbeat for 10 min before stopping, and black: continuous heartbeat). Figure 2a (on top) shows how long the boas squeezed for whereas Figure 2b (on bottom) shows the total “effort” exerted by the boas. As obvious from the chart, the longer the simulated heartbeat went, the longer and harder the boas would squeeze.

    Conclusion? I’ll let the paper speak for itself: “snakes use the heartbeat in their prey as a cue to modulate constriction effort and to decide when to release their prey.”

    Interestingly, the paper goes a step further for those of us who aren’t ecology experts and notes that being attentive to heartbeat would probably be pretty irrelevant in the wild for small mammals (which, ironically, includes rats) and birds which die pretty quickly after being constricted. Where this type of attentiveness to heartrate is useful is in reptilian prey (crocodiles, lizards, other snakes, etc) which can survive with reduced oxygen for longer. From that observation, the researchers thus concluded that listening for heartrate probably evolved early in evolutionary history at a time when the main prey for snakes were other reptiles and not mammals and birds.

    In terms of where I’d go next after this – my main point of curiosity is on whether or not boa constrictors are listening/feeling for any other signs of life (i.e. movement or breathing). Obviously, they’re sensitive to heart rate, but if an animal with simulated breathing or movement – would that change their constricting activity as well? After all, I’m sure the creative guys that made an artificial water-pump-heart can find ways to build an artificial diaphragm and limb muscles… right?

    Paper: Boback et al., “Snake modulates constriction in response to prey’s heartbeat.” Biol Letters. 19 Dec 2011. doi: 10.1098/rsbl.2011.1105

    Check out my other academic paper walkthroughs/summaries

  • Mosquitoes are Drawn to Your Skin Bacteria

    This month’s paper (from open access journal PLoS ONE) is yet again about the impact on our health of the bacteria which have decided to call our bodies home. But, instead of the bacteria living in our gut, this month is about the bacteria which live on our skin.

    It’s been known that the bacteria that live on our skin help give us our particular odors. So, the researchers wondered if the mosquitos responsible for passing malaria (Anopheles) were more or less drawn to different individuals based on the scent that our skin-borne bacteria impart upon us (also, for the record, before you freak out about bacteria on your skin, remember that like the bacteria in your gut, the bacteria on your skin are natural and play a key role in maintaining the health of your skin).

    Looking at 48 individuals, they noticed a huge variation in terms of attractiveness to Anopheles mosquitos (measured by seeing how much mosquitos prefer to fly towards a chamber with a particular individual’s skin extract versus a control) which they were able to trace to two things. The first is the amount of bacteria on your skin. As shown in Figure 2 below, is that the more bacteria that you have on your skin (the higher your “log bacterial density”), the more attractive you seem to be to mosquitos (the higher your mean relative attractiveness).

    Source: Figure 2, Verhulst et al

    The second thing they noticed was that the type of bacteria also seemed to be correlated with attractiveness to mosquitos. Using DNA sequencing technology, they were able to get a mini-census of what sort of bacteria were present on the skins of the different patients. Sadly, they didn’t show any pretty figures for the analysis they conducted on two common types of bacteria (Staphylococcus and Pseudomonas), but, to quote from the paper:

    The abundance of Staphylococcus spp. was 2.62 times higher in the HA [Highly Attractive to mosquitoes] group than in the PA [Poorly Attractive to mosquitoes] group and the abundance of Pseudomonas spp. 3.11 times higher in the PA group than in the HA group.

    Using further genetic analyses, they were also able to show a number of other types of bacteria that were correlated with one or the other.

    So, what did I think? While I think there’s a lot of interesting data here, I think the story could’ve been tighter. First and foremost, for obvious reasons, correlation does not mean causation. This was not a true controlled experiment – we don’t know for a fact if more/specific types of bacteria cause mosquitos to be drawn to them or if there’s something else that explains both the amount/type of bacteria and the attractiveness of an individual’s skin scent to a mosquito. Secondly, Figure 2 leaves much to be desired in terms of establishing a strong trendline. Yes, if I  squint (and ignore their very leading trendline) I can see a positive correlation – but truth be told, the scatterplot looks like a giant mess, especially if you include the red squares that go with “Not HA or PA”. For a future study, I think it’d be great if they could get around this to show stronger causation with direct experimentation (i.e. extracting the odorants from Staphylococcus and/or Pseudomonas and adding them to a “clean” skin sample, etc)

    With that said, I have to applaud the researchers for tackling a fascinating topic by taking a very different angle. Coverage of malaria is usually focused on how to directly kill or impede the parasite (Plasmodium falciparums). This is the first treatment of the “ecology” of malaria – specifically the ecology of the bacteria on your skin! While the authors don’t promise a “cure for malaria”, you can tell they are excited about what they’ve found and the potential to find ways other than killing parasites/mosquitos to help deal with malaria, and I look forward to seeing the other ways that our skin bacteria impact our lives.

    Paper: Verhulst et al. “Composition of Human Skin Microbiota Affects Attractiveness to Malaria Mosquitoes.” PLoS ONE 6(12). 17 Nov 2011. doi:10.1371/journal.pone.0028991

    Check out my other academic paper walkthroughs/summaries

  • Fat Flora

    Source: Healthy Soul

    November’s paper was published in Nature in 2006, and covers a topic I’ve become increasingly interested in: the impact of the bacteria that have colonized our bodies on our health (something I’ve blogged about here and here).

    The idea that our bodies are, in some ways, more bacteria than human (there are 10x more gut bacteria – or flora — than human cells on our bodies) and that those bacteria can play a key role on our health is not only mind-blowing, it opens up another potential area for medical/life sciences research and future medicines/treatments.

    In the paper, a genetics team from Washington University in St. Louis explored a very basic question: are the gut bacteria from obese individuals different from those from non-obese individuals? To study the question, they performed two types of analyses on a set of mice with a genetic defect leading to an inability of the mice to “feel full” (and hence likely to become obese) and genetically similar mice lacking that defect (the s0-called “wild type” control).

    The first was a series of genetic experiments comparing the bacteria found within the gut of obese mice with those from the gut of “wild-type” mice (this sort of comparison is something the field calls metagenomics). In doing so, the researchers noticed a number of key differences in the “genetic fingerprint” of the two sets of gut bacteria, especially in the genes involved in metabolism.

    Source: Figure 3, Turnbaugh et al.

    But, what did that mean to the overall health of the animal? To answer that question, the researchers did a number of experiments, two of which I will talk about below. First, they did a very simple chemical analysis (see figure 3b to the left) comparing the “leftover energy” in the waste (aka poop) of the obese mice to the waste of wild-type mice (and, yes, all of this was controlled for the amount of waste/poop). Lo and behold, the obese mice (the white bar) seemed to have gut bacteria which were significantly better at pulling calories out of the food, leaving less “leftover energy”.

    Source: Figure 3, Turnbaugh et al.

    While an interesting result, especially when thinking about some of the causes and effects of obesity, a skeptic might look at that data and say that its inconclusive about the role of gut bacteria in obesity – after all, obese mice could have all sorts of other changes which make them more efficient at pulling energy out of food. To address that, the researchers did a very elegant experiment involving fecal transplant: that’s right, colonize one mouse with the bacteria from another mouse (by transferring poop). The figure to the right (figure 3c) shows the results of the experiment. After two weeks, despite starting out at about the same weight and eating similar amounts of the same food, wild type mice that received bacteria from other wild type mice showed an increase in body fat of about 27%, whereas the wild type mice that received bacteria from the obese mice showed an increase of about 47%! Clearly, gut bacteria in obese mice are playing a key role in calorie uptake!

    In terms of areas of improvement, my main complaint about this study is just that it doesn’t go far enough. The paper never gets too deep on what exactly were the bacteria in each sample and we didn’t really get a sense of the real variation: how much do bacteria vary from mouse to mouse? Is it the completely different bacteria? Is it the same bacteria but different numbers? Is it the same bacteria but they’re each functioning differently? Do two obese mice have the same bacteria? What about a mouse that isn’t quite obese but not quite wild-type either? Furthermore, the paper doesn’t show us what happens if an obese mouse has its bacteria replaced with the bacteria from a wild-type mouse. These are all interesting questions that would really help researchers and doctors understand what is happening.

    But, despite all of that, this was a very interesting finding and has major implications for doctors and researchers in thinking about how our complicated flora impact and are impacted by our health.

    Paper: Turnbaugh et al., “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature (444). 21/28 Dec 2006. doi:10.1038/nature05414

    Check out my other academic paper walkthroughs/summaries

  • Antibody-omics

    I’m pretty late for paper of the month, so here we go

    “Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore – that’s so last century, you study proteomics now. And, who studies metabolism? Its all about metabolomics. There’s even a blog covering this with the semi-irreverent name “Omics! Omics!”.

    This month’s paper from Science is from researchers at the NIH because it was the first time I ever encountered the term “antibodyome”. As some of you know, antibodies are the “smart missiles” of your immune system – they are built to recognize and attack only one specific target (i.e. a particular protein on a bacteria/virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies to make research tools (i.e. using fluorescent antibodies to label specific things) and therapies (i.e. using antibodies to proteins associated with cancer as anti-cancer drugs).

    How the immune system does this is a fascinating story in and of itself. In a process called V(D)J recombination – the basic idea is that your immune system’s B-cells mix, match, and scramble certain pieces of your genetic code to try to produce a wide range of antibodies to hit potentially every structure they could conceivably see. And, once they see something which “kind of sticks”, they undergo a process called affinity maturation to introduce all sorts of mutations in the hopes that you create an even better antibody.

    Which brings us to the paper I picked – the researchers analyzed a couple of particularly effective antibodies targeted at HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the HIV virus, but when they took a closer look at the 3D structures/the B-cell genetic code which made them, they found that the antibodies were quite different from one another (see Figure 3C below)

    Source: Figure 3C, Wu et al.

    What’s more, not only were they fairly distinct from one another, they each showed *significant* affinity maturation – while a typical antibody has 5-15% of their underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient – their “antibodyome” (in the same way that a patient’s genome would be all of their genes) — and along with data from other patients, they were able to construct a genetic “family tree” for these antibodies (see Figure 6C below)

    Source: Figure 6, Wu et al.

    The analysis shows that many of the antibodies were derived from the same initial genetic VDJ “mix-and-match” but that afterwards, there were quite a number of changes made to that code to get the situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

    While I wish the paper probed deeper into actual experimentation to take this analysis further (i.e. artificially using this method to create other antibodies with similar behavior), this paper goes a long way into establishing an early picture of what “antibodyomics” is. Rather than study the total impact of an immune response or just the immune capabilities of one particular B-cell/antibody, this sort of genetic approach lets researchers get a very detailed, albeit comprehensive look at where the body’s antibodies are coming from. Hopefully, longer term this also turns into a way for researchers to make better vaccines.

    Paper:  Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532

    Check out my other academic paper walkthroughs/summaries

  • The Marketing Glory of NVIDIA’s Codenames

    While code names are not rare in the corporate world, more often than not, the names tend to be unimaginative. NVIDIA’s code names, however, are pure marketing glory.

    Take NVIDIA’s high performance computing product roadmap (below) – these are products that use the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers. How does NVIDIA describe its future roadmap? It uses the names of famous scientists to describe its technology roadmap: Tesla (the great American electrical engineer who helped bring us AC power), Fermi (“the father of the Atomic Bomb”), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena were all linked).

    Source: Rage3D

    Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 

    But, what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, he uses, well, comic book characters. For release at the end of this year? Kal-El, or for the uninitiated, that’s the alien name for Superman. After that? Wayne, as in the alter ego for Batman. Then, Loganas in the name for the X-men Wolverine. And then Starkas in the alter ego for Iron Man.

    Source: NVIDIA

    Everybody wants a little Iron Man in their tablet.

  • Web vs Native

    When Steve Jobs first launched the iPhone in 2007, Apple’s perception of where the smartphone application market would move was in the direction of web applications. The reasons for this are obvious: people are familiar with how to build web pages and applications, and it simplifies application delivery.

    Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to building native applications custom-built (by definition) for the iPhone’s operating system and hardware. While I suspect part of the reason this was done was to lock-in developers, the main reason was certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was relatively primitive relative to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies which would get the sort of performance and functionality Apple wanted.

    But, that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere’s obsession with hyping up a rivalry between “native vs HTML5 app development”, and it begs the question: will the future of application development be HTML5 applications or native?

    There are a lot of “moving parts” in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance and the new capabilities that HTML5 will bring like offline storage, a canvas for direct graphic manipulation, and tools to access the file system, mean, at least to this tech blogger, that “HTML5 applications” are not distinct from native applications at all, they are simply native applications that you access through the internet. Its not a different technology vector – it’s just a different form of delivery.

    Critics of this idea may cite that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus they will always be distinct. And, as of today, they are correct. However, this discounts a few things:

    • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
    • History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low level machine language. But this was a nightmare – it was difficult to do simple things like showing text on a screen, and the code written only worked on specific chips and operating systems and hardware configurations. I learned C which helped to abstract a lot of that away, and, keeping with the trend of moving towards more portability and abstraction, the mobile/web developers of today develop with tools (Python, Objective C, Ruby, Java, Javascript, etc) which make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of “HTML5”.
    • Huge platform economic advantages. There are three huge advantages today to HTML5 development over “traditional native app development”. The first is the ability to have essentially the same application run across any device which supports a browser. Granted, there are performance and user experience issues with this approach, but when you’re a startup or even a corporate project with limited resources, being able to get wide distribution for earlier products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android have where distribution is controlled and “taxed” (30% to Apple/Google for an app download, 30% cut of digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-based/cloud-based application allows even “dumb devices” to do amazing things by outsourcing some of that work to another system. The combination of these three makes it easier to build new applications and services and make money off of them – which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”

    Given Google’s strategic interest in the web as an open development platform, its no small wonder that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let users achieve “native performance” with the browser, they’ve built an entire operating system centered entirely around the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store to help with application discovery.

    While it remains to be seen if any of these initiatives will end up successful, this is definitely a compelling view of how the technology ecosystem evolves, and, putting on my forward-thinking cap on, I would not be surprised if:

    1. The major operating systems became more ChromeOS-like over time. Mac OS’s dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
    2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, there was a ton of attention focused on building Java accelerators in hardware/chips and software platforms who’s main function was to run Java. While Java did not take over the world the way its supporters had thought, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance – maybe even HTML5 optimized chips/accelerators, additional ChromeOS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications?
    3. Web application discovery will become far more important. The one big weakness as it stands today for HTML5 is application discovery. Its still far easier to discover a native mobile app using the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as platform matures and the platform economics shift, new application stores/recommendation engines/syndication platforms will become increasingly critical.

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • Standards Have No Standards

    Many forms of technology requires standards to work. As a result, it is in the best interest of all parties in the technology ecosystem to participate in standards bodies to ensure interoperability.

    The two main problem with getting standards working can be summed up, as all good things in technology can be, in the form of webcomics. 

    Problem #1, from XKCDpeople/companies/organizations keep creating more standards.

    Source: XKCD

    The cartoon takes the more benevolent look at how standards proliferate; the more cynical view is that individuals/corporations recognize that control or influence over an industry standard can give them significant power in the technology ecosystem. I think both the benevolent and the cynical view are always at play – but the result is the continual creation of “bigger and badder” standards which are meant to replace but oftentimes fail to completely supplant existing ones. Case in point, as someone who has spent a fair amount of time looking at technologies to enable greater intelligence/network connectivity in new types of devices (think TVs, smart meters, appliances, thermostats, etc.), I’m still puzzled as to why we have so many wireless communication standards and protocols for achieving it (Bluetooth, Zigbee, ZWave, WiFi, DASH7, 6LowPAN, etc)

    Problem #2: standards aren’t purely technical undertakings – they’re heavily motivated by the preferences of the bodies and companies which participate in formulating them, and like the US’s “wonderful” legislative process, involves mashing together a large number of preferences, some of which might not necessarily be easily compatible with one another. This can turn quite political and generate standards/working papers which are too difficult to support well (i.e. like DLNA). Or, as Dilbert sums it up, these meetings are full of people who are instructed to do this:

    Source: Dilbert

    Or this:

    Source: Dilbert

    Our one hope is that the industry has enough people/companires who are more vested in the future of the technology industry than taking unnecessarily cheap shots at one another… It’s a wonder we have functioning standards at all, isn’t it?

    Thought this was interesting? Check out some of my other pieces on Tech industry

  • The “Strangest Biotech Company of All” Issues Their Annual Report as a Comic Book

    This seems almost made for me: I’m into comic books. I do my own “corporate style” annual and quarterly reports to track how my finances and goals are going. And, I follow the biopharma industry.

    Source: United Therapeutics 2010 Annual Report

    So, when I found out that a biotech company issued its latest annual report in the form of a comic book, I knew I had to talk about it!

    The art style is not all that bad, and the bulk of the comic is told from the first person perspective of Martin Auster, head of business development at the company (that’s Doctor Auster to you, pal!). We get an interesting look at Auster’s life, how he was a medical student who didn’t really want to do a residency, and how and why he ultimately joins the company.

    Source: United Therapeutics 2010 Annual Report
    Source: United Therapeutics 2010 Annual Report
    Source: United Therapeutics 2010 Annual Report

    And, of course, what annual report wouldn’t be complete without some financial charts – and yes, this particular chart was intended to be read with 3D glasses (which were apparently shipped with paper copies of the report):

    Source: United Therapeutics 2010 Annual Report

    Interestingly, the company in question – United Therapeutics — is not a tiny company either: its worth roughly $3 billion (as of when this was written) and is also somewhat renowned for its more unusual practices (meetings have occurred in the virtual world Second Life and employees are all called “Unitherians”) as well as its brilliant and eccentric founder, Dr. Martine Rothblatt. Rothblatt is a very accomplished modern-day polymath:

    • She was an early pioneer in communication satellite law
    • She helped launch a number of communication satellite technologies and companies
    • She founded and was CEO of Geostar Corporation, an early GPS satellite company
    • She founded and was CEO of Sirius Satellite Radio
    • She led the International Bar Association’s efforts to draft a Universal Declaration on the Human Genome and Human Rights
    • She is a pre-eminent proponent for xenotransplantation
    • She is also one of the most vocal advocates of transgenderism and transgender rights, having been born as Martin Rothblatt (Howard Stern even referred to her as the “Martine Luther Queen” of the movement)
    • She is a major proponent of the philosophy that one might achieve technological immortality by digitizing oneself (and has created a robot version of her wife, Bina)
    • She started United Therapeutics because her daughter was diagnosed with Pulmonary Arterial Hypertension, a fatal condition for which, at the time of diagnosis, there was no effective treatment

    You have to have a lot of love and respect for a company that has not only delivered an impressive financial outcome ($600 million in sales a year and a $3 billion market cap is not bad!) but also maintains what looks like a very fun and unique culture (in no small part, I’m sure, because of its CEO).

  • The Goal is Not Profitability

    I’ve blogged before about how the economics of the venture industry affect how venture capitalists evaluate potential investments, the main conclusion of which is that VCs are really only interested in companies that could potentially IPO or sell for at least several hundred million dollars.

    One variation on that line of logic which I think startups/entrepreneurs oftentimes fail to grasp is that profitability is not the number one goal.

    Now, don’t get me wrong. The reason for any business to exist is, ultimately, to make a profit. And, all things being equal, investors certainly prefer more profitable companies to less profitable/unprofitable ones. But the truth of the matter is that things are rarely equal and, at the end of the day, your venture capital investors aren’t necessarily looking for profit; they are looking for a large outcome.

    Before I get accused of being supportive of bubble companies (I’m not), let me explain what this seemingly crazy concept means in practice. First of all, short-term profitability can conflict with rapid growth. This will sound counter-intuitive, but it’s the very premise for venture capital investment. Think about it: Facebook could’ve tried much harder to make a profit in its early years by cutting salaries and not investing in R&D, but that would’ve killed its ability to grow quickly. Instead, they raised venture capital and ignored short-term profitability to build out the product and market aggressively. This might seem simplistic, but I oftentimes receive pitches/plans from entrepreneurs who boast that they can achieve profitability quickly or that they don’t need to raise another round because they will be profitable soon, without ever giving any thought to what might happen to their growth rate if they ignored profitability for another quarter or year.
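    To make that tradeoff concrete, here is a deliberately simplistic back-of-the-envelope sketch in Python (every number is made up purely for illustration – this is the shape of the argument, not a model of any real company):

    # Toy comparison (all numbers invented): a startup that cuts costs to turn
    # a profit now vs. one that stays unprofitable and reinvests in growth.

    def revenue_after(years, start=10.0, growth_rate=0.2):
        """Revenue in $M after compounding at the given annual growth rate."""
        return start * (1 + growth_rate) ** years

    # Scenario A: profitable now, but growth slows to 20%/yr.
    # Scenario B: unprofitable, reinvesting everything, growing 80%/yr.
    for years in (1, 3, 5):
        a = revenue_after(years, growth_rate=0.2)
        b = revenue_after(years, growth_rate=0.8)
        print(f"Year {years}: profitable-now ${a:.0f}M vs. reinvesting ${b:.0f}M")

    # By year 5: ~$25M vs. ~$189M of revenue. If exit values roughly track
    # revenue, only the reinvestor is anywhere near venture-outcome territory.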

    Secondly, the promise of growth and future profitability can drive large outcomes. Pandora, Groupon, Enphase, Tesla, A123, and Solazyme are among some of the hottest venture-backed IPOs in recent memory, and do you know what else they all happen to share? They are very unprofitable and, to the best of my knowledge, have not yet had a single profitable year. However, the investment community has strong faith in the ability of these businesses to continue to grow rapidly and, eventually, deliver profitability. Whether or not that faith is well-placed is another question (and I have my doubts about some of the companies on that list), but as these examples illustrate, you don’t necessarily need to be profitable to get a large venture-sized outcome.

    Of course, it’d be a mistake to take this logic and assume that you never need to achieve or even think about profitability. After all, a company that is bleeding cash unnecessarily is not a good company by any definition, regardless of whether the person evaluating it is in venture capital. Furthermore, while the public market may forgive Pandora’s and Groupon’s money-losing, there’s no guarantee that it will be so forgiving of another company’s – or even of Pandora’s/Groupon’s – a few months from now.

    But what I am saying is that entrepreneurs need to be more thoughtful when approaching a venture investor with a plan built around achieving profitability quickly or not needing to raise more money, because the goal of that investor is not necessarily short-term profits.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • Our Job is Not to Make Money

    Let’s say you pitch a VC and you’ve got a coherent business plan and some thoughtful perspectives on how your business scales. Does that mean you get the venture capital investment that you so desire?

    Not necessarily. There could be many reasons for a rejection, but one that crops up a great deal is not anything intrinsically wrong with a particular idea or team, but something intrinsic to the venture capital model itself.

    One of our partners put it best when he pointed out, “Our job is not to make money, it’s to make a lot of money.”

    What that means is that venture capitalists are not just looking for a business that can make money. They are really looking for businesses which have the potential to sell for, or go public (sell stock on the NYSE/NASDAQ/etc.) at, valuations in the hundreds of millions, if not billions, of dollars.

    Why? It has to do with the way that venture capital funds work.

    • Venture capitalists raise large $100M+ funds. This is a lot of money to work with, but it’s also a burden in that the venture capital firm has to deliver a large return on that large initial amount. If you start with a $100M fund, it’s not unheard of for investors in that fund to expect $300-400M back – and you just can’t get to those kinds of returns unless you bet on companies that sell for/list on a public market for a lot of money (see the back-of-the-envelope sketch after this list).
    • Although most investments fail, big outcomes can be *really* big. For every Facebook, there are dozens of wannabe copycats that fall flat – so there is a very high risk that a venture investment will not pan out as one hopes. But the flip side is that Facebook’s outcome will likely be dozens upon dozens of times larger than its copycats’. The combination of very high risk and very high reward drives venture capitalists to chase only those companies which have a shot at becoming a *really* big outcome – doing anything else basically guarantees that the firm will not be able to deliver a large enough return to its investors.
    • Partners are busy people. A typical venture capital fund is a partnership, consisting of a number of general partners who operate the fund. A typical general partner will, in addition to looking for new deals, be responsible for/advise several companies at once. This is a fair amount of work per company, as it involves helping companies recruit, develop their strategy, connect with key customers/partners/influencers, deal with operational/legal issues, and raise money. As a result, while the amount of work can vary quite a bit, this basically limits the number of companies a partner can commit to (and, hence, invest in). That limit encourages partners to favor companies which could end up with a larger outcome over smaller ones, because below a certain size, the firm’s return profile and the limits on a partner’s time just don’t justify having a partner get too involved.
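    Here is that fund math as a minimal sketch; the specific assumptions (20 portfolio companies, a ~15% stake at exit, only 2 real winners) are entirely mine and purely illustrative, not industry constants:

    # Back-of-the-envelope venture fund math (illustrative assumptions only).
    fund_size = 100e6                    # a $100M fund
    target_return = 3 * fund_size        # LPs expect roughly 3x back
    ownership_at_exit = 0.15             # assumed stake the fund holds at exit
    winners = 2                          # assume only ~2 of ~20 bets work out

    # If the winners have to carry the whole fund, each must exit at:
    required_exit = target_return / winners / ownership_at_exit
    print(f"Each winner must exit at ~${required_exit / 1e9:.1f}B")  # ~$1.0B

    That is roughly a billion-dollar exit per winner – which is why “this business will make money” is, by itself, not a sufficient pitch.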

    The result? Venture capitalists have to turn down many pitches, not because they don’t like the idea or the team, and not even necessarily because they don’t think the company will make money in a reasonably short time, but because they don’t think the idea has a good shot at being something as big and game-changing as Google, Genentech, and VMware were. And, in fact, the not-often-heard truth is that a lot of the endings which entrepreneurs think of as great, and which are frequently featured on tech blogs like VentureBeat and TechCrunch (e.g., selling your company to Google for $10M), are actually quite small (and possibly even failures) from the perspective of a large venture capital firm.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • Do You Have the Guts for Nori?

    Source: Precision Nutrition

    The paper I will talk about this month is from April of this year and highlights the diversity of our “gut flora” (a pleasant way to describe the many bacteria which live in our digestive tract and help us digest the food we eat). Specifically, this paper highlights how a particular bacterium in the digestive tracts of some Japanese individuals has picked up a unique ability to digest certain sugars which are common in marine plants (e.g., Porphyra, the seaweed used to make sushi) but not in terrestrial plants.

    Interestingly, the researchers weren’t originally focused on how gut flora function at all, but on understanding how marine bacteria digest marine plants. They started by studying a particular marine bacterium, Zobellia galactanivorans, which was known for its ability to digest certain types of algae. Scanning the genome of Zobellia, the researchers were able to identify a few genes which were similar to known sugar-digesting enzymes but didn’t seem to act on the “usual” plant sugars.

    Two of the identified genes, which they called PorA and PorB, were found to be very selective in the type of plant sugar they digested. In the chart below (from Figure 1), 3 different plants are characterized along a spectrum showing whether they have more LA (4-linked 3,6-anhydro-α-L-galactopyranose) chemical groups (red) or more L6S (4-linked α-L-galactopyranose-6-sulphate) groups (yellow). Panel b on the right shows the ¹H-NMR spectrum associated with these different sugar mixes – a chemical technique for verifying which sugar groups are present.

    Source: Figure 1, Hehemann et al.

    These mixes were subjected to PorA and PorB, as well as to AgaA (a sugar-digesting enzyme which works mainly on LA-type sugars like agarose). The bar charts in the middle show how active the respective enzymes were (as indicated by the amount of plant sugar digested).

    As you can see, PorA and PorB are only effective on L6S-type sugar groups, not LA-type sugar groups. The researchers wondered if they had discovered the key class of enzyme responsible for allowing marine life to digest marine plant sugars, and scanned other genomes for enzymes similar to PorA and PorB. What they found was very interesting:

    Source: Figure 3, Hehemann et al.

    What you see above is an evolutionary family tree for PorA/PorB-like genes. The red and blue boxes represent PorA/PorB-like genes which target “usual plant sugars”, while the yellow boxes show the enzymes which specifically target the sugars found in nori (Porphyra, hence the enzymes are called porphyranases). All the enzymes marked with solid diamonds are actually found in Z. galactanivorans (and were hence dubbed PorC, PorD, and PorE – clearly not the most imaginative naming convention). The other identified genes, however, all belonged to marine bacteria… with the notable exception of Bacteroides plebeius, marked with an open circle. And Bacteroides plebeius (at least to the knowledge of the researchers at the time of this publication) has only been found in the guts of certain Japanese people!

    The researchers scanned the Bacteroides plebeius genome and found that the bacterium actually had a sizable chunk of genetic material which was a much better match for marine bacteria than for other similar Bacteroides strains. The researchers concluded that the best explanation is that Bacteroides plebeius picked up its unique ability to digest marine plants not on its own, but from marine bacteria (in a process called Horizontal Gene Transfer, or HGT), most probably from bacteria present on dietary seaweed. Or, to put it more simply: your gut bacteria can “steal” genes/abilities from bacteria on the food you eat!

    Cool! While this is a conclusion which we can probably never truly prove (it’s an informed hypothesis based on genetic evidence), this finding does make you wonder if a similar genetic screening process could identify if our gut flora have picked up any other genes from “dietary bacteria.”
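    For fun, here is a toy sketch of the kind of crude first-pass similarity scan this suggests. To be clear, this is NOT the authors’ actual pipeline (they used proper sequence-homology searches across full genomes); it only illustrates the idea of flagging a gene that looks more like a foreign genome than like its own relatives:

    # Toy HGT screen (illustrative only): compare the k-mer content of a gut
    # bacterium's gene against marine-bacterium genes vs. typical gut genes.

    def kmers(seq, k=6):
        """Set of all overlapping k-length substrings of a DNA sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def similarity(a, b, k=6):
        """Jaccard similarity (0 to 1) between two sequences' k-mer sets."""
        ka, kb = kmers(a, k), kmers(b, k)
        return len(ka & kb) / len(ka | kb)

    # Hypothetical, made-up sequences purely for illustration:
    gut_gene    = "ATGGCGTTACCGGATTACCGGTTGACCA"
    marine_gene = "ATGGCGTTACCGGATTACCGGTTGTCCA"
    typical_gut = "ATGAAATTTCCCGGGAAATTTCCCGGGA"

    print(similarity(gut_gene, marine_gene))  # high -> candidate for HGT
    print(similarity(gut_gene, typical_gut))  # low  -> unlike its "relatives"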

    Paper: Hehemann et al., “Transfer of carbohydrate-active enzymes from marine bacteria to Japanese gut microbiota.” Nature 464: 908-912 (Apr 2010) – doi:10.1038/nature08937

    Check out my other academic paper walkthroughs/summaries

  • How You Might Cure Asian Glow

    The paper I read is something that is very near and dear to my heart. As is commonly known, individuals of Asian ancestry are more likely to experience dizziness and flushed skin after drinking alcohol. This is due to the prevalence in Asian populations of a genetic defect affecting an enzyme called Aldehyde Dehydrogenase 2 (ALDH2), which processes one of the by-products of alcohol consumption: acetaldehyde.

    In people with the genetic defect, ALDH2 works very poorly. As a result, they build up higher levels of acetaldehyde, which leads them to get drunk (and thus hung-over/sick/etc.) quicker. This is a problem for someone like me, who needs to drink a (comically) large amount of water to properly process wine/beer/liquor. Interestingly, the anti-drinking drug Disulfiram (sold as “Antabuse” and “Antabus”) helps alcoholics stay off alcohol by basically shutting down a person’s ALDH2, effectively giving them “Asian alcohol-induced flushing syndrome” and making them get drunk and sick very quickly.

    Source: Wikipedia

    So, what can you do? At this point, nothing really (except either avoid alcohol or drink a ton of water when you do drink). But I look forward to the day when there may actually be a solution: a group at Stanford recently identified a small molecule, Alda-1 (chemical structure above), which not only increases the effectiveness of normal ALDH2 but can also help “rescue” defective ALDH2!

    Have we found the molecule which I have been searching for ever since I started drinking? Jury’s still out, but the same group at Stanford partnered with structural biologists at Indiana University to conduct some experiments on Alda-1 to try to find out how it works.

    Source: Figure 4, Perez-Miller et al

    To do this (and this is why the paper was published in Nature Structural and Molecular Biology rather than another journal), they used a technique called X-ray crystallography to “see” if (and how) Alda-1 interacts with ALDH2. Some of the results of these experiments are shown above. On the left, Panel B (on top) shows a 3D structure of the “defective” version of ALDH2. If you’re new to structural biology papers, this will take some getting used to, but if you look carefully, you can see that ALDH2 is a tetramer: there are 4 identical pieces (in the top-left, top-right, bottom-left, and bottom-right) which are attached together in the middle.

    It’s not clear from this picture, but the defective version of the enzyme differs from the normal one in that it is unable to maintain the 3D structure needed to link up with a coenzyme called NAD+ (a chemical needed by enzymes which carry out this sort of reaction to work properly), or even to carry out the reaction at all (the “active site”, the part of the enzyme which actually does the chemistry, is “disrupted” in the mutant).

    So what does Alda-1 do, then? In the bottom (Panel C), you can see where the Alda-1 molecules (colored in yellow) sit when they interact with ALDH2. While the yellow molecules have a number of impacts on ALDH2’s 3D structure, the most obvious changes are highlighted in pink (those have no clear counterpart in Panel B). This is the secret of Alda-1: it actually changes the shape of ALDH2, (partially) restoring the enzyme’s ability to bind NAD+ and carry out the chemical reactions needed to process acetaldehyde, all without directly touching the active site (something you can’t see in the panel I shared above, but can make out from other X-ray crystallography models in the paper).

    The result? If you look at the chart below (Panel A), you’ll see two relationships at play. First, the greater the amount of the coenzyme NAD+ (on the horizontal axis), the faster the reaction speed (on the vertical axis). But, if you increase the amount of Alda-1 from 0 uM (the bottom-most curve) to 30 uM (the top-most curve), you see a dramatic increase in the enzyme’s reaction speed for the same amount of NAD+. So, does Alda-1 activate ALDH2? Judging from this chart, it definitely does.

    Source: Figure 4, Perez-Miller et al
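    For the kinetics-minded: those curves have the shape you would expect if ALDH2 follows standard Michaelis-Menten behavior with respect to NAD+ – my gloss on the plot, not a model the paper spells out in this figure:

    v = \frac{V_{\max}\,[\mathrm{NAD^+}]}{K_m + [\mathrm{NAD^+}]}

    In that picture, an activator like Alda-1 shows up as a higher effective V_max and/or a lower apparent K_m; either way, the whole curve shifts up and to the left as the Alda-1 concentration rises, which is exactly what Panel A shows.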

    Alda-1 is particularly interesting because most of the chemicals/drugs we are able to develop work by breaking, de-activating, or inhibiting something. Have a cold? Break the chemical pathways which lead to runny noses. Suffering from depression? De-activate the process which cleans up serotonin (“happiness” chemicals in the brain) too quickly. After all, it’s much easier to break something than it is to fix/create something. But Alda-1 is actually an activator (rather than a de-activator), which the authors of the study leave as a tantalizing opportunity for medical science:

    This work suggests that it may be possible to rationally design similar molecular chaperones for other mutant enzymes by exploiting the binding of compounds to sites adjacent to the structurally disrupted regions, thus avoiding the possibility of enzymatic inhibition entirely independent of the conditions in which the enzyme operates.

    If only it were that easy (it’s not)…

    Where should we go from here? Frankly, while the paper tackled a very interesting topic in a pretty rigorous fashion, I felt that a lot of the conclusions being drawn were not clear from the presented experimental results (which is why this post is a bit on the vague side on some of those details).

    I certainly understand the difficulty when the study is of phenomena which are molecular in nature (does the enzyme work? are the amino acids in the right location?). But I personally felt a significant part of the paper was more conjecture than evidence, and while I’m sure the folks making the hypotheses are very experienced, I would like to see more experimental data to back up their theories. A well-designed set of site-directed mutagenesis experiments (mutating specific parts of ALDH2 in the lab to test the hypotheses that the group put out) and further well-tailored rounds of X-ray crystallography could help shed a little more light on their fascinating idea.

    Paper: Perez-Miller et al., “Alda-1 is an agonist and chemical chaperone for the common human aldehyde dehydrogenase 2 variant.” Nature Structural and Molecular Biology 17:2 (Feb 2010) – doi:10.1038/nsmb.1737

    Check out my other academic paper walkthroughs/summaries

  • I Know Enough to Get Myself in Trouble

    One of the dangers of a consultant looking at tech is that he can get lost in jargon. A few weeks ago, I did a little research on some of the most cutting-edge software startups in the cloud computing space (the idea that you can use a computing feature/service without knowing anything about the technology infrastructure used to provide it – e.g., Gmail and Yahoo Mail on the consumer side, services like Amazon Web Services and Microsoft Azure on the business side). As a result, I’ve looked at the product offerings from companies like Nimbula, Cloudera, Clustrix, Appistry, Elastra, and MaxiScale, to name a few. And, while I know enough about cloud computing to understand, at a high level, what these companies do, the use of unclear terminology sometimes makes it very difficult to pierce the “fog of marketing” and really get a good understanding of the various products’ strengths and weaknesses.

    Is it any wonder that, at times, I feel like this:

    Source: Dilbert

    Yes, it’s all about that “integration layer”… My take? A great product should not need to hide behind jargon.

  • Diet Coke + Mentos = Paper

    Unless you just discovered YouTube yesterday, you’ve probably seen countless videos of (and maybe even have tried?) the infamous Diet Coke + Mentos reaction… which brings us to the subject of this month’s paper.

    An enterprising physics professor at Appalachian State University decided to have her sophomore physics class take a fairly rigorous look at what drives the Diet Coke + Mentos reaction and what factors might influence its strength and speed. Not only were they able to publish their results in the American Journal of Physics, but the students were also given an opportunity to present their findings in a poster session (Professor Coffey reflected on the experience in a presentation she gave). In my humble opinion, this is science education at its finest: instead of having students re-hash boring experiments whose results they already know, this let them do fairly original research in a field they probably found more interesting than the typical science lab course.

    So, what did they find?

    The first thing they found is that it’s not an acid-base reaction. A lot of people, myself included, believed the Diet Coke + Mentos reaction was the same as the baking soda + vinegar “volcano” reactions we all did as kids. Apparently, we were dead wrong, as the paper points out:

    The pH of the diet Coke prior to the reaction was 3.0, and the pH of the diet Coke after the mint Mentos reaction was also 3.0. The lack of change in the pH supports the conclusion that the Mint Mentos–Diet Coke reaction is not an acid-base reaction. This conclusion is also supported by the ingredients in the Mentos, none of which are basic: sugar, glucose, syrup, hydrogenated coconut oil, gelatin, dextrin, natural flavor, corn starch, and gum arabic … An impressive acid-base reaction can be generated by adding baking soda to Diet Coke. The pH of the Diet Coke after the baking soda reaction was 6.1, indicating that much of the acid present in the Diet Coke was neutralized by the reaction.

    Source: Table 1, Coffey, American Journal of Physics
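    A quick bit of arithmetic (mine, not the paper’s) to underline how telling those pH numbers are. pH is a logarithmic scale, so

    \mathrm{pH} = -\log_{10}[\mathrm{H^+}] \quad\Rightarrow\quad \frac{[\mathrm{H^+}]_{\mathrm{pH\,3.0}}}{[\mathrm{H^+}]_{\mathrm{pH\,6.1}}} = \frac{10^{-3.0}}{10^{-6.1}} = 10^{3.1} \approx 1300

    In other words, the baking soda neutralized roughly 99.9% of the free acid in the Diet Coke, while the Mentos left the acid concentration completely untouched – about as clean a “no acid-base chemistry here” result as you could ask for.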

    Secondly, the “reaction” is not chemical (no new compounds are created), but a physical response: the Mentos makes bubbles easier to form. The Mentos triggers bubble formation because its surface is extremely rough, which gives bubbles sites to aggregate on (much like adding a string/popsicle stick to a supersaturated mixture of sugar and water is used to make rock candy). But that doesn’t explain why the Mentos + Diet Coke reaction works so well. The logic blew my mind but, in retrospect, is pretty simple. Certain liquids are more “bubbly” by nature – think soapy water vs. regular water. Why? Because the energy needed to form a bubble is lower than the energy available from the environment (e.g., thermal energy). So, the question is: what makes a liquid more “bubbly”? One way is to heat the liquid (heating up Coke makes it more bubbly because heating the carbon dioxide inside the soda gives the gas more thermal energy to draw upon), which the students were able to confirm by looking at how much mass was lost during a Mentos + Diet Coke reaction at three different temperatures (Table 3 below):

    Source: Table 3, Coffey, American Journal of Physics

    What else? It turns out that the other chemicals a liquid has dissolved in it can change the ease with which bubbles are made. Physicists/chemists will recognize this “ease” as surface tension (how tightly the surface of a liquid pulls on itself), which you can see visually as a change in the contact angle (the angle that a bubble forms against a flat surface, see below):

    Source: Contact angle description from presentation

    The larger the angle, the stronger the surface tension (the more tightly the liquid pulls in on itself to become a sphere). So, what happens when we add the artificial sweetener aspartame and the preservative potassium benzoate (both ingredients in Diet Coke) to water? As you can see in Figure 4 below, the contact angles in (b) [aspartame] and (c) [potassium benzoate] are smaller than in (a) [pure water]. Translation: if you add aspartame and/or potassium benzoate to water, you reduce the amount of work the solution must do to create a bubble. Table 4, below that, shows the contact angles of a variety of solutions the students tested, as well as the amount of work needed to create a bubble relative to pure water:

    Source: Figure 4, Coffey, American Journal of Physics
    Source: Table 4, Coffey, American Journal of Physics

    This table also shows why you use Diet Coke rather than regular Coke (basically sugar water) for the Mentos trick – regular Coke has a higher contact angle (and needs ~20% more energy to make a bubble).
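    For the physics-inclined, this is classical nucleation theory at work. The paper frames things in terms of contact angles and relative work; the textbook energy barrier for forming a bubble in the bulk liquid (the standard result, not a formula taken from the paper) makes the surface tension dependence explicit:

    \Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta P^{2}}

    Here \gamma is the liquid’s surface tension and \Delta P is the pressure difference driving bubble growth. Because the barrier scales as \gamma^3, even a modest drop in surface tension (courtesy of aspartame or potassium benzoate) makes bubbles dramatically easier to form – and nucleating on a rough surface like a Mentos lowers the barrier further still.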

    Another factor the paper considers is how long the dropped Mentos takes to sink to the bottom. The faster a Mentos falls, the more bubbles form at the bottom of the bottle, and thus the longer the average distance a bubble needs to travel to reach the surface. Since rising bubbles themselves attract more bubbles, the Mentos which fall to the bottom the fastest produce the strongest eruptions. As the paper points out:

    The speed with which the sample falls through the liquid is also a major factor. We used a video camera to measure the time it took for Mentos, rock salt, Wint-o-Green Lifesavers, and playground sand to fall through water from the top of the water line to the bottom of a clear 2 l bottle. The average times were 0.7 s for the Mentos, 1.0 s for the rock salt and the Lifesavers, and 1.5 s for the sand … If the growth of carbon dioxide bubbles on the sample takes place at the bottom of the bottle, then the bubbles formed will detach from the sample and rise up the bottle. The bubbles then act as growth sites, where the carbon dioxide still dissolved in the solution moves into the rising bubbles, causing even more liberation of carbon dioxide from the bottle. If the bubbles must travel farther through the liquid, the reaction will be more explosive.

    So, in conclusion, what makes a Diet Coke + Mentos reaction stronger?

    • Temperature (hotter = stronger)
    • Adding substances which reduce the surface tension/contact angle
    • Increasing the speed at which the Mentos sink to the bottom (faster = stronger)

    I wish I had done something like this when I was in college! The paper itself also goes into a lot of other things, like the use of atomic force microscopy and scanning electron microscopy to measure the “roughness” of the Mentos surface, so if you’re interested in other factors that can affect the strength of the reaction (or if you’re a science teacher looking for a cool project for your students), I’d strongly encourage taking a look at the paper!

    Paper: Coffey, T. “Diet Coke and Mentos: What is really behind this physical reaction?” American Journal of Physics 76:6 (Jun 2008) – doi:10.1119/1.2888546

    Check out my other academic paper walkthroughs/summaries