

Boa Constrictors Listen to Your Heart So They Know When You’re Dead

Here’s to the first paper a month post for 2012!


For January I decided to blog a paper I heard about on the excellent Nature podcast, describing a deliciously simple and elegant experiment to answer a basic question: given how much time and effort boa constrictors (like the one above, photo taken by Paul Whitten) need to kill prey by squeezing them to death, how do they know when to stop squeezing?

Hypothesizing that boa constrictors could sense the heartbeat of their prey, some enterprising researchers from Dickinson College decided to test the idea by fitting dead rats with bulbs connected to water pumps (so that the researchers could simulate a heartbeat) and tracking how long and hard the boas would squeeze:

    • rats without a “heartbeat” (white)
    • rats with a “heartbeat” for 10 min (gray)
    • rats with a continuous “heartbeat” (black)

The results are shown in Figure 2 (to the right), with the bar colors matching the experimental groups above. Figure 2a (top) shows how long the boas squeezed, whereas Figure 2b (bottom) shows the total “effort” exerted by the boas. As is clear from the chart, the longer the simulated heartbeat went, the longer and harder the boas would squeeze.

Conclusion? I’ll let the paper speak for itself: “snakes use the heartbeat in their prey as a cue to modulate constriction effort and to decide when to release their prey.”

Interestingly, the paper goes a step further for those of us who aren’t ecology experts and notes that being attentive to heartbeat would probably be pretty irrelevant in the wild for small mammals (which, ironically, includes rats) and birds, which die pretty quickly after being constricted. Where this type of attentiveness to heart rate is useful is in reptilian prey (crocodiles, lizards, other snakes, etc.), which can survive with reduced oxygen for longer. From that observation, the researchers concluded that listening for heart rate probably evolved early in evolutionary history, at a time when the main prey for snakes were other reptiles rather than mammals and birds.

In terms of where I’d go next after this, my main point of curiosity is whether boa constrictors are listening/feeling for any other signs of life (e.g., movement or breathing). Obviously they’re sensitive to heart rate, but if they were given an animal with simulated breathing or movement, would that change their constricting activity as well? After all, I’m sure the creative folks who made an artificial water-pump heart can find ways to build an artificial diaphragm and limb muscles… right? 🙂

(Image credit – boa constrictor: Paul Whitten) (Figures from paper)

Paper: Boback et al., “Snake modulates constriction in response to prey’s heartbeat.” Biol. Lett. 19 Dec 2011. doi: 10.1098/rsbl.2011.1105


Mosquitoes are Drawn to Your Skin Bacteria

There are only two more days left in 2011, so it’s time for my final paper a month post of the year!

Like last month’s paper, this month’s (from the open access journal PLoS ONE) is again about how the bacteria that have decided to call our bodies home affect our health. But instead of the bacteria living in our gut, this month is about the bacteria living on our skin.

It’s been known that the bacteria living on our skin help give us our particular odors. So the researchers wondered if the mosquitoes responsible for transmitting malaria (Anopheles) are more or less drawn to different individuals based on the scent our skin-borne bacteria impart on us (for the record, before you freak out about bacteria on your skin: like the bacteria in your gut, the bacteria on your skin are natural and play a key role in keeping your skin healthy).

Looking at 48 individuals, they noticed huge variation in attractiveness to Anopheles mosquitoes (measured by how strongly mosquitoes prefer to fly towards a chamber holding a particular individual’s skin extract versus a control), which they were able to trace to two things. The first is the amount of bacteria on your skin. As shown in Figure 2 below, the more bacteria you have on your skin (the higher your “log bacterial density”), the more attractive you seem to be to mosquitoes (the higher your mean relative attractiveness).

Figure 2

The second thing they noticed was that the type of bacteria also seemed to be correlated with attractiveness to mosquitoes. Using DNA sequencing technology, they were able to get a mini-census of what sort of bacteria were present on the skin of the different participants. Sadly, they didn’t show any pretty figures for the analysis they conducted on two common types of bacteria (Staphylococcus and Pseudomonas), but, to quote from the paper:

The abundance of Staphylococcus spp. was 2.62 times higher in the HA [Highly Attractive to mosquitoes] group than in the PA [Poorly Attractive to mosquitoes] group and the abundance of Pseudomonas spp. 3.11 times higher in the PA group than in the HA group.

Using further genetic analyses, they were also able to show a number of other types of bacteria that were correlated with one group or the other.

So, what did I think? While there’s a lot of interesting data here, I think the story could’ve been tighter. First and foremost, for obvious reasons, correlation does not mean causation. This was not a true controlled experiment – we don’t know for a fact whether more (or specific types of) bacteria cause mosquitoes to be drawn to a person, or whether something else explains both the amount/type of bacteria and the attractiveness of an individual’s skin scent to a mosquito. Secondly, Figure 2 leaves much to be desired in terms of establishing a strong trendline. Yes, if I squint (and ignore their very leading trendline) I can see a positive correlation – but truth be told, the scatterplot looks like a giant mess, especially if you include the red squares that go with “Not HA or PA”. For a future study, I think it’d be great if they could get around this and show stronger causation with direct experimentation (e.g., extracting the odorants from Staphylococcus and/or Pseudomonas and adding them to a “clean” skin sample).

With that said, I have to applaud the researchers for tackling a fascinating topic from a very different angle. I’ve blogged before about papers on dealing with malaria, but the subject matter is usually focused on how to directly kill or impede the parasite (Plasmodium falciparum). This is the first treatment I’ve seen of the “ecology” of malaria – specifically, the ecology of the bacteria on your skin! While the authors don’t promise a “cure for malaria,” you can tell they’re excited about what they’ve found and about the potential to find ways other than killing parasites/mosquitoes to help deal with malaria, and I look forward to seeing the other ways our skin bacteria impact our lives.

(Figure 2 from paper)

Paper: Verhulst et al. “Composition of Human Skin Microbiota Affects Attractiveness to Malaria Mosquitoes.” PLoS ONE 6(12). 17 Nov 2011. doi:10.1371/journal.pone.0028991


Fat Flora


November’s paper was published in Nature in 2006, and covers a topic I’ve become increasingly interested in: the impact of the bacteria that have colonized our bodies on our health (something I’ve blogged about here and here).

The idea that our bodies are, in some ways, more bacteria than human (there are 10x more gut bacteria – or flora – than human cells in our bodies) and that those bacteria can play a key role in our health is not only mind-blowing, it opens up another potential area for medical/life sciences research and future medicines/treatments.

In the paper, a genetics team from Washington University in St. Louis explored a very basic question: are the gut bacteria from obese individuals different from those of non-obese individuals? To study the question, they performed two types of analyses on a set of mice with a genetic defect leaving them unable to “feel full” (and hence likely to become obese) and on genetically similar mice lacking that defect (the so-called “wild type” control).

The first was a series of genetic experiments comparing the bacteria found within the gut of obese mice with those from the gut of “wild-type” mice (this sort of comparison is something the field calls metagenomics). In doing so, the researchers noticed a number of key differences in the “genetic fingerprint” of the two sets of gut bacteria, especially in the genes involved in metabolism.

But what did that mean for the overall health of the animal? To answer that question, the researchers did a number of experiments, two of which I will talk about below. First, they did a very simple chemical analysis (see figure 3b to the left) comparing the “leftover energy” in the waste (aka poop) of the obese mice to the waste of wild-type mice (and, yes, all of this was controlled for the amount of waste/poop). Lo and behold, the obese mice (the white bar) seemed to have gut bacteria which were significantly better at pulling calories out of the food, leaving less “leftover energy”.

While an interesting result, especially when thinking about some of the causes and effects of obesity, a skeptic might look at that data and say it’s inconclusive about the role of gut bacteria in obesity – after all, obese mice could have all sorts of other changes which make them more efficient at pulling energy out of food. To address that, the researchers did a very elegant experiment involving fecal transplant: that’s right, colonizing one mouse with the bacteria from another mouse (by transferring poop). The figure to the right (figure 3c) shows the results. After two weeks, despite starting out at about the same weight and eating similar amounts of the same food, wild-type mice that received bacteria from other wild-type mice showed an increase in body fat of about 27%, whereas the wild-type mice that received bacteria from the obese mice showed an increase of about 47%! Clearly, the gut bacteria in obese mice are playing a key role in calorie uptake!

In terms of areas of improvement, my main complaint about this study is just that it doesn’t go far enough. The paper never gets too deep into what exactly the bacteria in each sample were, and we never really get a sense of the true variation: how much do gut bacteria vary from mouse to mouse? Are they completely different bacteria? The same bacteria in different numbers? The same bacteria, each functioning differently? Do two obese mice have the same bacteria? What about a mouse that isn’t quite obese but not quite wild-type either? Furthermore, the paper doesn’t show us what happens if an obese mouse has its bacteria replaced with the bacteria from a wild-type mouse. These are all interesting questions that would really help researchers and doctors understand what is happening.

But, despite all of that, this was a very interesting finding and has major implications for doctors and researchers in thinking about how our complicated flora impact and are impacted by our health.

(Image credit) (Figure 3 from the paper)

Paper: Turnbaugh et al., “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature (444). 21/28 Dec 2006. doi:10.1038/nature05414


Homing Stem Cell Missile Treatments

Another month, another paper.

This month’s paper is about stem cells: those unique cells within the body which have the capacity to assume different roles. While people have talked at length about the potential for stem cells to function as therapies, one thing holding them back (with the main exception being bone marrow cells) is that it’s very difficult to get stem cells exactly where they need to be.

With bone marrow transplants, hematopoietic stem cells naturally “home” (like a missile) to where they need to be (the blood-making areas of the body). But with other types of stem cells, that is not so readily true, making it difficult or impossible to use the bloodstream as a means of administering stem cell therapies. Of course, you could try to inject, say, heart muscle stem cells directly into the heart, but that’s not only risky and difficult, it’s also artificial enough that you’re not necessarily providing the heart muscle stem cells with the right triggers/indicators to push them towards becoming normal, functioning heart tissue.

Researchers at Brigham & Women’s Hospital and Mass General Hospital published an interesting approach to this problem in the journal Blood (yes, that’s the real name). They used a unique feature of white blood cells that I blogged about very briefly before called leukocyte extravasation, which lets white blood cells leave the bloodstream towards areas of inflammation.


The process is described in the image above, but it basically involves the sugars on the white blood cell’s surface, called Sialyl Lewis X (SLeX), sticking to the walls of blood vessels near sites of tissue damage. This causes the white blood cell to start rolling (rather than flowing through the blood) which then triggers other chemical and physical changes which ultimately leads to the white blood cell sticking to the blood vessel walls and moving through.

The researchers “borrowed” this ability of white blood cells for their mesenchymal stem cells. They took mesenchymal stem cells from a donor mouse and chemically coated them with SLeX – the hope being that the stem cells would start rolling anytime they were in the bloodstream near a site of inflammation/tissue damage. After verifying that these coated cells still functioned (they could still become different types of cells, etc.), they injected them into mice (which received injections in their ears of a substance called LPS to simulate inflammation) and used video microscopes to measure the speed of different mesenchymal stem cells in the bloodstream. In Figures 2A and 2B to the left, the mesenchymal stem cell coated in SLeX is shown in green and a control mesenchymal stem cell is shown in red. What you’re seeing is the same spot in the ear of a mouse under inflammation, with the camera rolling at 30 frames per second. As you can see, the red cell (the untreated one) moves much faster than the green – in the same number of frames, it’s already left the vessel area! That, and a number of other measurements, led the researchers to conclude that their SLeX coat actually got their mesenchymal stem cells to slow down near points of inflammation.

But, does this slowdown correspond with the mesenchymal stem cells exiting the bloodstream? Unfortunately, the researchers didn’t provide any good pictures, but they did count the number of different types of cells that they observed in the tissue. When it came to ears with inflammation (what Figure 4A below refers to as “LPS ear”), the researchers saw an average of 48 SLeX-coated mesenchymal stem cells versus 31 uncoated mesenchymal stem cells within their microscopic field of view (~55% higher). When it came to the control (the “saline ear”), the researchers saw 31 SLeX-coated mesenchymal stem cells versus 29 uncoated (~7% higher). Conclusion: yes, coating mesenchymal stem cells with SLeX and introducing them into the bloodstream lets them “home” to areas of tissue damage/inflammation.
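For the curious, the arithmetic behind those percentages works out like this (the cell counts per field of view come from the paragraph above; everything else is just basic math):

```python
# Sanity-checking the relative cell counts reported in the paper
# (average cells per microscopic field of view, from the post above).
def pct_increase(treated, control):
    """Percent increase of the treated count over the control count."""
    return 100.0 * (treated - control) / control

lps_ear = pct_increase(48, 31)     # inflamed ("LPS") ear: SLeX-coated vs uncoated
saline_ear = pct_increase(31, 29)  # control ("saline") ear
print(round(lps_ear), round(saline_ear))  # 55 7
```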

As you can imagine, this is pretty cool – a simple chemical treatment could help us turn non-bone-marrow stem cells into treatments you might receive via IV someday!

But, despite the cool finding, there are a number of improvements this paper needs. Granted, I received it pre-print (so I’m sure more edits are coming), but my main concerns are around the quality of the figures presented. Without any clear time indicators or pictures, it’s hard to know exactly what the researchers are seeing. Furthermore, it’s difficult to tell for sure whether the treatment did anything to the underlying stem cell function. The supplemental figures of the paper are only the first step in what, to me, needs to be a long and deep investigation into whether those cells do what they’re supposed to – otherwise, this method of administering stem cell therapies is dead in the water.

(Figures from paper) (Image credit: Leukocyte Extravasation)

Paper: Sarkar et al., “Engineered Cell Homing.” Blood. 27 Oct 2011 (online print). doi:10.1182/blood-2010-10-311464



I’m pretty late for my September paper of the month, so here we go.

“Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore – that’s so last century, you study proteomics now. And who studies metabolism? It’s all about metabolomics. There’s even a (pretty nifty) blog covering this space with the semi-irreverent name “Omics! Omics!”.

It’s in the spirit of “Omics” that I chose a Science paper from researchers at the NIH, because it was the first time I had ever encountered the term “antibodyome”. For those of you who don’t know, antibodies are the “smart missiles” of your immune system – they are built to recognize and attack only one specific target (e.g., a particular protein on a bacterium or virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies to make research tools (e.g., fluorescent antibodies that label specific things) and therapies (e.g., antibodies against cancer-associated proteins used as anti-cancer drugs).

How the immune system does this is a fascinating story in and of itself. In a process called V(D)J recombination, your immune system’s B-cells mix, match, and scramble certain pieces of your genetic code to produce a wide range of antibodies, hitting potentially every structure they could conceivably see. And once an antibody “kind of sticks” to something, the B-cells undergo a process called affinity maturation, introducing all sorts of mutations in the hopes of creating an even better antibody.
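To get a feel for the combinatorial scale of that mix-and-match step, here’s a toy back-of-the-envelope calculation. The segment counts below are rough, assumed figures for the human antibody heavy chain (not from this paper), and the real repertoire is vastly larger once you add junctional insertions/deletions, affinity maturation, and heavy/light chain pairing:

```python
# Toy estimate of V(D)J combinatorial diversity. Segment counts are
# rough, assumed values for the human heavy chain; real diversity is
# far larger thanks to junctional mutations and chain pairing.
V_SEGMENTS = 40   # assumed functional V gene segments
D_SEGMENTS = 23   # assumed functional D gene segments
J_SEGMENTS = 6    # assumed functional J gene segments

combinations = V_SEGMENTS * D_SEGMENTS * J_SEGMENTS
print(combinations)  # 5520 distinct V-D-J picks before any mutation
```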

Which brings us to the paper I picked – the researchers analyzed a couple of particularly effective antibodies targeting HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the HIV virus, but when they took a closer look at the 3D structures and the B-cell genetic code which made them, they found that the antibodies were quite different from one another (see Figure 3C below).

What’s more, not only were the antibodies fairly distinct from one another, they each showed *significant* affinity maturation – while a typical antibody has 5-15% of its underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient – in effect, the “antibodyome”, in the same way that the patient’s genome is all of his/her genes – and, along with data from other patients, they were able to construct a “family tree” of these antibodies (see Figure 6C below).

The analysis shows that many of the antibodies were derived from the same initial genetic VDJ “mix-and-match” but that afterwards, there were quite a number of changes made to that code to get the situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

While I wish the paper had probed deeper with actual experimentation (e.g., artificially using this method to create other antibodies with similar behavior), it goes a long way toward establishing an early picture of what “antibodyomics” is. Rather than study the total impact of an immune response or just the immune capabilities of one particular B-cell/antibody, this sort of genetic approach lets researchers get a detailed yet comprehensive look at where the body’s antibodies are coming from. Hopefully, longer term, this also turns into a way for researchers to make better vaccines.

(Figure 2 and 6 from paper)

Paper: Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532


Atlantic Cod Are Not Your Average Fish

Another month, another paper, and as with last month’s, I picked another genetics paper, this time covering an interesting quirk of immunology.


This month’s paper from Nature talks about a species of fish that has made it to the dinner plates of many: the Atlantic Cod (Gadus morhua). The researchers applied shotgun sequencing techniques to look at the DNA of the Atlantic Cod. What they found about the Atlantic Cod’s immune system was very puzzling: vertebrates (fish, birds, reptiles, and mammals, humans included!) tend to rely on proteins called the Major Histocompatibility Complex (MHC) to trigger their adaptive immune systems. There tend to be two kinds of MHC proteins, conveniently called MHC I and MHC II:

    • MHC I is found on almost every cell in the body – they act like a snapshot X-ray of sorts for your cells, revealing what’s going on inside. If a cell has been infected by an intracellular pathogen like a virus, the MHC I complexes on the cell will reveal abnormal proteins (an abnormal snapshot X-ray), triggering an immune response to destroy the cell.
    • MHC II is found only on special cells called antigen-presenting cells. These cells are like advance scouts for your immune system – they roam your body searching for signs of infection. When they find it, they reveal these telltale abnormal proteins to the immune system, triggering an immune response to clear the infection.

The genome of the Atlantic cod, however, seemed to be completely lacking in genes for MHC II! In fact, when the researchers used computational methods to see how the Atlantic cod’s genome aligned with another fish species, the Stickleback (Gasterosteus aculeatus), it looked as if someone had simply cut the MHCII genes (highlighted in yellow) out! (see Supplemental Figure 17 below)


Yet, despite not having MHC II, Atlantic cod do not appear to suffer any serious susceptibility to disease. How could this be if they’re lacking one entire arm of their disease detection? One possible answer: they seem to have compensated for their lack of MHC II by beefing up on MHC I! By looking at the RNA (the “working copy” of the DNA that is edited and used to create proteins) from Atlantic cod, the researchers were able to see a diverse range of MHC I complexes, which you can see in how wide the “family tree” of MHCs in Atlantic cod is relative to other species (see figure 3B, below).


Of course, that’s just a working theory – the researchers also found evidence of other adaptations on the part of Atlantic cod. The key question the authors don’t answer, presumably because they are fish genetics folks rather than fish immunologists, is how these adaptations actually work. Is it really an increase in MHC I diversity that helps the Atlantic cod compensate for the lack of MHC II? That sort of functional analysis, rather than a purely genetic one, would be very interesting to see.

The paper is definitely a testament to the interesting sorts of questions and investigations that genetic analysis can open up, and it gives a nice tantalizing clue as to how alternative immune systems might work.

(Image credit – Atlantic Cod) (All figures from paper)

Paper: Star et al., “The Genome Sequence of Atlantic Cod Reveals a Unique Immune System.” Nature (Aug 2011). doi:10.1038/nature10342


It’s not just SNPs

Another month, another paper (although this one is almost two weeks overdue – sorry!)

In my life in venture capital, I’ve started more seriously looking at new bioinformatics technologies so I decided to dig into a topic that is right up that alley. This month’s paper from Nature Biotechnology covers the use of next-generation DNA sequencing technologies to look into something which had been previously extremely difficult to study with past sequencing technologies.

As the vast majority of human DNA is the same from person to person, one would expect that the areas of our genetic code which tend to vary the most from person to person – locations commonly known as Single Nucleotide Polymorphisms, or SNPs – would be the biggest driver of the variation we see in the human race (at least the variation we can attribute to genes). This paper from researchers at the Beijing Genomics Institute (now the world’s largest sequencing facility – yes, it’s in China) adds another dimension to this: it’s not just SNPs that make us different from one another. Humans also appear to have a wide range of individual-level variation in the “structure” of our DNA – what are called Structural Variations, or SVs.

Whereas SNPs represent changes at the individual DNA code level (for instance, turning a C into a T), SVs are examples where DNA is moved (e.g., between chromosomes), repeated, inverted (i.e., large stretches of DNA reversed in sequence), or subject to deletions/insertions (i.e., where a stretch of DNA is removed from or inserted into the original code). Yes, at the end of the day, these are changes to the underlying genetic code, but because of their nature, they are more difficult to detect with “old school” sequencing technologies, which rely on starting at one position in the DNA and “reading” a stretch of it from that point onward. Take the example of a stretch of DNA that has been moved: unless your “reading” happens to cross one of the breakpoints, you’d never know, as the DNA reads normally both everywhere else and inside the moved fragment itself.
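To make that concrete, here’s a toy sketch (emphatically not a real structural variant caller, and the sequences are made up) showing that short “reads” taken from inside a relocated block of DNA still match the reference perfectly – only the few reads that happen to span a breakpoint give the rearrangement away:

```python
# Toy illustration: reads from inside a moved block look perfectly
# normal; only breakpoint-spanning reads reveal the structural variant.
reference = "AAAACCCCGGGGTTTT"
# Simulate a structural variation: the "CCCC" block moved to the end.
rearranged = "AAAAGGGGTTTTCCCC"

READ_LEN = 4
reads = [rearranged[i:i + READ_LEN]
         for i in range(len(rearranged) - READ_LEN + 1)]

# A read "looks normal" if it occurs somewhere in the reference.
normal = [r for r in reads if r in reference]
breakpoint_reads = [r for r in reads if r not in reference]

print(len(reads), len(normal), len(breakpoint_reads))  # 13 7 6
```

Every read pulled from inside the moved “CCCC” block (or any untouched region) matches the reference, so a read-by-read scan only notices the move if it gets lucky at a breakpoint – which is why reading the whole genome and comparing overall structure is such a different approach.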

What the researchers figured out is that new sequencing technologies let you tackle the problem of detecting SVs in a very different way. Instead of approaching each SV separately (trying to structure your reading strategy to catch each kind of modification), why not use the fact that so-called “next-generation sequencing” is far faster and cheaper, read an individual’s entire genome, and then look at the overall structure that way?



And that’s exactly what they did (see figures 1b and 1c above). They applied their sequencing technologies to the genomes of an African individual (1c) and an Asian individual (1b) and compared them to some of the genomes we have on file. The circles above map out the chromosomes for each of the individuals on the outer-most ring. On the inside, the lines show spots where DNA was moved or copied from place to place. The blue histogram shows where all the insertions are located, and the red histogram does the same thing with deletions. All in all: there looks to be a ton of structural variation between individuals. The two individuals had 80-90,000 insertions, 50-60,000 deletions, 20-30 inversions, and 500-800 copy/moves.

The key question that the authors don’t answer (mainly because the paper was about explaining how they did this approach, which I heavily glossed over here partly because I’m no expert, and how they know this approach is a valid one) is what sort of effect do these structural variations have on us biologically? The authors did a little hand-waving to show, with the limited data that they have, that humans seem to have more rare structural variations than we do rare SNPs – in other words, that you and I are more likely to have different SVs than different SNPs: a weak, but intriguing argument that structural variations drive a lot of the genetic-related individual variations between people. But that remains to be validated.

Suffice to say, this was an interesting technique with a very cool “million dollar figure” and I’m looking forward to seeing further research in this field as well as new uses that researchers and doctors dig up for the new DNA sequencing technology that is coming our way.

(All figures from paper)

Paper: Li et al., “Structural Variation in Two Human Genomes Mapped at Single-Nucleotide Resolution by Whole Genome Assembly.” Nature Biotechnology 29 (Jul 2011) — doi:10.1038/nbt.1904


Much Ado About Microcredit

Another month, another paper.

This month’s paper from Science is not the usual science fare I’ve tended to blog about; I heard about it on the Science magazine podcast. In it, two economists basically find a way to run a randomized controlled trial to see what microfinance does!

A brief intro to microfinance: in many developing countries, the banking system is underdeveloped. And, even if a mature banking system were to exist, banks themselves typically do not lend small amounts of money to the small businesses/families who don’t have much in the way of credit history. The idea behind microfinance is that you can do a lot to help people in developing countries by providing their smallest businesses, especially those run by women who are traditionally excluded from their local economies, with “micro” loans. Organizations like Kiva have sprung up to pursue this sort of work, and the 2006 Nobel Peace Prize was even awarded to Muhammad Yunus for his role in popularizing it.

But does it work at building communities and improving economies? If you’re a scientist, to answer that question conclusively, you need a controlled experiment. So the authors of the study worked with a for-profit microfinance organization in the Philippines, First Macro Bank (FMB), to do a double-blinded randomized trial. Using a computer program, they automatically categorized a series of microcredit applicants by their creditworthiness. Obviously credit-worthy and obviously credit-unworthy applicants (combined, 26% of applicants) were taken care of quickly. The 74% of applicants that the program considered “marginal” (not obviously one way or the other) were randomly assigned to two groups: a control group that did not receive a microloan and a treatment group that did. Following the “treatment”, the participants in the experiment were surveyed on a number of economic and lifestyle metrics.

How was this double-blinded? Neither the applicants nor the FMB employees who interfaced with them were aware that this was an experiment. The surveyors weren’t even aware that this was an experiment or that FMB was involved.

Why focus on “marginal” applicants? A couple of reasons: first, the most likely changes to microfinance policy will affect these applicants the most, so they are the most relevant group to study. Secondly, you want to make apples-to-apples comparisons. Rejecting some obviously credit-worthy (or approving some obviously credit-unworthy) individuals may have raised red flags that some sort of algorithmic flaw or artificial experiment was happening. To really understand the impact of microfinance, you need to start on even footing in a realistic setting (esp. not comparing obviously credit-worthy individuals with so-so credit-worthy ones).
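The assignment scheme they describe can be sketched in a few lines. To be clear, the scoring function and the threshold values below are hypothetical stand-ins I made up for illustration – the paper’s actual credit-scoring model isn’t something I have access to:

```python
# Minimal sketch of "obvious decisions handled normally, marginal
# applicants randomized 50/50" (thresholds and scores are hypothetical).
import random

def assign(applicant_score, rng):
    """Route one applicant given a made-up credit score in [0, 1]."""
    if applicant_score >= 0.8:
        return "approve"   # obviously credit-worthy: processed as usual
    if applicant_score <= 0.2:
        return "reject"    # obviously credit-unworthy: processed as usual
    # "Marginal" applicants enter the experiment, randomized 50/50.
    return "treatment" if rng.random() < 0.5 else "control"

rng = random.Random(0)  # fixed seed so the sketch is reproducible
groups = [assign(score / 100, rng) for score in range(101)]
print({g: groups.count(g)
       for g in ("approve", "reject", "treatment", "control")})
```

The point of the design is that only the marginal pool is randomized, so the treatment and control groups are statistically interchangeable by construction.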

So, what did the researchers find? They found a lot of interesting things – many of which will require us to re-think the advantages of microfinance. The data is presented in a lot of boring tables so, unlike most of my science paper posts, I’m not going to cut and paste figures, but I will summarize the statistically significant findings:

  1. Receiving microfinance increases amount of borrowing. The “treatment group” had, on average, 9% more loans from institutions (rather than friends/family) than the control group (excluding the microloan itself, of course)
  2. Microfinance does not seem to go towards aggressive hiring. The “treatment group” had, on average, 0.273 fewer paid employees than the control group. Whether or not this reflected the original size of the businesses is beyond me, but I am willing to give the researchers the benefit of the doubt for now.
  3. Microfinance does not seem to have a major impact on subjective measures of quality of life, with one exception: elevated stress levels among male microfinance recipients. All the other subjective quality-of-life measures showed no statistically significant differences.
  4. Receiving microfinance reduces likelihood of getting non-health insurance by 7.9%
  5. There don’t appear to be significantly different or larger impacts of microfinance on women vs. men.

So, when’s all said and done, what does it all mean? First, it appears that instead of leading to aggressive business expansion as it is widely believed, microfinance itself actually seems to have a small, but slightly negative impact on employment at those businesses. While I don’t have a perfect explanation, combining all the observations above would suggest that the main impact of microfinance is not business expansion so much as risk management: entrepreneurs who received microloans seemed more willing to consolidate their business activities (i.e., firing “extra” workers who might have been “spare capacity”), to avoid purchases of insurance, and to reach out to other banks for more loans — very different than the story that we usually hear from the typical microfinance supporter.

The fundamental unknowns of this well-crafted study, though, are around whether these findings are broadly useful. While the researchers did an admirable job controlling for extraneous factors to reach a solid conclusion for a certain set of people in the Philippines, it's not necessarily obvious that the study's findings hold true in another country/culture. The surveys were also conducted only a few months after receipt of the microloans; it is possible that the impacts on businesses and local communities need more time to manifest. Finally, the data collected from the study does almost too good a job stripping out selection bias. Microfinance organizations today can be fairly selective, picking only the best entrepreneurs or potentially coaching/pushing the entrepreneurs to allocate their resources differently than the mostly hands-off approach taken here.

All in all, an interesting paper, and something worth reading and thinking about by anyone who works in/with microfinance organizations.

(Image credit)

Paper: Karlan et al., "Microcredit in Theory and Practice: Using Randomized Credit Scoring for Impact Evaluation." Science 332 (Jun 2011) – doi: 10.1126/science.1200138


The Sickle Cell Salve

I might have been crazy late with April, but for May, my on-timeliness when it comes to the paper a month posts is returning with a vengeance.

This month’s paper from Cell (a journal I usually avoid because their papers are ridiculously long :-)) dives beneath the surface of one of the classic examples of genetics used in almost every intro-to-genetics seminar/class/textbook. As you probably know, living things typically receive two sets of genes: one from the mother and one from the father. If those two sets of genes result in the same protein, then the organism is said to be homozygous for that particular trait. Otherwise, the proper term is heterozygous. In classical genetics (i.e. what was painstakingly discovered by Gregor Mendel, the “father of genetics”), being heterozygous, to a casual observer, was usually something that could only be seen after multiple generations (or with a DNA test). This is because even though the individual has two different versions of the same gene, one of them is “dominant”, expressing itself more loudly than the other.

For the mutation which causes the disease sickle cell anemia (see the image to the right as to why the disease is called "sickle cell"), however, the truth was a little different. While heterozygous individuals did not suffer from the problems associated with sickle cell anemia, they showed, unlike individuals homozygous for the "normal" gene, a remarkable advantage when it came to surviving infection with malaria. This is one reason scientists believe sickle cell anemia continues to be endemic in parts of the world where malaria is still a major issue.

But how the sickle cell mutation accomplished this in heterozygotes was not well understood. The authors of this month's paper probed one possible explanation using mice as an experimental system. The interesting thing they found was that mice heterozygous for the sickle cell trait (HbSAD), despite having better survival against malaria than those homozygous for the "normal" gene (HbWT) (see the Kaplan-Meier survival curve in Figure 1A below, showing the proportion of surviving mice over time), did not have a significantly different amount of infected red blood cells (see Figure 1F below).
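For readers unfamiliar with Kaplan-Meier curves: they estimate the fraction of subjects surviving past each observed death time. Here's a minimal sketch of the estimator in Python (my own illustration, with made-up event times; the paper's figure is, of course, built from the real mouse data):

```python
from collections import Counter

def kaplan_meier(times, events):
    """times[i]: day mouse i died or was censored; events[i]: 1 = died, 0 = censored.

    Returns a list of (time, survival fraction) points: the step-down curve.
    """
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    leaving = Counter(times)  # everyone leaves the at-risk pool at their time
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(leaving):
        if deaths[t]:
            # multiply by the fraction of at-risk subjects surviving past t
            surv *= (at_risk - deaths[t]) / at_risk
            curve.append((t, surv))
        at_risk -= leaving[t]
    return curve

# Four hypothetical mice: two die on day 5, one on day 8, one censored at day 10
print(kaplan_meier([5, 5, 8, 10], [1, 1, 1, 0]))  # [(5, 0.5), (8, 0.25)]
```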

[Figures 1A and 1F from the paper]

So if the sickle cell gene wasn’t reducing the number of infected cells, what was causing the improvement in survival? The researchers knew that red blood cells which are heterozygous for the sickle cell trait will often “leak” an iron-containing chemical called heme into the blood stream. Because heme just floating around is toxic, the body responds to this with an enzyme called heme oxygenase-1 (HO-1) which turns toxic heme into the less toxic biliverdin and carbon monoxide (CO). The researchers considered whether or not HO-1 was responsible for the improved ability of the mice to avoid cerebral malaria. In a creative experiment, they were able to show that the sickle cell mice needed HO-1 to get their better survival – mice which were genetically engineered to be missing one copy of HO-1 (Hmox1+/-), even if they were heterozygous for the sickle cell disease, did not survive particularly well when infected (see Figure 2B below, left for the survival data).

In fact, they were even able to show that if you took mice which did not have any sickle cell trait gene (the HbWT group), and replaced their blood system using irradiation and a bone marrow transplant from a heterozygous sickle cell mouse (HbSAD), you only improve survival if the cells come from a mouse with its HO-1 genes intact (Hmox1+/+) (see Figure 4A below, right).

[Figures 2B and 4A from the paper]

So we know HO-1 is somehow the source of the heterozygous mice's magical ability to survive malaria. But how? As I stated earlier, the researchers knew HO-1 produces carbon monoxide (CO), and they were able to show that heterozygous mice with a defective HO-1 response survived when given carbon monoxide (see Figure 6E below). Interestingly, exposure to carbon monoxide reduces the amount of heme floating around in the bloodstream, a level which gets kicked into overdrive when malaria starts killing red blood cells left and right (see Figure 6G below). The researchers validated this mechanism by neutralizing the protective power of carbon monoxide with extra heme added back into the mouse (see Figure 6H).


So, overall, what did I think? First the positive: these are extremely clearly designed and well-controlled experiments. I could only show a fraction of the figures in the paper, but rest assured, they were very methodical about creating positive and negative controls for all their figures and experiments which is fantastic. In particular, the use of bone marrow transplantation and genetically engineered mice to prove that HO-1 plays a key role in improving survival were creative and well-done.

What leaves me unsettled with the paper is the conclusion. The problem is that the trigger for HO-1, what the authors have shown is the reason mice which are heterozygous for sickle cell anemia survive malaria better, is heme, which happens to also be what the authors say is the cause for many of the survival complications. It’s like claiming that the best way to cure a patient of poison (heme) is to give the patient more poison (heme) because the poison somehow triggers the antidote (HO-1).

In my mind, there are two possible ways to explain the results. The first is that the authors are right and the reason for this is around the levels and timing of heme in the bloodstream. Maybe the amount of heme that the sickle cell heterozygotes have is not high enough to cause some of the malaria complications, but high enough so that HO-1 is always around. That way, if a malaria infection does happen, the HO-1 stays around and keeps the final level of heme just low enough so that problems don’t happen. The second explanation is that the authors are wrong and that the carbon monoxide that HO-1 is producing is not reducing the amount of heme directly, but indirectly by reducing the ability of the malaria parasites to kill red blood cells (the source of the extra heme). In this case, sickle cell heterozygotes have chronically higher levels of HO-1.

Both are testable hypotheses – the first can be tested by playing around with different levels of heme/HO-1 and observing how the amount of free-floating heme changes over time when mice are infected with malaria. The second can be tested by observing test tubes full of red blood cells and malaria parasites under different amounts of carbon monoxide.

In any event, I hope to see further studies in this area, especially ones which lead to more effective treatments for the many millions who are affected by malaria.

(Image Credit – Sickle Cell) (Figures from paper)

Paper: Ferreira et al., “Sickle Hemoglobin Confers Tolerance to Plasmodium Infection.” Cell 145 (Apr 2011) – doi: 10.1016/j.cell.2011.03.049

Leave a Comment

Planet Unbound

It seems each month, I get later and later with these paper a month blog entries.

This month, I went a little out of my element. Inspired by Origins, I decided that instead of my usual monthly biology/chemistry paper, I'd give an astronomy one a shot. This month's paper comes from Nature and relies on a very cool phenomenon called gravitational lensing. The basic idea comes from the fact that light bends in the presence of gravity (see image on left). That bending, which is very similar to the bending caused by the lens in a telescope or a pair of glasses, can be used by experienced astronomers to draw conclusions about the "lens", the object whose gravity is causing the light to bend. This is especially useful in cases where the "lens" is impossible or difficult to observe directly, e.g., in the case of dark matter or a brown dwarf. It is, however, "relatively" rare, as it requires the lens, the light source, and the Earth to all be in a straight line.

Interestingly, gravitational lensing can also be used to find planets, which, because they tend to be significantly smaller and dimmer than stars, cannot be directly observed from Earth. How? Picture a planet passing through the line of sight between a star and the Earth. Because the planet itself is small and dim relative to the star, its main effect on the star's apparent brightness is the gravitational lensing effect: the apparent brightness of the star increases while the planet is in front of it (see image on right), by virtue of bending more of the light from the "background star" on its way to Earth, and consequently decreases as the planet moves away. Charting the apparent brightness over time and studying the shape of the resulting "light curve" can then provide hints about the mass of the planet or, to use the words of the paper, "be used as a statistical probe of the mass function of the lens objects."
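If you want a feel for what these light curves look like, the standard point-source, point-lens magnification formula (the classic Paczyński curve) is easy to code up. The parameter values below (u0, tE) are illustrative choices of mine, not values from the paper:

```python
import math

def magnification(u):
    # Standard point-source, point-lens magnification (Paczynski formula);
    # u is the lens-source separation in units of the Einstein radius
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def light_curve(t, t0=0.0, u0=0.3, tE=1.5):
    # t0: time of closest approach; u0: impact parameter;
    # tE: Einstein timescale in days (short tE = low-mass lens, i.e. a planet)
    u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
    return magnification(u)

print(light_curve(0.0))   # peak brightness at closest approach
print(light_curve(10.0))  # far from the event, magnification is ~1
```

The "blip" duration the astronomers measure is set by tE, which is why short blips point to low-mass lenses.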

So, what happens when scientists train their gravitational lensing-sensitive telescopes on 50 million stars in the Galactic Bulge? Over the course of a year, they detected some 474 such gravitational lensing events, 10 of which were particularly interesting because the duration of the gravitational lensing light curve "blip" was short enough (less than 2 days in length) to suggest a planet microlensing event (see examples of such blips below; the impact on brightness is on the vertical axis and time is on the horizontal axis), and a quick scan of the sky/database showed no obvious stars nearby for these planets to orbit.



This next piece of their analysis feels like cheating to me, but it was also fairly ingenious. As the astute observer will note, the shape of the "blip" is only a good "statistical probe": we can't actually conclude anything about the mass of the gravitational lens (the potential planet or star causing the lensing) without knowing the speed at which the lens is moving and the distance of the lens from the star. However, because we generally have a good idea of how massive and abundant stars are, we can use statistical modeling to calculate how many planet-sized gravitational lensing events we should expect out of 474. This would in turn also allow us to calculate the ratio of the number of these planets to the number of stars: a mini-galactic census, if you will.

The chart below shows the basic results of this modeling, where the red and blue curves are based on different means of approximating the abundance/mass of various stars and the solid black line shows what was actually recorded. Along the vertical axis, we have the number of gravitational lensing events (actual and theoretical) which have a "light curve blip" of a certain duration (the number on the horizontal axis). If you observe closely, you'll notice two things:

  1. The theoretical line on the right-half of the graph fits very nicely with the observed data – suggesting that the overall model and our understanding of the abundance/mass of stars is reasonable.
  2. There’s a fair amount more (remember, the scale is logarithmic) observed events where the duration is shorter than 2 days (the left half of the graph) than expected… by a factor of 4-7!

Conclusion: the gravitational lensing numbers strongly suggest that they found a ton of planets with no obvious star to orbit, and further statistical modeling suggests that there may be up to 2 times as many of these "rogue planets" as there are "standard" stars!


Just imagine: the Milky Way galaxy is potentially teeming with planets that have no stars (or are, at worst, orbiting very far away from their stars).

So, what does this all mean? For starters, it's the first validation that the galaxy is teeming with star-less/unbound planets. That, in and of itself, is cool, but it raises the question: how did these planets become unbound? And how many other planets, potentially life-supporting ones, can we find?

(Image credit: light bending) (other images from paper or commentary piece)

Paper: MOA & OGLE. “Unbound or Distant Planetary Mass Population Detected by Gravitational Microlensing.” Nature 473 (May 2011) — doi:10.1038/nature10092


The Not So Secret (Half) Lives of Proteins

Ok, so I’m a little behind on my goal from last month to pump out my Paper a Month posts ahead of time, but better late than never!

This month’s paper comes once again from Science, as recommended to me by my old college roommate Eric. It was a pertinent recommendation for me, as the primary investigator on the paper is Uri Alon, a relatively famous scientist in the field of systems biology and author of the great introductory book An Introduction to Systems Biology: Design Principles of Biological Circuits , a book I happened to cite in my thesis :-).

In a previous paper-a-month post, I mentioned that scientists tend to assume proteins follow basic first-order degradation kinetics: the higher the level of protein, the faster the protein is cleared out. This gives a relationship not unlike the one you get with radioactive isotopes: they have half-lives, where after a certain amount of time, half of the previous quantity is gone. Within a cell, there are two ways for proteins to be cleared away in this fashion: either the cell grows/splits, so the same amount of protein has to be "shared" across more volume (dilution), or the cell's internal protein "recycling" machinery actively destroys the proteins (degradation).
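In math terms, the two clearance routes just add: if degradation removes protein at rate α_deg and dilution at rate α_dil (set by how fast the cell divides), the half-life is ln(2)/(α_deg + α_dil). A quick sketch, with made-up rates:

```python
import math

def half_life(deg_rate, growth_rate):
    # First-order removal: dP/dt = -(alpha_deg + alpha_dil) * P,
    # where the dilution rate alpha_dil equals the cell's growth rate.
    # Half-life of the combined process, in the same time units as the rates.
    return math.log(2) / (deg_rate + growth_rate)

# Illustrative (invented) rates in 1/hour:
print(half_life(deg_rate=0.1, growth_rate=0.035))  # degradation-dominated: short
print(half_life(deg_rate=0.0, growth_rate=0.035))  # dilution only: ~20 hours
```

Note that even a protein that is never actively degraded still has a finite half-life, set entirely by how fast its cell divides.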

This paper tried to study this by developing a fairly ingenious experimental method, described in Figure 1B (below). The basic idea is to use well-understood genetic techniques to introduce the proteins of interest tagged with fluorescent markers (like YFP = Yellow Fluorescent Protein), which will glow if subject to the right frequency of light. The researchers would then separate a sample of cells with the tagged proteins into two groups. One group would be the control (duh, what else do scientists do when they have two groups), and one would be subject to photobleaching, where fluorescent proteins lose their ability to glow over time if they are continuously excited. The result, hopefully, is one group of cells where the level of fluorescence is a balance between protein creation and destruction (the control) and one group of cells where the level of fluorescence stems only from the creation of new fluorescently tagged proteins. Subtract the two, and you should get a decent indicator of the rate of protein destruction within a cell.

But how do you figure out whether the clearance is caused by dilution or by active degradation? Simple: if you know the rate at which the cells divide (or can control the cells to divide at a certain rate), then you effectively know the rate of dilution. Subtract that from the total and you have the rate of degradation! The results for a broad swath of proteins are shown below in Figure 2, panels E & F, which show the ratio of the rate of dilution to the rate of degradation for a number of proteins and classify them by which is the bigger factor (brown: degradation is much higher, hence a shorter half-life; blue: dilution is much higher, hence a longer half-life; gray: somewhere in between).
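That subtraction is simple enough to write down explicitly. Here's a sketch of my own (with invented numbers) of backing out the degradation rate from the total removal rate and the cell doubling time:

```python
import math

def degradation_rate(total_removal_rate, doubling_time_h):
    # The dilution rate is set by cell division: alpha_dil = ln(2) / doubling time.
    # Whatever removal is left over must be active degradation.
    alpha_dil = math.log(2) / doubling_time_h
    alpha_deg = total_removal_rate - alpha_dil
    return alpha_deg, alpha_dil

# Hypothetical protein: total removal rate 0.05/h in cells doubling every 24 h
alpha_deg, alpha_dil = degradation_rate(total_removal_rate=0.05, doubling_time_h=24)
label = "degradation-dominated" if alpha_deg > alpha_dil else "dilution-dominated"
print(round(alpha_deg, 4), round(alpha_dil, 4), label)
```

The brown/blue/gray classification in Figure 2 is essentially this comparison, done protein by protein.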

While I was definitely very impressed with the creativity of the assay method and their ability to get data which matched up relatively closely (~10-20% error) with the "gold standard" method (which requires radioactivity and a complex set of antibodies), I was frankly disappointed by the main thrust of the paper. Cool assays don't mean much if you're not answering interesting questions. To me, the most interesting questions are more functional: why do some proteins have longer or shorter half-lives? Why are some of their half-lives more dependent on one factor than the other? Do these half-lives change? If so, what causes them to change? What functionally determines whether a protein will be degraded easily or not?

Instead, the study’s authors seemed to lose their creative spark shortly after creating their assay. They wound up trying to rationalize that which was already pretty self-evident to me:

  • If you stress a cell, you make it divide more slowly
  • Proteins which have slow degradation rates tend to have longer half-lives
  • Proteins which have slow degradation rates will have even longer half-lives when you get cells to stop dividing (because you eliminate the dilution)
  • Therefore, if you stress a cell, proteins which have the longest half-lives will have even longer half-lives

Now, this is a worthy finding – but given the high esteem of a journal like Science and the very cool assay they developed, it seemed a bit anti-climactic. Regardless, I hope this was just the first in a long line of papers using this particular assay to understand biological phenomena.

(Figures from paper)

Paper: Eden et al. “Proteome Half-Life Dynamics in Living Human Cells.” Science 331 (Feb 2011) – doi: 10.1126/science.1199784

One Comment

Making the Enemy of your Enemy

What? Another science paper post? Yup, I’m trying to get ahead of my paper-a-month deadlines by posting February’s while actually still in February!

This month’s paper comes from Science and is a topic which is extremely relevant to global health. As you probably know, malaria kills close to 1 million people a year, with most of these deaths in areas lacking in the financial resources and public infrastructure needed to tackle the disease. In addition to the socioeconomic factors, the biology of the disease itself is extremely challenging to deal with because the malaria parasite Plasmodium falciparum not only rapidly shifts its surface proteins (so the immune system can’t get a good “fix” on it) it also has a very complex multi-stage life cycle (diagram below), where it goes from being carried around by a mosquito as a sporozoite, to infecting and effectively “hiding inside” human liver cells, to becoming merozoites which then infect and hide inside human red blood cells, and then producing gametocytes which are picked up by mosquito’s which combine to once again form sporozoites. Each stage is not only difficult to target (because the parasites spend a lot of their time “hiding”), but the sheer complexity of the lifecycle means the immune system and drugs humans come up with are always a step behind.


So, what to do? While there is active work being done to build vaccines and drugs to fight malaria, the “low-hanging fruit” is getting the upper-hand on the mosquito transmission phase. Unfortunately, controlling mosquitos has become almost as bad a nightmare as dealing with the Plasmodium parasite. The same socioeconomic factors which limit medical treatment for the disease also make it difficult to do things like exterminate mosquitos. Furthermore, pesticides not only have adverse environmental impacts (i.e., DDT) but will ultimately have limited lifetimes as the mosquito population will eventually develop resistance to them.

Well, enter the enterprising scientist. I can't say for sure, but I have to believe that the scientists here must have read comic books like Spiderman or Captain America as kids, because the approach they chose feels like it came straight out of the comic book world. But instead of building a monstrosity like the Scorpion (pictured to the right), the researchers built a super-fungus super-soldier to control malarial transmission.

Instead of giving the powers of a scorpion to smalltime thief Mac Gargan (who then named himself, appropriately, The Scorpion), the researchers engineered a fungus which naturally infects mosquitos, called Metarhizium anisopliae, to:

  • kill the infected mosquito more slowly (so as not to push mosquitos to become resistant to the fungus)
  • coat the infected mosquito’s salivary glands with a protein fragment called SM1 to block the malaria parasites from getting there
  • produce a chemical derived from scorpions called scorpine which is extremely effective at killing malaria parasites and bacteria

Pretty cool idea, right? But does it work? Figure 3 (below) shows the results of their experiments:


Mosquitos were fed on malaria-infected blood 11 days before they were dosed with our super-fungus. Typically, sporozoites take about 2 weeks to build up in any reasonable number in a mosquito's salivary glands, so 14-17 days after exposure to malaria, the researchers checked the salivary glands of uninfected mosquitos (the control [C] group), mosquitos infected with the non-super-fungus (the wild-type [WT] group), and mosquitos infected with the super-fungus (the transgenic [TS] group). As you can see in the chart above, the TS parasite count was not only significantly smaller than both the control and wild-type groups, but the control and wild-type groups behaved exactly as you would expect (the parasite counts went up over time).

So, have we discovered a super-soldier we can count on to stop mosquito-borne illnesses? I would hold off on that for a number of reasons. First, on an experimental level, the researchers only looked at 14-17 days post-infection. To be confident, I'd like to see what this looks like with different doses of fungus, over longer periods of time, and across a wider range of mosquitos (nearly 70 species of mosquito transmit malaria, and I don't even know what the numbers look like for other diseases). Secondly, it's not clear to me what the most effective way to dose large populations of mosquitos is. The researchers maintain that you can spray this like a pesticide and that the fungus will adhere to surfaces and stay effective for long periods of time, but that needs to be validated, and plans need to be drawn up not only to pay for this (I have no idea how expensive it is) but also to deploy it.

Lastly, and this is something that almost any naturalist or economist will tell you: human actions always have unintended consequences. At first glance, it looks like the researchers covered their bases. They built what looks like a strategy which avoids mosquito resistance (and, because it uses at least two ways of controlling the parasite, is probably less vulnerable to Plasmodium resistance than drugs/vaccines). But more research needs to be done to ascertain whether there are other environmental or economic impacts of using something like this.

All in all, however, this looks like a promising start for what could be an interesting and inspired way to help control malaria.

(Image credit – Malaria lifecycle) (Image credit – the Scorpion) (Figure 3 from paper)

Paper: Fang et al., “Development of Transgenic Fungi that Kill Human Malaria Parasite in Mosquitos.” Science 331: 1074-1077 (Feb 2011) – doi:10.1126/science.1199115

One Comment

Can’t Read My Monkey Face

(Yes, that was a Lady Gaga reference) I’m extremely late for January 2011’s paper-a-month blog post, but better late than never!

This month’s paper actually dates to a little over 3 years ago. And, it actually isn’t about monkeys – its about chimpanzees (but “monkey” and “poker” have the same number of syllables, whereas “chimpanzee” has one more and hence throws off the reference). But, in it, they describe a very interesting experimental design to look into a very bold question: how do chimpanzees play the ultimatum game?

First off, what’s the ultimatum game? The ultimatum game is considered to be a test of fairness vs. rationality. The basic idea is that you have a setup with two people – a offer-maker and an offer-taker. The offer-maker makes an offer to split a prize (be it money, or food, or something else that both the offer-maker and the offer-taker find valuable) between the offer-maker and the offer-taker (i.e., so the offer-maker could say that he/she gets 90% whereas the offer-taker gets 10%). The offer-taker then decides whether or not to accept the offer or to reject the offer – whereby rejection means both offer-maker and offer-taker get nothing. (refer to dinosaur comics below)


Why is this an interesting test of fairness vs. rationality? If humans were perfectly rational, the offer-taker would accept any offer better than 0 and, knowing this, the offer-maker would offer the bare minimum to the offer-taker and take the lion's share for him/herself. The reason is that a rational offer-taker realizes that as long as the offer gives something better than 0 (which is what you get if you reject the offer, no matter what), the offer-taker is actually better off just taking the offer.

However, study after study shows that human beings aren't perfectly rational: faced with a choice between a grossly unfair but objectively better outcome and nothing, most humans will prefer nothing. In fact, studies have shown that offer-takers tend to reject offers where they receive less than 20%. There are many evolutionary, sociological, and psychological interpretations, but at the end of the day, the combination of valuing fairness and punishing unfairness is something which lets people live and work together.
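The gap between the two predictions is easy to simulate. Here's a toy Python sketch of my own (not from the paper) that finds the proposer's best offer against a perfectly rational responder versus one who rejects anything below the empirical ~20% threshold:

```python
def best_offer(total, responder_accepts):
    # The proposer keeps the amount maximizing his/her own payoff, given the
    # responder's acceptance rule (a rejection zeroes out both payoffs)
    best = max(range(total + 1),
               key=lambda keep: keep if responder_accepts(total - keep, total) else 0)
    return best, total - best

rational = lambda share, total: share > 0             # accepts anything positive
fairness = lambda share, total: share >= 0.2 * total  # rejects below ~20%

print(best_offer(10, rational))  # (9, 1): offer the bare minimum
print(best_offer(10, fairness))  # (8, 2): the 20% norm forces a better offer
```

Against the rational responder, the proposer pockets almost everything; the mere existence of a rejection threshold shifts the split, which is why "punishing unfairness" matters even when rejections are rare.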

While there are all sorts of interesting scientific and philosophical work being done to explore how humans play the ultimatum game, it does beg the question: do other creatures value fairness the way we do? In other words, if we could get other animals to play the ultimatum game, how would they play?

The researchers here conducted a fascinating experiment where they “taught” chimpanzees, who are not only our closest genetic relative but also known to work collaboratively on tasks such as hunting and territorial patrol, to play a slight variation of the ultimatum game. The setup is shown below in Figure 1a:


This variation of the ultimatum game requires the offer-maker (or “proposer” in the language of the paper) to select between two possible distributions of the prize (in this case, different amounts of raisins), visible to both the offer-maker and offer-taker (or “responder”) by pulling on one of two different ropes. The offer-taker then decides whether or not to accept the offer by pulling on a rod which brings the raisins within reach of both chimpanzees to eat (see figures 1b and 1c below).

Also, to help get a sense of the relative value of fairness, in each "round" of the experiment the offer-maker always had the option to offer an 8/2 split (the offer-maker getting 80% of the spoils and the offer-taker getting 20%); depending on the experimental group, the other option was either 10/0 (offer-maker gets 100%), 8/2 (same as the control), 2/8 (offer-maker gets 20%, offer-taker gets 80%), or 5/5 (an even split). This gave the researchers a chance to see how offer-makers and offer-takers would view the relative merits of other offers against that control.


Very clever experimental design, in my humble opinion! To ensure that the chimpanzees knew what was going on, they were trained for some time to make sure they understood how to operate the apparatus and that their actions had consequences for their partner (i.e., by having the chimpanzees operate the apparatus and then be allowed to enter the partner's chamber to eat the raisins).

So, what were the results? See for yourself (below in Figure 2). The left side shows every experiment run (~50 for each comparison), how often the offer-maker in each group proposed 8/2 vs. the other option, as well as how often the offer-taker rejected the offer.


The first thing that jumped out to me is that, with the exception of the 10/0 offer where the offer-taker gets absolutely nothing, the offer-takers reject very few of the offers.  The second thing that jumped out to me is that, with the exception of the 8/2 vs 10/0 decision, the offer-makers strongly preferred the offer where they got more (they overwhelmingly selected 8/2 over 5/5 and 2/8), suggesting that they first value their own well-being, but are also cognizant that the other chimpanzee has minimal reason to accept a 10/0 offer where they get nothing. These two observations support the conclusion of the research group, that chimpanzees are mostly rational maximizers, and don’t place too much weight on fairness.

The third thing which jumped out at me is very confusing: the chimpanzees were actually more willing to reject 8/2 deals when the alternative was equivalent or worse (another 8/2 deal or a 10/0 deal) than when the much better options of a 5/5 or a 2/8 deal were available. This is either random experimental noise, or it suggests that chimpanzees like having a fairer option even when they don't get it?

If I were to suggest next steps for the team, I'd ask them to probe deeper into that third point, because it suggests either that there is a very interesting behavioral quirk about chimpanzees that is not well understood today or that the chimpanzees didn't fully understand how to play the game. I'd also recommend they try a more direct measure of whether chimpanzees value altruism (a concept related to, but not exactly the same as, fairness). I'd love to see if chimpanzees are Pareto efficient (willing to be generous when it doesn't cost them anything): are they more likely to propose an 8/6 over an 8/2? A positive result there would reinforce the finding here: that chimpanzees can be altruistic, but they are rational maximizers first and foremost.

(Image credit – Dinosaur Comics) (remainder from Figures 1 and 2)

Paper: Jensen et al., “Chimpanzees are Rational Maximizers in an Ultimatum Game.” Science 318: 107-109 (Oct 2007) – DOI: 10.1126/science.1145850


Of Mice and Cocaine

Yes, I know. I’m crazy late with the last of 2010’s paper-a-month blog entries (not to mention, given the list of 2011 resolutions, I also need to produce one more for this month), but better late than never!

This month's paper was a bit of a curiosity to me (for reasons which will hopefully become clear). It was published in Molecular Therapy and caught my eye because the researchers claimed to have produced a cocaine vaccine which actually works!

Mammalian immune systems don't usually respond to chemicals which aren't related to a bacterium/virus/fungus (that's one of the reasons that, provided you don't have a food allergy/sensitivity, your body doesn't start randomly attacking all the "foreign" food that you eat). Now, the idea of tricking a mammal's immune system into attacking poisons and drugs isn't new – various groups have tried chemically linking molecules of an addictive drug like cocaine to molecules that the immune system will recognize (like cholera toxin and tetanus toxin) – but eliciting an immune response strong enough, and long-lasting enough, to be potentially useful has remained out of reach.

This month's paper did two interesting things. First, the researchers attached a molecule which looks like cocaine directly to inactivated viruses. Second, they actually gauged how well these vaccines worked in live mice!


Figure 1a (above) shows, at a high-level, what the researchers did to create their vaccine. On the left is GNC, a molecule which looks very similar to cocaine. On the right is an overview of Adenovirus structure. By chemically linking the two, the researchers were able to create what they called dAd5GNC which they could then inject into mice in much the same way that people are vaccinated today. The theory being that the presence of a real virus would attract the attention of the immune system and the GNC would get the immune system to make antibodies which bind and attack cocaine.

So, did it work? The researchers were able to use an assay which measures antibody production to gauge if the right antibodies were being created (impressively, they were able to show high antibody levels even 13 weeks out) – but the real test is whether or not it actually does anything in live animals. In figure 2d (below), the researchers took mice which were injected with saline (a negative control) and mice which were injected with dAd5GNC (the vaccine) and, four weeks later, injected them with radioactively labeled cocaine (so that you can easily track and measure it). They then compared the amount of cocaine that reached the brain and the amount that stayed in the mouse's bloodstream. As you can see, the amount of cocaine reaching the brain in vaccinated mice was lower than in mice injected only with saline, while the amount of cocaine in the bloodstream was much higher – suggesting that the vaccine was successful in creating cocaine antibodies which bind cocaine molecules and keep them from entering the brain.


But does that translate into actual behavior? This is one of the most interesting experiments I've seen (although, as I'm not a mousework expert, I'm not sure how clinically relevant it is), but in a nutshell, what the researchers did was take the two groups of mice, inject them with cocaine (or saline as a negative control), and see if the mice acted high by running around more. Think I'm kidding? They even showed diagrams of mouse activity (Figure 3a, below) – without looking at the labels, see if you can guess which mice are the frantic/high ones:


The paper has some numerical charts, but suffice to say the above figure was definitely one of those “pictures worth a thousand words”.

A very cool experiment – but it definitely left me with a few questions. Foremost among them: it had always been my understanding that immune responses, even where the body has already been immunized, take place over the course of hours or days. That's fast enough to stop you from catching measles or the same cold twice, but injected cocaine? It acts fast, and there are a ton of cocaine molecules. The mice in the experiment above were literally injected with cocaine and observed for 10-30 minutes immediately afterwards. Maybe I've misunderstood how cocaine works or how the immune system works, but that is an open question which leaves me a little wary.

The second question concerns the relative amounts of cocaine in the brain and the blood. I understand that measuring radioactive counts the way the researchers did here is an imperfect science, but I'm confused how the vaccine produced such a huge difference in the amount of cocaine in the bloodstream but a much less dramatic difference in the brain, yet still produced a very striking difference in behavior. That last point could simply be explained by how sensitive the brain is to cocaine, but these two questions collectively leave me a little puzzled.

(Figures from paper)

Paper: Hicks et al., “Cocaine Analog Coupled to Disrupted Adenovirus: A Vaccine Strategy to Evoke High-titer Immunity Against Addictive Drugs.” Molecular Therapy (Jan 2010) – doi: 10.1038/mt.2010.280


Do you have the guts for nori?

The paper I will talk about this month is from April of this year and highlights the diversity of our "gut flora" (a pleasant way to describe the many bacteria which live in our digestive tract and help us digest the food we eat). Specifically, this paper highlights how a particular bacterium in the digestive tracts of some Japanese individuals has picked up a unique ability to digest certain sugars which are common in marine plants (e.g., Porphyra, the seaweed used to make sushi) but not in terrestrial plants.


Interestingly, the researchers weren't originally focused on gut flora at all, but on understanding how marine bacteria digest marine plants. They started by studying a particular marine bacterium, Z. galactanivorans, which was known for its ability to digest certain types of algae. Scanning its genome, the researchers identified a few genes which were similar to known sugar-digesting enzymes but didn't seem to act on the "usual plant sugars".

Two of the identified genes, which they called PorA and PorB, were found to be very selective in the type of plant sugar they digested. In the chart below (from Figure 1), 3 different plants are characterized along a spectrum showing whether they have more LA (4-linked 3,6-anhydro-a-L-galactopyranose) chemical groups (red) or more L6S (4-linked a-L-galactopyranose-6-sulphate) groups (yellow). Panel b on the right shows the 1H-NMR spectra associated with these different sugar mixes (NMR is a chemical technique used here to verify what sort of sugar groups are present).

These mixes were subjected to PorA and PorB as well as AgaA (a sugar-digesting enzyme which works mainly on LA-type sugars like agarose). The bar charts in the middle show how active the respective enzymes were (as indicated by the amount of digested sugar that came out):

As you can see, PorA and PorB are only effective on L6S-type sugar groups, and not LA-type sugar groups. The researchers wondered if they had discovered the key class of enzyme responsible for allowing marine life to digest marine plant sugars and scanned other genomes for other enzymes similar to PorA and PorB. What they found was very interesting (see below, from Figure 3):

What you see above is an evolutionary family tree for PorA/PorB-like genes. The red and blue boxes represent PorA/PorB-like genes which target "usual plant sugars", but the yellow boxes show the enzymes which specifically target the sugars found in nori (Porphyra, hence the enzymes are called porphyranases). All the enzymes marked with solid diamonds are actually found in Z. galactanivorans (and were hence dubbed PorC, PorD, and PorE – clearly not the most imaginative naming convention). The other identified genes, however, all belonged to marine bacteria… with the notable exception of Bacteroides plebeius, marked with an open circle. And Bacteroides plebeius (at least to the knowledge of the researchers at the time of this publication) has only been found in the guts of certain Japanese people!

The researchers scanned the Bacteroides plebeius genome and found that the bacterium actually had a sizable chunk of genetic material which was a much better match for marine bacteria than for other, similar Bacteroides strains. The researchers concluded that the best explanation is that Bacteroides plebeius picked up its unique ability to digest marine plants not on its own, but from marine bacteria (in a process called Horizontal Gene Transfer, or HGT), most probably from bacteria that were present on dietary seaweed. Or, to put it more simply: our gut bacteria have the ability to "steal" genes/abilities from bacteria on the food we eat!
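For the curious, the kind of signal involved in spotting a transferred chunk of DNA can be illustrated with a toy example: one common approach is to compare a sequence's k-mer (short substring) composition against candidate source genomes, since transferred DNA tends to retain the compositional "accent" of its donor. The sketch below is purely my own illustration, not the researchers' method – the "genomes" are invented strings, not real sequences, and the scoring function is made up:

```python
from collections import Counter

def kmer_profile(seq, k=3):
    # Frequency of each k-letter substring in the sequence.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def similarity(p, q):
    # Simple overlap score between two k-mer frequency profiles (0 to 1).
    return sum(min(p.get(kmer, 0.0), q.get(kmer, 0.0))
               for kmer in set(p) | set(q))

# Toy "genomes" (invented, GC-rich vs. AT-rich) and a candidate gene
# whose composition resembles the marine reference.
marine = "GCGCGGCCGCGGGCGCCGGC" * 20
gut = "ATATAATTATAAATTTATAT" * 20
candidate = "GCGGCCGCGGCCGGCGCGGC" * 5

cand = kmer_profile(candidate)
print(similarity(cand, kmer_profile(marine)) >
      similarity(cand, kmer_profile(gut)))  # True: flags marine-like DNA
```

Real analyses use full sequence alignment and phylogenetics rather than a crude composition score, but the underlying question is the same: which reference does this stretch of DNA resemble most?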

Cool! While this is a conclusion which we can probably never truly prove (it's an informed hypothesis based on genetic evidence), this finding does make you wonder if a similar genetic screening process could identify whether our gut flora have picked up any other genes from "dietary bacteria."

(Image credit – Nori rolls) (Figures from paper)

Paper: Hehemann et al, “Transfer of carbohydrate-active enzymes from marine bacteria to Japanese gut microbiota.” Nature 464: 908-912 (Apr 2010) – doi:10.1038/nature08937


Degradation Situation

Look at me: the third-to-last month of this paper-a-month thing, and I'm over 2 weeks late.

This month’s paper goes into something that is very near and dear to my heart – the application of math/models to biological systems (more commonly referred to as “Systems Biology”). The most interesting thing about Systems Biology to me is the contrast between the very physics-like approach it takes to biology – it immediately tries to approximate “pieces” of biology as very basic equations – and the relative lack of good quantitative data in biology to validate those models.

The paper I picked for this month bridges this gap and looks at a system with what I would consider to be probably the most basic systems biology equation/relationship possible: first-order degradation.

The biological system in question is very hot in biology: RNA-induced silencing. For those of you not in the know, RNA-induced silencing refers to the discovery that short RNAs can act as more than just the "messenger" between the information encoded in DNA and the rest of the cell – they can also act as regulators of other "messenger" RNAs, triggering their destruction. This process not only earned Craig Mello and Andrew Fire a Nobel Prize, but it became a powerful tool for scientists to study living cells (by selectively shutting down certain RNA "messages" with short RNA molecules) and has even been touted as a potential future medicine.

But, one thing scientists have noticed about RNA-induced silencing is that how well it works depends on the RNA it is trying to silence. For some genes, RNA-induced silencing works super-effectively. For others, RNA-induced silencing does a miserable job. Why?

While there are a number of factors at play, the Systems Biologist/physicist would probably go to the chalkboard and start with a simple equation. After all, logic would suggest that the amount of a given RNA in a cell is related to a) how quickly the RNA is being destroyed and b) how quickly the RNA is being created. If you write out the equation and make a few simplifying assumptions (that the rate the particular RNA is being created was relatively constant and that the rate at which a particular RNA was destroyed was proportional to the amount of RNA that is there), then you get a first-order degradation equation which has a few easy-to-understand properties:

    • The higher the speed of RNA creation, the higher the amount of RNA you would expect when the cell was “at balance”
    • The faster the rate at which RNA is destroyed, the lower the “balance” amount of RNA
    • The amount of “at balance” RNA is actually the ratio of the speed of RNA creation to the speed of RNA destruction
    • There are many possible values of RNA creation/destruction rates which could result in a particular “at balance” RNA level

And of course, you can’t forget the kicker:

    • When the rate of RNA creation/destruction is higher, the “at balance” amount of RNA is more stable

Intuitively, this makes sense. If I keep pouring water into a leaky bathtub, the amount of water in the bathtub is likely to be more stable if the rate of water coming in and the rate of water leaking out are both extremely high, because then small changes in leak rate or water flow won't have such a big impact. But intuition and a super-simple equation don't prove anything. We need data to bear this out.
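To make the model concrete, here is a minimal sketch in Python of the first-order degradation equation described above (my own illustration, not code from the paper; the function names and rate constants are invented). The sketch assumes silencing simply adds an extra first-order degradation term on top of the gene's natural decay rate:

```python
# dR/dt = k_syn - k_deg * R  =>  steady state ("at balance") R* = k_syn / k_deg

def steady_state(k_syn, k_deg):
    # "At balance" RNA level: ratio of creation rate to destruction rate.
    return k_syn / k_deg

def silenced_fraction(k_deg, k_si):
    # Fraction of the original steady-state RNA remaining once silencing
    # adds an extra degradation rate k_si on top of the natural rate k_deg.
    return k_deg / (k_deg + k_si)

# Two hypothetical genes tuned to the SAME steady-state level (10 units),
# one with slow turnover and one with fast turnover:
print(steady_state(k_syn=1.0, k_deg=0.1))   # 10.0
print(steady_state(k_syn=10.0, k_deg=1.0))  # 10.0

# Apply the same silencing pressure (k_si = 1.0) to both:
print(silenced_fraction(0.1, 1.0))  # ~0.09: slow-turnover gene knocked down ~91%
print(silenced_fraction(1.0, 1.0))  # 0.5: fast-turnover gene knocked down only 50%
```

Under this toy model, two genes with identical "at balance" levels respond very differently to the same silencing pressure: the fast-turnover gene resists it, which is exactly the direction of the effect the paper reports.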

And, data we have. The next two charts come from Figure 3 and highlight the controlled experiment the researchers set up. The researchers took luciferase, a firefly gene which glows in the dark, and tacked on 0, 3, 5, or 7 repeats of a short gene sequence which increases the speed at which the corresponding messenger RNA is destroyed. You can see below that the luciferase gene with 7 of these repeats produces only 40% of the light of the "natural" luciferase, suggesting that the procedure worked – we have created artificial genes which work the same but which degrade faster!

So, we have our test messenger RNAs. Moment of truth: let's take a look at what happens to the luciferase activity after we subject them to RNA-induced silencing. From Figure 3C:

The chart above shows that the same RNA-induced silencing is much more effective at shutting down “natural” luciferase than luciferase which has been modified to be destroyed faster.

But what about genes other than luciferase? Do those still work too? The researchers applied microarray technology (which allows you to measure at different points in time the amount of almost any specific RNA you may be interested in) to study both the “natural” degradation rate of RNA and the impact of RNA-induced silencing. This chart on the left from Figure 4C shows a weak, albeit distinct positive relationship between rate of RNA destruction (the “specific decay rate”) and resistance to RNA-induced silencing (the “best-achieved mRNA ratio”).

The chart on the right from figure 5A shows the results of another set of experiments with HeLa cells (a common lab cell line). In this case, genes that had RNAs with a long half-life (a slow degradation rate) were the most likely to be extremely susceptible to RNA-induced silencing [green bar], whereas genes with short half-life RNAs (fast degradation rate) were the least likely to be extremely susceptible [red bar].

This was a very interesting study which really made me nostalgic and, I think, provided some interesting evidence for the simple first-order degradation model. However, the results were not as strong as one would have hoped. Take the chart from Figure 5A – although there is clearly a difference between the green, yellow, and red bars, the point has to be made using somewhat arbitrary/odd categorizations: instead of just showing me how decay rate corresponds to the relative impact on RNA levels from RNA-induced silencing, they concocted some bizarre measures of "long", "medium", and "short" half-lives and "fraction [of genes which, in response to RNA-induced silencing, become] strongly repressed". It suggests to me that the data was actually very noisy and didn't paint the clear picture that the researchers had hoped for.

That noise was probably a product of the fact that RNA levels are regulated by many different things, which is not the researchers' fault. What the researchers could have done better, however, was quantify and/or rule out the impact of those other factors on the observed results, using a combination of quantitative analysis and controlled experiments.

Those criticisms aside, I think the paper was a very cool experimental approach at verifying a systems biology-oriented hypothesis built around quite possibly the first equation that a modern systems biology class would cover!

(Images from Figure 3B, 3C, 4C, and 5A)

Paper: Larsson et al, “mRNA turnover limits siRNA and microRNA efficacy.” Molecular Systems Biology 6:433 (Nov 2010) – doi:10.1038/msb.2010.89


Of Ticks and Bacteria

Another month, another paper to blog.

One of the most fascinating things about studying biology is finding out the numerous techniques living things use to survive through adversity. This month's paper digs into an alliance between the tick species Ixodes scapularis and the bacterium Anaplasma phagocytophilum which helps the pair survive through long winter months.

In places where winters can get extremely cold, people will oftentimes use antifreeze to help protect their car engines. Many cold-blooded animals (ectotherms) survive harsh winters in the same way. They produce antifreeze proteins (AFPs) and antifreeze glycoproteins (AFGPs) which are believed to bind to ice crystals and limit their growth and ability to damage the organism.

As Ixodes ticks are fairly active during winter months and are a known carrier of Anaplasma phagocytophilum which is a cause of human granulocytic anaplasmosis, a team of researchers from Yale Medical decided to investigate whether or not Anaplasma had any impact on the ability of Ixodes to survive cold weather.

Panel A of Figure 1 (below) is a survival curve. It shows what % of ticks which have Anaplasma (dark black circles) and which don't (white circles) survived being placed in –20 degrees (Celsius, of course, not Fahrenheit: this is science after all!) for a given amount of time. While all the ticks died after 45 minutes, at any given timepoint more ticks with Anaplasma survived than ticks without. While only ~50% of ticks without Anaplasma survived ~25 minutes in the cold, over 80% of ticks with the bacterium survived!


What could explain this difference? The researchers suspected some sort of antifreeze protein, and, after combing through the tick’s genome, they were able to locate a protein which they called IAFGP which bore a striking resemblance to other antifreeze glycoproteins.  But, was IAFGP the actual antifreeze mechanism which kept Ixodes alive? And did Anaplasma somehow increase its effectiveness?


Panel C of Figure 4 (above) shows the key findings of the experiments designed to answer those two questions. Along the vertical axis, the researchers measured the amount of IAFGP gene expression (relative to the gene expression of a control, actin [a structural protein which shouldn't vary]). Along the horizontal, the researchers tested four different temperature states (23, 10, 4, and 0 Celsius) with Ixodes ticks that were carrying Anaplasma (dark circles) and those that were not (white circles). Each individual circle is an individual tick and the line is the average value of all the ticks in the experimental group (the reason it's not in the middle is that the vertical axis is a log scale). This sort of chart is one of my favorites, as it packs a lot of information into one small area without generating too much noise:

    • The lines get higher the further to the right we get: Translation: when temperatures go down, IAFGP levels go up – as you would expect if IAFGP was an antifreeze coping mechanism for Ixodes. (And if you could see Panel B of Figure 4, you’ll notice that IAFGP levels at 4 Celsius and 0 Celsius are statistically significantly higher than at 23 and 10)
    • The black dots on average are higher than the white dots: Translation: just carrying Anaplasma seems to push Ixodes’s “natural” levels of antifreeze protein up. And, judging from the P value comparisons, the differences we are seeing are statistically significant.

So, it would seem that IAFGP is somehow related to the effect of Anaplasma on Ixodes – but is that the only link? To test this, the researchers used an experimental technique called RNA interference (RNAi), which allows a researcher to shut down the expression of a particular protein. In this case, the researchers shut down IAFGP to see what would happen.


These results are interesting. Although, sadly, the charts (Panels B and F of Figure 5) are not on the same scale and are for different experiments, the numbers are striking. In Panel B, the researchers tested for the survival of ticks which were given a control RNAi (simulates the RNAi process except without what it takes to actually silence IAFGP, white circles) versus those which had IAFGP shut down via RNAi (white triangles). As you can see from the chart, after 25 min at –20 degrees Celsius, the control group hit 50% survival whereas the RNAi group’s survival rate plummeted to only 20%.

The researchers then repeated the experiment with Ixodes ticks which were given control RNAi (black circles) vs. the real thing (black triangles) and then allowed to feed on Anaplasma-infected mice for 48-hours. These ticks were then tested for survival after 50 min at –20 degrees Celsius. As you can see in Panel F, a 75% survival level amongst Anaplasma carrying ticks became less than 50% when IAFGP was shut down with RNAi.

All in all, a very simple positive-control, negative-control experiment showing a pretty clear linkage between Anaplasma, IAFGP gene expression levels, and the ability of Ixodes ticks to survive the cold. However, a few things still bug me about the study and stand out as clear next steps:

    • Panels B and F of Figure 5 are fundamentally different experiments, but presented as comparable. At face value, it's hard to tell if IAFGP is the primary mechanism for how Anaplasma alters tick response to the cold. The survival levels of Anaplasma-carrying ticks when IAFGP is shut down are still higher than the survival levels of Anaplasma-free ticks which also undergo the RNAi – but this could be a result of the different experimental conditions (feeding conditions and time). The paper text also reveals that the –20 degrees for 50 min condition was selected because it was supposed to be the point at which there was 50% survival for that particular experimental condition – but clearly, the control group was experiencing 75% survival (and the group with IAFGP shut off was at 50%). Something is off here… but I'm not sure what.
    • Most of this research was conducted on a very abstract level – showing the impact of IAFGP expression levels on cold survival. While the RNAi experiments are very compelling, the lack of clear functional studies is problematic in my mind as I cannot tell from this data if IAFGP is directly responsible for cold survival or linked to other, potentially more important responses to cold.
    • No mechanism was proposed for how Anaplasma increases IAFGP levels in Ixodes. Understanding that would be very powerful and could unveil a whole world of cross-species gene regulation which we were previously unaware of (and could reveal new potential targets for medical treatments of diseases borne by insects).

Regardless of my criticisms, though, this was an interesting study with a very cool result. However, it's probably of no comfort to people who have to deal with ticks which can survive cold winter months…

(Image credit – tick) (Figures 1, 4, and 5 from paper)

Paper: Neelakanta, Girish, et al. "Anaplasma phagocytophilum induces Ixodes scapularis ticks to express an antifreeze glycoprotein gene that enhances their survival in the cold." Journal of Clinical Investigation 120:9, 3179-3190 (Sep 2010) – doi:10.1172/JCI42868


Science of Social Networks

Another month has gone by which means another paper to cover!

This month, instead of covering my usual stomping grounds of biology or chemistry, I decided to look into something a little bit more related to my work in venture capital: social networks!

The power behind the social network concept goes beyond just the number of users. Facebook’s 500 million users is pretty damn compelling, but what brings it home is that by focusing on relationships between people rather than the people themselves, social networks turn into a very interesting channel for information consumption and influence.

This month's paper (from Damon Centola at MIT Sloan) covered influence – specifically, how different social network structures (or "topologies" if you want to be snooty and academic about it) might differ in how they influence the people in the networks. More specifically, it asked which kind of network you would expect to be better able to influence behavior: one which is more "viral", in the sense that connections aren't clustered (i.e., I'm no more likely to be friends with my friend's friends than with people my friends don't know), or one which is more "clustered" (i.e., my friends are likely to be friends with one another).

It's an interesting question, and I found this paper notable for two reasons. First, it's the most rigorous social networking experiment I've ever seen. Granted, this isn't saying very much. Most social network/graph studies are observational, but I was impressed by the methodology and the attempt to strip out as much bias and as many extraneous factors as possible:

  • The behavior being tested was whether or not participants would sign up for and re-visit a particular health forum. This forum had to be valuable enough to get people to use it (and actually contribute to it), but also unknown and inaccessible to the rest of the world (so as to avoid additional social cues from the user's "real world" social network).
  • The author (and I do mean one single author: pretty rare these days for a Science paper, as far as I know) created different social graphs which were superficially identical (same number of users, same number of contacts per user) but had the different network structures he wanted to test (one structure had subgroups of tightly inter-connected users; the other had random connections scattered across the network). The figure below shows one example of the network structures: the black lines show connections between people. On the left-hand side is the highly clustered social graph – the individual users are only connected to people "next to them". The right-hand side is the more "viral" social graph, where users can be connected to any user across the network.
  • The users made profiles (with user name, avatar, and stated health interests), but to preserve anonymity (and limit the impact of a person's "real world" social network on a user), the user names were blinded and users were not allowed to directly communicate (except in an anonymized fashion through the health forum) or add/remove contacts.
  • However, whenever a user’s contacts participated in the health forum, the user would be notified.

The result was a somewhat bizarre and artificial "network" – but it's certainly a very creative (and probably as good as it can get) way of turning the study of social networks into a rigorous, controlled experiment.

Second, the conclusion is interesting and has many implications for people who want to use social networks to influence people. Virality may be a remarkably fast way to get people to hear about something, but the paper concludes that virality does not necessarily translate into people acting. The author conducted 6 different trials with slightly different network topologies (number of users ranged from 98 to 144, number of contacts per user ranged from 6 to 8). The results are in the graph below which shows the fraction of the users who joined the forum over time. As you can see, the clustered networks (solid circles) had much higher and faster adoption than the “viral” networks (open triangles):


Why would this be? The author's working theory is that while "viral" networks might be faster at disseminating information (e.g., a funny video), clustered networks work better at driving behavior because you get more reinforcement from your friends. In a clustered network, if one of your friends joins the forum, chances are the two of you have a mutual friend who will also join. At a very basic level, this means you get the same cue to join the forum from two of your friends. In an un-clustered network, however, if one of your friends joins the forum, the two of you are less likely to have a mutual friend, and so you are less likely to receive that second cue.

Does this matter? According to the study, someone who had two contacts join the forum was ~75% more likely to join than someone who only had one contact join. And someone who had four contacts join was ~150% more likely to join than someone who only had one. While this effect rapidly diminishes with more contacts (having five or six contacts join made relatively little difference compared with four), it's a powerful illustration of quality vs. speed in a social network – something which is also borne out by the fact that while only 15% of people who had just one contact join returned to the forum, 35-45% of users who had multiple contacts join did.
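The reinforcement story above can be caricatured in a few lines of code. Below is a toy "complex contagion" sketch of my own – not the author's actual experimental setup; the graph constructions, the adopt-after-two-neighbors rule, and all parameters are invented for illustration. A node adopts only once at least two of its neighbors have, and we compare a highly clustered ring lattice against a randomly wired network with the same connection budget:

```python
import random

def ring_lattice(n, k=4):
    # Each node linked to its k nearest neighbors (k/2 per side): high clustering.
    return {i: {(i + d) % n for d in range(-(k // 2), k // 2 + 1) if d != 0}
            for i in range(n)}

def rewired(n, k=4, seed=0):
    # Same connection budget, but edges scattered randomly: low clustering.
    # (Stub-pairing may drop a few self/duplicate edges; fine for a sketch.)
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    stubs = [i for i in range(n) for _ in range(k)]
    rng.shuffle(stubs)
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    return nbrs

def cascade(nbrs, seeds, threshold=2, rounds=200):
    # Complex contagion: adopt only after >= `threshold` neighbors have adopted.
    adopted = set(seeds)
    for _ in range(rounds):
        new = {i for i in nbrs if i not in adopted
               and len(nbrs[i] & adopted) >= threshold}
        if not new:
            break
        adopted |= new
    return len(adopted) / len(nbrs)

n = 120
print(cascade(ring_lattice(n), seeds={0, 1}))  # 1.0: spreads around the whole ring
print(cascade(rewired(n), seeds={0, 1}))       # stalls near the two seed nodes
```

In the clustered lattice, every frontier node soon has two adopting neighbors, so the behavior marches around the ring; in the random network, a node almost never sees two adopters at once, so the cascade stalls – the same quality-over-speed effect the study observed.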

This was definitely a very impressive and well-designed study. While it would be fair to attack the study for its artificiality, I don't really think there's any other way to systematically strip out the biases that are intrinsic to most observational (i.e., non-controlled-experiment) studies of social networks.

Where I do think this was lacking (and maybe the researcher has already teed this up) is the black-and-white nature of the study. What I mean by this is while I find the argument that network clustering helps drive greater behavior plausible, I think there needs to be a more rigorous/mathematical conception – how “clustered” does a network need to be? If a network is overly clustered, then it loses the virality which helps to spread ideas more quickly and widely – is there an optimal balance somewhere in the middle? Also, the paper only dug, on a very superficial level, into how network size and the number of contacts per user might impact this. I think further experimental and mathematical modeling/computational studies would be nice to really flesh this out.

Paper: Centola, Damon. “The Spread of Behavior in an Online Social Network Experiment.” Science 329 (Sep 2010) – doi:10.1126/science.1185231

(Image credit – social network diagram) (Figures 1 and 2 from paper)


How You Might Cure Asian Glow

The paper I read this past month is something that is very near and dear to my heart. As is commonly known, individuals of Asian ancestry are more likely to experience dizziness and flushed skin after drinking alcohol. This is due to the prevalence of a genetic defect in the Asian population which affects an enzyme called Aldehyde Dehydrogenase 2 (ALDH2). ALDH2 processes one of the by-products of alcohol consumption (acetaldehyde). In people with the genetic defect, ALDH2 works very poorly, so they build up higher levels of acetaldehyde, which leads them to get drunk (and thus hung-over/sick/etc.) quicker. This is a problem for someone like me, who needs to drink a (comically) large amount of water to be able to socialize properly while drinking wine/beer/liquor. Interestingly, the anti-drinking drug Disulfiram (sold as "Antabuse" and "Antabus") helps alcoholics keep off alcohol by basically shutting down a person's ALDH2, effectively giving them "Asian alcohol-induced flushing syndrome" (aka an Asian person's inability to hold their liquor) and making them get drunk and sick very quickly.


So, what can you do? At this point, nothing really (except, either avoid alcohol or drink a ton of water when you do drink). But, I look forward to the day when there may actually be a solution. A group at Stanford recently identified a small molecule, Alda-1 (chemical structure above), which not only increases the effectiveness of normal ALDH2, but can help “rescue” defective ALDH2!

Have we found the molecule which I have been searching for ever since I started drinking? Jury’s still out, but the same group at Stanford partnered with structural biologists at Indiana University to conduct some experiments on Alda-1 to try to find out how it works.

To do this (and this is why the paper was published in Nature Structural and Molecular Biology rather than another journal), they used a technique called X-ray crystallography to "see" if (and how) Alda-1 interacts with ALDH2. Some of the results of these experiments are shown on the left. The top (panel b) shows a 3D structure of the "defective" version of ALDH2. If you're new to structural biology papers, this will take some getting used to, but if you look carefully, you can see that ALDH2 is a tetramer: there are 4 identical pieces (top-left, top-right, bottom-left, bottom-right) which are attached together in the middle.

It's not clear from this picture, but the defective version of the enzyme differs from the normal one in two ways: it is unable to maintain the 3D structure needed to link up with a coenzyme called NAD+ (a chemical that enzymes carrying out this sort of reaction need in order to work properly), and its "active site" (the part of the enzyme which actually carries out the reaction) is disrupted.

So what does Alda-1 do, then? In the bottom (panel c), you can see where the Alda-1 molecules (colored in yellow) sit when they interact with ALDH2. While the yellow molecules have a number of impacts on ALDH2's 3D structure, the most obvious changes are highlighted in pink (those have no clear counterpart in panel b). This is the secret of Alda-1: it actually changes the shape of ALDH2, (partially) restoring the enzyme's ability to bind NAD+ and carry out the chemical reactions needed to process acetaldehyde, all without directly touching the active site (something you can't see in the panel I shared above, but can make out from other X-ray crystallography models in the paper).

The result? If you look at the chart below, you'll see two relationships at play. First, the greater the amount of the coenzyme NAD+ (on the horizontal axis), the faster the reaction speed (on the vertical axis). But if you increase the amount of Alda-1 from 0 uM (the bottom curve) to 30 uM (the top curve), you see a dramatic increase in the enzyme's reaction speed for the same amount of NAD+. So, does Alda-1 activate ALDH2? Judging from this chart, it definitely does.
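Curves like the ones in that chart are classic Michaelis-Menten enzyme kinetics, and the qualitative effect of an activator can be sketched by tweaking the two parameters that describe such a curve: Vmax (the enzyme's top speed) and Km (how much substrate it takes to reach half of it). The numbers below are made up purely for illustration; they are not the paper's fitted values for ALDH2:

```python
def velocity(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical kinetic parameters (illustrative only, not from the paper):
# Alda-1 is modeled as raising Vmax and lowering Km for NAD+.
nad = [0.1, 0.5, 1.0, 5.0, 20.0]  # NAD+ concentration, arbitrary units
no_alda   = [velocity(s, vmax=1.0, km=5.0) for s in nad]
with_alda = [velocity(s, vmax=4.0, km=1.0) for s in nad]

for s, v0, v1 in zip(nad, no_alda, with_alda):
    print(f"[NAD+]={s:5.1f}  v(no Alda-1)={v0:.3f}  v(+Alda-1)={v1:.3f}")
```

Under these assumptions the "+Alda-1" curve sits above the untreated curve at every NAD+ concentration, which is the same qualitative pattern the figure shows.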

Alda-1 is particularly interesting because most of the chemicals/drugs which we are able to develop work by breaking, de-activating, or inhibiting something. Have a cold? Break the chemical pathways which lead to runny noses. Suffering from depression? De-activate the process which cleans up serotonin ("happiness" chemicals in the brain) too quickly. After all, it's much easier to break something than it is to fix/create something. But Alda-1 is actually an activator (rather than a de-activator), which the authors of the study leave as a tantalizing opportunity for medical science:

This work suggests that it may be possible to rationally design similar molecular chaperones for other mutant enzymes by exploiting the binding of compounds to sites adjacent to the structurally disrupted regions, thus avoiding the possibility of enzymatic inhibition entirely independent of the conditions in which the enzyme operates.

If only it were that easy (it’s not)…

Where should we go from here? Frankly, while the paper tackled a very interesting topic in a pretty rigorous fashion, I felt that a lot of the conclusions being drawn were not clear from the presented experimental results (which is why this post is a bit vague on some of those details).

I certainly understand the difficulty when the phenomena under study are molecular in nature (does the enzyme work? are the amino acids in the right location?). But I personally felt a significant part of the paper was more conjecture than evidence, and while I'm sure the folks making the hypotheses are very experienced, I would like to see more experimental data to back up their theories. A well-designed set of site-directed mutagenesis experiments (mutating specific parts of ALDH2 in the lab to test the hypotheses that the group put out), paired with further rounds of X-ray crystallography, could help shed a little more light on their fascinating idea.

Paper: Perez-Miller et al. "Alda-1 is an agonist and chemical chaperone for the common human aldehyde dehydrogenase 2 variant." Nature Structural and Molecular Biology 17:2 (Feb 2010) – doi:10.1038/nsmb.1737

(Image credit) (Image credit) (Figure 4 from paper)


Diet Coke + Mentos = Paper

Unless you just discovered YouTube yesterday, you've probably seen countless videos of (and maybe even tried?) the infamous Diet Coke + Mentos reaction… which brings us to the subject of this month's (belated) paper.

An enterprising physics professor from Appalachian State University decided to have her sophomore physics class take a fairly rigorous look at what drives the Diet Coke + Mentos reaction and what factors might influence its strength and speed. Not only were they able to publish their results in the American Journal of Physics, but the students were also given an opportunity to present their findings in a poster session (Professor Coffey reflected on the experience in a presentation she gave). In my humble opinion, this is science education at its finest: instead of having students re-hash boring experiments whose results they already know, this let them do fairly original research in a field they probably had more interest in than the typical science lab course.

So, what did they find?

The first thing they found is that it's not an acid-base reaction. A lot of people, myself included, believed the Diet Coke + Mentos reaction was the same as the baking soda + vinegar "volcano" reaction that we all did as kids. Apparently, we were dead wrong, as the paper points out:

The pH of the diet Coke prior to the reaction was 3.0, and the pH of the diet Coke after the mint Mentos reaction was also 3.0. The lack of change in the pH supports the conclusion that the Mint Mentos–Diet Coke reaction is not an acid-base reaction. This conclusion is also supported by the ingredients in the Mentos, none of which are basic: sugar, glucose, syrup, hydrogenated coconut oil, gelatin, dextrin, natural flavor, corn starch, and gum arabic … An impressive acid-base reaction can be generated by adding baking soda to Diet Coke. The pH of the Diet Coke after the baking soda reaction was 6.1, indicating that much of the acid present in the Diet Coke was neutralized by the reaction.

Secondly, the "reaction" is not chemical (no new compounds are created) but physical: the Mentos makes bubbles easier to form. A Mentos triggers bubble formation because its surface is extremely rough, which gives bubbles sites to nucleate on (similar to how adding a string or popsicle stick to an oversaturated mixture of sugar and water is used to make rock candy). But that doesn't explain why the Mentos + Diet Coke reaction works so well. The logic blew my mind but, in retrospect, is pretty simple. Certain liquids are more "bubbly" by nature – think soapy water vs. regular water. Why? Because the energy needed to form a bubble is lower than the energy available from the environment (e.g., thermal energy). So, the question is, what makes a liquid more "bubbly"? One way is to heat the liquid (heating up Coke makes it more bubbly because the carbon dioxide inside the soda has more thermal energy to draw upon), which the students were able to confirm when they looked at how much mass was lost during a Mentos + Diet Coke reaction at three different temperatures (Table 3 below):
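The temperature effect can also be roughly rationalized with Henry's law: the warmer the soda, the less CO2 it can hold in solution, so more gas is "available" to escape. Here's a back-of-the-envelope sketch using textbook-style constants for CO2 (solubility of about 0.034 mol/(L·atm) at 25 °C, van 't Hoff temperature coefficient of about 2400 K) and a rough 2 atm of bottle pressure; these are my assumptions, not numbers from the paper:

```python
import math

def co2_solubility(temp_k, p_atm=2.0, kh_298=0.034, c=2400.0):
    """Dissolved CO2 (mol/L) via Henry's law, with the solubility constant
    scaled in temperature by the van 't Hoff relation. Constants are
    textbook-style approximations, not values from the paper."""
    kh = kh_298 * math.exp(c * (1.0 / temp_k - 1.0 / 298.0))
    return kh * p_atm

temps_c = (5, 20, 45)  # a cold, room-temperature, and warm bottle
sols = [co2_solubility(t + 273.15) for t in temps_c]
for t, s in zip(temps_c, sols):
    print(f"{t:2d} C: {s:.4f} mol dissolved CO2 per litre")
```

The dissolved amount drops steadily as the bottle warms up, consistent with the students' observation that hotter soda loses more mass in the reaction.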

What else? It turns out that the chemicals dissolved in a liquid can also change the ease with which bubbles are made. Physicists/chemists will recognize this "ease" as surface tension (how tightly the surface of a liquid pulls on itself), which you can see visually as a change in the contact angle (the angle that the bubble forms against a flat surface, see below):

The larger the angle, the stronger the surface tension (the more tightly the liquid tries to pull in on itself to become a sphere). So, what happens when we add the artificial sweetener aspartame and the preservative potassium benzoate (both ingredients in Diet Coke) to water? As you can see in Figure 4 below, the contact angles in (b) [aspartame] and (c) [potassium benzoate] are smaller than in (a) [pure water]. Translation: if you add aspartame and/or potassium benzoate to water, you reduce the amount of work the solution needs to do to create a bubble. Table 4 below shows the contact angles of a variety of solutions that the students tested, as well as the amount of work needed to create a bubble relative to pure water:

This table also shows why you use Diet Coke rather than regular Coke (basically sugar water) for the Mentos trick – regular Coke has a higher contact angle (and needs ~20% more energy to make a bubble).
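The link between contact angle and the work of making a bubble can be sketched with the classical heterogeneous-nucleation factor f(θ) = (2 + cos θ)(1 − cos θ)²/4, which scales the energy barrier for forming a bubble on a surface and shrinks to zero as the contact angle does. This is a textbook formula, not necessarily the exact expression the paper used, and the specific angles below are hypothetical, just to show the direction of the effect:

```python
import math

def barrier_factor(theta_deg):
    """Heterogeneous-nucleation factor f(theta) = (2 + cos t)(1 - cos t)^2 / 4.
    Multiplies the bubble-formation energy barrier; smaller angle -> less work."""
    t = math.radians(theta_deg)
    return (2 + math.cos(t)) * (1 - math.cos(t)) ** 2 / 4

# Hypothetical contact angles for illustration (not the paper's measurements)
angles = {"pure water": 90.0, "water + sweetener": 70.0}
f_ref = barrier_factor(angles["pure water"])
for label, theta in angles.items():
    rel = barrier_factor(theta) / f_ref
    print(f"{label:18s} theta = {theta:5.1f} deg, work relative to pure water: {rel:.2f}")
```

Even a modest drop in contact angle cuts the relative work substantially, which is the same direction of effect the students measured for the Diet Coke additives.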

Another factor the paper considers is how long the dropped Mentos takes to sink to the bottom. The faster a Mentos falls, the longer the average distance a bubble needs to travel to reach the surface. Since rising bubbles act as nucleation sites for still more bubbles, the Mentos that fall to the bottom fastest produce the strongest eruptions. As the paper points out:

The speed with which the sample falls through the liquid is also a major factor. We used a video camera to measure the time it took for Mentos, rock salt, Wint-o-Green Lifesavers, and playground sand to fall through water from the top of the water line to the bottom of a clear 2 l bottle. The average times were 0.7 s for the Mentos, 1.0 s for the rock salt and the Lifesavers, and 1.5 s for the sand … If the growth of carbon dioxide bubbles on the sample takes place at the bottom of the bottle, then the bubbles formed will detach from the sample and rise up the bottle. The bubbles then act as growth sites, where the carbon dioxide still dissolved in the solution moves into the rising bubbles, causing even more liberation of carbon dioxide from the bottle. If the bubbles must travel farther through the liquid, the reaction will be more explosive.

So, in conclusion, what makes a Diet Coke + Mentos reaction stronger?

  • Temperature (hotter = stronger)
  • Adding substances which reduce the surface tension/contact angle
  • Increasing the speed at which the Mentos sink to the bottom (faster = stronger)

I wish I had done something like this when I was in college! The paper also goes into a lot of other things, like using an atomic force microscope and scanning electron microscopes to measure the roughness of the Mentos surface, so if you're interested in other factors that can affect the strength of the reaction (or if you're a science teacher looking for a cool project for your students), I'd strongly encourage taking a look at the paper!

Paper: Coffey, T. “Diet Coke and Mentos: What is really behind this physical reaction?”. American Journal of Physics 76:6 (Jun 2008) – doi: 10.1119/1.2888546

(Table 3, Figure 4, Table 5 from paper) (Contact angle description from presentation)