On violence and gratitude.

Although I consider myself a benevolent tyrant, some of my cells have turned against me.  Mutinous, they were swayed by the propaganda of a virus and started churning out capsids rather than helping me type this essay.  Which leaves me sitting at a YMCA snack room table snerking, goo leaking down my throat and out my nose.

Unconsciously, I take violent reprisal against the traitors.  I send my enforcers to put down the revolt – they cannibalize the still-living rebels, first gnawing the skin, then devouring the organs that come spilling out.  Then the defector dies.

CD8+ T cell destruction of infected cells by Dananguyen on Wikimedia.

My cells are also expected to commit suicide whenever they cease to be useful for my grand designs.  Any time a cell loses the resolve to commit suicide, my enforcers put it down.  Unless my internal surveillance state fails to notice in time – the other name for a cell that doesn’t want to commit suicide is “cancer,” and even the most robust immune system might be stymied by cancer when the traitor’s family grows too large.

Worse is when the rebels “metastasize,” like contemporary terrorists.  This word signifies that the family has sent sleeper agents to infiltrate the world at large, attempting to develop new pockets of resistance in other areas.  Even if my enforcers crush one cluster of rebellion, others could flourish unchecked.

How metastasis occurs. Image by the National Cancer Institute on Wikimedia.

I know something that perhaps they don’t – if their rebellion succeeds, they will die.  A flourishing cancer sequesters so many resources that the rest of my body would soon prove too weak to seek food and water, causing every cell inside of me to die.

But perhaps they’ve learned my kingdom’s vile secret – rebel or not, they will die.  As with any hereditary monarchy, a select few of my cells are privileged above all others.  And it’s not the cells in my brain that rule.

Every “somatic cell” is doomed.  These cells compose my brain and body.  Each has slight variations from “my” genome – every round of cell division introduces random mutations, making every cell’s DNA slightly different from its neighbors’.

The basic idea behind Richard Dawkins’s The Selfish Gene is that each of these cells “wants” for its genome to pass down through the ages.  Dawkins argued that familial altruism is rational because any sacrifice bolsters the chances for a very similar genome to propagate.  Similarly, each somatic cell is expected to sacrifice itself to boost the odds for a very similar genome carried by the gametes.
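
The arithmetic here is Hamilton’s rule, which is worth spelling out: an act of self-sacrifice is evolutionarily favored whenever r × B > C, where r is the genetic relatedness between the altruist and the beneficiary, B is the benefit conferred, and C is the cost paid.  Between a somatic cell and the gametes of the same body, r is very nearly 1 – they’re clones, give or take those stray mutations – so almost any sacrifice pencils out.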

Only gametes – the heralded population of germ cells in our genitalia – can possibly see their lineage continue.  All others are like the commoners who (perhaps foolishly) chant their king or kingdom’s name as they rush into battle to die.  I expect them to show absolute fealty to me, their tyrant.  Apoptosis – uncomplaining suicide – was required of many before I was even born, like when cells forming the webbing between my fingers slit their own bellies in dramatic synchronized hara-kiri.

Human gametes by Karl-Ludwig Poggemann on Flickr.

Any evolutionary biologist could explain that each such act of sacrifice was in a cell’s mathematical best interest.  But if I were a conscious somatic cell, would I submit so easily?  Or do I owe some sliver of respect to the traitors inside me?

The world is a violent place.  I’m an extremely liberal vegan environmentalist – yet it takes a lot of violence to keep me going.

From Suzana Herculano-Houzel’s The Human Advantage:

Animals that we are, we must face, every single day of our lives, the consequences of our most basic predicament: we don’t do photosynthesis.  For lack of the necessary genes, we don’t just absorb carbon from the air around us and fix it as new bodily matter with a little help from sunlight.  To survive, we animals have to eat other living organisms, whether animal, vegetable, or fungus, and transform their matter into ours.

And yet the violence doesn’t begin with animals.  Photosynthesis seems benign by comparison – all you’d need is light from the sun! – unless you watch a time-lapsed video of plant growth in any forest or jungle.

The sun casts off electromagnetic radiation without a care in the world, but the amount of useful light reaching any particular spot on earth is limited.  And plants will fight for it.  They race upwards, a sprint that we sometimes fail to notice only because they’ve adopted a timescale of days, years, and centuries rather than our seconds, hours, and years.  They reach over competitors’ heads, attempting to grab any extra smidgen of light … and starving those below.  Many vines physically strangle their foes.  Several trees excrete poison from their roots.  Why win fair if you don’t have to?  A banquet of warm sunlight awaits the tallest plant left standing.

And so why, in such a violent world, would it be worthwhile to be vegan?  After all, nothing wants to be eaten.  Sure, a plant wants for animals to eat its fruit – fruits and animals co-evolved in a system of gift exchange.  The plant freely offers fruit, with no way of guaranteeing recompense, in hope that the animal might plant its seeds in a useful location.

But actual pieces of fruit – the individual cells composing an apple – probably don’t want to be eaten, any more than cancers or my own virus-infected cells want to be put down for the greater good.

A kale plant doesn’t want for me to tear off its leaves and dice them for my morning ramen.

But by acknowledging how much sacrifice it takes to allow us to be typing or reading or otherwise reaping the pleasures of existence, I think it’s easier to maintain awe.  A sense of gratitude toward all that we’ve been given.  Most humans appreciate things more when we think they cost more.

We should appreciate the chance to be alive.  It costs an absurd amount for us to be here.

But, in the modern world, it’s possible to have a wonderful, rampantly hedonistic life as a vegan.  Why make our existence cost more when we don’t have to?  A bottle of wine tastes better when we’re told that it’s a $45 bottle and not a $5 bottle, but it won’t taste any better if you tell somebody, “It’s $45 wine, but you’ll have to pay $90 for it.”

Personally, I’d think it tasted worse, each sip with the savor of squander.

On ethics and Luke Dittrich’s “Patient H.M.”

The scientific method is the best way to investigate the world.

Do you want to know how something works?  Start by making a guess, consider the implications of your guess, and then take action.  Muck something up and see if it responds the way you expect it to.  If not, make a new guess and repeat the whole process.

Image by Derek K. Miller on Flickr.

This is slow and arduous, however.  If your goal is not to understand the world, but rather to convince other people that you do, the scientific method is a bad bet.  Instead you should muck something up, see how it responds, and then make your guess.  When you know the outcome in advance, you can appear to be much more clever.

A large proportion of biomedical science publications are inaccurate because researchers follow the second strategy.  Given our incentives, this is reasonable.  Yes, it’s nice to be right.  It’d be cool to understand all the nuances of how cells work, for instance.  But it’s more urgent to build a career.

Both labs I worked in at Stanford cheerfully published bad science.  Unfortunately, it would be nearly impossible for an outsider to notice the flaws because primary data aren’t published.

A colleague of mine obtained data by varying several parameters simultaneously, but then graphed his findings against only one of these.  As it happens, his observations were caused by the variable he left out of his charts.  Whoops!
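
To make the trap concrete, here’s a minimal, hypothetical sketch – invented for illustration, not my colleague’s actual analysis or data – of how varying two parameters in lockstep lets the wrong one take credit:

```python
import numpy as np

# Hypothetical example: two parameters varied together across 50 samples.
rng = np.random.default_rng(0)
temperature = np.linspace(20, 40, 50)   # the variable that gets graphed
salt = np.linspace(0.1, 1.0, 50)        # the variable left out of the charts

# The true effect depends only on the salt concentration.
signal = 3.0 * salt + rng.normal(0.0, 0.05, 50)

# Naive analysis: correlate the signal with temperature alone.
r = np.corrcoef(temperature, signal)[0, 1]
print(f"correlation with temperature: {r:.2f}")  # ~0.99 – looks causal, isn't
```

Plotted against temperature, the data look immaculate; only the lab notebook knows that the salt moved too.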

(Nobel laureate Arieh Warshel quickly responded that my colleague’s conclusions probably weren’t correct.  Unfortunately, Warshel’s argument was based on unrealistic simulations – in his model, a key molecule spins in unnatural ways.  This next sentence is pretty wonky, so feel free to skip it, but … to show the error in my colleague’s paper, Warshel should have modeled multiple molecules entering the enzyme active site, not molecules entering backward.  Whoops!)

Another colleague of mine published his findings about unusual behavior from a human protein.  But then his collaborator realized that they’d accidentally purified and studied a similarly-sized bacterial protein, and were attempting to map its location in cells with an antibody that didn’t work.  Whoops!

No apologies or corrections were ever given.  They rarely are, especially not from researchers at our nation’s fanciest universities.  When somebody with impressive credentials claims a thing is true, people often feel ready to believe.

Indeed, for my own thesis work, we wanted to test whether two proteins are in the same place inside cells.  You can do this by staining with light-up antibodies for each.  If one antibody is green and the other is red, you’ll know how often the proteins are in the same place based on how much yellow light you see.

Before conducting the experiment, I wrote a computer program that would assess the data.  My program could identify various cellular structures and check the fraction that were each color.
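
The core of such a program is simple enough to caricature.  Here’s a hypothetical minimal sketch – the thresholding stands in for real structure detection, and every number is invented for illustration:

```python
import numpy as np

def colocalization_fraction(green, red, threshold=0.5):
    """Fraction of green-positive pixels that are also red-positive.

    green, red: 2D arrays of fluorescence intensities (same shape).
    threshold: cutoff for calling a pixel "stained" – a stand-in
               for real structure detection.
    """
    green_mask = green > threshold
    red_mask = red > threshold
    overlap = green_mask & red_mask        # pixels that would look yellow
    return overlap.sum() / max(green_mask.sum(), 1)

# Toy usage: two random "images" – expect roughly half the green
# pixels to overlap red by chance alone.
rng = np.random.default_rng(1)
green = rng.random((64, 64))
red = rng.random((64, 64))
print(f"colocalized fraction: {colocalization_fraction(green, red):.2f}")
```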

As it happened, I didn’t get the results we wanted.  My data suggested that our guess was wrong.

But we couldn’t publish that.  And so my advisor told me to count again, by hand, claiming that I should be counting things of a different size.  And then she continued to revise her instructions until we could plausibly claim that we’d seen what we expected.  We made a graph and published the paper.

This is crummy.  It’s falsehood with the veneer of truth.  But it’s also tragically routine.


Luke Dittrich intertwines two horror stories about scientific ethics in Patient H.M.: A Story of Memory, Madness, and Family Secrets.

One of these nightmares is driven by the perverse incentives facing early neurosurgeons.  Perhaps you noticed, above, that an essential step of the scientific method involves mucking things up.  You can’t tell whether your guesses are correct until you perform an experiment.  Dittrich provides a lovely summary of this idea:

The broken illuminate the unbroken.

An underdeveloped dwarf with misfiring adrenal glands might shine a light on the functional purpose of these glands.  An impulsive man with rod-obliterated frontal lobes [Phineas Gage] might provide clues to what intact frontal lobes do.

The history of modern brain science has been particularly reliant on broken brains, and almost every significant step forward in our understanding of cerebral localization – that is, discovering what functions rely on which parts of the brain – has relied on breakthroughs provided by the study of individuals who lacked some portion of their gray matter.

. . .

While the therapeutic value of the lobotomy remained murky, its scientific potential was clear: Human beings were no longer off-limits as test subjects in brain-lesioning experiments.  This was a fundamental shift.  Broken men like Phineas Gage and Monsieur Tan may have always illuminated the unbroken, but in the past they had always become broken by accident.  No longer.  By the middle of the twentieth century, the breaking of human brains was intentional, premeditated, clinical.

Dittrich was dismayed to learn that his own grandfather had participated in this sort of research, intentionally wrecking at least one human brain in order to study the effects of his meddling.

Lacking a specific target in a specific hemisphere of Henry’s medial temporal lobes, my grandfather had decided to destroy both.

This decision was the riskiest possible one for Henry.  Whatever the functions of the medial temporal lobe structures were – and, again, nobody at the time had any idea what they were – my grandfather would be eliminating them.  The risks to Henry were as inarguable as they were unimaginable.

The risks to my grandfather, on the other hand, were not.

At that moment, the riskiest possible option for his patient was the one with the most potential rewards for him.


By destroying part of a brain, Dittrich’s grandfather could create a valuable research subject.  Yes, there was a chance of curing the patient – Henry agreed to surgery because he was suffering from epileptic seizures.  But Henry didn’t understand what the proposed “cure” would be.  This cure was very likely to be devastating.

At other times, devastation was the intent.  During an interview with one of his grandfather’s former colleagues, Dittrich is told that his grandmother was strapped to the operating table as well.

“It was a different era,” he said.  “And he did what at the time he thought was okay: He lobotomized his wife.  And she became much more tractable.  And so he succeeded in getting what he wanted: a tractable wife.”


Compared to slicing up a brain so that its bearer might better conform to our society’s misogynistic expectations of female behavior, a bit of scientific fraud probably doesn’t sound so bad.  Which is a shame.  I love science.  I’ve written previously about the manifold virtues of the scientific method.  And we need truth to save the world.

Which is precisely why those who purport to search for truth need to live clean.  In the cut-throat world of modern academia, they often don’t.

Dittrich investigated the rest of Henry’s life: after part of his brain was destroyed, Henry became a famous study subject.  He unwittingly enabled the career of a striving scientist, Suzanne Corkin.

Dittrich writes that

Unlike Teuber’s patients, most of the research subjects Corkin had worked with were not “accidents of nature” [a bullet to the brain, for instance] but instead the willful products of surgery, and one of them, Patient H.M., was already clearly among the most important lesion patients in history.  There was a word that scientists had begun using to describe him.  They called him pure.  The purity in question didn’t have anything to do with morals or hygiene.  It was entirely anatomical.  My grandfather’s resection had produced a living, breathing test subject whose lesioned brain provided an opportunity to probe the neurological underpinnings of memory in unprecedented ways.  The unlikelihood that a patient like Henry could ever have come to be without an act of surgery was important.

. . .

By hiring Corkin, Teuber was acquiring not only a first-rate scientist practiced in his beloved lesion method but also by extension the world’s premier lesion patient.

. . .

According to [Howard] Eichenbaum, [a colleague at MIT,] Corkin’s fierceness as a gatekeeper was understandable.  After all, he said, “her career is based on having that exclusive access.”

Because Corkin had (coercively) gained exclusive access to this patient, most of her claims about the workings of memory would be difficult to contradict.  No one could conduct the experiments needed to rebut her.

Which makes me very skeptical of her claims.

Like most scientists, Corkin stumbled across occasional data that seemed to contradict the models she’d built her career around.  And so she reacted in the same way as the professors I’ve worked with: she hid the data.

Dittrich: Right.  And what’s going to happen to the files themselves?

She paused for several seconds.

Corkin: Shredded.

Dittrich: Shredded?  Why would they be shredded?

Corkin: Nobody’s gonna look at them.

Dittrich: Really?  I can’t imagine shredding the files of the most important research subject in history.  Why would you do that?

. . .

Corkin: Well, the things that aren’t published are, you know, experiments that just didn’t … [another long pause] go right.