On ethics and Luke Dittrich’s “Patient H.M.”

The scientific method is the best way to investigate the world.

Do you want to know how something works?  Start by making a guess, consider the implications of your guess, and then take action.  Muck something up and see if it responds the way you expect it to.  If not, make a new guess and repeat the whole process.

Image by Derek K. Miller on Flickr.

This is slow and arduous, however.  If your goal is not to understand the world, but rather to convince other people that you do, the scientific method is a bad bet.  Instead you should muck something up, see how it responds, and then make your guess.  When you know the outcome in advance, you can appear to be much more clever.

A large proportion of biomedical science publications are inaccurate because researchers follow the second strategy.  Given our incentives, this is reasonable.  Yes, it’s nice to be right.  It’d be cool to understand all the nuances of how cells work, for instance.  But it’s more urgent to build a career.

Both labs I worked in at Stanford cheerfully published bad science.  Unfortunately, it would be nearly impossible for an outsider to notice the flaws because primary data aren’t published.

A colleague of mine obtained data by varying several parameters simultaneously, but then graphed his findings against only one of these.  As it happens, his observations were caused by the variable he left out of his charts.  Whoops!

(Nobel laureate Arieh Warshel quickly responded that my colleague’s conclusions probably weren’t correct.  Unfortunately, Warshel’s argument was based on unrealistic simulations – in his model, a key molecule spins in unnatural ways.  This next sentence is pretty wonky, so feel free to skip it, but … to show the error in my colleague’s paper, Warshel should have modeled multiple molecules entering the enzyme active site, not molecules entering backward.  Whoops!)

Another colleague of mine published his findings about unusual behavior from a human protein.  But then his collaborator realized that they’d accidentally purified and studied a similarly-sized bacterial protein, and were attempting to map its location in cells with an antibody that didn’t work.  Whoops!

No apologies or corrections were ever given.  They rarely are, especially not from researchers at our nation’s fanciest universities.  When somebody with impressive credentials claims a thing is true, people often feel ready to believe.

Indeed, for my own thesis work, we wanted to test whether two proteins are in the same place inside cells.  You can do this by staining with light-up antibodies for each.  If one antibody is green and the other is red, you’ll know how often the proteins are in the same place based on how much yellow light you see.

Before conducting the experiment, I wrote a computer program that would assess the data.  My program could identify various cellular structures and check the fraction that were each color.
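The core of a program like that can be sketched briefly.  This is a hypothetical reconstruction, not my actual thesis code: assume the microscope images have already been segmented into structures, each with a normalized red and green intensity, and we simply count the fraction bright in both channels.  The function name and threshold are invented for illustration.

```python
# Hypothetical sketch of a colocalization count: given segmented
# cellular structures with measured red and green intensities,
# report the fraction that light up in BOTH channels ("yellow").
# The threshold and data format are assumptions, not from the paper.

def colocalization_fraction(structures, threshold=0.5):
    """structures: list of (red, green) intensity pairs, one per
    detected cellular structure, each normalized to [0, 1].
    Returns the fraction of structures bright in both channels."""
    if not structures:
        return 0.0
    both = sum(1 for red, green in structures
               if red >= threshold and green >= threshold)
    return both / len(structures)

# Example: four structures, two of which are bright in both channels.
data = [(0.9, 0.8), (0.9, 0.1), (0.1, 0.7), (0.6, 0.6)]
print(colocalization_fraction(data))  # 0.5
```

The uncomfortable part, of course, is everything upstream of this function: what counts as a “structure,” and what counts as “bright,” are exactly the knobs an advisor can ask you to keep turning.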

As it happened, I didn’t get the results we wanted.  My data suggested that our guess was wrong.

But we couldn’t publish that.  And so my advisor told me to count again, by hand, claiming that I should be counting things of a different size.  And then she continued to revise her instructions until we could plausibly claim that we’d seen what we expected.  We made a graph and published the paper.

This is crummy.  It’s falsehood with the veneer of truth.  But it’s also tragically routine.


Luke Dittrich intertwines two horror stories about scientific ethics in Patient H.M.: A Story of Memory, Madness, and Family Secrets.

One of these nightmares is driven by the perverse incentives facing early neurosurgeons.  Perhaps you noticed, above, that an essential step of the scientific method involves mucking things up.  You can’t tell whether your guesses are correct until you perform an experiment.  Dittrich provides a lovely summary of this idea:

The broken illuminate the unbroken.

An underdeveloped dwarf with misfiring adrenal glands might shine a light on the functional purpose of these glands.  An impulsive man with rod-obliterated frontal lobes [Phineas Gage] might provide clues to what intact frontal lobes do.

The history of modern brain science has been particularly reliant on broken brains, and almost every significant step forward in our understanding of cerebral localization – that is, discovering what functions rely on which parts of the brain – has relied on breakthroughs provided by the study of individuals who lacked some portion of their gray matter.

. . .

While the therapeutic value of the lobotomy remained murky, its scientific potential was clear: Human beings were no longer off-limits as test subjects in brain-lesioning experiments.  This was a fundamental shift.  Broken men like Phineas Gage and Monsieur Tan may have always illuminated the unbroken, but in the past they had always become broken by accident.  No longer.  By the middle of the twentieth century, the breaking of human brains was intentional, premeditated, clinical.

Dittrich was dismayed to learn that his own grandfather had participated in this sort of research, intentionally wrecking at least one human brain in order to study the effects of his meddling.

Lacking a specific target in a specific hemisphere of Henry’s medial temporal lobes, my grandfather had decided to destroy both.

This decision was the riskiest possible one for Henry.  Whatever the functions of the medial temporal lobe structures were – and, again, nobody at the time had any idea what they were – my grandfather would be eliminating them.  The risks to Henry were as inarguable as they were unimaginable.

The risks to my grandfather, on the other hand, were not.

At that moment, the riskiest possible option for his patient was the one with the most potential rewards for him.


By destroying part of a brain, Dittrich’s grandfather could create a valuable research subject.  Yes, there was a chance of curing the patient – Henry agreed to surgery because he was suffering from epileptic seizures.  But Henry didn’t understand what the proposed “cure” would be.  This cure was very likely to be devastating.

At other times, devastation was the intent.  During an interview with one of his grandfather’s former colleagues, Dittrich is told that his grandmother was strapped to the operating table as well.

“It was a different era,” he said.  “And he did what at the time he thought was okay: He lobotomized his wife.  And she became much more tractable.  And so he succeeded in getting what he wanted: a tractable wife.”


Compared to slicing up a brain so that its bearer might better conform to our society’s misogynistic expectations of female behavior, a bit of scientific fraud probably doesn’t sound so bad.  Which is a shame.  I love science.  I’ve written previously about the manifold virtues of the scientific method.  And we need truth to save the world.

Which is precisely why those who purport to search for truth need to live clean.  In the cut-throat world of modern academia, they often don’t.

Dittrich investigated the rest of Henry’s life: after part of his brain was destroyed, Henry became a famous study subject.  He unwittingly enabled the career of a striving scientist, Suzanne Corkin.

Dittrich writes that

Unlike Teuber’s patients, most of the research subjects Corkin had worked with were not “accidents of nature” [a bullet to the brain, for instance] but instead the willful products of surgery, and one of them, Patient H.M., was already clearly among the most important lesion patients in history.  There was a word that scientists had begun using to describe him.  They called him pure.  The purity in question didn’t have anything to do with morals or hygiene.  It was entirely anatomical.  My grandfather’s resection had produced a living, breathing test subject whose lesioned brain provided an opportunity to probe the neurological underpinnings of memory in unprecedented ways.  The unlikelihood that a patient like Henry could ever have come to be without an act of surgery was important.

. . .

By hiring Corkin, Teuber was acquiring not only a first-rate scientist practiced in his beloved lesion method but also by extension the world’s premier lesion patient.

. . .

According to [Howard] Eichenbaum, [a colleague at MIT,] Corkin’s fierceness as a gatekeeper was understandable.  After all, he said, “her career is based on having that exclusive access.”

Because Corkin had (coercively) gained exclusive access to this patient, most of her claims about the workings of memory would be difficult to contradict.  No one could conduct the experiments needed to rebut her.

Which makes me very skeptical of her claims.

Like most scientists, Corkin stumbled across occasional data that seemed to contradict the models she’d built her career around.  And so she reacted in the same way as the professors I’ve worked with: she hid the data.

Dittrich: Right.  And what’s going to happen to the files themselves?

She paused for several seconds.

Corkin: Shredded

Dittrich: Shredded?  Why would they be shredded?

Corkin: Nobody’s gonna look at them.

Dittrich: Really?  I can’t imagine shredding the files of the most important research subject in history.  Why would you do that?

. . .

Corkin: Well, the things that aren’t published are, you know, experiments that just didn’t … [another long pause] go right.


On stuttering.

During his first year of graduate school at Harvard, a friend of mine was trying to pick a research advisor.  This is a pretty big deal — barring disaster, whoever you choose will have a great deal of control over your life for the next five to eight years.

My friend found someone who seemed reasonable.  The dude was conducting research in an exciting field.  He seemed personable.  Or, well, he seemed human, which can be what passes for personable among research professors at top-tier universities.  But while my friend and the putative advisor-to-be were talking, they got onto the topic of molecular dynamics simulations.

My friend mentioned that his schoolmate’s father studies simulations of cellular membranes.  And that guy, the father, is incredibly intelligent and very friendly — when I showed up at a wedding too broke for a hotel, he let me sleep on the floor of the room he’d booked for himself and his wife.

But the putative advisor corrected my friend when he mentioned the guy’s name.  “Oh, you mean duh, duh, duh, duh, Doctor ________.”  And smiled, as though my friend was going to chuckle too.

That’s when my friend realized, okay, I don’t wanna talk to you no more.  He found a different advisor.  He never regretted his choice.

Well, no, that’s not true.  All graduate students regret their choice of advisor sometimes.  But my friend never wished he’d worked for the jerk.

Yes, some people, with a huge amount of effort and probably an equal measure of luck, are able to get over stuttering.  But most can’t.  So it’s crummy that even well-educated, ostensibly sophisticated people would feel entitled to mock somebody for a stutter.  Presumably even that jerk would’ve refrained from an equivalent comment if my friend’s schoolmate’s father was blind or confined to a wheelchair.

But stuttering, along with a few other conditions like depression and obsessive compulsive disorder, still gets treated like a moral failing.  Like a sufferer should be able to try harder and just get over it.

That attitude is especially bad as regards stuttering, because mockery and castigation seems to make the condition worse.  There are genetic factors that confer a predilection toward stuttering, but (unpublished, evil) work from Dr. Wendell Johnson showed that sufficiently vituperative abuse can cause children of any genetic background to become stutterers.

You’ve read about the “monster” study, right?  Dr. Johnson stuttered, and he had a theory that his stuttering had been exacerbated by people’s well-meaning attempts to cure him.  His parents would correct his speech, draw attention to his mistakes, exhort him to be more mindful when talking.  Dr. Johnson thought that the undue attention placed on his speech patterns made him more likely to freeze up and stutter.  And, once that cycle had begun, his brain dug itself into a rut.  He began to castigate himself for his mistakes, perpetuating the condition.

Of course, that was just a theory.  To test it, you’d want to show two things.  First, that by not paying attention to the mistakes of an incipient stutterer, you can help that person evade or cure the condition.  And, second, that you could cause well-spoken people to develop stutters by convincing them and their interlocutors that they already were stuttering, and castigating them for it.

It’s totally ethical to conduct the first experiment.  The process itself would cause no harm, and the intention is to improve someone’s life.  If you can help someone get over a stutter, you’ll smooth future social interactions.  Stave off some mockery from colleagues at Harvard.  That sort of thing.

But the second experiment?  The process is miserable for the study subjects — you’re cutting them off all the time, criticizing them, forcing them to say things over and over until their thoughts are expressed perfectly.  And, worse, if you succeed, you’ve saddled them with burdens they’ll have to deal with for the rest of their lives.  Let the mockery commence!

Dr. Johnson made one of his students conduct that second experiment on six orphaned children.  In the end, none of the children developed the syllabic repetition typical of most stutterers, but they became extremely self-conscious and reluctant to speak — symptoms that stayed with them for the rest of their lives.

Indeed, the symptoms triggered in those children are equivalent to the symptoms monitored in a mouse model of stuttering.  One of the genetic factors associated with stuttering was recreated in mice, and those mice exhibited a condition somewhat analogous to human stuttering.

Dr. Dolittle did not participate in this new study, which made matters much more difficult for Barnes & colleagues.  If you don’t know what a mouse is saying, how do you know whether it’s stuttering?  They did measure variance from one vocalization to the next — in humans, repeating the initial syllable of a word lowers total syllabic variance — and saw that their mice with the stuttering gene repeated sounds more often.

Their best measurements, though, were the rate of squeaking, and the length of pauses between squeaks.  Like an oft-badgered child, the mice with the stuttering gene talked less and spent more time waiting, maybe thinking, between statements.
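Those measurements are simple enough to sketch.  Everything below is invented for illustration — the data format and numbers are not from Barnes et al. — but it shows the three quantities described above: squeak rate, mean pause between squeaks, and how often a sound repeats.

```python
# Hypothetical sketch of the vocalization metrics described above.
# Inputs and numbers are invented for illustration only.

def vocalization_stats(onsets, recording_seconds):
    """onsets: sorted times (in seconds) at which squeaks begin.
    Returns (squeaks per second, mean pause between squeaks)."""
    rate = len(onsets) / recording_seconds
    pauses = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    return rate, mean_pause

def repetition_fraction(sounds):
    """sounds: sequence of sound labels.  Fraction of consecutive
    pairs that repeat the same sound — a crude stutter-like measure."""
    pairs = list(zip(sounds, sounds[1:]))
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a == b) / len(pairs)

# Example: four squeaks in a ten-second recording.
rate, pause = vocalization_stats([1.0, 3.0, 7.0, 9.0], 10.0)
print(rate, pause)                                # 0.4 squeaks/s, mean pause ~2.67 s
print(repetition_fraction(["u", "u", "d", "u"]))  # ~0.33: one repeated pair of three
```

A mouse carrying the stuttering-associated mutation would, on this accounting, show a lower rate, a longer mean pause, and a higher repetition fraction than its littermates.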

And it pleases me, given my pre-existing biases, to see more data showing that, if somebody stutters, it’s not that person’s fault.  Genetic predilection certainly isn’t the same thing as destiny, but it’s a nice corrective to the mocking jerks.  Sure, you can speak fine, Mister Mockingpants, but are you fighting against the current of a lysosomal targeting mutation?

(Oh, right, sorry, my mistake. Doctor Mockingpants. You jerk.)


p.s. As it happens, the mutation Barnes et al. introduced into mice is involved in the pathway I studied for my thesis work.  They introduced a mutation in the Gnptab gene (trust me, you don’t want me to write out the full name that Gnptab stands for), which is supposed to produce a protein that links a targeting signal onto lysosomal enzymes.  In less formal terms, Gnptab is supposed to slap shipping labels onto machinery destined for the cell’s recycling plants.  Without Gnptab function, bottles & cans & old televisions pile up in the recycling plant. The machinery to process them never arrives.

Which does seem a little strange to me… stuttering is a very specific phenotype, and that is such a general cellular function.  Lysosomal targeting is needed for all cells, not just neurons in speech areas of the brain.  It’s a sufficiently common function that biologists often refer to Gnptab as a “housekeeping” gene.  And proper lysosome function is sufficiently important that problems typically cause major neurodegeneration, seizures, blindness, and death, all at a very young age.  Compared to that litany of disasters, stuttering doesn’t sound so bad.