When I was a child, my parents gave me a toy walrus to sleep with. While cuddling this walrus, I’d twist my fingers through a small looped tag on its back, until one day I knotted the tag so thoroughly that I cut off my circulation. I screamed; my finger turned blue; my parents rushed in and wanted to cut off the tag.
“No!” I apparently screamed. “The soft tag is the best part!”
I continued to refuse their help until they offered a compromise, merely slicing the loop in half so we could save my throbbing finger and prevent any future calamity.
I continued to sleep with that toy walrus until I was midway through high school. As I fell asleep, my parents would sometimes peer inside my bedroom and see me lying there, eyes closed, breath slow, my fingers gently stroking that soft tag.
Yes, kids with autism are sometimes quite particular about sensory stimulation. But I am not alone! Baby monkeys also love soft fabric.
So do their mothers.
#
After biologist Margaret Livingstone published a research essay, “Triggers for Mother Love,” animal welfare activists and many other scientists were appalled. In the essay, Livingstone casually discusses traumatic ongoing experiments in which hours-old baby monkeys are removed from their mothers. The babies are then raised in environments where they never glimpse anything that resembles a face, either because they’re kept in solitary confinement and fed by masked technicians or because the babies’ eyes are sutured shut.
After the babies are removed from their mothers, Livingstone offers the mothers soft toys. And the mothers appear to bond with these soft toys. When one particular baby was returned to its mother several hours later, Livingstone writes that:
“The mother looked back and forth between the toy she was holding and the wiggling, squeaking infant, and eventually moved to the back of her enclosure with the toy, leaving the lively infant on the shelf.”
Although I dislike this ongoing research, and don’t believe that it should continue, I find Livingstone’s essay to be generally compassionate.
Livingstone discusses parenting advice from the early twentieth century – too much touch or physical affection will make your child weak! – that probably stunted the emotional development of large numbers of children. Livingstone expresses gratitude that the 1950s-era research of Harry Harlow – the first scientist to explore using soft toys to replace a severed maternal bond – revealed how toxic these recommendations really were.
Harlow’s research may have improved the lives of many human children.
Harlow’s research intentionally inflicted severe trauma on research animals.
#
To show that the aftereffects of trauma can linger throughout an animal’s life, Harlow used devices that he named “The Rape Rack” and “The Pit of Despair” to harm monkeys (whom he did not name).
Harlow did not justify these acts by denigrating the animals. Indeed, in Voracious Science and Vulnerable Animals, research-scientist-turned-animal-activist John Gluck describes working with Harlow first as a student and later as a professorial collaborator, and believes that Harlow was notable at the time for his respect for monkeys. But this was not enough. Gluck writes that:
“The accepted all-encompassing single ethical principle was simple: if considerations of risk and significant harm blocked the use of human subjects, using animals as experimental surrogates was automatically justified.”
“Harlow showed that monkeys could be emotionally destroyed when opportunities for maternal and peer attachment were withheld. He argued that affectionate relationships in monkeys were worthy of terms like love.”
“In his work on learning in monkeys … [he offered] abundant evidence that monkeys develop and evaluate hypotheses during attempts to develop a solution.”
“Everything that Harlow learned from his research declared that monkeys are self-conscious, emotionally complex, intentional, and capable of substantial levels of suffering.”
#
For my own scientific research, I purchased cow’s brains from slaughterhouses. I used antibodies that were made in the bodies of rabbits and mice who lived (poorly) inside industrial facilities. For my spouse’s scientific research, she killed male frogs to take their sperm.
We’re both vegan.
I’d like to believe that we’d find alternative ways to address those same research questions if we were to repeat those projects today. But that’s hypothetical – at the time, we used animals.
And I certainly believe that there are other ways for Livingstone to study, for instance, the developmental ramifications of autistic children rarely making eye contact with the people around them – without blinding baby monkeys. I believe that Livingstone could study the physiological cues for bonding without removing mothers’ babies (especially since Harlow’s work, from the better part of a century ago, already showed how damaging this methodology would be).
Personally, I don’t think the potential gains from these experiments are worth their moral costs.
But also I recognize that, as a person living in the modern world, I’ve benefited from Harlow’s research. I’ve benefited from the research using mice, hamsters, and monkeys that led to the Covid-19 vaccines. I’ve benefited from innumerable experiments that caused harm.
Livingstone’s particular research might not result in any benefits – a lot of scientific research doesn’t – but unfortunately we can’t know in advance what knowledge will be useful and what won’t.
And if there’s any benefit, then I will benefit from this, too. It’s very hard to avoid being helped by knowledge that’s out there in the world.
To my mind, this means I have to atone – to find ways to compensate for some of the suffering that’s been inflicted on my behalf – but reparations are never perfect. And no one can force you to recognize a moral debt.
You will have to decide what any of this means to you.
At track practice, a pair of high school runners were arguing. Knowing that I’ve completed twenty-two years of schooling, they figured I could resolve their debate.
“Coach Brown, who would win in a fight, Superman or The Hulk?”
I stared at them blankly. I knew a bit about Dr. Jekyll and Mr. Hyde, which helps to understand The Hulk, but I’d never read a Superman comic. Superman didn’t sound like an interesting hero: he seemed too powerful. Even The Hulk is more interesting within the context of a complex campaign, when he might become enraged and wreck his own plans, than in a single fight.
I failed to provide an answer, and the kids went back to arguing. (“Superman could just turn back time to before The Hulk got enraged, then smash him!”)
And I resolved to read a Superman book, to shore up this gap in my education.
Astounding, isn’t it, that Stanford would allow me to graduate without knowing anything about the paragon of the DC universe?
I chose Grant Morrison’s All-Star Superman. And was pleasantly surprised – although Superman is indeed too powerful for the risk of danger to provide narrative tension, he’s still sad. He doesn’t get the recognition that he feels he’s due; his powers leave him feeling isolated and alone; during the 24 hours when his girlfriend becomes his equal due to a magic serum, she spends her time flirting with other heroes.
Doing great work can feel hollow if nobody appreciates it.
Midway through the series, Superman meets two other survivors from his native Krypton. He expects that they’ll congratulate him on how well he’s kept his adopted planet safe. Instead, they’re disgusted by his complacency.
Superman, in turn, feels disappointed by his brethren. Within the world of comic books, characters who view their powers as conferring a responsibility are heroes; those who think that power gives them the right to do whatever they want are villains.
Homo sapiens are not as intelligent as the new arrivals from Krypton. We are smaller, slower, and weaker. Our tools are less technologically advanced. If they chose to cull our kind, we could do nothing to resist.
This particular colony of macaques has been studied closely for years. Researchers have voluminous observational data from both before and after the hurricane; they’ve stored many tissue samples as well. The hope is that this dataset could unveil the biochemical consequences of trauma, and elucidate traits that allow some people to weather trauma more effectively than others.
With clear insights into the specific pathways affected by trauma, we might even be able to develop drugs that would allow humans to stave off PTSD. Or cure it.
Macaques have long been used as subjects for medical research. We’ve developed several vaccines that prevent AIDS in macaques, but unfortunately the differences between SIV (simian immunodeficiency virus) and HIV meant that some of these vaccines increased human susceptibility to the disease. Whoops.
An image attributed to the Primate Research Laboratory at the University of Wisconsin – Madison and disseminated in 1992.
Macaques are highly intelligent, social animals who share roughly 93% of their DNA sequence with us humans. For immunology research, they’re kept in wire cages. They can’t touch, don’t really get to move around. But that’s not so bad compared to the nightmarish psychological studies that have been conducted on macaques in the past. Dittrich’s article summarizes a few of Dr. Harry Harlow’s experiments. Harlow named several pieces of his research equipment, such as “The Pit of Despair,” a small box devoid of light or sound in which children could be trapped for months on end, or “The Rape Rack,” which shouldn’t be described.
“[Harlow] found that the females who had endured the trauma of both the Pit of Despair and the Rape Rack tended to become neglectful or even severely abusive mothers.”
#
We’ve conducted studies on humans who have been traumatized. By surveying hurricane survivors, we’ve found that many suffer from PTSD. But one drawback of these investigations, Dittrich writes, is that “the humans in these studies … almost never become experimental subjects until after the traumatic events in question, which makes it hard to gauge how the events actually changed them.”
“If a researcher interested in how trauma affects individuals or societies were to dream up an ideal natural laboratory, she might imagine a discrete landmass populated by a multigenerational community that has been extensively and meticulously studied for many decades before the traumatizing event. Even better, it would be a population to which researchers would have unfettered access – not only to their minds, but also to their bodies, and even their brains.”
We are to macaques as Superman is to us. We are stronger, smarter, technologically superior. We can fly into space; macaques have done so only at our whims.
In “St. Francis Visits the Research Macaques of Modern Science” by John-Michael Bloomquist, we eavesdrop on a conversation between the saint and Miss Able, one of the first primates to leave our planet. St. Francis asks about her experience of the voyage; she tells him “The Gods did not let me see anything, the damn cone didn’t have a window.”
The capsule and couch used by one of America’s first spacefarers, a rhesus monkey named Able, are displayed at the National Air and Space Museum. Able and a companion squirrel monkey named Miss Baker were placed inside a Jupiter missile nose cone and launched on a test flight in May 1959.
We are indeed like gods among macaques, but we have not elected to be heroes. Instead, we’ve ravaged their ancestral lands. We’ve wracked their children with twisted nightmares that they could not wake from.
Even the Puerto Rican macaque colony that Dittrich writes about is no sanctuary: some individuals are permitted to live out their days in relative peace, but this is a breeding center. If you’re developing an HIV vaccine, your lab’s macaques will die; for a few thousand dollars each, this colony will furnish replacements. According to their website, they maintain “an available pool of rhesus macaques in optimal condition for research.”
We humans are like gods, but, unlike Superman, we’ve chosen to be villains.
In the United States, people are having sex less often. And between alcohol, marijuana, recreational painkillers – not to mention anti-depressants and anti-anxiety medication – we take a lot of drugs.
Many of us work long hours at jobs we dislike so that we can afford to buy things that promise to fill some of the emptiness inside. Some of the most lucrative businesses in the world are advertising companies … one of which, Facebook, is designed to make you feel worse so that you’ll be more susceptible to its ads.
It might seem as though we don’t know how to make people happier.
But, actually, we do.
Now, I know that I’ve written previously with bad medical advice, such as the suggestion that intentionally infecting yourself with the brain parasite Toxoplasma gondii could make you happier. This parasite boosts dopamine levels in your brain (dopamine is a neurotransmitter that conveys feelings of pleasure and mirth) and makes you feel bolder (in controlled laboratory experiments, infected mice show less stress when making risky decisions, and observational data suggests the same to be true for infected humans). You also might become more attractive (infected rodents have more sex, and portrait photographs of infected human men are perceived as more dominant and masculine).
There are drawbacks to Toxoplasma infection, of course. Infected rodents are more likely to be killed by cats. Infected humans may become slower as well, both physically and intellectually. Toxoplasma forms cysts in your brain. It might increase the chance of developing schizophrenia. It can kill you if you’re immunocompromised. And the surest way to contract toxoplasmosis, if incidental exposure hasn’t already done it for you, is by eating cat excrement.
My advice today is different. No feces required!
And I’m not suggesting anything illegal. I mentioned, above, that people in the United States take a lot of drugs. Several of these boost dopamine levels in your brain. Cocaine, for instance, is a “dopamine re-uptake inhibitor,” ensuring that any momentary sensation of pleasure will linger, allowing you to feel happy longer.
But cocaine has a nasty side effect of leading to incarceration, especially if the local law enforcement officers decide that your epidermal melanin concentration is too high. And jail is not a happy place.
Instead, you could make yourself happier with a bit of at-home trepanation, followed by the insertion of an electrode into the nucleus accumbens of your brain. Now, I know that sounds risky, what with the nucleus accumbens being way down near the base of your brain. But your brain is rather squishy – although you’ll shear some cells as you cram a length of conductive wire into your cranium, the hope is that many neurons will be pushed out of the way.
The nucleus accumbens tends to show high activity during pleasure. For instance, cocaine stimulates activity in this part of your brain. So does money — tell research subjects that they’ve won a prize and you’ll see this region light up. If rats are implanted with an electrode that lets them jolt their own nucleus accumbens by pushing a lever, they’ll do it over and over. Pressing that lever makes them happier than eating, or drinking water, or having sex. They’ll blissfully self-stimulate until they collapse. From James Olds’s Science paper, “Self-Stimulation of the Brain”:
If animals with electrodes in the hypothalamus were run for 24 hours or 48 hours consecutively, they continued to respond as long as physiological endurance permitted.
Setup for Olds’s experiment.
Perhaps I should have warned you – amateur brain modification would carry some risks. Even if you have the tools needed to drill into your own skull without contracting a horrible infection, you don’t want to boost your mood just to die of dehydration.
After all, happiness might have some purpose. There might be reasons why certain activities – like eating, drinking water, having sex … to say nothing of strolling outdoors, or volunteering to help others – make us feel happy. After discussing several case studies in their research article “How Happy Is Too Happy,” Matthis Synofzik, Thomas Schlaepfer, and Joseph Fins write that using deep brain stimulation for the “induction of chronic euphoria could also impair the person’s cognitive capacity to respond to reasons about which volitions and preferences are in his or her best interests.”
When an activity makes us feel happy, we’re likely to do it again. That’s how people manage to dedicate their lives to service. Or get addicted to drugs.
And it’s how brain stimulation could be used for mind control.
If you show me a syringe, I’ll feel nervous. I don’t particularly like needles. But if you display that same syringe to an intravenous drug user, you’ll trigger some of the rush of actually shooting up. The men in my poetry classes have said that they feel all tingly if they even see the word “needle” written in a poem.
For months or years, needles presaged a sudden flush of pleasure. That linkage was enough for their brains to develop a fondness for the needles themselves.
If you wanted to develop a taste for an unpalatable food, you could do the same thing. Like bittermelon – I enjoy bittermelons, which have a flavor that’s totally different from anything else I’ve ever eaten, but lots of people loathe them.
Still, if you used deep brain stimulation to trigger pleasure every time a person ate bittermelon, that person would soon enjoy it.
Bittermelon. Image by [cipher] in Tokyo, Japan on Wikimedia.
Or you could make someone fall in love.
Far more effective than any witch’s potion, that. Each time your quarry encounters the future beloved, crank up the voltage. The beloved’s presence will soon be associated with a sense of comfort and pleasure. And that sensation – stretched out for long enough that the pair can build a set of shared memories – is much of what love is.
Of course, it probably sounds like I’m joking. You wouldn’t really send jolts of electricity into the core of somebody’s brain so that he’d fall in love with somebody new … right?
Fifty years passed between the discovery of pleasure-inducing deep brain stimulation and its current use as a treatment for depression … precisely because one of the pioneering researchers decided that it was reasonable to use the electrodes as a love potion.
In 1972, Charles Moan and Robert Heath published a scientific paper titled “Septal stimulation for the initiation of heterosexual behavior in a homosexual male.” Their study subject was a 24-year-old man who had been discharged from the military for homosexuality. Moan and Heath postulated that the right regimen of electrode stimulation – jolted while watching pornography, or while straddled by a female prostitute whom Moan and Heath hired to visit their lab – might lead this young man to desire physical intimacy with women.
Moan and Heath’s paper is surprisingly salacious:
After about 20 min of such interaction she begun [sic] to mount him, and though he was somewhat reticent he did achieve penetration. Active intercourse followed during which she had an orgasm that he was apparently able to sense. He became very excited at this and suggested that they turn over in order that he might assume the initiative. In this position he often paused to delay orgasm and to increase the duration of the pleasurable experience. Then, despite the milieu [inside a lab, romping under the appraising eyes of multiple fully-clothed scientists] and the encumbrance of the electrode wires, he successfully ejaculated. Subsequently, he expressed how much he had enjoyed her and how he hoped that he would have sex with her again in the near future.
The science writer Lone Frank recently published The Pleasure Shock, a meticulously researched book in which she concludes that Heath was unfairly maligned because most people in the 1970s were reluctant to believe that consciousness arose from the interaction of perfectly ordinary matter inside our skulls. Changing a person’s mood with electricity sounds creepy, especially if you think that a mind is an ethereal, inviolable thing.
But it isn’t.
The mind, that is. The mind isn’t an ethereal, inviolable thing.
Zapping new thoughts into somebody’s brain, though, is definitely still understood (by me, at least) to be creepy.
Discussing the contemporary resurgence of electrical brain modification, Frank writes that:
In 2013, economist Ernst Fehr of Zurich University experimented with transcranial direct current stimulation, which sends a weak current through the cranium and is able to influence activity in areas of the brain that lie closest to the skull.
Fehr had sixty-three research subjects available. They played a money game in which they each were given a sum and had to take a position on how much they wanted to give an anonymous partner. In the first round, there were no sanctions from the partner, but in the second series of experiments, the person in question could protest and punish the subject.
There were two opposing forces at play. A cultural norm for sharing fairly – that is, equally – and a selfish interest in getting as much as possible for oneself. Fehr and his people found that the tug of war could be influenced by the right lateral prefrontal cortex. When the stimulation increased the brain activity, the subjects followed the fairness norm to a higher degree, while they were more inclined to act selfishly when the activity was diminished.
Perhaps the most thought-provoking thing was that the research subjects did not themselves feel any difference. When they were asked about it, they said their idea of fairness had not changed, while the selfishness of their behavior had changed.
Apparently, you can fiddle with subtle moral parameters in a person without the person who is manipulated being any the wiser.
The human brain evolved to create elaborate narratives that rationalize our own actions. As far as our consciousness is concerned, there’s no difference between telling a just-so story about a decision we made unaided, versus explaining a “choice” that we were guided toward by external current.
Frank believes that Heath was a brilliant doctor who sincerely wanted to help patients.
When bioethicist Carl Elliott reviewed The Pleasure Shock for the New York Review of Books, however, he pointed out that even – perhaps especially – brilliant doctors who sincerely want to help patients can stumble into rampantly unethical behavior.
The problem isn’t just that Heath pulsed electricity into the brain of a homosexual man so that he could ejaculate while fooling around with a woman. Many of Heath’s patients – who, it’s worth acknowledging, had previously been confined to nightmarish asylums – developed infections from their electrode implantations and died. Also, Heath knowingly promoted fraudulent research findings because he’d staked his reputation on a particular theory and was loath to admit that he’d been wrong (not that Heath has been the only professor to perpetuate falsehoods this way).
Elliott concludes that:
Heath was a physician in love with his ideas.
Psychiatry has seen many men like this. Heath’s contemporaries include Ewen Cameron, the CIA-funded psychiatrist behind the infamous “psychic driving” studies at McGill University, in which patients were drugged into comas and subjected to repetitive messages or sounds for long periods, and Walter Freeman, the inventor of the icepick lobotomy and its most fervent evangelist.
These men may well have started with the best of intentions. But in medical research, good intentions can lead to the embalming table. All it takes is a powerful researcher with a surplus of self-confidence, a supportive institution, and a ready supply of vulnerable subjects.
Heath had them all.
It’s true that using an electrode to stimulate the nucleus accumbens inside your brain can probably make you feel happier. By way of contrast, reading essays like this one makes most people feel less happy.
Sometimes it’s good to feel bad, though.
As Elliott reminds us, a lot of vulnerable people were abused in this research. A lot of vulnerable people are still treated with cavalier disregard, especially when folks with psychiatric issues are snared by our country’s criminal justice system. And the torments that we dole out to non-human animals are even worse.
[University of Chicago researcher Inbal Ben-Ami Bartal] placed one rat in an enclosure, where it encountered a small transparent container, a bit like a jelly jar. Squeezed inside it was another rat, locked up, wriggling in distress.
Not only did the free rat learn how to open a little door to liberate the other, but she was remarkably eager to do so. Never trained on it, she did so spontaneously.
Then Bartal challenged her motivation by giving her a choice between two containers, one with chocolate chips – a favorite food that they could easily smell – and another with a trapped companion. The free rat often rescued her companion first, suggesting that reducing her distress counted more than delicious food.
Is it possible that these rats liberated their companions for companionship? While one rat is locked up, the other has no chance to play, mate, or groom. Do they just want to make contact? While the original study failed to address this question, a different study created a situation where rats could rescue each other without any chance of further interaction. That they still did so confirmed that the driving force is not a desire to be social.
Bartal believes it is emotional contagion: rats become distressed when noticing the other’s distress, which spurs them into action.
Conversely, when Bartal gave her rats an anxiety-reducing drug, turning them into happy hippies, they still knew how to open the little door to reach the chocolate chips, but in their tranquil state, they had no interest in the trapped rat. They couldn’t care less, showing the sort of emotional blunting of people on Prozac or pain-killers.
The rats became insensitive to the other’s agony and ceased helping.
You could feel happier. We know enough to be able to reach into your mind and change it.
A minuscule flow of electrons is enough to trigger bliss.
But should we do it? Or use our unhappiness as fuel to change the world instead?
The scientific method is the best way to investigate the world.
Do you want to know how something works? Start by making a guess, consider the implications of your guess, and then take action. Muck something up and see if it responds the way you expect it to. If not, make a new guess and repeat the whole process.
This is slow and arduous, however. If your goal is not to understand the world, but rather to convince other people that you do, the scientific method is a bad bet. Instead you should muck something up, see how it responds, and then make your guess. When you know the outcome in advance, you can appear to be much more clever.
A large proportion of biomedical science publications are inaccurate because researchers follow the second strategy. Given our incentives, this is reasonable. Yes, it’s nice to be right. It’d be cool to understand all the nuances of how cells work, for instance. But it’s more urgent to build a career.
Both labs I worked in at Stanford cheerfully published bad science. Unfortunately, it would be nearly impossible for an outsider to notice the flaws because primary data aren’t published.
A colleague of mine obtained data by varying several parameters simultaneously, but then graphed his findings against only one of these. As it happens, his observations were caused by the variable he left out of his charts. Whoops!
(Nobel laureate Arieh Warshel quickly responded that my colleague’s conclusions probably weren’t correct. Unfortunately, Warshel’s argument was based on unrealistic simulations – in his model, a key molecule spins in unnatural ways. This next sentence is pretty wonky, so feel free to skip it, but … to show the error in my colleague’s paper, Warshel should have modeled multiple molecules entering the enzyme active site, not molecules entering backward. Whoops!)
Another colleague of mine published his findings about unusual behavior from a human protein. But then his collaborator realized that they’d accidentally purified and studied a similarly-sized bacterial protein, and were attempting to map its location in cells with an antibody that didn’t work. Whoops!
No apologies or corrections were ever given. They rarely are, especially not from researchers at our nation’s fanciest universities. When somebody with impressive credentials claims a thing is true, people often feel ready to believe.
Indeed, for my own thesis work, we wanted to test whether two proteins are in the same place inside cells. You can do this by staining with light-up antibodies for each. If one antibody is green and the other is red, you’ll know how often the proteins are in the same place based on how much yellow light you see.
Before conducting the experiment, I wrote a computer program that would assess the data. My program could identify various cellular structures and check the fraction that were each color.
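(For the curious, the underlying idea is simple enough to sketch. The snippet below is not that program; it is a minimal illustration in which the file name, the thresholding method, and the use of the scikit-image library are all stand-ins. The logic is just: threshold each color channel, find the discrete structures in one channel, and count what fraction of them also light up in the other.)

```python
# A minimal sketch of a two-color colocalization measurement (illustrative, not the original code).
# Assumes the image loads as (2, height, width): channel 0 = green antibody, channel 1 = red antibody.
from skimage import io, filters, measure

def colocalization_fraction(image_path):
    stack = io.imread(image_path)
    green = stack[0].astype(float)
    red = stack[1].astype(float)

    # Threshold each channel to separate stained structures from background.
    green_mask = green > filters.threshold_otsu(green)
    red_mask = red > filters.threshold_otsu(red)

    # Find discrete structures (connected components) in the green channel ...
    regions = measure.regionprops(measure.label(green_mask))

    # ... and count how many of them also contain red signal (the "yellow" overlap).
    overlapping = sum(
        1 for region in regions
        if red_mask[tuple(region.coords.T)].mean() > 0.5  # more than half the structure is also red
    )
    return overlapping / max(len(regions), 1)

print(colocalization_fraction("two_channel_image.tif"))  # hypothetical file name
```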
As it happened, I didn’t get the results we wanted. My data suggested that our guess was wrong.
But we couldn’t publish that. And so my advisor told me to count again, by hand, claiming that I should be counting things of a different size. And then she continued to revise her instructions until we could plausibly claim that we’d seen what we expected. We made a graph and published the paper.
This is crummy. It’s falsehood with the veneer of truth. But it’s also tragically routine.
One of these nightmares is driven by the perverse incentives facing early neurosurgeons. Perhaps you noticed, above, that an essential step of the scientific method involves mucking things up. You can’t tell whether your guesses are correct until you perform an experiment. Dittrich provides a lovely summary of this idea:
The broken illuminate the unbroken.
An underdeveloped dwarf with misfiring adrenal glands might shine a light on the functional purpose of these glands. An impulsive man with rod-obliterated frontal lobes [Phineas Gage] might provide clues to what intact frontal lobes do.
The history of modern brain science has been particularly reliant on broken brains, and almost every significant step forward in our understanding of cerebral localization – that is, discovering what functions rely on which parts of the brain – has relied on breakthroughs provided by the study of individuals who lacked some portion of their gray matter.
. . .
While the therapeutic value of the lobotomy remained murky, its scientific potential was clear: Human beings were no longer off-limits as test subjects in brain-lesioning experiments. This was a fundamental shift. Broken men like Phineas Gage and Monsieur Tan may have always illuminated the unbroken, but in the past they had always become broken by accident. No longer. By the middle of the twentieth century, the breaking of human brains was intentional, premeditated, clinical.
Dittrich was dismayed to learn that his own grandfather had participated in this sort of research, intentionally wrecking at least one human brain in order to study the effects of his meddling.
Lacking a specific target in a specific hemisphere of Henry’s medial temporal lobes, my grandfather had decided to destroy both.
This decision was the riskiest possible one for Henry. Whatever the functions of the medial temporal lobe structures were – and, again, nobody at the time had any idea what they were – my grandfather would be eliminating them. The risks to Henry were as inarguable as they were unimaginable.
The risks to my grandfather, on the other hand, were not.
At that moment, the riskiest possible option for his patient was the one with the most potential rewards for him.
By destroying part of a brain, Dittrich’s grandfather could create a valuable research subject. Yes, there was a chance of curing the patient – Henry agreed to surgery because he was suffering from epileptic seizures. But Henry didn’t understand what the proposed “cure” would be. This cure was very likely to be devastating.
At other times, devastation was the intent. During an interview with one of his grandfather’s former colleagues, Dittrich is told that his grandmother was strapped to the operating table as well.
“It was a different era,” he said. “And he did what at the time he thought was okay: He lobotomized his wife. And she became much more tractable. And so he succeeded in getting what he wanted: a tractable wife.”
#
Compared to slicing up a brain so that its bearer might better conform to our society’s misogynistic expectations of female behavior, a bit of scientific fraud probably doesn’t sound so bad. Which is a shame. I love science. I’ve written previously about the manifold virtues of the scientific method. And we need truth to save the world.
Which is precisely why those who purport to search for truth need to live clean. In the cut-throat world of modern academia, they often don’t.
Dittrich investigated the rest of Henry’s life: after part of his brain was destroyed, Henry became a famous study subject. He unwittingly enabled the career of a striving scientist, Suzanne Corkin.
Dittrich writes that:
Unlike Teuber’s patients, most of the research subjects Corkin had worked with were not “accidents of nature” [a bullet to the brain, for instance] but instead the willful products of surgery, and one of them, Patient H.M., was already clearly among the most important lesion patients in history. There was a word that scientists had begun using to describe him. They called him pure. The purity in question didn’t have anything to do with morals or hygiene. It was entirely anatomical. My grandfather’s resection had produced a living, breathing test subject whose lesioned brain provided an opportunity to probe the neurological underpinnings of memory in unprecedented ways. The unlikelihood that a patient like Henry could ever have come to be without an act of surgery was important.
. . .
By hiring Corkin, Teuber was acquiring not only a first-rate scientist practiced in his beloved lesion method but also by extension the world’s premier lesion patient.
. . .
According to [Howard] Eichenbaum, [a colleague at MIT,] Corkin’s fierceness as a gatekeeper was understandable. After all, he said, “her career is based on having that exclusive access.”
Because Corkin had (coercively) gained exclusive access to this patient, most of her claims about the workings of memory would be difficult to contradict. No one could conduct the experiments needed to rebut her.
Which makes me very skeptical of her claims.
Like most scientists, Corkin stumbled across occasional data that seemed to contradict the models she’d built her career around. And so she reacted in the same way as the professors I’ve worked with: she hid the data.
Dittrich: Right. And what’s going to happen to the files themselves?
She paused for several seconds.
Corkin: Shredded.
Dittrich: Shredded? Why would they be shredded?
Corkin: Nobody’s gonna look at them.
Dittrich: Really? I can’t imagine shredding the files of the most important research subject in history. Why would you do that?
. . .
Corkin: Well, the things that aren’t published are, you know, experiments that just didn’t … [another long pause] go right.
I think most laypeople understand that academic scientists, in order to keep their jobs, have to publish new findings. I assume most people also intuitively understand that not all venues for publication are equal. Not to malign my hometown newspaper, but it’s less impressive to write an editorial for Bloomington’s Herald Times than the New York Times.
In the research world, journals are ranked by “impact factor.” At the top of the heap are journals like Cell, Nature, and Science; these have “impact factors” in the 30s. The Journal of Cell Biology, where I published my thesis work, has an impact factor around 10.
And the Journal of Assisted Reproduction & Genetics? Its impact factor is slightly below 2. My local university’s medical library doesn’t even subscribe.
So I was puzzled: why did the research paper with one of the flashiest single-sentence summaries land in J Assist Reprod Genet?
Research journals: tiny nudges to the frontier of human knowledge, & a whole lotta people who got to keep their jobs.
Here’s the summary, in case you missed it: a new genome editing technique was used to insert an HIV-resistance gene into human IVF embryos.
To my mind, that’s a pretty big deal. It’s not that genetically-modified organisms are anything new. The difference is that the technique this group used, CRISPR, makes the whole process incredibly fast, precise, and cheap. Sculpting the genome of a human embryo will soon be easy.
A charming schematic of CRISPR from Wikipedia. To use CRISPR for a new gene modification, only the short blue / orange targeting strand in the schematic above needs to be synthesized. Eazy-peezy, right?
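To give a sense of how little custom work is involved: for the most widely used Cas9 enzyme, the targeting strand is just a twenty-letter stretch of the gene you want to edit, chosen to sit immediately next to a short “NGG” motif. The toy script below does nothing more than scan for such sites; the gene sequence is invented, and real guide-design tools weigh many additional criteria.

```python
# Toy guide-RNA finder: for the widely used SpCas9 enzyme, a guide is a 20-letter stretch of the
# target gene sitting immediately upstream of an "NGG" motif (the PAM). The gene below is made up,
# and real guide-design tools also check uniqueness, GC content, and so on.
import re

def candidate_guides(dna, guide_length=20):
    """Return (guide_sequence, pam_position) pairs for every NGG site with room for a full guide."""
    guides = []
    for match in re.finditer(r"(?=([ACGT]GG))", dna):  # lookahead so overlapping NGG sites are found
        pam_start = match.start(1)
        if pam_start >= guide_length:
            guides.append((dna[pam_start - guide_length:pam_start], pam_start))
    return guides

example_gene = "ATGCTAGCTAGGATCCGATCGATTACGGCTAGCTAGCTAACGTTAGGCATCGATCGATCGTAGCTAGG"
for guide, position in candidate_guides(example_gene):
    print(f"guide: {guide}  (PAM at position {position})")
```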
At the moment, nobody understands the human genome well enough to propose the sort of editing that shows up routinely in science fiction movies — probably the best way to convince you quickly, without getting into too much detail, is to slap up the title of a recent paper: “Most reported genetic associations with general intelligence are probably false.” We know that many aspects of human physiology and personality are partially controlled by genetics, but we haven’t yet decoded which genes in which combination give any particular effect.
I don’t think we even understand fully the trade-offs inherent in human personality. We’ve recently begun to understand that many traits designated “mental illnesses” exist on a spectrum and that the challenges are inextricably linked to good qualities — creativity and schizophrenia, puzzle solving and autism, awareness and ADHD. It’s unlikely that any recipe for a “perfect” human brain exists.
Still, there are traits that parents prefer. Male height. Facial symmetry. Disease resistance. We’ll soon know which genes modulate these.
As far as I can tell, this is also the explanation for why their super-flashy experiment landed in a low impact factor journal. It’s not a typical research paper. They wrote an opinion piece about scientific ethics with a somewhat-unsuccessful experiment grafted on in order to get the thing published.
I don’t mean that as criticism. I think they’ve done the right thing. If anything, the problem is with scientific publishing; I assume their paper was rejected by a higher impact factor journal. This paper, with its focus on ethics, is not what fancy journals typically publish.
For instance, the reason why their experiment was somewhat unsuccessful? Kang et al. were using CRISPR to introduce HIV resistance into a human embryo. But, because they think that using CRISPR on human embryos is unethical, they specifically chose polyploid embryos — these are non-viable cells produced when two sperm fuse with a single egg. They have too much DNA and can’t possibly become people.
Because CRISPR uses a DNA-reading guide strand to direct a DNA-modifying enzyme to a particular location, and because the experiment would be “successful” only if all copies of a gene were modified, using a polyploid embryo with more copies of each gene increases the chance of “failure.” In basketball, making three free throws in a row is obviously more difficult than making two in a row. That harder version of the task is the one they deliberately chose.
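Putting numbers on the free-throw analogy: if each copy of the gene is edited with some probability p (the value below is invented purely for illustration), then the chance that every copy gets edited shrinks as the number of copies grows, which is why the extra-copy embryos made the experiment harder to “succeed” at.

```python
# The free-throw analogy in numbers: if each copy of the gene is edited with probability p
# (0.6 here is an invented, purely illustrative value), the chance that EVERY copy gets edited
# drops as the number of copies grows.
p = 0.6
for copies in (2, 3):  # two copies in an ordinary embryo vs. an extra copy in the polyploid embryos
    print(f"{copies} copies: all edited with probability {p ** copies:.2f}")
# prints 0.36 for two copies, 0.22 for three
```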
Which is why, even though the typical way to read a research paper is to look at the pictures, then read the captions, then maybe read the results section — to wit, ignoring the bulk of the text — the most important part of Kang et al.’s paper is the discussion section. From their paper:
“Because human in vitro fertilization methods are well established and site-specific nuclease technologies are readily available, it is foreseeable that a genetically modified human could be generated. We believe that any attempt to generate genetically modified humans through the modification of early embryos needs to be strictly prohibited until we can resolve both ethical and scientific issues.”
That’s a sentiment a lot of people probably agree with. But I think it carries more weight in a paper that demonstrates just how easy this process is.
And, sure, they did not sequence the full genomes of their modified embryos. One risk with CRISPR genome editing is that you’ll have “off target effects” — you might change more of the genome than you were intending. But there are plenty of very smart people working to make the technology more precise. Within five years, I’d guess, you’ll be able to change single target genes reliably.
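“Off-target effects” are easier to picture with a toy example: sites elsewhere in the genome that almost match the guide sequence can still get cut. Real off-target predictors are far more sophisticated than this, and the sequences below are invented, but the core idea is just a mismatch count.

```python
# Toy picture of off-target effects: sites that *almost* match the guide can still be cut.
# This just counts letter mismatches; real predictors are far more sophisticated. Sequences invented.
def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def near_matches(guide, genome, max_mismatches=2):
    """Return (position, window) pairs where the genome differs from the guide by few letters."""
    hits = []
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        if mismatches(guide, window) <= max_mismatches:
            hits.append((i, window))
    return hits

guide = "AGCTAGGATCCGATCGATTA"
genome = "TTTTAGCTAGGATCCGATCGATTACCCCAGCTAGGATACGATCGATTACCC"  # one perfect site, one near-miss
for position, window in near_matches(guide, genome):
    print(f"possible cut site at {position}: {window} ({mismatches(guide, window)} mismatches)")
```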
Gattaca chillingly illustrates the dystopia of unregulated genetic manipulation, but even that film understates what we’ll soon be capable of. The premise of Gattaca is that, by sequencing IVF embryos, parents can choose what sort of child they want. From hundreds of options, parents pick one.
Scary, sure. But not this scary. CRISPR could let parents sculpt the child they want.
Not that you’d want this, but it wouldn’t be that hard to make your kid glow in the dark. Maybe you’d want your progeny to be eight feet tall and brilliant, too. You could do it. But, should you?