On Gabrielle Zevin’s ‘Tomorrow, and Tomorrow, and Tomorrow.’

All art is designed to alter the audience’s minds.

Even more accurately: all art is designed to alter the physical structure of the audience’s brains.

Art reshapes the biological world. And from that foundation, perchance, art can change the world at large as well.

#

At the simplest level, the changes wrought by art are memories. Each new memory is created through physical changes in the composition of our brains – freshly pruned connections between neurons, or new synapses that begin reaching out from one cell to another. The pair become more or less likely to fire in concert.

If ever an artwork doesn’t change the physical structure of your brain, then you won’t recall having experienced it. If you happen to see that painting, read that book, or hear that song again, it’ll be as though it were the first time.

#

Most artists hope to create works that will make more significant changes in their audience. Not just that the artwork will be remembered, but that the audience might think of the world differently, perhaps even act upon newfound philosophical priorities.

I played a lot of video games while growing up, including first-person shooters like Wolfenstein, Doom II, and GoldenEye. But despite the plethora of violence that I immersed myself in – artworks that constantly suggested I could murder my problems – I’ve never wanted to hold a gun. Luckily, video games didn’t change my brain that way.

But a peculiar commonality between all the video games I played did fundamentally alter my view of the world.

By the time I started playing, most computer games had save functions. A player could create a backup copy of their game’s progress just before a tricky section, and then the player could repeat the ensuing challenge again and again, loading the saved copy after each new mistake while attempting to complete that section of the game perfectly. (A mid-nineties rock album – I believe by the band Ben Folds Five – toyed with this idea of computerized perfection by claiming in the liner notes that each measure of every song was recorded in a separate take, all to give listeners an experience absolutely free of musical mistakes.)
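
The underlying loop is simple enough to sketch in a few lines of code (everything here – the game state, the odds of success – is invented for illustration):

    import copy
    import random

    def attempt_challenge(state):
        # A stand-in for some tricky section: succeeds only sometimes.
        return random.random() < 0.3

    # The player's progress so far (a hypothetical, simplified game state).
    state = {"level": 7, "health": 42, "inventory": ["key", "torch"]}

    # "Saving" just snapshots the state before the tricky section ...
    save_file = copy.deepcopy(state)

    # ... and after each new mistake, the player reloads that snapshot,
    # repeating until the section is completed perfectly.
    while not attempt_challenge(state):
        state = copy.deepcopy(save_file)

    state["level"] += 1  # only now does progress become permanent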

Over many years playing video games, I grew so accustomed to this opportunity to save my progress and repeat sections again and again – whether in openly violent games like Doom and Diablo, in games where the violence was partially abstracted like Civilization 2, or even in peaceable games like Sim City and Sierra’s pinball simulator – that I felt confident whenever I played. I was absolutely certain that I’d eventually get things right.

In video games, a player could always try again.

And so an awful feeling gnawed at me when things went wrong in my day-to-day life, outside the world of games. Whether it was asking someone to prom, running poorly in a track meet, going through a bad collegiate break-up, or slogging through a depressed semester that culminated with my research advisor asking for a progress report about the work I (hadn’t) done in his laboratory: I wanted to reload my save file. If I were playing a game, then I’d be prepared. I could zip back and attempt the previous few hours – or months – again.

With repetition, I’d get it right.

Which isn’t a desire to re-live the past, exactly. Instead, video games cultivate a desire to bring all our present knowledge with us. The past, but blessed with portents from an unrealized future. If I’d gotten to try again, I would’ve been a different person – wiser, more experienced – who could make different choices from my prior self and thereby reap a more glorious present.

In his game Braid, Jonathan Blow describes this dream:

Tim and the princess lounge in the castle garden, laughing together, giving names to the colorful birds. Their mistakes are hidden from each other, tucked away between the folds of time, safe.

While playing Braid, players don’t even have to save their game. There are six buttons: left, right, up, down, jump, & “rewind time.” With any mistake – the game’s protagonist tumbling aflame toward the bottom of the screen – a player simply presses “rewind time” and tries again.
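
The mechanic itself is easy to sketch: keep a history of world snapshots, and let the rewind button pop them back off. (This is a toy illustration, not Braid’s actual implementation – Blow’s version, for instance, includes objects that are immune to rewinding, which this ignores.)

    # A toy Braid-style "rewind time" loop: record a snapshot of the world
    # each frame; holding rewind pops the snapshots back off, so no mistake
    # is ever permanent.
    history = []

    def step(world, rewind_held):
        if rewind_held and history:
            return history.pop()          # step backward one recorded frame
        history.append(dict(world))       # record this frame, then advance
        world = dict(world)
        world["x"] += world["vx"]
        return world

    world = {"x": 0, "vx": 1}
    for _ in range(5):
        world = step(world, rewind_held=False)
    world = step(world, rewind_held=True)  # the protagonist un-dies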

#

Even midway through my life, I still experience this misguided desire. Perhaps it’s worse in people predisposed toward rumination. I find myself thinking about how phenomenally well I could re-live my youth, if I were able to keep my knowledge from all this practice that I’ve had living but was somehow given the chance to try again.

And that, to me, is the most horrible fallout from playing video games. They inculcated such a strong expectation that there would always be identical second chances. Not just the potential for forgiveness or redemption, but an opportunity for repetition until perfection.

I assume this sensation was experienced by people even before anyone played video games. I also assume it’s stronger now. And commoner.

#

Still, there are certain types of artistic messages that video games convey more adroitly than any other media. When we play a game, we control what happens. Our impulses are reflected in movements on a screen. This is a powerful way to engender empathy.

First-person narration in books does something similar. At times I cringed, but I still thrilled at the protagonist’s misbehavior in Sara Levine’s Treasure Island!!! because I’d felt welcomed into her experiences. (A bold novel, since our culture considers Bukowskian misbehavior to be much less acceptable if described from a woman’s perspective – Anna Kavan’s Julia and the Bazooka was never granted the same cachet as William Burroughs’s Naked Lunch.)

Even with the mind-control fungus in Hiron Ennes’s Leech, the first-person narration swayed me to root for the fungus’s destructively selfish pursuits.

#

Nearly all the games made by the designers at the heart of Gabrielle Zevin’s Tomorrow, and Tomorrow, and Tomorrow have narratives that are rigidly circumscribed. A game’s protagonist might jump only when the player presses “jump,” but in this sort of game – heavily scripted, with a pre-programmed “correct” way to win – players have only the illusion of choice and freedom. The artwork’s audience is forced to follow a single structured path (much like how Mario can’t even turn back to revisit areas that have passed off the left side of the screen) if they want to experience the story.

The designers in Tomorrow, and Tomorrow, and Tomorrow don’t allow emergent behaviors into their games. (Emergent behaviors are generally the most interesting and/or bizarre aspects of artificial intelligence – all the strange interactions that occur when a sufficient quantity of simple systems concurrently pursue mathematically-designated goals. A few years ago, researchers O. Peleg & colleagues found that a very basic algorithm could replicate the gravity- and wind-defying stability of honeybee clusters. And a simple word prediction model – albeit trained on a huge trove of data – creates the lifelike creepiness of chat algorithms.)
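
Emergence is easy to demonstrate in a few lines of code. In this toy sketch (my own illustration, not Peleg’s bee model), fifty agents each follow one trivial rule – step toward your nearest neighbor – and tight clusters appear even though no rule mentions clustering:

    import random

    # Fifty agents scattered along a line, each pursuing one trivial,
    # mathematically-designated goal: step toward your nearest neighbor.
    positions = [random.uniform(0, 100) for _ in range(50)]

    for _ in range(200):
        updated = []
        for i, p in enumerate(positions):
            nearest = min((q for j, q in enumerate(positions) if j != i),
                          key=lambda q: abs(q - p))
            updated.append(p + (1 if nearest > p else -1 if nearest < p else 0))
        positions = updated

    # The agents end up in tight clumps, though no rule ever said "clump."
    print(sorted(round(p) for p in positions))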

In the games described in Tomorrow, and Tomorrow, and Tomorrow, a player’s choice is generally to either do the “right” thing, which will allow the game to progress, or to stop playing and never reach its ending.

Which does carry an inherent drawback. Artists want for an audience to be able to experience their creations. But when that artwork is a game, many people find themselves excluded.

Recently, my youngest child started drawing video game levels on paper (despite living in a house without a television and having never played that type of game), so over spring break I invited them to sit in front of my 17-year-old laptop computer and play Braid. I thought this would be a good introduction to gaming because there’s no risk – with the “rewind time” button, the game’s protagonist can’t die.

But my kids couldn’t coordinate the button pressing required for jumping between platforms. Neither can my spouse. I think Braid is a fantastic piece of artwork, but without me sitting with them, ready to press the buttons at the tricky parts, they wouldn’t get to experience it.

#

The game Limbo, like Mario, is a left-to-right sidescroller, and the player begins by moving through a forest world where other children have set murderous traps or will attack you when you approach. Limbo is replete with puzzles that can’t be solved without multiple attempts – often the signs of danger won’t be noticed until you’ve succumbed to them several times – forcing the player to watch their protagonist die again and again.

I felt like the experience of playing was changing me, making me inured to the game’s incessant horror and tragedy as I continued. If the final levels of the game had included puzzles in which the player had to move from right to left and construct traps to thwart the next “generation” of characters attempting to escape the forest, I think the game would’ve been more compelling. It would have had a closed circle of meaning and conveyed a powerful message about trauma and its aftereffects.

My youngest child is in a multi-age classroom where several students are lashing out in pretty terrible, disruptive ways – my kid has often sobbed at bedtime while describing things that happened. But the outbursts are coming from kindergartners and first-graders. No child that age wants to be terrible! No child wants to rage and scream and invite punishment. It takes problematic occurrences earlier in life, and not enough support in the moment, for a child to devolve to that.

A game like Limbo (with a more compelling ending) could readily teach this, because players could reflect upon the changes that had occurred in their own behavior from the beginning to the end of the game.

Video games, by putting us in “control” of characters in desperate situations, can demonstrate how close we are to ethical slips. Video games are the art form best suited to inspire their audience to reckon with their own behavior. (Admittedly, players don’t always undertake this reckoning.)

For instance, the game Grand Theft Auto 5 asks players to torture a character in order to continue the story. I don’t think that kids should have been exposed to that game, but for adult audiences, I believe there’s real value in demonstrating what regular people will do when they feel trapped or bullied into unethical behavior. In demonstrating what you might do.

There was no way to experience the rest of the game without committing torture, but then again, GTA was just a game – not getting to experience the rest of the plot isn’t a huge punishment, is it? And even so, most players chose to torture. How much more severe do you think the pressure would be if you’d devoted your life to a certain career and then felt like all your progress – your income, your identity – would be wrenched away from you if you failed to conform with a culture of unethical behavior?

Occasionally, when other volunteers visit the county jail to help with my classes, they’ve expressed disappointment afterward that I’m always friendly and polite with the correctional officers who work there. I ask about the COs’ plans for their weekends (at the jail, staff are typically assigned a 9-day work week, with three weekend days and a back-to-back pair of “Wednesdays” mid-week, so you never know how close somebody is to their own personal Friday). Everyone who volunteers at the jail is, like me, extremely liberal, and sometimes the other volunteers say that it feels disquieting to be chummy with the people administering state-sponsored violence and incarceration.

But it’s not the fault of anyone who works in a jail. Honestly, the jail staff is in jail every day they go to work, and we live in a country where most people have to work in order to have enough to eat. (I’m very privileged, but much of my extended family has been on food stamps at some time or other, and the money always ran out before the end of the month.)

Even at jails or prisons where the staff are openly abusive toward inmates – or police departments that cultivate a culture of brutality – it’s worth considering what pressures caused people to break and behave that way.

Video games – with their twitchy dexterity challenges that cause our hearts to race – can demonstrate the ways that we, too, might break.

Playing Minecraft at the local YMCA, I watched my gentle vegan child swing a pickax at a sheep.

If I were playing, I’d probably do it, too. Just to see what happens.

The moving pixels stopped moving. The sheep became a resource.

#

I haven’t learned to program yet, but the first game I make will probably be called Psalmist, where players write poems to conjure forth a deity. The deity would become the player’s character, with its controls made intentionally buggy in ways that reflected the player’s style of worship. If a player wrote something like Psalm 137 (“Happy shall he be, that taketh and dasheth thy little ones against the stones.”) they might reap a god wrathful enough to lunge and squash the player’s own people even as the player attempted to navigate toward an enemy city.

Automated text analysis isn’t good enough to understand poetry yet, but a simple statistical model could register the emotional tenor of the words, whether there was frantic or lugubrious pacing, tight or loose syntax.
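
To sketch the kind of shallow reading I mean (every word list and threshold below is invented for illustration – a real version would need something far richer):

    # A rough sketch of the shallow statistical model Psalmist might use:
    # count emotionally charged words, and infer pacing from clause length.
    WRATHFUL = {"dash", "dasheth", "smite", "crush", "burn", "avenge"}
    TENDER = {"comfort", "mercy", "shepherd", "gentle", "restore", "bless"}

    def read_psalm(text):
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        wrath = sum(w in WRATHFUL for w in words)
        tenderness = sum(w in TENDER for w in words)
        clauses = [c for c in text.replace("!", ".").split(".") if c.strip()]
        avg_clause = sum(len(c.split()) for c in clauses) / len(clauses)
        # Short clauses read as frantic pacing; long ones as lugubrious.
        return {"wrath": wrath, "tenderness": tenderness,
                "frantic": avg_clause < 8}

    print(read_psalm("Happy shall he be, that taketh and dasheth "
                     "thy little ones against the stones."))
    # -> {'wrath': 1, 'tenderness': 0, 'frantic': False}

The god’s controls could then be made buggy in proportion to the wrath score.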

The main problem, in making a game like that, would be a question of artistic intent: it’s easy to imagine incorporating deities into strategy games like StarCraft or Civilization, but do we need another work of art that reinforces players’ assumptions that it’s good to conquer the world? To extract every resource and crush enemies?

#

Near the end of Tomorrow, and Tomorrow, and Tomorrow, a protagonist says,

Art doesn’t typically get made by happy people.

And while it’s true that all art aims to change the world, I think it’s possible to simultaneously experience both happiness and a recognition that the world has many flaws.

When my spouse and I moved into our home, I remarked that our bedroom ceiling looked wrong. “The round light fixture, the square walls, they don’t look right. There ought to be a maze.” Every time I looked at that ceiling, something felt off to me.

Then, two years later, I painted the maze that I thought should have been there all along. (I painted while standing precariously on a folding chair, and at one point slipped and dumped half a can of black paint on my face and dreadlocked hair. The next day I was telling this story to a local poet, and he said, “Oh, and it’s not like you even needed to paint it, which probably makes it worse.” Except that I did need to paint that ceiling – it hadn’t looked right to me!)

It’s not that I was unhappy then, or am blissfully happy now – we make art to make the world that we think ought to be.

Whether from a place of happiness or unhappiness, artists want the world to change.

On disillusionment and knock knock jokes.

Most children love telling knock knock jokes – the traditional call and response gives them such power.

When a child says, “Knock knock,” you have to say, “Who’s there?” That’s the system!

The jokes aren’t funny. They’re never funny. At their worst, they’re also long – “Orange-n’t you glad I didn’t say banana?”

And yet, kids know that when they say, “Knock knock,” you have to say “Who’s there?”

Until, one day, somebody doesn’t.

#

In “Rational Snacking: Young Children’s Decision-Making on the Marshmallow Task Is Moderated by Beliefs about Environmental Reliability,” Celeste Kidd, Holly Palmeri, & Richard Aslin write that:

When children draw on walls, reject daily baths, or leave the house wearing no pants and a tutu, caretakers may reasonably doubt their capacity for rational decision-making.

However, recent evidence suggests that even very young children possess sophisticated decision-making capabilities …

The authors conducted an experiment: a marshmallow was set in front of a small child; the child was told that if they waited to eat it, they’d be given two marshmallows instead; the child was left alone in the room with the marshmallow for up to fifteen minutes.

This is a common experiment – variants have been conducted since the 1970s. In Kidd, Palmeri, & Aslin’s 2013 version, each child was first shown that the researcher offering marshmallows was either reliable or unreliable. At the beginning of each child’s encounter with the researcher, the researcher provided mediocre art supplies and promised that, if the child waited, the researcher would bring something better. Then the researcher either fulfilled that promise (bringing fresh markers or cool stickers!), or came back offering only apologies and saying that the child should just use the mediocre supplies that had been in the room all along. The wait had been for naught!

During the subsequent marshmallow test, children were asked to trust this same researcher to fulfill a promise, even after being shown that the researcher wasn’t reliable.

The children who’d been disappointed were less likely to wait.

#

Actually, it’s not just “knock knock” jokes – none of the jokes that children tell are funny.

And yet, parents feign excitement. We smile, maybe even laugh.

My kids are two years apart. When they were six and four, my younger child would often watch and listen and then tell the exact same joke to me.

I’d do my best to respond in the exact same way. As though surely I couldn’t know – no child wants for you to actually try to guess the answer when they tell a joke.

“I don’t know, where does a cow go for entertainment?”

Eventually a child will experience disillusionment from the world; it needn’t come from a caretaker.

#

At the end of Kidd, Palmeri, & Aslin’s marshmallow experiment, every child was given evidence that the researchers were unreliable. No matter if the child had waited to eat the marshmallow or had scarfed it right away, each child was given three additional marshmallows.

No child’s expectations were met. And the children who’d decided that waiting was pointless had their beliefs reinforced.

In the great scheme of things, giving children a few extra marshmallows doesn’t cause much harm. Although it’s curious that this group of researchers would intentionally undermine children’s trust in scientists.

#

At the local high school, the boys’ bathroom adjacent to the cafeteria doesn’t have soap. Empty plastic shells are affixed to the wall where soap dispensers used to be.

There’s a soap dispenser in the hallway outside the bathroom. If someone wanted to wash their hands properly, they’d have to turn on a sink, get their hands wet, walk outside, use the dispenser, then walk back into the bathroom to rinse the soap off. Few students do.

The administration removed the dispensers because some students were stealing them, and, at least once, somebody urinated into the soap pouch – these students needed devious licks to boast about on social media. Similar incidents happened all around the country.

The problem, several high school seniors insisted to me, is that schools were closed for a while during the pandemic, which meant that current sophomores and juniors didn’t get bullied enough during middle school.

Obviously, their theory is ridiculous – “more bullying” is never a good solution to the world’s problems. But I find it fascinating that this would be the students’ first hypothesis. That the underlying problem isn’t that children were forcibly isolated during a crucial phase of their development, nor that we’ve inundated children’s lives with addictive, psychologically manipulative smartphone apps. No, the real problem is that these young people weren’t bullied enough!

#

By middle school, nearly all students will have experienced the disillusionment of having a knock knock joke batted away without a “Who’s there?” in response – I believe most middle school humor still revolves around sex, sarcasm, and dead baby jokes.

But I find it difficult to believe that young people – whose lives transitioned from in-person interactions with people their own age to transpiring almost entirely on the internet – would’ve experienced significantly less bullying during the pandemic. The internet is a nightmare!

#

The best audience for a child’s knock knock joke is another child – maybe, just maybe, a child might think it’s funny to hear the turnabout from “Boo who?” to “Oh, what’s wrong, are you hurt?”

The interaction is personal, localized, and impermanent.

And when the disillusionment comes – a friend not saying “Who’s there?” – the moment is brief and private.

How much worse might it feel to have your moments of embarrassment linger in full view?

Personally, I’m embarrassed about the world we’re building for young people.

On maternal bonds and cruelty.

When I was a child, my parents gave me a toy walrus to sleep with. While cuddling this walrus, I’d twist my fingers through a small looped tag on its back, until one day I knotted the tag so thoroughly that I cut off my circulation. I screamed; my finger turned blue; my parents rushed in and wanted to cut off the tag.

“No!” I apparently screamed. “The soft tag is the best part!”

I continued to refuse their help until they offered a compromise, merely slicing the loop in half so we could save my throbbing finger and prevent any future calamity.

I continued to sleep with that toy walrus until I was midway through high school. As I fell asleep, my parents would sometimes peer inside my bedroom and see me lying there, eyes closed, breath slow, my fingers gently stroking that soft tag.

Yes, kids with autism are sometimes quite particular about sensory stimulation. But I am not alone! Baby monkeys also love soft fabric.

So do their mothers.

#

After biologist Margaret Livingstone published a research essay, “Triggers for Mother Love,” animal welfare activists and many other scientists were appalled. In the essay, Livingstone casually discusses traumatic ongoing experiments in which hours-old baby monkeys are removed from their mothers. The babies are then raised in environments where they never glimpse anything that resembles a face, either because they’re kept in solitary confinement and fed by masked technicians or because the babies’ eyes are sutured shut.

After the babies are removed from their mothers, Livingstone offers the mothers soft toys. And the mothers appear to bond with these soft toys. When one particular baby was returned to its mother several hours later, Livingstone writes that:

The mother looked back and forth between the toy she was holding and the wiggling, squeaking infant, and eventually moved to the back of her enclosure with the toy, leaving the lively infant on the shelf.

Although I dislike this ongoing research, and don’t believe that it should continue, I find Livingstone’s essay to be generally compassionate.

Livingstone discusses parenting advice from the early twentieth century – too much touch or physical affection will make your child weak! – that probably stunted the emotional development of large numbers of children. Livingstone expresses gratitude that the 1950s-era research of Harry Harlow – the first scientist to explore using soft toys to replace a severed maternal bond – revealed how toxic these recommendations really were.

Harlow’s research may have improved the lives of many human children.

Harlow’s research intentionally inflicted severe trauma on research animals.

#

To show that the aftereffects of trauma can linger throughout an animal’s life, Harlow used devices that he named “The Rape Rack” and “The Pit of Despair” to harm monkeys (whom he did not name).

Harlow did not justify these acts by denigrating the animals. Indeed, in Voracious Science and Vulnerable Animals, research-scientist-turned-animal-activist John Gluck describes working with Harlow first as a student and later as a professorial collaborator, and he believes that Harlow was notable at the time for his respect for monkeys. But this was not enough. Gluck writes that:

The accepted all-encompassing single ethical principle was simple: if considerations of risk and significant harm blocked the use of human subjects, using animals as experimental surrogates was automatically justified.

Harlow showed that monkeys could be emotionally destroyed when opportunities for maternal and peer attachment were withheld. He argued that affectionate relationships in monkeys were worthy of terms like love.

In his work on learning in monkeys … [he offered] abundant evidence that monkeys develop and evaluate hypotheses during attempts to develop a solution.

Everything that Harlow learned from his research declared that monkeys are self-conscious, emotionally complex, intentional, and capable of substantial levels of suffering.

#

For my own scientific research, I purchased cow’s brains from slaughterhouses. I used antibodies that were made in the bodies of rabbits and mice who lived (poorly) inside industrial facilities. For my spouse’s scientific research, she killed male frogs to take their sperm.

We’re both vegan.

I’d like to believe that we’d find alternative ways to address those same research questions if we were to repeat those projects today. But that’s hypothetical – at the time, we used animals.

And I certainly believe that there are other ways for Livingstone to study, for instance, the developmental ramifications of autistic children rarely making eye contact with the people around them – without blinding baby monkeys. I believe that Livingstone could study the physiological cues for bonding without removing mothers’ babies (especially since Harlow’s work, from the better part of a century ago, already showed how damaging this methodology would be).

Personally, I don’t think the potential gains from these experiments are worth their moral costs.

But also I recognize that, as a person living in the modern world, I’ve benefited from Harlow’s research. I’ve benefited from the research using mice, hamsters, and monkeys that led to the Covid-19 vaccines. I’ve benefited from innumerable experiments that caused harm.

Livingstone’s particular research might not result in any benefits – a lot of scientific research doesn’t – but unfortunately we can’t know in advance what knowledge will be useful and what won’t.

And if there’s any benefit, then I will benefit from this, too. It’s very hard to avoid being helped by knowledge that’s out there in the world.

To my mind, this means I have to atone – to find ways to compensate for some of the suffering that’s been inflicted on my behalf – but reparations are never perfect. And no one can force you to recognize a moral debt.

You will have to decide what any of this means to you.

On unrequited love.

As translated by Edith Grossman, Gabriel García Márquez’s novel Love in the Time of Cholera begins:

It was inevitable: the scent of bitter almonds always reminded him of the fate of unrequited love.

An unhealthy longing. Unidirectional affection is often based on an illusion, with the besotted failing to see the whole complex, contradictory, living person in front of them.

Infatuation can feel overwhelming: the scent of bitter almonds – which about 40% of us can’t detect (early in my research career, a lab director counseled, “You should find some and check; you’ll be safer if you know whether or not you can smell it”) – is the scent of cyanide, an agent of suicide. A release from the emotions that a person momentarily believes they cannot live with.

Throughout high school and college – bumbling through social situations as an undiagnosed, awkward, empathetic autistic person – I was prone to unrequited love. I could recognize when a classmate was intelligent, friendly, and fun; I understood less about the scaffolding of mutual care that might allow for reciprocal love to grow.

And so I grew adept at expressing unrequited affection: heartfelt handwritten letters; delivering home-cooked meals; offering compassion and care when a person I liked was sick; making fumbling offers to hold hands during an evening we spent jaunting about together.

It wasn’t love, exactly, which between adults needs both trust and the accumulation of shared memories to grow, but it was something. An imagined swirl of possibility that helped me feel hopeful about the future. In Love in the Time of Cholera, a character maintains his unrequited love for fifty-three years before finally building a reciprocal relationship:

Then [the captain] looked at Florentino Ariza, his invincible power, his intrepid love, and he was overwhelmed by the belated suspicion that it is life, more than death, that has no limits.

#

Unrequited love is unhealthy, and it surrounds us. The world depends on our desire to care, our willingness to occasionally sacrifice our own interests for the benefit of others.

In “The Bear’s Kiss,” Leslie Jamison writes that:

Every consciousness, whether human or animal, loves differently. When we love animals, we love creatures whose conception of love we’ll never fully understand. We love creatures whose love for us will always be different from our love for them.

But isn’t this, you might wonder, the state of loving other people as well? Aren’t we always flinging our desire at the opacity of another person, and receiving care we cannot fully comprehend?

Well, yes.

#

A friend recently contacted me in the middle of the night: he and his spouse just had their first child, and – surprise, surprise! – they weren’t sleeping.

“Right now,” my friend told me, “he’s quiet if he’s nursing, or if we’re walking around with him in the carrier, but other than that, he’ll wake up and yell.”

I tried to think of what cheerful advice I could possibly give. “Sometimes I’d put my kids in the carrier,” I said, “then bounce on an exercise ball while I watched TV, to trick them into thinking we were walking.”

“Hey!” shouted my six-year-old, who was drawing cartoon monsters at my feet. “You tricked us!”

“Yup,” I told her, “and you’ll be glad to know how I did it, in case you’re ever trying to soothe a baby.”

For most of human evolution, most people’s lives were intimately entwined with their whole community. New parents would have watched other people raise children. But in recent years, upper and middle class Americans have segregated themselves by age. After leaving college, many rarely spend time around babies until having their own. And then, wham! After only a few months’ preparation, there’s a hungry, helpless, needy being who needs care.

Those first few weeks – which hazily become the first few months – are particularly punishing because very young babies can express contentment or angst, but not appreciation. New parents upend everything about their lives to provide for these tiny creatures, and they’re given so little back. In the beginning there are no smiles, no giggles or coos – just a few moments’ absence of yelling.

Upon reflection – thinking about the handwritten letters that my spouse and I have penned in journals for our children, interspersed with bits of their art that we’ve taped to the pages; all the excessively bland meals I’ve cooked; the doting cuddles and care when their stuffy noses made it hard for them to breathe; my continued insistence that we hold hands when crossing busy streets – I realized that unrequited love was perhaps my major preparation.

“I guess it was nice,” I told my friend, “that after all those years, I finally had a relationship where unrequited love was considered healthy.”

Luckily for me, my friend was sleep deprived enough to laugh.

On scientific beliefs, Indigenous knowledge, and paternity.

Recently my spouse & I reviewed Jennifer Raff’s Origin: A Genetic History of the Americas – a book that deftly balances the twin goals of disseminating scientific findings and honoring traditional knowledge – for the American Biology Teacher magazine. (In brief: Raff’s book is lovely, you should read it! I’ll include a link to our review once it’s published!)

By the time European immigrants reached the Americas, many of the people living here told stories suggesting that their ancestors had always inhabited these lands. This is not literally true. We have very good evidence that all human species – including Homo sapiens, Homo neanderthalensis, and the Denisovans, among possible others – first lived in Africa. Their descendants then migrated around the globe over a period of a few hundred thousand years.

As best we know, no lasting population of humans reached the Americas until about twenty thousand years ago (by which time most human species had gone extinct – only Homo sapiens remained).

During the most recent ice age, a few thousand humans lived in an isolated, Texas-sized grassland called Beringia for perhaps a few thousand years. They were cut off from other humans to the west and an entire continent to the east by glacial ice sheets. By about twenty thousand years ago, though, some members of this group ventured south by boat and established new homes along the shoreline.

By about ten thousand years ago, and perhaps earlier, descendants of these travelers reached the southern tip of South America, the eastern seaboard of North America, and everywhere between. This spread was likely quite rapid (from the perspective of an evolutionary biologist) based on the diversity of local languages that had developed by the time Europeans arrived, about five hundred years ago.

So, by the time Europeans arrived, some groups of people had probably been living in place for nearly 10,000 years. This is not “always” from a scientific perspective, which judges our planet to be over 4,000,000,000 years old. But this is “always” when in conversation with an immigrant who believes the planet to be about six thousand years old. Compared with Isaac Newton’s interpretation of Genesis, the First Peoples had been living here long before God created Adam and Eve.

If “In the beginning …” marks the beginning of time, then, yes, their people had always lived here.

#

I found myself reflecting on the balance between scientific & traditional knowledge while reading Gabriel Andrade’s essay, “How ‘Indigenous Ways of Knowing’ Works in Venezuela.” Andrade describes his interactions with students who hold the traditional belief in partible paternity: that semen is the stuff of life from which human babies are formed, and so every cis-man who ejaculates during penetrative sex with a pregnant person becomes a father to the child.

Such beliefs might have been common among ancient humans – from their behavior, it appears that contemporary chimpanzees might also hold similar beliefs – and were almost certainly widespread among the First Peoples of South America.

I appreciate partible paternity because, although this belief is often framed in misogynistic language – inaccurately grandiose claims about the role of semen in fetal development, often while ignoring the huge contribution of a pregnant person’s body – the belief makes the world better. People who are or might become pregnant are given more freedom. Other parents, typically men, are encouraged to help many children.

Replacing belief in partible paternity with a scientifically “correct” understanding of reproduction would probably make the world worse – people who might become pregnant would be permitted less freedom, and potential parents might cease to aid children whom they didn’t know to be their own genetic offspring.

Also, the traditional knowledge – belief in partible paternity – might be correct.

Obviously, there’s a question of relationships – what makes someone a parent? But I also mean something more biological – a human child actually can have three or more genetic contributors among their parents.

#

Presumably you know the scientific version of human reproduction. To wit: a single sperm cell merges with a single egg cell. This egg rapidly changes to exclude all the other sperm cells surrounding it, then implants in the uterine lining. Over the next nine months, this pluripotent cell divides repeatedly to form the entire body of a child. The resulting child has exactly two parents. Every cell in the child’s body has the same 3 billion base pair long genome.

No scientist believes in this simplified version. For instance, every time a cell divides, the entire genome must be copied – each time, this process will create a few mistakes. By the time a human child is ready to be born, their cells will have divided so many times that the genome of a cell in the hand is different from the genome of a cell in the liver or in the brain.

In Unique, David Linden writes that:

Until recently, reading someone’s DNA required a goodly amount of it: you’d take a blood draw or a cheek swab and pool the DNA from many cells before loading it into the sequencing machine.

However, in recent years it has become possible to read the complete sequence of DNA, all three billion or so nucleotides, from individual cells, such as a single skin cell or neuron. With this technique in hand, Christopher Walsh and his coworkers at Boston Children’s Hospital and Harvard Medical School isolated thirty-six individual neurons from three healthy postmortem human brains and then determined the complete genetic sequence for each of them.

This revealed that no two neurons had exactly the same DNA sequence. In fact, each neuron harbored, on average, about 1,500 single-nucleotide mutations. That’s 1,500 nucleotides out of a total of three billion in the entire genome – a very low rate, but those mutations can have important consequences. For example, one was in a gene that instructs the production of an ion channel protein that’s crucial for electrical signaling in neurons. If this mutation were present in a group of neurons, instead of just one, it could cause epilepsy.

No human has just one genome: we are composite creatures.

#

Most scientists do believe that all these unique individual genomes inside your cells were composed by combining genetic information from your two parents and then layering on novel mutations. But we don’t know how often this is false.

Pluripotent (“able to form many things”) cells from a developing human embryo / fetus / baby can travel throughout a pregnant person’s body. This is quite common – most people with XX chromosomes who have given birth to people with XY chromosomes will have cells with Y chromosomes in their brains. During the gestation of twins, the twins often swap cells (and therefore genomes).

At the time of birth, most humans aren’t twins, but many of us do start that way. There’s only a one in fifty chance of twin birth following a dizygotic pregnancy (the fertilization of two or more egg cells released during a single ovulation). Usually what happens next is a merger or absorption of one set of these cells by another, resulting in a single child. When this occurs, different regions of a person’s body end up with distinct genetic lineages, but it’s difficult to identify. Before the advent of genetic sequencing, you might notice only if there was a difference in eye, skin, or hair color from one part of a person’s body to the next. Even now, you’ll only notice if you sequence full genomes from several regions of a person’s body and find that they’re distinct.

For a person to have more than two genetic contributors, there would have to be a dizygotic pregnancy in which sperm cells from unique individuals merged with the two eggs.

In the United States, where the dominant culture is such that people who are trying to get pregnant are exhorted not to mate with multiple individuals, studies conducted in the 1990s found that at least one set of every few hundred twins had separate fathers (termed “heteropaternal superfecundation”). In these cases, the children almost certainly had genomes derived from the genetic contributions of three separate people (although each individual cell in the children’s bodies would have a genome derived from only two genetic contributors).

So, we actually know that partible paternity is real. Because it’s so difficult to notice, our current estimates are probably lower bounds. If 1:400 is the rate among live-born twins, then a similar share of all dizygotic pregnancies in the United States probably also involve three or more genetic contributors. And this frequency is probably higher in cultures that celebrate rather than castigate the practice.

Honestly, I could be persuaded that estimates ranging anywhere from 1:20 to 1:4,000 were reasonable for the frequency that individuals from these cultures have three or more genetic contributors.** We just don’t know.

#

I agree with Gabriel Andrade that we’d like for medical students who grew up believing in partible paternity to benefit from our scientific understanding of genetics and inheritance – this scientific knowledge will help them help their patients. But I also believe that, even in this extreme case, the traditional knowledge should be respected. It’s not as inaccurate as we might reflexively believe!

The scientific uncertainty I’ve described above doesn’t quite match the traditional knowledge, though. A person can only receive genetic inheritance from, ahem, mating events that happen during ovulation, whereas partible paternity belief systems also treat everyone who has sex with the pregnant person over the next few months as a parent, too.

But there’s a big difference between contributing genes and being a parent. In Our Transgenic Future: Spider Goats, Genetic Modification, and the Will to Change Nature, Lisa Jean Moore discusses the many parents who have helped raise the three children she conceived through artificial insemination. Even after Moore’s romantic relationships with some of these people ended, they remained parents to her children. The parental bond, like all human relationships, is created by the relationship itself.

This should go without saying, but: foster families are families. Adopted families are families. Families are families.

Partible paternity is a belief that makes itself real.

.

.

.

** A note on the math: Dizygotic fertilization appears to account for 1:10 human births, and in each of these cases there is probably at least some degree of chimerism in the resulting child. My upper estimate for the frequency that individuals have three or more genetic contributors, 1:20, would be if sperm from multiple individuals had exactly equal probabilities of fertilizing each of the two egg cells. My lower estimate of 1:4,000 would be if dizygotic fertilization from multiple individuals had the same odds as the 1:400 that fraternal twin pairs in the U.S. have distinct primary genetic contributors. Presumably a culture that actively pursues partible paternity would have a higher rate than this, but we don’t know for sure. And in any case, these are large numbers! Up to 5% of people from these cultures might actually have three or more genetic contributors, which is both biologically relevant and something that we’d be likely to overlook if we ignored the traditional Indigenous knowledge about partible paternity.
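
Worked through in a few lines (using only the footnote’s own estimated rates, and assuming two partners for the upper bound):

    # The footnote's arithmetic, using the essay's estimated rates.
    dizygotic = 1 / 10    # births that begin as two fertilized eggs

    # Upper bound: two partners' sperm equally likely to fertilize each
    # egg, so the eggs get different fathers about half the time.
    upper = dizygotic * (1 / 2)      # = 1/20, i.e. up to 5% of people

    # Lower bound: apply the U.S. fraternal-twin rate of distinct fathers
    # (1 in 400) to dizygotic pregnancies generally.
    lower = dizygotic * (1 / 400)    # = 1/4,000

    print(f"somewhere between 1 in {1/upper:.0f} and 1 in {1/lower:.0f}")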

On work.

If you’re living in a capitalist society, having money is great! Money gets you space to live! Money gets you food to eat! And if you ever think of something else you want, money lets you buy it! Right now! Wham!

Hooray for money!

Except that the actual process of getting money can be pretty miserable.

Most people get money by finding a job. At the job, somebody will tell them what to do. They do it, they get paid.

The pay, in the United States, tends to be quite low. Working forty hours a week for fifty-two weeks a year – 2,080 hours – the federal minimum wage of $7.25 an hour would net you about fifteen thousand dollars. Even if the US minimum wage were lavishly raised to $15 an hour, you’d still only get about thirty thousand dollars a year.

To keep the US economy going, we’ve relied on desperation. If people had other options, they wouldn’t do dangerous, difficult, or demeaning work for so little pay.

Until recently, though, most people felt like they didn’t have other options. And so they took terrible jobs, hoping to scrape by.

Now, things are looking different.

In the US, lots of people chose not to re-enter the post-pandemic labor force. Among people who did return to work, huge numbers have been quitting.

In China, many young people are advocating for cheaper ways of living. Instead of working long hours at an odious job in order to have enough money to buy fancy things, maybe it’d be better to work less and take joy in simpler pleasures. Of course, this is a rather anti-progress sentiment, so references to the “tang ping” or “lie flat” movement have been deleted from the Chinese internet to quell the ideology.

Even among people who are lucky enough to be paid for doing something fun – and, honestly, among the professional classes, a lot of work is fun, lots of tricksy little puzzles to solve – there’s often an imbalance between how much time we spend working and how much time we spend on family or other sources of lasting joy. This is, roughly, the main argument in the essay by New York Times writer Farhad Manjoo, “Even With a Dream Job, You Can Still Be Anti-Work.”

There are lots of ways to find fulfillment in life. And, yes, work can definitely provide that satisfying sensation of having done something worthwhile with your time! Especially if you’re lucky enough to be paid for doing something you love. My spouse loves to teach. Manjoo loves to research big ideas. I love to write!

But the work that many people find themselves doing – trading away their time so that they’ll have enough money to meet their needs – doesn’t feel rewarding. And even a good job can suck up too much time. Caretaking, conversation, art, travel, philosophy, religious practice – these are also excellent avenues to a fulfilling life, except that they don’t draw a salary. Most people aren’t lucky enough to be able to use their time in those ways.

So: work can feel lousy for the people doing the work.

Boo!

And it gets worse. Because there’s another big problem with work: in a capitalist society, much work makes the world worse.

In the US, for instance, our recent economic miracles are advertising companies: Google and Facebook. Their founders have become absurdly rich; a huge number of people have found well-paying, intellectually-stimulating jobs working for these companies. But their money comes from hurting people! Our world would be better off if all those people’s work wasn’t being done.

Very occasionally, advertising benefits a person. An ad might make you aware of something that improves your life! Maybe you’ve always wanted a little automated rake that cleans your cat’s litter box. (I saw an ad for one of those on the YMCA television while I was lifting weights.)

Or maybe you’d like to go out for Indian food, but hadn’t realized there was an Indian restaurant in your home town. Good thing you saw their ad!

But more often, advertising harms us. An effective advertisement instills a sense of absence that some company’s product can supposedly fill. Huge amounts of money are spent creating and distributing ads for beer, for cruise ships, for fast food.

Which people, exactly, do we suppose are unaware of the existence of beer? And would the newfound knowledge help them?

Especially in the face of climate change, our society will have to change. In some fields – manufacturing, advertising, drilling – we need for people to work less. We need for less stuff to be made, used briefly, and shunted off to landfills. The work makes our planet less hospitable.

I used to do biomedical research. I stopped; it seemed that if I did my job well, I too would help wreck our planet. New discoveries are much more likely to yield slight, expensive extensions to the ends of wealthy people’s lives, rather than any additional happiness for the majority of our population.

We already spend inordinate amounts of money on frantic efforts to extend the end of life, even though studies have shown that “the less money spent in this time period, the better the death experience is for the patient.”

This sort of work is good for the economy. But it’s bad for people. Wouldn’t it be nice to live in a world where everyone thought that the latter mattered more?

On magic.

There’s broad scientific consensus that school closures hurt children, probably making a significant contribution to future increases in premature death.

There’s also broad scientific consensus that school closures – particularly elementary school closures – aren’t helpful in slowing the spread of Covid-19. Children aren’t major vectors for this virus. Adults just have to remember not to congregate in the teachers’ lounge.

Worldwide, a vanishingly small percentage of viral transmissions have occurred inside schools.

And … our district just closed in-person school for all children.

In-person indoor dining at restaurants is still allowed. Bars are still open.

Older people are sending a clear message to kids: “Your lives matter less than ours.”

#

For at-risk children, school closures are devastating. A disruption in social-emotional learning; lifelong education gaps; skipped meals.

But for my (privileged!) family, the closure will be pretty nice. I was recently feeling nostalgic about the weeks in August when my eldest and I spent each morning together.

Our youngest attends pre-K at a private school. Her school, like most private schools around the country, (sensibly) re-opened on time and is following its regular academic calendar.

My eldest and I will do two weeks of home schooling before winter break. And it’ll be fun. I like spending time with my kids, and my eldest loves school so much that she often uses up most of her energy during the day – teachers tell us what a calm, lovely, hard-working kid she is. And then she comes home and yells, all her resilience dissipated.

Which is normal! Totally normal. But it’s a little crummy, as a parent, to know you’ve got a great kid but that you don’t get to see her at her best.

Right now she’s sad about not going to school – on Monday, she came home crying, “There was an announcement that we all have to switch to online only!” – but I’m lucky that I can be here with her. Writing stories together, doing math puzzles, cooking lunch.

Maybe we’ll practice magic tricks. She loves magic.

#

Last month, I was getting ready to drive the kids to school. T. (4 years old) and I were in the bathroom. I’d just handed T. her toothbrush.

N. (6 years old) walked over holding a gallon-sized plastic bag.

“Father, do you want to see a magic trick?” she asked.

“Okay, but I have to brush my teeth while you’re doing it.”

“Okay,” she said, and opened the bag. She took out a multi-colored lump of clay. It was vaguely spherical. Globs of red, white, and blue poked up from random patches across the surface, as though three colors of clay had been haphazardly moshed together.

“So you think this is just this,” she said, “but then …”

She took out a little wooden knife and began sawing at the lump. “This is just this?” I wondered. It’s an interesting phrase.

Her sawing had little effect. The knife appeared useless. I’m pretty sure this wooden knife is part of the play food set she received as a hand-me-down when she was 9 months old. “Safe for babies” is generally correlated with “Useless for cutting.”

She was having trouble breaking the surface of her lump.

I spat out my toothpaste.

She kept sawing. She set down the knife and stared at the clay intently. A worthy adversary.

I stood there, watching.

She grabbed the knife again and resumed sawing. More vigorously, this time. She started stabbing, whacking. This was enough to make a tiny furrow. She tossed aside the knife and pulled with her fingertips, managing to pry two lobes of the strange lump away from each other.

“Okay,” she said, “it’s hard to see, but there’s some green in there.”

T. and I crouched down and peered closely. Indeed, there was a small bit of round green clay at the center of the lump.

“Wow!” exclaimed T. “I thought it was just a red, and, uh, blue, and white ball! But then, on the inside, there’s some green!”

“I know!” said N., happy that at least one member of her audience understood the significance of her trick. “And look, I might even get it back together!”

#

N. started performing magic when she was four. T. was asleep for her afternoon nap.

“Okay,” she said, “you sit there, and I’ll put on a magic show. Watch, I’ll make, um … this cup! See this cup? I’ll make it disappear.”

“Okay,” I said, curious. We’d just read a book that explained how to make a penny disappear from a glass cup – the trick is to start with the cup sitting on top of the penny, so that the coin looks like it’s inside the cup but actually isn’t.

I had no idea how she planned to make the cup itself disappear.

“Okay, so, um, now you’re ready, and …” she looked at the cup in her hands. Suddenly, she whisked it behind her back. And stood there, looking at me somberly, with her hands behind her back.

“I don’t have it,” she said.

#

Magic – convincing an audience to believe in an illusion.

This is just this.

I don’t have the cup – it’s gone.

Much of our Covid-19 response has been magic-based. We repeat illusory beliefs – schools are dangerous, reinfections are rare, death at any age is a tragedy – and maybe our audience is swayed.

But that doesn’t change the underlying reality.

The cup still exists – it was behind her back.

#

Everyone will die. Mortality is inescapable.

Our species is blessed with prodigious longevity, probably because so many grandmothers among our ancestors worked hard to help their grandchildren survive.

(The long lives of men are probably an accidental evolutionary byproduct, like male nipples or female orgasms. Elderly men, with their propensity to commandeer resources and start conflicts, probably reduced the fitness of their families and tribes.)

After we reach our seventies, though – when our ancestors’ grandchildren had probably passed their most risky developmental years – our bodies fail. We undergo immunosenescence – our immune systems become worse at suppressing cancer and infections.

We will die. Expensive interventions can stave off death for longer – we can now vaccinate 90-year-olds against Covid-19 – but we will still die.

Dying at the end of a long, full life shouldn’t feel sad, though. Everybody dies. Stories end. That’s the natural arc of the world.

What’s sad is when people die young.

Children will face the risk of dying younger due to unnecessary school closures.

Children will face the risk of dying younger due to unmitigated climate change.

Children will face the risk of dying younger due to antibiotic resistant bacteria.

These are urgent threats facing our world. And we’re not addressing them.

The cup is still there.

#

For my daughter, of course, I played along. I smiled, and laughed. She stood there beaming, holding the cup behind her back.

“Magic!” I said.

N. nodded proudly, then asked, “Do you want me to bring it back?”

It’ll take the same measure of magic to bring back schools.

On childcare.

After my eldest was born, I spent the first autumn as her sole daytime caretaker. She spent a lot of time strapped to my chest, either sleeping or wiggling her head about to look at things I gestured to as I chittered at her.

We walked around our home town, visiting museums and the library. I stacked a chair on top of my desk to make a standing workspace and sometimes swayed from side to side while I typed. At times, she reached up and wrapped her little hands around my neck; I gently tucked them back down at my sternum so that I could breathe.

She seemed happy, but it felt unsustainable for me. Actually getting my work done while parenting was nigh impossible.

And so our family bought a membership at the YMCA. They offer two-hour blocks of child care for children between six weeks and six years old.

The people who work in our YMCA’s child care space are wonderful. Most seem to be “overqualified” for the work, which is a strange thing to write. Childhood development has huge ramifications for both the child’s and their family’s whole lifetime, and child psychology is an incredibly rich, complex subject. Helping to raise children is important, fulfilling work. No one is overqualified to do it.

Yet we often judge value based on salary. Childcare, because it was traditionally seen by European society as “women’s work,” is poorly remunerated. The wages are low, there’s little prestige – many people working in childcare have been excluded from other occupations because of a lack of degrees, language barriers, or immigration status.

I like to think that I appreciate the value of caretaking – I’m voting with my feet – but even I insufficiently valued the work being done at our YMCA’s childcare space.

Each time I dropped my children off – at which point I’d sit and type at one of the small tables in the snack room, which were invariably sticky with spilled juice or the like – I viewed it as a trade-off. I thought that I was being a worse parent for those two hours, but by giving myself time to do my work, I could be a fuller human, and maybe would compensate for those lapsed hours by doing better parenting later in the day.

I mistakenly thought that time away from their primary parent would be detrimental for my children.

Recently, I’ve been reading Sarah Blaffer Hrdy’s marvelous Mothers and Others, about the evolutionary roots of human childhood development, and I’ve realized my mistake.

Time spent in our YMCA’s childcare space was, in and of itself, almost surely beneficial for my children. My kids formed strong attachments to the workers there; each time my children visited, they were showered with love. And, most importantly, they were showered with love by someone who wasn’t me.

Hrdy explains:

A team headed by the Israeli psychologist Abraham Sagi and his Dutch collaborator Marinus van IJzendoorn undertook an ambitious series of studies in Israel and the Netherlands to compare children cared for primarily by mothers with those cared for by both mothers and other adults.

Overall, children seem to do best when they have three secure relationships – that is, three relationships that send the clear message “You will be cared for no matter what.”

Such findings led van IJzendoorn and Sagi to conclude that “the most powerful predictor of later socioemotional development involves the quality of the entire attachment network.”

In the United States, we celebrate self-sufficient nuclear families, but these are a strange development for our species. In the past, most humans lived in groups of close family and friends; children would be cared for by several trusted people in addition to their parents.

Kids couldn’t be tucked away in a suburban house with their mother all day. They’d spend some time with her; they’d spend time with their father; they’d spend time with their grandparents; they’d spend time with aunties and uncles, and with friends whom they called auntie or uncle. Each week, children would be cared for by many different people.

The world was a harsh place for our ancestors to live in. There was always a risk of death – by starvation, injury, or disease. Everyone in the group had an incentive to help each child learn, because everyone would someday depend upon that child’s contributions.

And here I was – beneficiary of some million years of human evolution – thinking that I’d done so well by unlearning the American propaganda that caretaking is unimportant work.

And yet, I still mistakenly believed that my kids needed that care to come from me.

Being showered with love by parents is important. Love from primary caretakers is essential for a child to feel secure with their place in the world. But love from others is crucial, too.

I am so grateful that our YMCA provided that for my kids.

And, now that they’re old enough, my kids receive that love from school. Each day when they go in, they’re with teachers who let them know: You will be cared for no matter what.

On sending kids to school.

I was walking my eldest child toward our local elementary school when my phone rang.

We reached the door, shared a hug, and said goodbye. After I left, I called back – it was a friend of mine from college who now runs a cancer research laboratory and is an assistant professor at a medical school.

“Hey,” I said, “I was just dropping my kid off at school.”

“Whoa,” he said, “that’s brave.”

I was shocked by his remark. For most people under retirement age, a case of Covid-19 is less dangerous than a case of seasonal influenza.

“I’ve never heard of anybody needing a double lung transplant after a case of the flu,” my friend said.

But our ignorance doesn’t constitute safety. During this past flu season, several young, healthy people contracted such severe cases of influenza that they required double lung transplants. Here’s an article from December 2019 about a healthy 30-year-old Wyoming man nearly killed by influenza, and another from January 2020 about a healthy 20-year-old Ohio woman. And this was a rather mild flu season!

“One of the doctors told me that she’s the poster child for why you get the flu shot, because she didn’t get her flu shot,” said [the 20-year-old’s mother].

These stories were reported in local newspapers. Stories like this don’t make national news because we, as a people, think that it’s normal for 40,000 to 80,000 people to die of influenza every year. Every three to five years, influenza kills as many people as have died from Covid-19 so far. And that’s with vaccination, with pre-existing immunity, with antivirals like Tamiflu.

Again, when I compare Covid-19 to influenza, I’m not trying to minimize the danger of Covid-19. It is dangerous. For elderly people, and for people with underlying health issues, Covid-19 is very dangerous. And, sure, all our available data suggest that Covid-19 is less dangerous than seasonal influenza for people under retirement age, but, guess what? That’s still pretty awful!

You should get a yearly flu shot!

A flu shot might save your life. And your flu shot will help save the lives of your at-risk friends and neighbors.

#

For a while, I was worried because some of my remarks about Covid-19 sounded superficially similar to things said by the U.S. Republican party. Fox News – a virulent propaganda outlet – was publicizing the work of David Katz, a liberal medical doctor who volunteered in a Brooklyn E.R. during the Covid-19 epidemic and who teaches at Yale’s school of public health.

The “problem” is that Katz disagrees with the narrative generally forwarded by the popular press. His reasoning, like mine, is based on the relevant research data – he concludes that low-risk people should return to their regular lives.

You can see a nifty chart with his recommendations here. This is the sort of thing we’d be doing if we, as a people, wanted to “follow the science.”

In any case, I’m no longer worried that people might mistake me for a right-wing ideologue, because our president has once again staked out a ludicrous set of beliefs.

#

Here’s a reasonable set of beliefs: we are weeks away from a safe, effective Covid-19 vaccine, so we should do everything we can to slow transmission and get the number of cases as low as possible!

Here’s another reasonable set of beliefs: Covid-19 is highly infectious, and we won’t have a vaccine for a long time. Most people will be infected at least once before there’s a vaccine, so we should focus on protecting high-risk people while low-risk people return to their regular lives.

If you believe either of those sets of things, then you’re being totally reasonable! If you feel confident that we’ll have a vaccine soon, then, yes, delaying infections is the best strategy! I agree! And if you think that a vaccine will take a while, then, yes, we should end the shutdown! I agree!

There’s no right answer here – it comes down to our predictions about the future.

But there are definitely wrong answers. For instance, our current president claims that a vaccine is weeks away, and that we should return to our regular lives right now.

That’s nonsense. If we could get vaccinated before the election, then it’d make sense to close schools. To wait this out.

If a year or more passes before people are vaccinated, then our efforts to delay the spread of infection will cause more harm than good. Not only will we be causing harm with the shutdown itself, but we’ll also be increasing the death toll from Covid-19.

#

On October 14th, the New York Times again ran a headline saying “Yes, you can be reinfected with the coronavirus. But it’s extremely unlikely.”

This is incorrect.

When I’ve discussed Covid-19 with my father – a medical doctor specializing in infectious diseases, a virology professor, and a vaccine developer with a background in epidemiology from his master’s in public health – he, too, has often said to me that reinfection is unlikely. I kept explaining that he was wrong until I realized that we were talking about different things.

When my father uses the word “reinfection,” he means clearing the virus, catching it again, and becoming sicker than you were the first time. That’s unlikely (although obviously possible). This sort of reinfection happens often with influenza, but that’s because influenza mutates so rapidly. The virus that causes Covid-19 has a much more stable genome.

When I use the word “reinfection” – and I believe that this is also true when most laypeople use the word – I mean clearing the virus, catching it again, and becoming sick enough to shed the viral particles that will make other people sick.

This sense of the word “reinfection” describes something that happens all the time with other coronaviruses, and has been documented to occur with Covid-19 as well.

The more we slow the spread of Covid-19, the more total cases there will be: immunity wanes over the months of a drawn-out epidemic, giving people time to clear the virus, lose protection, and catch it again. In and of itself, more cases aren’t a bad thing – most people’s reinfections will be milder than their first exposure. The dangerous aspect is that a person who is reinfected will have another period of viral shedding during which they might expose a high-risk friend or neighbor.

#

If our goal is to reduce the strain on hospitals and reduce total mortality, we need to avoid exposing high-risk people. Obviously, we should be very careful around nursing home patients. We should provide nursing homes with the resources they need to deal with this, like extra testing, and preferably increased wages for nursing home workers to compensate them for all that extra testing.

It’s also a good idea to wear masks wherever low-risk and high-risk people mingle. The best system for grocery stores would be to hire low-risk shoppers to help deliver food to high-risk people, but, absent that system, the second-best option would be for everyone to wear masks in the grocery store.

Schools are another environment where a small number of high-risk teachers and a small number of students living with high-risk family members intermingle with a large number of low-risk classmates and colleagues.

Schools should be open – regions where schools closed have had the same rates of infection as regions where schools stayed open, and here in the U.S., teachers in districts with remote learning have had the same rates of infection as teachers in districts with in-person learning.

Education is essential, and most people in the building have very low risk.

A preponderance of data indicate that schools are safe. These data are readily accessible even for lay audiences – instead of reading research articles, you could read this lovely article in The Atlantic.

Well, I should rephrase.

We should’ve been quarantining international travelers back in December or January. At that time, a shutdown could have helped. By February, we were too late. This virus will become endemic to the human species. We screwed up.

But, given where we are now, students and teachers won’t experience much increased risk from Covid-19 if they attend in person, and schools aren’t likely to make the Covid-19 pandemic worse for the surrounding communities.

That doesn’t mean that schools are safe.

Schools aren’t safe: gun violence is a horrible problem. My spouse is a teacher – during her first year, a student brought weapons to school, including a chainsaw and pipe bombs, intending to attack it; during her fourth year, a student amassed guns in his locker while planning an attack.

Schools aren’t safe: we let kids play football, which is known to cause traumatic brain injury.

Schools aren’t safe: the high stress of grades, college admissions, and even socializing puts some kids at a devastatingly high risk for suicide. We as a nation haven’t always done a great job of prioritizing kids’ mental health.

And the world isn’t safe – as David Katz has written,

If inclined to panic over anything, let it be climate change. Not the most wildly pessimistic assessment of the COVID pandemic places it even remotely in the same apocalyptic ballpark.