On unintended consequences.

After our current president ordered the assassination of an Iranian general by drone, my class in jail discussed excerpts from Gregoire Chamayou’s A Theory of the Drone.

Chamayou argues that drone warfare is qualitatively distinct from other forms of state violence.  The psychological rift stems from asymmetry – one side risks money, the other risks life. 

The use of drones keeps U.S. soldiers safer.  But in Chamayou’s opinion (translated by Janet Lloyd, and slightly modified by me for students to read aloud),

If the U.S. military withdraws from the battlefield, enemy violence will turn against targets that are easier to reach.  Even if soldiers are safe, civilians are not.

Drone warfare compels enemy combatants to engage in terrorism.  They cannot shoot back at the soldier who is shooting them – that soldier might be sitting in a nondescript office building thousands of miles away, unleashing lethal force as though it were a video game.

I don’t mean to trivialize the suffering of U.S. soldiers who are involved in drone warfare.  Pilots have an extremely high suicide rate – they are expected to placidly shift from the battlefield to the civilian world each evening, and this is deeply disturbing to most people.

But enemy soldiers cannot fight back.  They could shoot down the drone, but the U.S. military would launch a new one.  There’s no comparison between that and the drone shooting a missile at your family’s home.

Image by Debra Sweet on Flickr.

An enemy combatant can only put U.S. lives at risk by attacking the general public.

Our policies don’t always have the outcomes we want.

Not unexpectedly, somebody in class mentioned the War on Drugs.  Banning marijuana caused a lot of problems, he said.

Somebody else disagreed – he’s been in and out of prison on drug charges for seventeen years, but has high hopes that this next stint of rehab is going to take.  “I still think marijuana’s a gateway drug.  That’s what I started with.”

“It’s not pot, it’s the lying about pot.  They say over and over that marijuana’s as bad as heroin.  What do they think will happen once kids realize marijuana’s safe?”

“If people could’ve bought pot, maybe nobody would’ve invented spice.  Like that K2 stuff was sold as incense or whatever, but everybody knew it was pot replacer.”

“You take this,” a guy said, holding up a sheet of paper, “spray it with spice, send it into prison.  Two thousand dollars, easy.  You get somebody to OD, then everybody’s gonna want some.  People like that feeling, right at the brink between life and death.”

Somebody sighed.  “I know.  I’ve done a lot of drugs, and with most drugs, I could take it or leave it.  But that spice, man.  No offense to anyone, but I’ve never sucked cock for drugs.  For spice, though, I’d think about it.”

“You just get so sick.”

“So sick!  I’ve kicked heroin, and that feeling sick was bad.  But not like this.  There were weeks when I had to set an alarm, get up every two hours to take another hit.  Otherwise I’d wake up puking and shitting myself.  And I’d be in there, you know, sitting on the toilet with a bag, still taking my hit.”

“I got that too.  I was waking up every ninety minutes.”

“Would you have started smoking spice if marijuana was legal?” I asked.

“I mean, yeah, now you’re gonna have people who would.  Because everybody knows about it.  Like you had that summer two years ago, people all along the street, up and down Kirkwood, smoking it right out in the open.  But, like, before it all started?  Nobody would’ve sat down and tried to invent spice if they could’ve sold pot.”

“I remember reading a review of K2 spice on Amazon,” I said, “must’ve been in 2008, before it was banned, all full of puns and innuendo.  The reviewer was talking about how it made him feel so ‘relaxed,’ in quotes.”

“ ‘Relaxed,’ shit, I get that.  I never touched the stuff before this last time I came to jail.  But I’ve smoked hella marijuana.  So somebody handed it to me and I took this giant hit, the way I would, and I shook my head and said, ‘Guys, that didn’t do shiii …’ and, BAM, I fell face first into the table.”

“You were so out of it!”

“It was like, WHOA, blast off.  I was lying there, like flopping all over.  That night I pissed myself.”

“That sounds …” I said, “… bad.  A whole lot worse than smoking pot.”

“But you can get it!”

And therein lies the rub.  With so many technologies, we’re playing whack-a-mole.  We solve one problem and create another.  But sometimes what comes up next isn’t another goofy-eyed stuffed-animal mole – the arcade lights flash and out pops a hungry crocodile.

Since people couldn’t buy pot, they started smoking a “not-for-human consumption” (wink wink) incense product that you could order online.  Since enemy combatants can’t shoot back at soldiers, they plant more bombs in subways.

As one American soldier explains, “We must understand that attempts to isolate our force against all potential enemy threats shifts the ‘burden of risk’ from a casualty-averse military force onto the populace.  We have lifted the burden from our own shoulders and placed it squarely upon civilians who do not have the material resources to bear it.”

On Jonathan Safran Foer’s ‘We Are the Weather.’

The choices we’re making might cause everyone to die.

That’s kind of sad.  I like being alive, and I like the thought that other humans might be alive even after I am gone. 

Some people – the original Millennials, for instance – prefer to imagine that the world would end when their world ends.  But for those of us who feel that helping others adds to the meaning of our lives, it’s more satisfying to imagine humanity’s continued existence.  Each good deed is like a wave, rippling outward, causing people to be a little kinder to others in turn. 

These waves of kindness can’t last forever – our universe began with a finite quantity of order, which we use up in order to live – but they could persist for a very long time.  Humans could have many billions of years with which to colonize the stars.

Unless we go extinct sooner.  Which we might.  We’re destabilizing the climate of the only habitable planet we know.

Venus used to be habitable.  We humans could’ve flown there and set up a colony.  But a blip of excess greenhouse gas triggered runaway climate change.  Now Venus has no liquid water.  Instead, the planet is covered in thick smog.  Sulfuric acid rains from the sky.

I would rather we not doom Earth to the same fate.

There are things you can do to help.  In We Are the Weather, Jonathan Safran Foer lists the (abundant!) evidence that animal agriculture is the leading cause of climate change.

You should still turn off the lights when you leave a room.  If you can walk to the park instead of driving, do it!  Every effort you make to waste less energy is worthwhile!

But it helps to take stock of the numbers.  If everyone with a conventional automobile could suddenly exchange it for a hybrid vehicle, we’d still be emitting 96% as much greenhouse gas.  If everyone decided to eliminate animal products from their diet, we’d be emitting 50% as much.

Switching to hybrid vehicles wouldn’t save us.  Deciding to eat plant-based foods would.

Unfortunately, it’s hard to make this switch.  Not least because the peril we’ve placed ourselves in doesn’t feel compelling.  It’s like the difference between Venus flytraps and pitcher plants.  With a Venus flytrap, you can see the exact moment that a bug is doomed.  Those spiky mandibles close and that’s the end!  When a bug lands on a pitcher plant, though, its fate is sealed well before the moment when it finally topples into the digestive water.  The lip of a pitcher plant is sloped and slippery; the actual boundary between life and death is unnoticeable.

Because climate change will be exacerbated by so many feedback loops, by the time we see the precipice it’ll be too late.

In Foer’s words,

The chief threat to human life – the overlapping emergencies of ever-stronger superstorms and rising seas, more severe droughts and declining water supplies, increasingly large ocean dead zones, massive noxious-insect outbreaks, and the daily disappearance of forests and species – is, for most people, not a good story. 

When the planetary crisis matters to us at all, it has the quality of a war being fought over there.  We are aware of the existential stakes and the urgency, but even when we know that a war for our survival is raging, we don’t feel immersed in it.  That distance between awareness and feeling can make it very difficult for even thoughtful and politically engaged people – people who want to act – to act.

History not only makes a good story in retrospect; good stories become history.  With regard to the fate of our planet – which is also the fate of our species – that is a profound problem.  As the marine biologist and filmmaker Randy Olson put it, “Climate is quite possibly the most boring subject the science world has ever had to present to the public.”

I like that Foer tries to wring empathy from this dull story.  He writes about his personal struggles to be good.  If it were necessary to blow hot air from a hairdryer into a small child’s face each time we bought a cheeseburger, few people would buy them.  But it’s more difficult to restrain ourselves when we instead know vaguely – rationally, unemotionally – that each cheeseburger we buy will exacerbate the hot air – and floods, and droughts, and malaria – that children will one day have to bear.

Our brains are good at understanding cause and effect when they are closely linked in time and space.  Push a button, hear a sound!  Even babies understand how to work a toy piano.  Even my ill-behaved dogs know better than to misbehave in front of me (chew the pillow, get shut in bathroom).

My dogs struggle when an effect comes long after the initial cause.  Furtively chew a pillow, get shut in bathroom several days later, once the human finally discovers evidence?  That’s not compelling for my dogs.  The punishment is too long delayed to dissuade them from mastication.

Buy a cheeseburger today – make our children’s children’s children go hungry from global crop failure.  That’s not compelling.  Our brains can’t easily process that story.

We can understand it, but we can’t feel it.

And that’s the message of Foer’s book.  How can we – collaboratively – create a world in which it’s easy to do the right thing?  How can we make cheeseburgers feel bad?

An intellectual understanding – cheeseburgers require farms with cows, cows emit methane, cows take space, farmers destroy forests to make space, cheeseburgers cause climate change – isn’t enough to create that feeling.  Climate change is too dull a story.

Even worse, climate change isn’t even the most boring story to tell about our extinction.  In We Are the Weather – an entire book in which Foer castigates himself for contributing to harms that will befall his descendants some 100 to 200 years in the future (because that’s when climate change will get really bad) – Foer doesn’t even mention that he’s also causing harms that will befall his descendants 30 to 60 years in the future.

Even though these nearer term harms are equally calamitous.  Even though these nearer term harms are just as definitively known to be caused by cheeseburgers.

Climate change is dull.  Antibiotic resistance is even more dull.

It’s pretty bad when something is more boring than talking about the weather.

Most farmed animals are constantly given low doses of antibiotics. As it happens, this is exactly the protocol you’d use for a directed evolution experiment if you were trying to make antibiotic-resistant bacteria.
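A toy simulation makes the point concrete.  This is my own sketch with made-up numbers – nothing here comes from the essay or any real dataset – but it shows the directed-evolution logic: a dose low enough that most cells survive still culls the least resistant, and imperfect copying lets the survivors’ descendants drift ever higher.

```python
import random

random.seed(0)

def generation(pop, dose):
    """One round of selection: a low antibiotic dose kills only the
    least-resistant cells, then survivors reproduce with slight mutation."""
    survivors = [r for r in pop if r > dose]
    offspring = []
    while len(offspring) < 1000:
        parent = random.choice(survivors)
        offspring.append(parent + random.gauss(0, 0.05))  # imperfect copying
    return offspring

# Start with a mostly susceptible population (mean resistance 0.5)
pop = [random.gauss(0.5, 0.1) for _ in range(1000)]
for _ in range(50):
    pop = generation(pop, dose=0.4)  # constant low dose, generation after generation

print(f"mean resistance after 50 generations: {sum(pop) / len(pop):.2f}")
```

The dose never changes, yet the population’s resistance ratchets upward – which is exactly why constant sub-lethal dosing on farms is such an efficient breeding program for resistant bacteria.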

There’s an old story about a king, Mithridates, whose father was assassinated with poison.  Mithridates trained his body with exposure to low doses of poison so that he would be able to survive higher doses. 

It was a clever strategy.  We’re helping bacteria do the same thing.

Our world will be nightmarishly different once antibiotics stop working.  My own children are three and five years old.  They’ve gotten infections that we needed to treat with antibiotics about a dozen times.  Two weeks of taking the pink stuff and my kids got better.

In a world with antibiotic resistant bacteria – which we are creating through animal agriculture – any of those dozen infections could have killed my kids. 

You should watch the New York Times video about antibiotic resistance.  By 2050, it’s likely that more people will die from antibiotic resistant bacterial infections than from cancer.

Huge quantities of money are being spent to develop new anti-cancer drugs – new ways for elderly people to stave off time.  Meanwhile, it’s not just that we spend so little developing antibiotics.  We are actively making these drugs worse.

Antibiotic resistance isn’t a compelling story, though.  To feel a connection between a cheeseburger and your someday grandkid dying in bed, feverish and septic, you’d have to understand the biochemistry of lateral gene transfer, DNA replication, mutation, drug metabolism.  You’d need to be able to see in your mind’s eye the conditions that farmed animals are raised in.

And, honestly?  People who can vividly picture a concentrated animal feeding operation or slaughterhouse probably aren’t the ones buying cheeseburgers.

But if the world doesn’t change, their grandkids will die too.

.

.

Featured image: Everglades National Park by B. Call.

On the ethics of eating.

Every living thing needs energy.  But our world is finite.  Energy has to come from somewhere.

Luckily, there’s a lot of potential energy out there in the universe.  For instance, mass can be converted into energy.  Our sun showers us with energy drawn from the cascade of nuclear explosions that transpire in its core. A tiny difference in mass between merging hydrogen atoms and the resultant helium atom allows our sun to shine.
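For the curious, the arithmetic behind that tiny difference (standard textbook values, not figures from the essay) looks like this:

```latex
\Delta m = 4\,m_p - m_{\mathrm{He}} \approx 4(1.00728\,\mathrm{u}) - 4.00151\,\mathrm{u} \approx 0.0276\,\mathrm{u}

E = \Delta m \, c^2 \approx 0.0276\,\mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 25.7\ \mathrm{MeV}
```

About 0.7 percent of the protons’ mass disappears with each fusion, and that small deficit, multiplied across the sun’s core, is what lights the sky.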

Our sun radiates about 10^26 joules per second (which is 100,000 times more than the combined yearly energy usage from everyone on Earth), but only a fraction of that reaches our planet.  Photons radiate outward from our sun in all directions, so our planet intercepts only a small sliver of the beam.  Everything living here is fueled by those photons.
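How small a sliver?  A quick back-of-the-envelope sketch (using standard astronomical values – the precise luminosity, radius, and distance are my assumptions, not the essay’s):

```python
import math

# Rough standard values (assumptions, not from the essay)
L_sun = 3.8e26     # solar luminosity in watts (~10^26 J/s, as in the text)
r_earth = 6.371e6  # Earth's radius in meters
d = 1.496e11       # Earth-Sun distance in meters (1 AU)

# Earth presents a disk of area pi*r^2 out of a full sphere of area 4*pi*d^2
fraction = (math.pi * r_earth**2) / (4 * math.pi * d**2)
intercepted = L_sun * fraction

print(f"fraction of sunlight intercepted: {fraction:.1e}")  # ~4.5e-10
print(f"power reaching Earth: {intercepted:.1e} W")         # ~1.7e17 W
```

Less than a billionth of the sun’s output ever touches us – and that trickle still dwarfs everything humanity burns.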

When living things use the sun’s energy, we create order – a tree converts disordered air into rigid trunk, a mouse converts a pile of seeds into more mouse, a human might convert mud and straw into a house.  As we create order, we give off heat.  Warming the air, we radiate infrared photons.  That’s what night vision goggles are designed to see.  The net effect is that the Earth absorbs high-energy photons that were traveling in a straight beam outward from the sun … and we convert those photons into a larger number of low-energy photons that fly off every which way.

We the living are chaos machines.  We make the universe messier.  Indeed, that’s the only way anything can live.  According to the Second Law of Thermodynamics, the only processes probable enough to occur are those that make the world more random.

We’re lucky that the universe started out as such a bland, orderly place – otherwise we might not even be able to tell “before” from “later,” let alone extract enough energy to live.

The earliest living things took energy from the sun indirectly – they used heat, and so they were fueled by each photon’s delivery of warmth to the Earth.  (Please allow me this little hedge – although it’s true that the earliest life was fueled only by warmth, that warmth might not have come from the sun.  Even today, some thermophilic bacteria live in deep sea vents and bask in the energy that leaks from our Earth’s molten core.  The earliest life might have lived in similar nooks far from the surface of the Earth.  But early life that resided near the surface of the seas seems more likely. Complicated chemical reactions were necessary to form molecules like RNA.  Nucleic acids were probably first found in shallow, murky pools pulsed with lightning or ultraviolet radiation.)

Over time, life changed.  Organisms create copies of themselves through chemical processes that have imperfect fidelity, after all.  Each copy is slightly different than the original.  Most differences make an organism worse than its forebears, but, sometimes, through sheer chance, an organism might be better at surviving or at creating new copies of itself.

When that happens, the new version will become more common. 

Over many, many generations, this process can make organisms very different from their forebears.  When a genome is copied prior to cell division, sometimes the polymerase will slip up and duplicate a stretch of code.  These duplication events are incredibly important for evolution – usually, the instructions for proteins can’t drift too far because any change might eliminate essential functions for that cell.  If there’s a second copy, though, the duplicate can mutate and eventually gain some new function.

About two billion years ago, some organisms developed a rudimentary form of photosynthesis.  They could turn sunlight into self!  The energy from our sun’s photons was used to combine carbon dioxide and water into sugar. And sugar can be used to store energy, and to build new types of structures.

Photosynthesis also releases oxygen as a byproduct.  From the perspective of the organisms living then, photosynthesis poisoned the entire atmosphere – a sudden rise in our atmosphere’s oxygen concentration caused many species to go extinct.  But we humans never could have come about without all that oxygen.

Perhaps that’s a small consolation, given that major corporations are currently poisoning our atmosphere with carbon dioxide.  Huge numbers of species might go extinct – including, possibly, ourselves – but something else would have a chance to live here after we have passed.

In addition to poisoning the atmosphere, photosynthesis introduced a new form of competition.  Warmth spreads diffusely – on the early Earth, it was often sheer chance whether one organism would have an advantage over any other.  If you can photosynthesize, though, you want to be the highest organism around.  If you’re closer to the sun, you get the first chance to nab incoming photons.

That’s the evolutionary pressure that induced plants to evolve.  Plants combined sugars into rigid structures so that they could grow upwards.  Height helps when your main goal in life is to snatch sunlight.

Animation by At09kg on Wikipedia.

Nothing can live without curtailing the chances of other living things.  Whenever a plant absorbs a photon, it reduces the energy available for other plants growing below.

Plants created the soil by trapping dirt and dust, and soil lets them store water for later use.  But there is only so much fresh water.  Roots reach outward: “I drink your milkshake!”, each could exclaim.

For a heterotroph, the brutality of our world is even more clear.  Our kind – including amoebas, fungi, and all animals – can only survive by eating others.  We are carbon recyclers.  Sugar and protein refurbishers.  We take the molecular machines made by photosynthesizing organisms … chop them apart … and use the pieces to create ourselves.

Some heterotrophs are saprophages – eaters of the dead.  But most survive only by destroying the lives of others.

For the earliest heterotrophs, to eat was to kill.  But, why worry?  Why, after all, is life special?  Each photosynthesizing organism was already churning through our universe’s finite quantity of order in its attempt to grow.  They took in material from their environment and rearranged it.  So did the heterotrophs – they ingested and rearranged. Like all living things, they consumed order and excreted chaos.

The heterotrophs were extinguishing life, but life is just a pattern that repeats itself.  A living thing is a metabolic machine that self-copies.  From a thermodynamic perspective, only the energetics of the process distinguish life from a crystal.  Both are patterns that grow, but when a crystal grows, it makes matter more stable than its environment – life makes matter less stable as it’s incorporated into the pattern.

Your ability to read this essay is a legacy of the heterotrophs’ more violent descendants.  The earliest multicellular heterotrophs were filter feeders – they passively consumed whatever came near.

But then, between 500 and 600 million years ago, animals began to hunt and kill.  They would actively seek life to extinguish.  To do this, they needed to think – neurons first arose among these hunters.

Not coincidentally, this is also the time that animals first developed hard shells, sharp spines, armored plates – defenses to stop others from eating them.

The rigid molecules that allow plants to grow tall, like cellulose, are hard to digest.  So the earliest hunters probably began by killing other animals.

With every meal, you join the long legacy of animals that survived only by extinguishing the lives of others.  With every thought, you draw upon the legacy of our forebears’ ruthless hunt.

Even if you’re vegan, your meals kill.  Like us, plants have goals.  It’s a matter of controversy whether they can perceive – perhaps they don’t know that they have goals – but plants will constantly strive to grow, to collect sunlight and water while they can, and many will actively resist being eaten.

But it makes no sense to value the world if you don’t value yourself.  Maybe you feel sad that you can’t photosynthesize … maybe you’d search out a patch of barren, rocky ground so that you’d absorb only photons that would otherwise be “wasted” … but, in this lifetime, you have to eat.  Otherwise you’d die.  And I personally think that any moral philosophy that advocates suicide is untenable.  That’s a major flaw with utilitarianism – rigid devotion to the idea of maximizing happiness for all would suggest that you, as another organism that’s taking up space, constantly killing, and sapping our universe’s limited supply of order, simply shouldn’t be here.

At its illogical extreme, utilitarianism suggests that either you conquer the world (if you’re the best at feeling happy) or kill yourself (if you’re not).

We humans are descended from carnivores.  Our ancestors were able to maintain such large brains only by cooking and eating meat.  Our bodies lack an herbivore’s complement of enzymes that would allow us to convert grass and leaves into the full complement of proteins that we need.

And we owe the very existence of our brains to the hunts carried out by even more ancient ancestors.  If they hadn’t killed, we couldn’t think.

Just because we were blessed by a legacy of violence, though, doesn’t mean we have to perpetuate that violence.  We can benefit from past harms and resolve to harm less in the present and future.

Writing was first developed by professional scribes.  Scientific progress was the province of wealthy artisans.  None of the progress of our culture would have been possible if huge numbers of people weren’t oppressed – food that those people grew was taken from them and distributed by kings to a small number of privileged scribes, artisans, philosophers, and layabouts. 

When humans lived as hunters and gatherers, their societies were generally equitable.  People might die young from bacterial infections, dehydration, or starvation, but their lives were probably much better than the lives of the earliest farmers.  After we discovered agriculture, our diets became less varied and our lives less interesting.  Plus, it’s easier to oppress a land-bound farmer than a nomadic hunter.  Stationary people paid tribute to self-appointed kings.

This misery befell the vast majority of our world’s population, and persisted for thousands of years.  But the world we have now couldn’t have come about any other way.  It’s horrific, but, for humans to reach our current technologies, we needed oppression.  Food was taken from those who toiled and given to those who hadn’t. 

Mostly those others created nothing of value … but some of them made writing, and mathematics, and rocket ships.

Although the development of writing required oppression, it’s wrong to oppress people now.  It was wrong then, too … but we can’t go back and fix things.

Although the origin of your brain required violence, I likewise think we ought to minimize the violence we enact today.  We can’t help all the animals who were hurt in the long journey that made our world the place it is now.  And we can’t stop killing – there’s no other way for heterotrophs like us to live.

To be vegan, though, is to reckon with those costs.  To feel a sense of wonder at all the world pays for us to be here.  And, in gratitude, to refrain from asking that it pay more than we need.

On vengeance and Ahmed Saadawi’s ‘Frankenstein in Baghdad.’

We are composite creatures, the edifice of our minds perched atop accumulated strata of a lifetime of memories.  Most people, I imagine, have done wrong; remembrance of our lapses is part of who we are.  And most of us have been hurt; those grievances also shape our identities.

We struggle to be good, despite having been born into an amoral universe and then subjected to innumerable slights or traumas as we aged.

Goodness is a nebulous concept, however.  There’s no external metric that indicates what we should do.  For instance: if we are subject to an injustice, is it better to forgive or to punish the transgressor?

There are compelling arguments for both sides, and for each position you could base your reasoning on philosophy, psychology, physiology, evolutionary biology …

Intellect and reasoning can’t identify what we should do.

A wide variety of cooperative species will swiftly and severely punish transgressions in order to maintain social order.  Misbehavior among naked mole rats is generally resolved through bullying and violence, which ensures the colony does not lapse into decadence.  (As with humans, shared adversity like hunger generally compels threat-free cooperation.)

Archaeologists suggest that the belief in vengeful gods was coupled to the development of complex human societies.  The Code of Hammurabi prescribed immediate, brutal retribution for almost any misdeed.

The compulsion to punish people who have hurt us arises from deep within our brains.

But punishment invites further punishment.  Every act of revenge can lead to yet another act of revenge – the Hatfield and McCoy families carried on their feud for nearly thirty years.

Punishment is fueled by anger, and anger poisons our bodies.  On a purely physiological level, forgiving others allows us to heal.  The psychological benefits seem to be even more pronounced.

But forgiveness is hard.  Sometimes people do terrible things.  After her mother was killed, my spouse had to spend her entire afternoon prep period on the phone with a family member and the prosecutor, convincing them not to seek the death penalty.

The attack had been recorded by security cameras.  Apparently it was horrifying to watch.  The assailant’s defense lawyer stated publicly that it was “the most provable murder case I have ever seen.”

And incidents in which dark-skinned men hurt white women are precisely those for which prosecutors typically seek the death penalty; after my mother-in-law’s death, the only national news sites that wrote about the case were run by far-right white supremacists trying to incite more hatred and violence toward innocent black people.  (I’m including no links to these, obviously.)

At the time, I was working on a series of poems about teaching in jail. 

Correction (pt. ii)

My wife’s mother was murdered Saturday –

outside at four a.m., scattering birdseed,

smoking a cigarette, shucking schizophrenic

nothings into the unlistening air.

Then a passing man tossed off a punch,

knocking her to the ground.

He stomped upon her skull

till there was no more her

within that battered brain.

Doctors intubated the corpse &

kept it oxygenated by machine,

monitoring each blip of needless heart

for days

until my wife convinced

a charitable neurologist

to let the mindless body rest.

That same afternoon

I taught another class in jail

for men who hurt someone else’s mother,

daughter, or son.

The man who murdered,

privacy-less New York inmate #14A4438

with black hair & brown eyes,

had been to prison twice,

in 2002 & 2014,

caught each time

with paltry grams of crack cocaine.

Our man received a massive dose

of state-sponsored therapy:

nine years of penitence.

Nearly a decade of correction.

Does Victor Frankenstein share the blame

for the murders of his creation,

the man he quicked but did not love?

Or can we walk into a maternity ward

and point:

that one, nursing now, will be a beast.

Are monsters born or made?

My mother-in-law is dead, & our man is inside again,

apprehended after “spontaneous utterances,”

covered in blood, photographed with

a bandage between his eyes.

And we, in our mercy,

will choose whether

our creation

deserves

to die.

#

Victor Frankenstein becoming disgusted at his creation. Frontispiece to the 1831 edition.

I have always stood firmly on the side of Frankenstein’s creation.  Yes, he began to kill, but misanthropy was thrust upon him.  The creature was ethical and kind at first, but the rest of the world ruthlessly mistreated him.  Victor Frankenstein abandoned him in the laboratory; he befriended a blind man, but then the man’s children chased him away.

Victor Frankenstein’s fiancée did not deserve to be strangled – except insofar as we share blame for the crimes of those we love – but I understand the wellspring of the creature’s rage.

In Ahmed Saadawi’s Frankenstein in Baghdad, a junk dealer’s attempt to honor the anonymous victims of Iraq’s many bombings gives rise to a spirit of vengeance.  The junk dealer acts upon a grisly idea – most victims could not receive proper funerals because their bodies were scattered or incinerated by the blasts.  But what if many stray pieces were collected?  A charred arm from Tuesday’s explosion; a ribcage and lower jawbone from Wednesday’s; two different victims’ legs from Thursday’s.  The city is so wracked by violence that there are plenty of body parts to choose from.  And then the junk dealer could take his creation to the police and say, Look!  Here is a body, victim of the attacks.  Here is a dead man we can honor properly.

In truth, the junk dealer’s plan was never terribly well thought out.  Once he completes the corpse, he realizes that using his creation as a locus for lamentation would be no better than all the empty coffins.

And then the corpse springs to life, seeking vengeance on any and all who wronged its component parts.  In the creature’s words (as translated by Jonathan Wright):

“My list of people to seek revenge on grew longer as my old body parts fell off and my assistants added parts from my new victims, until one night I realized that under these circumstances I would face an open-ended list of targets that would never end.

“Time was my enemy, because there was never enough of it to accomplish my mission, and I started hoping that the killing in the streets would stop, cutting off my supply of victims and allowing me to melt away.

“But the killing had only begun.  At least that’s how it seemed from the balconies in the building I was living in, as dead bodies littered the streets like rubbish.”

Soon, the creature realizes that the people he attacks are no different from the dead victims that he is composed of.  He can chase after the terrorist organizations that orchestrate suicide bombings, but the people in those organizations are also seeking revenge for their dead allies.  The chain of causality is so tangled that no one is clearly responsible.

Car bombing in Baghdad. Image from Wikimedia.

United States forces have been inadvertently killing innocent civilians ever since invading Iraq … an attack that was launched in retribution for the actions of a small group of Afghan terrorists.

Some people thought that this sounded reasonable at the time.

To seek vengeance, we need someone to blame.  But who should I blame for my mother-in-law’s death?  The man who assaulted her?  That’s certainly the conclusion that the white supremacist news sites want me to reach.  But I sincerely doubt that this poor man would have hurt her if a prosecutor hadn’t ripped him from his friends and family, condemning him to ten years within the nightmarish violence of America’s prisons, all for participating in a small-scale version of the exact same economic transaction that allowed Merck to become a company valued at $160 billion.

Do I blame the racist white legislators who imposed such draconian punishments on the possession of the pure amine form of cocaine, all while celebrating their pale-skinned buddies who snorted up the hydrochloride salt form?

Do I blame myself?  As a citizen of this country – a wealthy citizen, no less, showered with un-earned privilege – I am complicit in the misfortunes that my nation imposes on others.  Even when I loathe the way this nation acts, by benefiting from its sins, I too share responsibility.

I have inherited privilege … which means that I also deserve to inherit blame, even for horrors perpetrated well before I was born.

Forgiveness is hard, but revenge would send us chasing an endless cycle of complicity.  The creature in Frankenstein in Baghdad is flummoxed:

In his mind he still had a long list of the people he was supposed to kill, and as fast as the list shrank it was replenished with new names, making avenging these lives an endless task.  Or maybe he would wake up one day to discover that there was no one left to kill, because the criminals and the victims were entangled in a way that was more complicated than ever before.

“There are no innocents who are completely innocent or criminals who are completely criminal.”  This sentence drilled its way into his head like a bullet out of the blue.  He stood in the middle of the street and looked up at the sky, waiting for the final moment when he would disintegrate into his original components.  This was the realization that would undermine his mission – because every criminal he had killed was also a victim.  The victim proportion in some of them might even be higher than the criminal proportion, so he might inadvertently be made up of the most innocent parts of the criminals’ bodies.

“There are no innocents who are completely innocent or criminals who are completely criminal.”

Header image: an illustration of Frankenstein at work in his laboratory.

On Ann Leckie’s ‘The Raven Tower.’


At the beginning of Genesis, God said, Let there be light: and there was light.

“Creation” by Suus Wansink on Flickr.

In her magisterial new novel The Raven Tower, Ann Leckie continues with this simple premise: a god is an entity whose words are true.

A god might say, “The sky is green.”  Well, personally I remember it being blue, but I am not a god.  Within the world of The Raven Tower, after the god announces that the sky is green, the sky will become green.  If the god is sufficiently powerful, that is.  If the god is too weak, then the sky will stay blue, which means the statement is not true, which means that the thing who said “The sky is green” is not a god.  It was a god, sure, but now it’s dead.

Poof!

And so the deities learn to be very cautious with their language, enumerating cases and provisions with the precision of a contemporary lawyer drafting contractual agreements (like the many “individual arbitration” agreements that you’ve no doubt assented to, which allow corporations to strip away your legal rights as a citizen of this country.  But, hey, I’m not trying to judge – I have signed those lousy documents, too.  It’s difficult to navigate the modern world without stumbling across them).

A careless sentence could doom a god.

But if a god were sufficiently powerful, it could say anything, trusting that its words would reshape the fabric of the universe.  And so the gods yearn to become stronger — for their own safety in addition to all the other reasons that people seek power.

In The Raven Tower, the only way for gods to gain strength is through human faith.  When a human prays or conducts a ritual sacrifice, a deity grows stronger.  But human attention is finite (which is true in our own world, too, as demonstrated so painfully by our attention-sapping telephones and our attention-monopolizing president).

Image from svgsilh.com.

And so, like pre-monopoly corporations vying for market share, the gods battle.  By conquering vast kingdoms, a dominant god could receive the prayers of more people, allowing it to grow even stronger … and so be able to speak more freely, inured to the risk that it will not have enough power to make its statements true.

If you haven’t yet read The Raven Tower, you should.  The theological underpinnings are brilliant, the characters compelling, and the plot so craftily constructed that both my spouse and I stayed awake much, much too late while reading it.

#

In The Raven Tower, only human faith feeds gods.  The rest of the natural world is both treated with reverence – after all, that bird, or rock, or snake might be a god – and yet also objectified.  There is little difference between a bird and a rock, either of which might provide a fitting receptacle for a god but neither of which can consciously pray to empower a god.

Image by Stephencdickson on Wikimedia Commons.

Although our own world hosts several species that communicate in ways that resemble human language, in The Raven Tower the boundary between human and non-human is absolute.  Within The Raven Tower, this distinction feels totally sensible – after all, that entire world was conjured through Ann Leckie’s assiduous use of human language.

But many people mistakenly believe that they are living in that fantasy world.

In the recent philosophical treatise Thinking and Being, for example, Irad Kimhi attempts to describe what is special about thought, particularly thoughts expressed in a metaphorical language like English, German, or Greek.  (Kimhi neglects mathematical languages, which is at times unfortunate.  I’ve written previously about how hard it is to translate certain concepts from mathematics into metaphorical languages like the ones we speak, and Kimhi fills many pages attempting to define precisely the concept of “complements” from set theory, which you could probably understand within moments by glancing at a Wikipedia page.)

Kimhi does use English assiduously, but I’m dubious that a metaphorical language was the optimal tool for the task he set himself.  And his approach was further undermined by flawed assumptions.  Kimhi begins with a “Law of Contradiction,” in which he asserts, following Aristotle, that it is impossible for a thing simultaneously to be and not to be, and that no one can simultaneously believe a thing to be and not to be.

Maybe these assumptions seemed reasonable during the time of Aristotle, but we now know that they are false.

Many research findings in quantum mechanics have shown that it is possible for a thing simultaneously to be and not to be.  An electron can have both up spin and down spin at the same moment, even though these two spin states are mutually exclusive (the states are “absolute complements” in the terminology of set theory).  This seemingly contradictory state of both being and not being is what allows quantum computing to solve certain types of problems much faster than standard computers.
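The superposition claim can be made concrete with a toy calculation.  This is a hedged sketch, not a simulation of real electron dynamics: a two-outcome quantum state represented by a pair of amplitudes, where the Born rule gives both mutually exclusive outcomes nonzero probability at the same time.

```python
import math

# A minimal sketch of superposition: a qubit's state as two amplitudes
# over the mutually exclusive outcomes "up" and "down".  The Born rule
# assigns each outcome a probability equal to its squared magnitude.
def probabilities(amp_up, amp_down):
    return abs(amp_up) ** 2, abs(amp_down) ** 2

# An equal superposition: the system is not purely "up" or purely "down";
# both exclusive outcomes carry probability 1/2 at once.
amp = 1 / math.sqrt(2)
p_up, p_down = probabilities(amp, amp)
```

Measurement forces a choice between the two outcomes, but before measurement the state genuinely contains both.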

And, as a rebuttal for the psychological formulation, we have the case of free will.  Our brains, which generate consciousness, are composed of ordinary matter.  Ordinary matter evolves through time according to a set of known, predictable rules.  If the matter composing your brain was non-destructively scanned at sufficient resolution, your future behavior could be predicted.  Accurate prediction would demonstrate that you do not have free will.

And yet it feels impossible not to believe in the existence of free will.  After all, we make decisions.  I perceive myself to be choosing the words that I type.

I sincerely, simultaneously believe that humans both do and do not have free will.  And I assume that most other scientists who have pondered this question hold the same pair of seemingly contradictory beliefs.

The “Law of Contradiction” is not a great assumption to begin with.  Kimhi also objectifies nearly all conscious life upon our planet:

The consciousness of one’s thinking must involve the identification of its syncategorematic difference, and hence is essentially tied up with the use of language.

A human thinker is also a determinable being.  This book presents us with the task of trying to understand our being, the being of human beings, as that of determinable thinkers.

The Raven Tower is a fantasy novel.  Within that world, it was reasonable that there would be a sharp border separating humans from all other animals.  There are also warring gods, magical spells, and sacred objects like a spear that never misses or an amulet that makes people invisible.

But Kimhi purports to be writing about our world.

In Mama’s Last Hug, biologist Frans de Waal discusses many more instances of human thinkers brazenly touting their uniqueness.  If I jabbed a sharp piece of metal through your cheek, it would hurt.  But many humans claimed that this wouldn’t hurt a fish. 

The fish will bleed.  And writhe.  Its body will produce stress hormones.  But humans claimed that the fish was not actually in pain.

They were wrong.

Image by Catherine Matassa.

de Waal writes that:

The consensus view is now that fish do feel pain.

Readers may well ask why it has taken so long to reach this conclusion, but a parallel case is even more baffling.  For the longest time, science felt the same about human babies.  Infants were considered sub-human organisms that produced “random sounds,” smiles simply as a result of “gas,” and couldn’t feel pain. 

Serious scientists conducted torturous experiments on human infants with needle pricks, hot and cold water, and head restraints, to make the point that they feel nothing.  The babies’ reactions were considered emotion-free reflexes.  As a result, doctors routinely hurt infants (such as during circumcision or invasive surgery) without the benefit of pain-killing anesthesia.  They only gave them curare, a muscle relaxant, which conveniently kept the infants from resisting what was being done to them. 

Only in the 1980s did medical procedures change, when it was revealed that babies have a full-blown pain response with grimacing and crying.  Today we read about these experiments with disbelief.  One wonders if their pain response couldn’t have been noticed earlier!

Scientific skepticism about pain applies not just to animals, therefore, but to any organism that fails to talk.  It is as if science pays attention to feelings only if they come with an explicit verbal statement, such as “I felt a sharp pain when you did that!”  The importance we attach to language is just ridiculous.  It has given us more than a century of agnosticism with regard to wordless pain and consciousness.

As a parent, I found it extremely difficult to read the lecture de Waal cites, David Chamberlain’s “Babies Don’t Feel Pain: A Century of Denial in Medicine.”

From this lecture, I also learned that I was probably circumcised without anesthesia as a newborn.  Luckily, I don’t remember this procedure, but some people do.  Chamberlain describes several such patients, and, with my own kids, I too have been surprised by how commonly they’ve remembered and asked about things that happened before they had learned to talk.

Vaccination is painful, too, but there’s a difference – vaccination has a clear medical benefit, both for the individual and a community.  Our children have been fully vaccinated for their ages.  They cried for a moment, but we comforted them right away.

But we didn’t subject them to any elective surgical procedures, anesthesia or no.

In our world, even creatures that don’t speak with metaphorical language have feelings.

But Leckie does include a bridge between the world of The Raven Tower and our own.  Although language does not re-shape reality, words can create empathy.  We validate other lives as meaningful when we listen to their stories. 

The narrator of The Raven Tower chooses to speak in the second person to a character in the book, a man who was born with a body that did not match his mind.  Although human thinkers have not always recognized this truth, he too has a story worth sharing.

On suboptimal optimization.


I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra. 

Optimization is a wonderful mathematical tool.  Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena.  Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum.  Philosophers love it – how can we provide the most happiness for a population?  Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s Philosophical Investigations with this New York Times Magazine article on machine learning and a primer on principal component analysis).
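That shared methodology – walking downhill toward a minimum – fits in a few lines of code.  Here is a minimal sketch with invented numbers, not any particular solver: gradient descent on a toy one-dimensional “energy landscape” E(x) = (x − 3)².

```python
# A hedged toy example: gradient descent on the invented landscape
# E(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
def gradient_descent(grad, x0, rate=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= rate * grad(x)   # step against the gradient, i.e. downhill
    return x

minimum = gradient_descent(lambda x: 2 * (x - 3), x0=10.0)
# The iterate settles near x = 3, the bottom of the landscape.
```

A folding protein, a profit-maximizing firm, and a translation algorithm are all, in this caricature, doing versions of that loop – just over vastly more dimensions.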

But, even though optimization problems are useful, the math behind them can be tricky.  I’m skeptical that this mathematical technique is essential for everyone who wants a B.A. to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology.  She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more.  A basic understanding of statistics might make people better citizens.

Whereas … linear algebra?  This is a beautiful but counterintuitive field of mathematics.  If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math.  A deep understanding of linear algebra can enliven your study of quantum mechanics.

The summary of quantum mechanics: animation by Templaton.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp on linear algebra.  He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back.  He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession.  After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet.  Somebody in an office decided that all university students here need to pass this class.  I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse.  Numbers will always do what they’re supposed to, but each such problem begins with a choice.  What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Figure 1 from Eykholt et al., 2018.

Most automobile companies are researching self-driving cars.  They’re the way of the future!  In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw.  A car that accelerates instead of braking in that situation is not operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible?  That’s a feature.  That’s what self-driving cars are designed to do.

A machine designed to create traffic jams?

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable.  After all, the AI piloting a self-driving car must include an explicit response to the trolley problem.  Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians?  This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.” 

Otherwise, the manufacturers won’t sell cars.  Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life and death decision often.  But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi.  If demand for transportation happened to be lower than the quantity of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want for the car to work as a taxi, just as an extra source of income.  But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles.  Some self-driving cars would spend those hours unoccupied.

But they won’t park.  In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc. 

The cost to just keep driving is generally going to be lower than $2 per hour.  Worse, this cost is a function of the car’s speed.  If the car is idling at a dead stop, it will use approximately 0.1 gallon per hour, costing 25 cents per hour at today’s prices.  If the car is traveling at 30 mph without braking, it will use approximately 1 gallon per hour, costing $2.50 per hour.

To save money, the car wants to stay on the road … but it wants for traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem.  It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there.  As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
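Using the fuel and parking numbers above, the car’s incentive is easy to sketch.  The linear interpolation between idling and 30 mph is my own simplifying assumption, not a real fuel-consumption model:

```python
# A toy version of the car's choice, using the essay's numbers:
# gas at $2.50 per gallon, ~0.1 gallons/hour idling, ~1 gallon/hour
# at 30 mph, and parking at the cheap end of the quoted range.
GAS_PRICE = 2.50   # dollars per gallon
PARKING = 2.00     # dollars per hour

def cruising_cost(mph):
    """Dollars per hour spent circling at a given speed (assumed linear)."""
    gallons_per_hour = 0.1 + (1.0 - 0.1) * (mph / 30.0)
    return GAS_PRICE * gallons_per_hour

# Crawling through a jam (~1 mph) costs far less than parking;
# free-flowing traffic (30 mph) costs more.  Hence: seek the jam.
jam_cost = cruising_cost(1)
flow_cost = cruising_cost(30)
```

Under these assumptions the owner-utility-maximizing move is exactly the one Millard-Ball describes: stay on the road, as slowly as possible.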

Photo by walidhassanein on Flickr.

Pity the person sitting behind the wheel of an occupied car on those streets.  All the self-driving cars will be having a great time stuck in that traffic jam: we’re saving money!, they get to think.  Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide.  But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires that we set prices for scarce resources … like clean air, or habitable planets.

On artificial intelligence and solitary confinement.


In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’ “ – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

Red, you say?

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on this cover. Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, is leaving out a lot of important features.  A rigorous definition might require pages of text. 

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.
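The shift from definitions to examples can be sketched with a deliberately tiny learner.  Everything here is invented for illustration – the two-number “features” and the labels are mine, and real image classifiers are vastly more elaborate – but the principle is the same: the program is shown labeled examples and decides for itself where the boundary lies.

```python
# A minimal sketch of learning-by-example: a nearest-centroid classifier
# over toy two-dimensional feature vectors.

def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(examples):
    """examples: dict mapping label -> list of feature vectors."""
    return {label: centroid(pts) for label, pts in examples.items()}

def classify(model, point):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# "Each of these pictures has a ghost" -- we never say which features matter.
model = train({
    "ghost":     [(0.9, 0.8), (0.8, 0.9), (0.95, 0.85)],
    "not-ghost": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)],
})
label = classify(model, (0.85, 0.9))
```

Nothing in `train` mentions round heads or pale faces; the notion of ghost-hood lives entirely in the examples it was shown.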

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and made an output for the algorithm to show what it did when it “concentrated its attention.” 

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.


At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only its success rate – even the designers will have no idea why or how it works.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang related.”  The consequences of error can be terrible, here: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated as a “gang-related crime,” prison sentences can be egregiously long, or can send someone to solitary long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.