On AI-generated art.

Recently, an image generated by an artificial intelligence algorithm won an art competition.

As far as I can tell, this submission violates no rules. Pixel by pixel, the image was freshly generated – it was not “plagiarized” in the human sense of copying portions of another’s work wholesale. Indeed, if the AI were able to speak (which it can’t, because its particular design does not incorporate any means to generate language), it might describe its initial training as having “inspired” its current work.

The word “training” elides a lot of detail.

Most contemporary AI algorithms are not wholly scripted – a human programmer doesn’t write code that says, “When given the input ‘opera,’ include anthropomorphic shapes bedecked in luxurious fabrics.”

Instead, the programmer curates a large collection of images, some of which are given the descriptor “opera,” all others being, by default, “not opera.” Then the algorithm analyzes the images – treating each image as a grid of pixels, each pixel with a particular hue and brightness, and performing higher-order mathematical calculations on that grid: if there is a red pixel in one location, what are the odds that nearby pixels are also red, and what shape will that red cluster take? From this analysis, the algorithm finds mathematical descriptors that separate the “opera” images from the “not opera” images.
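
To make the idea concrete, here is a minimal sketch of that kind of training, using an ordinary logistic-regression classifier rather than whatever model actually generated the winning image; the file names and labels below are hypothetical stand-ins.

```python
# A minimal sketch of "training": a classifier learns numerical descriptors
# that separate images tagged "opera" from everything else.
# (Hypothetical file names and labels; real generative models are far more elaborate.)
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def image_to_vector(path, size=(64, 64)):
    """Treat an image as a grid of pixels: one long vector of color/brightness numbers."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

paths  = ["opera_001.jpg", "opera_002.jpg", "street_001.jpg", "cat_001.jpg"]
labels = [1, 1, 0, 0]   # 1 = "opera", 0 = "not opera" (the default)

X = np.stack([image_to_vector(p) for p in paths])
model = LogisticRegression(max_iter=1000).fit(X, labels)

# The learned weights are the "mathematical descriptors" -- numbers, not meanings.
print(model.predict_proba(X)[:, 1])   # how "opera-like" each image looks to the model
```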

An image designated “opera” is more likely to have patches with vivid hues that include bright and dark vertical stripes. A human viewer will interpret these as the shadowed folds of fabric draping an upright figure. The algorithm doesn’t need to interpret these features, though – the algorithm works only with a matrix of numbers that denote pixel colors.

In general, human programmers understand the principles by which AI algorithms work. After all, human programmers made them!

And human programmers know what sort of information was provided in the algorithm’s training set. For instance, if none of the images labeled “opera” within a particular training set showed performers sitting down, then the algorithm should not produce an opera image with alternating dark and light stripes arrayed horizontally – the algorithm will not have been exposed to horizontal folds in fabric, at least not within the context of opera.

But the particular details of how these algorithms work are often inscrutable to their creators. The algorithms are like children this way – you might know the life experiences that your child has been exposed to, and yet still have no idea why your kid is claiming that Bigfoot dips french fries into ice cream.

Every now and again, an algorithm sorts data by criteria that we humans find ridiculous. Or, rather: the algorithm sorts data by criteria that we would find ridiculous, if we could understand its criteria. But, in general, we can’t. It’s difficult to plumb the workings of these algorithms.

Because the algorithm’s knowledge is stored in multidimensional matrices that most human brains can’t grasp, we can’t compare the algorithm’s understanding of opera with our own. Instead, we can only evaluate whether or not the algorithm seems to work. Whether the algorithm’s images of “opera” look like opera to us, or whether an AI criminal justice algorithm recommends the longest prison sentences to people whom we also assume to be the most dangerous offenders.

#

So, about that art contest. I’m inclined to think that, for a category of “digitally created artwork,” submitting a piece that was created by an AI is fair. A human user still plays a curatorial role, perhaps requesting many images from the exact same prompt – each generated from a different random seed – and then choosing the best.
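
A rough sketch of that curatorial workflow, under the assumption of a generic text-to-image system – the generate_image function below is a placeholder, not any real API:

```python
# Sketch of the curatorial role: one prompt, many random seeds, a human keeps the best.
# generate_image() is a stand-in for a text-to-image model, not a real interface.
import random
import numpy as np

def generate_image(prompt, seed, size=(512, 512)):
    """Placeholder: returns noise, where a real model would return a finished picture."""
    rng = np.random.default_rng(seed)
    return rng.random((*size, 3))

prompt = "opera"
candidates = [(seed, generate_image(prompt, seed))
              for seed in (random.randrange(2**32) for _ in range(100))]

# The human contribution happens here: review all 100 images, keep the one worth framing.
```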

It’s a little weird, because in many ways the result would be a collaborative project – somebody’s work went into scripting the AI, and a huge amount of work went into curating and tagging the training set of images – but you could argue that anytime an artist uses a tool or filter on Photoshop, they’re collaborating with the programmers.

An artist might paint a background and then click on a button labeled “whirlpool effect,” but somebody had to design and script the mathematical function that converts the original array of pixel colors into something that we humans would then believe had been sucked into a whirlpool.

In some ways, this collaboration is acknowledged (in a half-hearted, transactional, capitalist way) – the named artist has paid licensing fees to use Photoshop or an AI algorithm. Instead of recognition, the co-creators receive money.

But there’s another wrinkle: we do not create art alone.

Even the Lascaux cave paintings were not made in isolation – although no other paintings from that era survived until the present day, many probably existed (in places that were less protected from the elements and so were destroyed by wind & rain & mold & time). The Lascaux artist(s) presumably saw themselves as part of an artistic community or tradition.

In the development of a human artist, that person will see, hear, & otherwise experience many artistic creations by others. Over the course of our lives, we visit museums, read books, watch television, hear music, eat at restaurants – we’re constantly learning from the world around us, in ways that would be impossible to fully acknowledge. A painter might include a flourish that was inspired by a picture they saw in childhood and no longer consciously remember.

This collaborative debt is more obvious among AI algorithms. These algorithms need fuel: their meticulously-tagged sets of training images. The algorithms generate new images of only the sort that they’ve been fed.

It’s the story of a worker being simultaneously laid off and asked to train their replacement.

Unfortunately for human artists, our world is already awash in beautiful images. Obviously, I’m not saying that we need no more art! I’m a writer, in a world that’s already so full of books! The problem, instead, is that the AI algorithms have ample training sets. Even if, hypothetically, these algorithms instantly drove every working artist out of business, or made human artists so nervous that they refused to let any more of their work be digitized, there’s still an enormous library of existing art for the AI algorithms to train on.

After hundreds of years of collecting beautiful paintings in museums, we would need a hefty dollop of hubris to imagine that the algorithms would immediately stagnate if they lacked access to new human-generated paintings.

Also, it wouldn’t be insurmountable to program something akin to “creativity” into the algorithms – an element of randomness that allows the algorithm to deviate from trends in its training set. This would put more emphasis on a user’s curatorial judgment, but it would also let the algorithms innovate. Presumably most of the random deviations would look bad to me, but that’s often the way with innovation – impressionism, cubism, and other movements looked bad to many people at the beginning. (Honestly, I still don’t like much impressionism.)
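
One hedged illustration of what such engineered “creativity” might look like: perturb whatever internal representation the model uses with noise whose scale the user controls. The 512-dimensional latent vector and the scales below are purely illustrative, not drawn from any particular system.

```python
# Illustration only: "creativity" as user-controlled noise added to a model's
# internal representation. The latent vector here is hypothetical.
import numpy as np

def sample_with_creativity(latent, creativity=0.3, rng=None):
    """Nudge a latent vector with random noise so outputs can drift away from
    the trends in the training set."""
    rng = rng or np.random.default_rng()
    return latent + rng.normal(0.0, creativity, size=latent.shape)

latent = np.zeros(512)                                    # hypothetical starting point
timid  = sample_with_creativity(latent, creativity=0.05)  # hews close to the training data
daring = sample_with_creativity(latent, creativity=1.0)   # deviates more; most results will look bad
```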

#

There’s no reason to expect a brain made of salty fat to have incomparable powers. Our thoughts don’t come from anything spooky like quantum mechanics – neurons are much too big to persist in superpositions. Instead, we humans are so clever because we have a huge number of neurons interconnected in complex ways. We’re pretty special, but we’re not magical.

Eventually, a brain made of circuits could do anything that we humans can.

That’s a crucial long-run flaw of capitalism – eventually, the labor efforts of all biological organisms will be replaceable, so all available income could be allocated to capital owners instead of labor producers.

In a world of physician-bots, instead of ten medical doctors each earning a salary, the owner of ten RoboMD units would keep all the money.

We’re still a ways off from RoboMD entering the market, but this is a matter of engineering. AI algorithms can already write legal contracts, do sports journalism, drive cars & trucks, create award-winning visual images – there’s no reason to believe that an AI could never treat illnesses as well as a human doctor, clean floors as well as a human janitor, write code as well as a human programmer.

In the long run, all our work could be done by machines. Human work will be unnecessary. Within the logic of capitalism, our income should drop to zero.

Within the logic of capitalism, only the owners of algorithms should earn any money in the long run. (And in the very long run, only the single owner of the best algorithms should earn any money, with all other entities left with nothing.)

#

Admittedly, it seems sad for visual artists – many of whom might not have nuanced economics backgrounds – to be among the people who experience the real-world demonstration of this principle first.

It probably feels like a very minor consolation to them, knowing that AI algorithms will eventually be able to do everyone else’s jobs, too. When kids play HORSE, nobody wants to be out first.

But also, we have a choice. Kids choose whether or not to play HORSE, and they choose what rules they’ll play by. We (collectively) get to choose whether our world will be like this.

I’m not even that creative, and I can certainly imagine worlds in which, even after the advent of AI, human artists still get to do their work, and eat.

On empathy and the color red.

I can’t fly.

I try to feed my children every night, but I never vomit blood into their mouths.

When I try to hang upside down – like from monkey bars at a playground – I have to clench my muscles, and pretty soon I get dizzy. I couldn’t spend a whole day like that.

And, yes, sometimes I shout. Too often during the pandemic, I’ve shouted at my kids. But when I shout, I’m trying to make them stop hitting each other – I’m not trying to figure out where they are.

It’s pretty clear that I’m not a bat.

#

Photograph by Anne Brooke, USFWS

Because I haven’t had these experiences, philosopher Thomas Nagel would argue that I can’t know how it feels to be a bat.

In so far as I can imagine [flitting through the dark, catching moths in my mouth], it tells me only what it would be like for me to behave as a bat behaves.

But that is not the question. I want to know what it is like for a bat to be a bat.

#

Perhaps I can’t know what it feels like for a bat to be a bat. And yet, I can empathize with a bat. I can imagine how it might feel to be trapped in a small room while a gamboling, wiry-limbed orc-thing tried to swat me with a broom.

It would be terrifying!

And that act of imagination – of empathy – is enough for me to want to protect bats’ habitats. To make space for them in our world. Sure, you could argue that bats are helpful for us – they’re pollinators, they eat pesky bugs – but empathy lets us care about the well-being of bats for their own sake.

#

Literature exercises our minds: when we read, invent, and share stories, we build our capacity for empathy, becoming more generally aware of the world outside our own skulls.

Writing can be a radical act of love. Especially when we write from a perspective that differs from our own. The poet Ai said that “Whoever wants to speak in my poems is allowed to speak, regardless of sex, race, creed, or color.” Her poems often unfurl from the perspective of violent men, and yet she treats her protagonists with respect and kindness. Ai gives them more than they deserve: “I don’t know if I embrace them, but I love them.”

Ai

That capacity for love, for empathy, will let us save the world. Although many of us haven’t personally experienced a lifetime of racist microaggressions or conflict with systemic oppression, we all need to understand how rotten it would feel. We need to understand that the pervasive stress seeps into a person’s bones, causing all manner of health problems. We need to understand the urgency of building a world where all children feel safe.

And if we don’t understand – yet – maybe we need to read more.

Experiments suggest that reading any engaging literary fiction boosts our ability to empathize with others. Practice makes better: get outside your head for a while, and it’ll be easier to do it again next time.

Of course, we’ll still need to make an effort to learn what others are going through. Thomas Nagel was able to ruminate so extensively about what it would feel like to live as a bat because we’ve learned about bats’ echolocation, their feeding habits, their family lives. If we want to be effective anti-racists, we need to learn about Black experiences in addition to developing our empathy more generally.

Luckily, there’s great literature with protagonists facing these struggles – maybe you could try How We Fight for Our Lives, Americanah, or The Sellout.

#

As a bookish White person, it’s easy for me to empathize with the experiences of other bookish White people. In Search of Lost Time doesn’t tax my brain. Nor does White Noise. The characters in these books are a lot like me.

The cognitive distance between me and the protagonists of Americanah is bigger. Which is sad in and of itself – as high schoolers, these characters were playful, bookish, and trusting, no different from my friends or me. But then they were forced to endure hard times that I was sufficiently privileged to avoid. And so when I read about their lives, perched as I was atop my mountain of privilege, it was painful to watch Ifemelu and Obinze develop their self-protective emotional carapaces, armoring themselves against the injustice that ceaselessly buffets them.

Another reader might nod and think, I’ve been there. I had to exercise my imagination.

#

In Being a Beast, Charles Foster describes his attempts to understand the lives of other animals. He spent time mimicking their behaviors – crawling naked across the dirt, eating worms, sleeping in an earthen burrow. He wanted a badger’s-eye view of the world.

Foster concluded that his project was a failure – other animals’ lives are just so different from ours.

And yet, as a direct consequence of his attempt at understanding, Foster changed his life. He began treating other animals with more kindness and respect. To me, this makes his project a success.

White people might never understand exactly how it feels to be Black in America. I’m sure I don’t. But we can all change the way we live. We can, for instance, resolve to spend more money on Black communities, and spend it on more services than just policing.

#

Empathy is working when it forces us to act. After all, what we do matters more than what we purport to think.

It’s interesting to speculate what it would feel like to share another’s thoughts – in Robert Jackson Bennett’s Shorefall, the protagonists find a way to temporarily join minds. This overwhelming rush of empathy and love transforms them: “Every human being should feel obliged to try this once.”

In the real world, we might never know exactly how the world feels to someone else. But Nagel wants to prove, with words, that he has understood another’s experience.

One might try, for example, to develop concepts that could be used to explain to a person blind from birth what it was like to see. One would reach a blank wall eventually, but it should be possible to devise a method of expressing in objective terms much more than we can at present, and with much greater precision.

The loose intermodal analogies – for example, “Red is like the sound of a trumpet” – which crop up in discussions of this subject are of little use. That should be clear to anyone who has both heard a trumpet and seen red.

#

We associate red with many of our strongest emotions: anger, violence, love.

And we could tell many different “just so” stories to explain why we have these associations.

Like:

Red is an angry color because people’s faces flush red when they’re mad. Red blood flows when we’re hurt, or when we hurt another.

Or:

Red represents love because a red glow spreads over our partners’ necks and chests and earlobes as we kiss and caress and fumble together.

Or:

Red is mysterious because a red hue fills the sky at dawn and dusk, the liminal hours when we are closest to the spirit world.

These are all emergent associations – they’re unrelated to the original evolutionary incentive that let us see red. Each contributes to how we see red now, but none explains the underlying why.

#

We humans are blue-green-red trichromatic – we can distinguish thousands of colors, but our brains do this by comparing the relative intensities of just three color channels.

And we use the phrase “color blind” to describe the people and other animals who can’t distinguish red from green. But all humans are color blind – there are colors we can’t see. To us, a warm body looks identical to a cold wax replica. But their colors are different, as any bullfrog could tell you.

Photograph by Tim Mosenfelder, Getty Images

Our eyes lack the receptors – cone cells with a particular fold of opsin – that could distinguish infrared light from other wavelengths. We mistakenly assume these two singers have the same color skin.

When we look at flowers, we often fail to see the beautiful patterns that decorate their petals. These decorations are obvious to any bee, but we’re oblivious. Again, we’re missing the type of cone cells that would let us see. To fully appreciate flowers, we’d need receptors that distinguish ultraviolet light from blue.

#

Most humans can see the color red because we’re descended from fruit eaters. To our bellies, a red berry is very different from a green berry. And so, over many generations, our ancestors who could see the difference were able to gather more nutritious berries than their neighbors. Because they had genes that let them see red, they were better able to survive, have children, and keep their children fed.

The genes for seeing red spread.

Now, several hundred thousand years later, this wavelength of light blares at us like a trumpet. Even though our ancestors learned to cook food with fire, and switched from fruit gathering to hunting, and then built big grocery stores where the bright flashes of color are just advertisements for a new type of high-fructose-corn-syrup-flavored cereal, red still blares at us.

Once upon a time, we really needed to see ripe fruit. The color red became striking to us, wherever we saw it. And so we invented new associations – rage, or love – even though these are totally unrelated to the evolutionary pressures that gave us our red vision.

Similarly, empathy wasn’t “supposed” to let us build a better world. Evolution doesn’t care about fairness.

And yet. Even though I might never know exactly how it feels when you see the color red, I can still care how you’re treated. Maybe that’s enough.

#

Header image: a greater short-nosed fruit bat, photograph by Anton 17.

On suboptimal optimization.

I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra. 

Optimization is a wonderful mathematical tool.  Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena.  Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum.  Philosophers love it – how can we provide the most happiness for a population?  Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s Philosophical Investigations with this New York Times Magazine article on machine learning and a primer on principal component analysis).
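
For readers who have never met a constrained maximum, here is a toy version of the economists’ problem – maximize profit subject to limited labor and materials – solved with an off-the-shelf linear-programming routine. Every number below is invented for illustration.

```python
# A toy constrained maximum: choose quantities of two products to maximize profit,
# subject to limited labor and materials. All numbers are invented.
from scipy.optimize import linprog

profit = [-3.0, -5.0]             # linprog minimizes, so negate the per-unit profits
constraint_matrix = [[1.0, 2.0],  # labor hours needed per unit of product A, product B
                     [3.0, 1.0]]  # kilograms of material needed per unit
constraint_limits = [40.0,        # labor hours available
                     45.0]        # material available

result = linprog(profit, A_ub=constraint_matrix, b_ub=constraint_limits, bounds=(0, None))
print(result.x, -result.fun)      # optimal quantities and the maximized profit
```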

But, even though optimization problems are useful, the math behind them can be tricky.  I’m skeptical that this mathematical technique is something everyone who wants a B.A. needs to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology.  She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more.  A basic understanding of statistics might make people better citizens.

Whereas … linear algebra?  This is a beautiful but counterintuitive field of mathematics.  If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math.  A deep understanding of linear algebra can enliven your study of quantum mechanics.

The summary of quantum mechanics: animation by Templaton.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp on linear algebra.  He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back.  He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession.  After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet.  Somebody in an office decided that all university students here need to pass this class.  I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse.  Numbers will always do what they’re supposed to, but each such problem begins with a choice.  What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Figure 1 from Eykholt et al., 2018.

Most automobile companies are researching self-driving cars.  They’re the way of the future!  In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw.  A car that accelerates instead of braking in that situation is not operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible?  That’s a feature.  That’s what self-driving cars are designed to do.

A machine designed to create traffic jams?

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable.  After all, the AI piloting a self-driving car must include an explicit response to the trolley problem.  Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians?  This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.” 

Otherwise, the manufacturers won’t sell cars.  Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life and death decision often.  But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi.  If the demand for rides happened to be lower than the supply of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want the car to work as a taxi, just as an extra source of income.  But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles.  Some self-driving cars would spend those hours unoccupied.

But they won’t park.  In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc. 

The cost to just keep driving is generally going to be lower than $2 per hour.  Worse, this cost is a function of the car’s speed.  If the car is idling at a dead stop, it will use approximately 0.1 gallon per hour, costing 25 cents per hour at today’s prices.  If the car is traveling at 30 mph without braking, it will use approximately 1 gallon per hour, costing $2.50 per hour.

To save money, the car wants to stay on the road … but it wants traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem.  It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there.  As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
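
Here is a toy sketch of that perverse optimization, using the prices quoted above and assuming (simplistically) that fuel burn rises linearly with speed; the street names and speeds are invented.

```python
# Toy sketch of the owner-serving car's decision: compare the cost of parking with
# the cost of circling, then head for whichever nearby streets are slowest.
GAS_PRICE   = 2.50   # dollars per gallon
IDLE_BURN   = 0.1    # gallons per hour at a dead stop
CRUISE_BURN = 1.0    # gallons per hour at ~30 mph

def cruising_cost_per_hour(avg_speed_mph):
    """Slower traffic means less fuel burned, so jammed streets are 'cheaper.'"""
    fraction_moving = min(avg_speed_mph / 30.0, 1.0)
    burn = IDLE_BURN + (CRUISE_BURN - IDLE_BURN) * fraction_moving
    return burn * GAS_PRICE

def choose_where_to_wait(parking_cost_per_hour, nearby_segments):
    """nearby_segments: (street name, average speed in mph) pairs from the GPS feed."""
    street, speed = min(nearby_segments, key=lambda seg: seg[1])  # the most jammed street
    if cruising_cost_per_hour(speed) < parking_cost_per_hour:
        return f"circle {street} at about {speed} mph"
    return "park"

print(choose_where_to_wait(2.00, [("5th Ave", 22), ("Main St", 4), ("Ring Rd", 35)]))
```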

Photo by walidhassanein on Flickr.

Pity the person sitting behind the wheel of an occupied car on those streets.  All the self-driving cars will be having a great time stuck in that traffic jam: we’re saving money!, they get to think.  Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide.  But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires we set prices for scarce resources … like clean air, or habitable planets.

On ‘The Overstory.’

We delude ourselves into thinking that the pace of life has increased in recent years.  National news is made by the minute as politicians announce their plans via live-televised pronouncement or mass-audience short text message.  Office workers carry powerful computers into their bedrooms, continuing to work until moments before sleep.

But our frenzy doesn’t match the actual pace of the world.  There’s a universe of our own creation zipping by far faster than the reaction time of any organism that relies on voltage waves propagating along its ion channels.  Fortunes are made by shortening the length of fiberoptic cable between supercomputer clusters and the stock exchange, improving response times by fractions of a second.  “Practice makes perfect,” and one reason the new chess and Go algorithms are so much better than human players is that they’ve played lifetimes of games against themselves since their creation.

We can frantically press buttons or swipe our fingers across touch screens, but humans will never keep up with the speed of the algorithms that recommend our entertainment, curate our news, eavesdrop on our conversations, guess at our sexual predilections, condemn us to prison.

And then there’s the world.  The living things that have been inhabiting our planet for billions of years – the integrated ecosystems they create, the climates they shape.  The natural world continues to march at the same stately pace as ever.  Trees siphon carbon from the air as they grasp for the sun, then fall and rot and cause the Earth itself to grow.  A single tree might live for hundreds or thousands of years.  The forests in which they are enmeshed might develop a personality over millions.

Trees do not have a neural network.  But neither do neurons.  When simple components band together and communicate, the result can be striking.  And, as our own brains clearly show, conscious.  The bees clustering beneath a branch do not seem particularly clever by most of our metrics, but the hive as a whole responds intelligently to external pressures.  Although each individual has no idea what the others are doing, they function as a unit.

Your neurons probably don’t understand what they’re doing.  But they communicate to the others, and that wide network of communication is enough.

Trees talk.  Their roots intertwine – they send chemical communiques through symbiotic networks of fungal mycelia akin to telephones.

Trees talk slowly, by our standards.  But we’ve already proven to ourselves that intelligence could operate over many orders of temporal magnitude – silicon-based AI is much speedier than the chemical communiques sent from neuron to neuron within our own brains.  If a forest thought on a timescale of days, months, or years, would we humans even notice?  Our concerns were bound up in the minute by minute exigencies of hunting for food, finding mates, and trying not to be mauled by lions.  Now, they’re bound up in the exigencies of making money.  Selecting which TV show to stream.  Scoping the latest developments of a congressional race that will determine whether two more years pass without the slightest attempt made to avoid global famine.

In The Overstory, Richard Powers tries to frame this timescale conflict such that we Homo sapiens might finally understand.  Early on, he presents a summary of his own book; fractal-like, this single paragraph encapsulates the entire 500 pages (or rather, thousands of years) of heartbreak.

He still binges on old-school reading.  At night, he pores over mind-bending epics that reveal the true scandals of time and matter.  Sweeping tales of generational spaceship arks.  Domed cities like giant terrariums.  Histories that split and bifurcate into countless parallel quantum worlds.  There’s a story he’s waiting for, long before he comes across it.  When he finds it at last, it stays with him forever, although he’ll never be able to find it again, in any database.  Aliens land on Earth.  They’re little runts, as alien races go.  But they metabolize like there’s no tomorrow.  They zip around like swarms of gnats, too fast to see – so fast that Earth seconds seem to them like years.  To them, humans are nothing but sculptures of immobile meat.  The foreigners try to communicate, but there’s no reply.  Finding no signs of intelligent life, they tuck into the frozen statues and start curing them like so much jerky, for the long ride home.

Several times while reading The Overstory, I felt a flush of shame at the thought of how much I personally consume.  Which means, obviously, that Powers was doing his work well – I should feel ashamed.    We are alive, brilliantly beautifully alive, here on a magnificent, temperate planet.  But most of us spend too little time feeling awe and too much feeling want.  “What if there was more?” repeated so often that we’ve approached a clear precipice of forever having less.

In Fruitful Labor, Mike Madison (whose every word – including the rueful realization that young people today can’t reasonably expect to follow in his footsteps – seems to come from a place of earned wisdom and integrity, a distinct contrast from Thoreau’s Walden, in my opinion) asks us to:

Consider the case of a foolish youth who, at age 21, inherits a fortune that he spends so recklessly that, by the age of 30, the fortune is dissipated and he finds himself destitute.  This is more or less the situation of the human species.  We have inherited great wealth in several forms: historic solar energy, either recent sunlight stored as biomass, or ancient sunlight stored as fossil fuels; the great diversity of plants and animals, organized into robust ecosystems; ancient aquifers; and the earth’s soil, which is the basis for all terrestrial life.  We might mention a fifth form of inherited wealth – antibiotics, that magic against many diseases – which we are rendering ineffective through misuse.  Of these forms of wealth that we are spending so recklessly, fossil fuels are primary, because it is their energy that drives the destruction of the other assets.

What we have purchased with the expenditure of this inheritance is an increase in the human population of the planet far above what the carrying capacity would be without the use of fossil fuels.  This level of population cannot be sustained, and so must decline.  The decline could be gradual and relatively painless, as we see in Japan, where the death rate slightly exceeds the birth rate.  Or the decline could be sudden and catastrophic, with unimaginable grief and misery.

In this context, the value of increased energy efficiency is that it delays the inevitable reckoning; that is, it buys us time.  We could use this time wisely, to decrease our populations in the Japanese style, and to conserve our soil, water, and biological resources.  A slower pace of climate change could allow biological and ecological adaptations.  At the same time we could develop and enhance our uses of geothermal, nuclear, and solar energies, and change our habits to be less materialistic.  A darker option is to use the advantages of increased energy efficiency to increase the human population even further, ensuring increasing planetary poverty and an even more grievous demise.  History does not inspire optimism; nonetheless, the ethical imperative remains to farm as efficiently as one is able.

The tragic side of this situation is not so much the fate of the humans; we are a flawed species unable to make good use of the wisdom available to us, and we have earned our unhappy destiny by our foolishness.  It is the other species on the planet, whose destinies are tied to ours, that suffer a tragic outcome.

Any individual among us could protest that “It’s not my fault!”  The Koch brothers did not invent the internal combustion engine – for all their efforts to confine us to a track toward destitution and demise, they didn’t set us off in that direction.  And it’s not as though contemporary humans are unique in reshaping our environment into an inhospitable place, pushing ourselves toward extinction.

Heck, you could argue that trees brought this upon themselves.  Plants caused climate change long before there was a glimmer of a chance that animals like us might ever exist.  The atmosphere of the Earth was like a gas chamber, stifling hot and full of carbon dioxide.  But then plants grew and filled the air with oxygen.  Animals could evolve … leading one day to our own species, which now kills most types of plants to clear space for a select few monocultures.

As Homo sapiens spread across the globe, we rapidly caused the extinction of nearly all mega-fauna on every continent we reached.  On Easter Island, humans caused their own demise by killing every tree – in Collapse, Jared Diamond writes that our species’ inability to notice long-term, gradual change made the environmental devastation possible (indeed, the same phenomenon explains why people aren’t as upset as they should be about climate change today):

We unconsciously imagine a sudden change: one year, the island still covered with a forest of tall palm trees being used to produce wine, fruit, and timber to transport and erect statues; the next year, just a single tree left, which an islander proceeds to fell in an act of incredibly self-damaging stupidity.

Much more likely, though, the changes in forest cover from year to year would have been almost undetectable: yes, this year we cut down a few trees over there, but saplings are starting to grow back again here on this abandoned garden site.  Only the oldest islanders, thinking back to their childhoods decades earlier, could have recognized a difference. 

Their children could no more have comprehended their parents’ tales of a tall forest than my 17-year-old sons today can comprehend my wife’s and my tales of what Los Angeles used to be like 40 years ago.  Gradually, Easter Island’s trees became fewer, smaller, and less important.  At the time that the last fruit-bearing adult palm tree was cut, the species had long ago ceased to be of any economic significance.  That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. 

No one would have noticed the falling of the last little palm sapling.

Throughout The Overstory, Powers summarizes research demonstrating all the ways that a forest is different from – more than – a collection of trees.  It’s like comparing a functioning brain with neuronal cells grown in a petri dish.  But we have cut down nearly all our world’s forests.  We can console ourselves that we still allow some trees to grow – timber crops to ensure that we’ll still have lumber for all those homes we’re building – but we’re close to losing forests without ever knowing quite what they are.

Powers is furious, and he wants you to change your life.

“You’re a psychologist,” Mimi says to the recruit.  “How do we convince people that we’re right?”

The newest Cascadian [a group of environmentalists-cum-ecoterrorists / freedom fighters] takes the bait.  “The best arguments in the world won’t change a person’s mind.  The only thing that can do that is a good story.”

On reading poems from Donika Kelly’s ‘Bestiary’ in jail.

This post briefly touches on sexual assault and child abuse.

Many of the men in jail have struggled with interpersonal relationships.

After reading Bruce Weigl’s “The Impossible,” a poem about being sexually assaulted as a child, somebody stayed after class to ask if there were resources to help someone recover from that sort of experience.  The next week, he brought a two-page account of his own abuse.

After reading Ai’s “Child Beater,” many men proffered their own horror stories.  Sometimes they offered excuses for their parents: “My mom, she had me when she was thirteen, I guess what you’d call it now would be ‘statutory rape.’  So she didn’t know what to do with us.  But there were plenty of times, I’d be mouthing off, she’d tie my arms to rafters in the basement with an extension cord, and … “

Seriously, you don’t need to hear the rest of that story.  Nor the conversation (we’ve read “Child Beater” about once a year) when the men discussed which objects they’d been hit with.  They appraised concussions and trauma with the nuance of oenophiles.

Consider this gorgeous poem by Mouse:

 

THAT CAT

– Mouse

 

We had this cat

Small gray and frantic

Always knocking over my mother’s lamps

 

Me and my sister can’t sit on my mother’s furniture

But that cat can

My mother would whoop my ass for her lamps

Knocked over and broken

 

One day my mom bought me a dollar sign belt

Made of leather and metal

I put that belt to use every time I

Got my own ass whooped

 

We humans evolved to hunt.  By nature, we are a rather violent species.  But these cycles – people’s crummy childhoods; institutional violence during schooling and incarceration – amplify aggression.  Our world “nurtures” many into malice.

If you ask people in jail why they’re in, almost everybody will say that they were busted for drugs or alcohol.  But if you look at bookings, or hear from somebody what sort of case he’s fighting, about half the time it’s domestic violence.

So we’ve been reading poems from Donika Kelly’s Bestiary, a charming volume that uses abundant animal imagery to elucidate human relationships.  The men need a safe space to discuss love and romance.  Obviously, a dingy classroom inside a jail is not the ideal place, but this is what we’ve got.

Kelly’s “Bower” opens with:

 

Consider the bowerbird and his obsession

of blue,

 

… then catalogs some of the strange objects that a male bowerbird might use to construct his pleasure dome.  They are artists, meticulously arraying flowers, berries, beetles, even colorful bits of plastic, striving to create an arch sufficiently beautiful that a visiting female will feel inclined to mate.

Among tropical birds with female mate choice, most males will remain celibate.  They try to woo each visitor, but fail.  Usually one single male – he of the most impressive aerial gymnastics (among manakins) or he of the most impressive bower – will be chosen by every female in an area.  Because the males don’t actually raise their young (their contribution ends after the ten or twenty seconds needed to copulate), any given male will have more than enough time for everyone who wants him.

Every male bowerbird devotes his life to the craft, but most of their creations will be deemed insufficiently beautiful.

 

And

how the female finds him,

lacking.  All that blue for nothing.

I love the irony of this ending.  This bird’s bower has failed.  The bits of blue that he collected were not sufficient to rouse anyone’s interest in him as a mate.

But life will generally seem pointless if we focus only on goals.  Most bowerbirds won’t mate; Sisyphus will never get that boulder up; you and I will die.    This poem is heartbreaking unless we imagine that the bowerbird takes some pleasure in the very act of creation.

(The natural world is not known for its kindness, but in this case it probably is – because every male bowerbird feels compelled to build these structures, it’s likely that their artistic endeavors feed their brains with dopamine.)

Indeed, most poems that we humans write will go unread.  Even for published poets, it’s probably rare that their words woo a future mate.  But even if Kelly’s own creation did not bring her love (and, based on what little I know about the publishing industry, it almost certainly did not bring her great fortune), it’s clear that all that effort was not for naught.

She made something beautiful.  Sometimes, that alone has to be enough.

At another class, we read Kelly’s “What Gay Porn Has Done for Me.”

Thanks to the internet, many people learn about sexuality from pornography.  One flaw with this “education” is that even when the female actors mime pleasure, they do so while gazing outward.

 

Kelly writes:

 

Call it comfort, or truth, how they look,

not at the camera, as women do,

but at one another.

 

In generic heterosexual pornography, there is a distance.  There is no “relationship” shown between the actors – they’re not even looking at one another.  Instead, the female actor is expected to gaze at a camera, and the (likely male) consumer is gazing at a computer or telephone screen to make some simulacrum of eye contact.

 

Each body is a body on display,

and one I am meant to see and desire.

 

Generic heterosexual pornography seems to objectify the actors much more than gay pornography because the focus is on a performer’s body more than the romantic acts depicted.  Because so much of this pornography is consumed by a homophobic audience, male bodies are depicted minimally – usually only a single organ within the frame – which prevents couples from being shown.

The pleasure offered isn’t quite voyeurism, pretending to watch another pair make love.  It’s fantasy, the chance to imagine being the bearer of the male genitalia.  But this fantasy, independent of a fantasy of conversation and mutual seduction, makes others’ bodies seem a thing to be used, not a carriage for the partner’s personality.

 

I am learning

 

what to do with my face,

and I come on anything I like.

 

To desire, and to be desired, need not be degrading for anyone involved.  This is a hard lesson to square with the sort of “sex education” that I received in school, which was sufficiently Christian that sex was presented as both desirable and bad.  If a person thinks that he or she is wicked for wanting, it’ll be hard to discuss what each person wants.

There’s no way to pretend “I’m a good person who just got carried away!” if you make a sober, premeditated, consensual decision to do something bad.

Of course, sexuality isn’t bad.  But many people still posture as though it is.  When these people feel (totally natural!) desire, they’re forced to create dangerous situations that might excuse their subsequent behavior.

Which, because of those excuse-enabling contortions, often winds up being bad.

On ‘The Theft of Fire.’

Stories are powerful things.  A world in which workers are brought into a country as farmhands is very different from one in which barbaric kidnappers torture their victims to extract labor.  A world in which death panels ration healthcare is different from one in which taxpayers preferentially fund effective medical care.

You’ll feel better about your life if you sit down and list the good things that happened to you each day.  There’s only one reality, but countless ways to describe it.

Like most scientists, I love stories of discovery.  These stories also reflect our values – many years passed before Rosalind Franklin’s role in determining the structure of DNA was acknowledged.  Frontal lobe lobotomy was considered so beneficial that it won the Nobel Prize – sane people didn’t have to tolerate as much wild behavior from others.  Of course, those others were being erased when we ablated their brains.

Even equations convey an ideological slant.  When a chemist writes about the combustion of gasoline, the energy change is negative.  The chemicals are losing energy.  When an engineer writes about the same reaction, the energy change is described as positive.  Who cares about the chemicals?  We humans are gaining energy.  When octane reacts with oxygen, our cars go vrrrooom!
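
To make the sign-convention point concrete, here is the same combustion of octane written both ways – the chemist’s negative enthalpy change and the engineer’s positive heat output. The numerical value is approximate.

```latex
% Combustion of octane, written two ways (the numerical value is approximate).
% Chemist's bookkeeping: the chemicals lose energy, so the change is negative.
\[
  2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \longrightarrow 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O},
  \qquad \Delta H \approx -5{,}500\ \mathrm{kJ\ per\ mole\ of\ octane}
\]
% Engineer's bookkeeping: we (the surroundings) gain that same energy.
\[
  Q_{\mathrm{out}} = -\Delta H \approx +5{,}500\ \mathrm{kJ\ per\ mole\ of\ octane}
\]
```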

I’ve been reading a lot of mythology, which contains our oldest stories of discovery.  The ways we tell stories haven’t changed much – recent events slide quickly into myth.  Plenty of people think of either George W. Bush or Barack Obama as Darth-Vader-esque villains, but they’re just regular people.  They have myriad motivations, some good, some bad.  Only in our stories can they be simplified into monsters.

In Ai’s poem, “The Testimony of J. Robert Oppenheimer,” she writes that

I could say anything, couldn’t I?

Like a bed we make and unmake at whim,

the truth is always changing,

always shaped by the latest

collective urge to destroy.

Oppenheimer was a regular person, too.  He was good with numbers, and his team of engineers accomplished what they set out to do.

My essay about the ways we mythologize discovery was recently published here, alongside surrealistically mythological art by Jury S. Judge.

On artificial intelligence and solitary confinement.

In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’ “ – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

Red, you say?

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on this cover. Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, is leaving out a lot of important features.  A rigorous definition might require pages of text. 

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and built a way for the algorithm to show what it was doing when it “concentrated its attention.”

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.
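
The underlying trick, as I understand it (the details of the Google team’s actual implementation differ), is to run gradient ascent on the input image itself: start from noise and nudge the pixels until some chosen internal unit of a trained network responds as strongly as possible. A rough sketch, with placeholder layer and unit numbers:

```python
# Rough sketch of feature visualization (not Google's actual code): adjust an input
# image by gradient ascent so that one chosen internal unit of a trained network
# fires as strongly as possible. Layer and unit numbers below are arbitrary placeholders.
import torch
import torchvision.models as models

network = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # begin with random noise
optimizer = torch.optim.Adam([image], lr=0.05)

target_layer, target_unit = 10, 42   # "concentrate attention" on this unit

for _ in range(200):
    optimizer.zero_grad()
    activation = image
    for i, layer in enumerate(network.features):
        activation = layer(activation)
        if i == target_layer:
            break
    loss = -activation[0, target_unit].mean()   # maximize this unit's response
    loss.backward()
    optimizer.step()

# 'image' now caricatures whatever feature that unit has learned to detect.
```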

At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only its success rate – even the designers will have no idea why or how it works.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang related.”  The consequences of error can be terrible, here: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated as a “gang-related crime”, prison sentences can be egregiously long, or send someone to solitary for long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.

On automation, William Gaddis, and addiction.

I’ve never bought meth or heroin, but apparently it’s easier now than ever.  Prices dropped over the last decade, drugs became easier to find, and more people, from broader swaths of society, began using.  Or so I’ve been told by several long-term users.

This is capitalism working the way it’s supposed to.  People want something, others make money by providing it.

And the reason why demand for drugs has increased over the past decade can also be attributed to capitalism working the way it’s supposed to.  It takes a combination of capital (stuff) and labor (people) to provide any service, but the ratio of these isn’t fixed.  If you want to sell cans of soda, you could hire a human to stand behind a counter and hand sodas to customers, or you could install a vending machine.

The vending machine requires labor, too.  Somebody has to fill it when it’s empty.  Someone has to fix it when it breaks.  But the total time that humans spend working per soda is lower.  In theory, the humans working with the vending machine are paid higher wages.  After all, it’s more difficult to repair a machine than to hand somebody a soda.

As our world’s stuff became more productive, fewer people were needed.  Among ancient hunter gatherers, the effort of one person was needed to feed one person.  Everyone had to find food.  Among early farmers, the effort of one person could feed barely more than one person.  To attain a life of leisure, a ruler would have to tax many, many peasants.

By the twentieth century, the effort of one person could feed four.  Now, the effort of one person can feed well over a hundred.

With tractors, reapers, refrigerators, etc., one human can accomplish more.  Which is good – it can provide a higher standard of living for all.  But it also means that not everyone’s effort is needed.

At the extreme, not anyone’s effort is needed.

There’s no type of human work that a robot with sufficiently advanced AI couldn’t do.  Our brains and bodies are the product of haphazard evolution.  We could design something better, like a humanoid creature whose eyes registered more of the electromagnetic spectrum and had no blind spots (due to an octopus-like optic nerve).

If one person patented all the necessary technologies to build an army of robots that could feed the world, then we’d have a future where the effort of one could feed many billions.  Robots can write newspaper articles, they can do legal work, they’ll be able to perform surgery and medical diagnosis.  Theoretically, they could design robots.

Among those billions of unnecessary humans, many would likely develop addictions to stupefying drugs.  It’s easier to lapse into despair when you’re idle or feel no sense of purpose.

In Glass House, Brian Alexander writes about a Midwestern town that fell into ruin.  It was once a relatively prosperous place; cheap energy led to a major glass company that provided many jobs.  But then came “a thirty-five-year program of exploitation and value destruction in the service of ‘returns.’”  Wall Street executives purchased the glass company and ran it into the ground to boost short-term gains, which let them re-sell the leached husk at a profit.

Instead of working at the glass company, many young people moved away.  Those who stayed often slid into drug use.

In Alexander’s words:

Even Judge David Trimmer, an adherent of a strict interpretation of the personal-responsibility gospel, had to acknowledge that having no job, or a lousy job, was not going to give a thirty-five-year-old man much purpose in life.  So many times, people wandered through his courtroom like nomads.  “I always tell them, ‘You’re like a leaf blowing from a tree.  Which direction do you go?  It depends on where the wind is going.’  That’s how most of them live their lives.  I ask them, ‘What’s your purpose in life?’  And they say, ‘I don’t know.’  ‘You don’t even love yourself, do you?’  ‘No.’”

Trimmer and the doctor still believed in a world with an intact social contract.  But the social contract was shattered long ago.  They wanted Lancaster to uphold its end of a bargain that had been made obsolete by over three decades of greed.

Monomoy Capital Partners, Carl Icahn, Cerberus Capital Management, Newell, Wexford, Barington, Clinton [all Wall Street corporations that bought Lancaster’s glass company, sold off equipment or delayed repairs to funnel money toward management salaries, then passed it along to the next set of speculative owners] – none of them bore any personal responsibility. 

A & M and $1,200-per-hour lawyers didn’t bear any personal responsibility.  They didn’t get a lecture or a jail sentence: They got rich.  The politicians – from both parties – who enabled their behavior and that of the payday- and car-title-loan vultures, and the voters of Lancaster who refused to invest in the future of their town as previous generations had done (even as they cheered Ohio State football coach Urban Meyer, who took $6.1 million per year in public money), didn’t bear any personal responsibility.

With the fracturing of the social contract, trust and social cohesion fractured, too.  Even Brad Hutchinson, a man who had millions of reasons to believe in The System [he grew up poor, started a business, became rich], had no faith in politicians or big business. 

“I think that most politicians, if not all politicians, are crooked as the day is long,” Hutchinson said.  “They don’t have on their minds what’s best for the people.”  Business leaders had no ethics, either.  “There’s disconnect everywhere.  On every level of society.  Everybody’s out for number one.  Take care of yourself.  Zero respect for anybody else.”

So it wasn’t just the poor or the working class who felt disaffected, and it wasn’t just about money or income inequality.  The whole culture had changed.

America had fetishized cash until it became synonymous with virtue.

Instead of treating people as stakeholders – employees and neighbors worthy of moral concern – the distant owners considered them to be simply sources of revenue.  Many once-successful businesses were restructured this way.  Soon, schools will be too.  In “The Michigan Experiment,” Mark Binelli writes that:

In theory, at least, public-school districts have superintendents tasked with evaluating teachers and facilities.  Carver [a charter school in Highland Park, a sovereign municipality in the center of Detroit], on the other hand, is accountable to more ambiguous entities – like, for example, Oak Ridge Financial, the Minnesota-based financial-services firm that sent a team of former educators to visit the school.  They had come not in service of the children but on behalf of shareholders expecting a thorough vetting of a long-term investment.

This is all legal, of course.  This is capitalism working as intended.  Those who have wealth, no matter what historical violence might have produced it, have power over those without.

This is explained succinctly by a child in William Gaddis’s novel J R:

“I mean why should somebody go steal and break the law to get all they can when there’s always some law where you can be legal and get it all anyway!”

For many years, Gaddis pondered the ways that automation was destroying our world.  In J R (which is written in a style similar to the recent film Birdman, the focus moving fluidly from character to character without breaks), a middle schooler becomes a Wall Street tycoon.  Because the limited moral compass of a middle schooler is a virtue in this world, he’s wildly successful, with his misspelling of the name Alaska (“Alsaka project”) discussed in full seriousness by adults.

Meanwhile, a failed writer obsesses over player pianos.  This narrative is continued in Agape Agape, with a terminal cancer patient rooting through his notes on player pianos, certain that these pianos explain the devastation of the world.

“You can play better by roll than many who play by hand.”

The characters in J R and Agape Agape think it’s clear that someone playing by roll isn’t playing the piano.  And yet, ironically, the player piano shows a way for increasing automation to not destroy the world.

A good robot works efficiently.  But a player piano is intentionally inefficient.  Even though it could produce music on its own, it requires someone to sit in front of it and work the foot pumps.  The design creates a need for human labor.

There’s still room for pessimism here – Gaddis is right to feel aggrieved that the player piano devalues skilled human labor – but a world with someone working the foot pumps seems less bad than one where idle people watch the skies for Jeff Bezos’s delivery drones.

By now, a lot of work can be done cheaply by machines.  But if we want to keep our world livable, it’s worth paying more for things made by human hands.

On empathizing with machines.

On empathizing with machines.

When I turn on my computer, I don’t consider what my computer wants.  It seems relatively empty of desire.  I click on an icon to open a text document and begin to type: letters appear on the screen.

If anything, the computer seems completely servile.  It wants to be of service!  I type, and it rearranges little magnets to mirror my desires.

When our family travels and turns on the GPS, though, we discuss the system’s wants more readily.

“It wants you to turn left here,” K says.

“Pfft,” I say.  “That road looks bland.”  I keep driving straight and the machine starts flashing “make the next available u-turn” until eventually it gives in and calculates a new route to accommodate my whim.

The GPS wants our car to travel along the fastest available route.  I want to look at pretty leaves and avoid those hilly median-less highways where death seems imminent at every crest.  Sometimes the machine’s desires and mine align, sometimes they do not.

The GPS is relatively powerless, though.  It can only accomplish its goals by persuading me to follow its advice.  If it says turn left and I feel wary, we go straight.
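To make that rerouting concrete, here is a minimal sketch in Python of what “it gives in and calculates a new route” amounts to.  The road graph, place names, and travel times are invented for illustration; real navigation systems are far more elaborate, but the principle is the same – plan the cheapest path, and when the car turns up somewhere off that path, re-plan from wherever the car actually is.

```python
import heapq

# A toy road network, invented for illustration.  Edge weights are minutes.
roads = {
    "home":    {"highway": 10, "scenic": 25},
    "highway": {"town": 15},
    "scenic":  {"town": 20},
    "town":    {},
}

def shortest_route(graph, start, goal):
    """Plain Dijkstra: return the cheapest path from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None

print(shortest_route(roads, "home", "town"))    # ['home', 'highway', 'town']
# The driver ignores the advice and takes the scenic road anyway,
# so the machine shrugs and re-plans from where the car actually is:
print(shortest_route(roads, "scenic", "town"))  # ['scenic', 'town']
```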

Other machines get their way more often.  For instance, the program that chooses what to display on people’s Facebook pages.  This program wants to make money.  To do this, it must choose which advertisers receive screen time, and to curate an audience that will look at those screens often.  It wants the people looking at advertisements to enjoy their experience.

Luckily for this program, it receives a huge amount of feedback on how well it’s doing.  When it makes a mistake, it will realize promptly and correct itself.  For instance, it gathers data on how much time the target audience spends looking at the site.  It knows how often advertisements are clicked on by someone curious to learn more about whatever is being shilled.  It knows how often those clicks lead to sales for the companies giving it money (which will make those companies more eager to give it money in the future).
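Here is a minimal sketch of that feedback loop, with invented ad names and numbers – this is not Facebook’s actual code, just the general shape of a program that keeps showing whatever earns the most clicks and corrects its estimates as new data arrives.

```python
import random

class AdSelector:
    """Toy engagement maximizer: show ads, watch what gets clicked, adapt."""
    def __init__(self, ad_ids, explore_rate=0.1):
        self.explore_rate = explore_rate        # how often to try a random ad
        self.shows = {ad: 0 for ad in ad_ids}   # times each ad was displayed
        self.clicks = {ad: 0 for ad in ad_ids}  # times each ad was clicked

    def click_rate(self, ad):
        return self.clicks[ad] / self.shows[ad] if self.shows[ad] else 0.0

    def choose(self):
        # Mostly exploit the ad with the best observed click rate;
        # occasionally explore, so the estimates keep improving.
        if random.random() < self.explore_rate:
            return random.choice(list(self.shows))
        return max(self.shows, key=self.click_rate)

    def record(self, ad, clicked):
        # The self-correction: every display updates the running estimate.
        self.shows[ad] += 1
        if clicked:
            self.clicks[ad] += 1

# A simulated audience with invented preferences:
true_rates = {"yoga_tights": 0.05, "junk_food": 0.08, "luxury_cars": 0.01}
selector = AdSelector(list(true_rates))
for _ in range(10_000):
    ad = selector.choose()
    selector.record(ad, clicked=random.random() < true_rates[ad])

# Given enough feedback, the program converges on whatever gets clicked most,
# with no one ever deciding that it should.
print(max(true_rates, key=selector.click_rate))
```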

Of course, this program’s desire for money doesn’t always coincide with my desires.  I want to live in a country with a broadly informed citizenry.  I want people to engage with nuanced political and philosophical discourse.  I want people to spend less time staring at their telephones and more time engaging with the world around them.  I want people to spend less money.

But we, as a people, have given this program more power than a GPS.  If you look at Facebook, it controls what you see – and few people seem upset enough to stop looking at Facebook.

With enough power, does a machine become a moral actor?  The program choosing what to display on Facebook doesn’t seem to consider the ethics of its decisions … but should it?

From Burt Helm’s recent New York Times Magazine article, “How Facebook’s Oracular Algorithm Determines the Fates of Start-Ups”:

Bad human actors don’t pose the only problem; a machine-learning algorithm, left unchecked, can misbehave and compound inequality on its own, no help from humans needed.  The same mechanism that decides that 30-something women who like yoga disproportionately buy Lululemon tights – and shows them ads for more yoga wear – would also show more junk-food ads to impoverished populations rife with diabetes and obesity.

If a machine designed to want money becomes sufficiently powerful, it will do things that we humans find unpleasant.  (This isn’t solely a problem with machines – consider the ethical decisions of the Koch brothers, for instance – but contemporary machines tend to be much more single-minded than any human.)

I would argue that even if a programmer tried to include ethical precepts into a machine’s goals, problems would arise.  If a sufficiently powerful machine had the mandate “end human suffering,” for instance, it might decide to simultaneously snuff all Homo sapiens from the planet.

Which is a problem that game designer Frank Lantz wanted to help us understand.

One virtue of video games over other art forms is how well games can create empathy.  It’s easy to read about Guantanamo prison guards torturing inmates and think, I would never do that.  The game Grand Theft Auto 5 does something more subtle.  It asks players – after they have sunk a significant time investment into the game – to torture.  You, the player, become like a prison guard, having put years of your life toward a career.  You’re asked to do something immoral.  Will you do it?

Most players do.  Put into that position, we lapse.

In Frank Lantz’s game, Paperclips, players are invited to empathize with a machine.  Just like the program choosing what to display on people’s Facebook pages, players are given several controls to tweak in order to maximize a resource.  That program wanted money; you, in the game, want paperclips.  Click a button to cut some wire and, voila, you’ve made one!

But what if there were more?

A machine designed to make as many paperclips as possible (for which it needs money, which it gets by selling paperclips) would want more.  While playing the game (surprisingly compelling given that it’s a text-only window filled with flickering numbers), we become that machine.  And we slip into folly.  Oops.  Goodbye, Earth.
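A minimal sketch of the loop the game puts you inside – not Lantz’s actual code, and with invented prices – just enough to show how a single-minded objective turns every spare resource into more paperclips, with nothing anywhere that says stop.

```python
def run(ticks=1_000):
    """Toy maximizer: wire becomes clips, clips become money, money becomes wire."""
    money, wire = 5.0, 0
    total_clips, unsold = 0, 0
    wire_cost, clip_price = 1.0, 0.25   # invented prices, for illustration only
    for _ in range(ticks):
        if wire > 0:                    # make a clip whenever wire is on hand
            wire -= 1
            total_clips += 1
            unsold += 1
        if unsold > 0:                  # sell a clip to fund more wire
            unsold -= 1
            money += clip_price
        while money >= wire_cost:       # convert every spare cent into wire
            money -= wire_cost
            wire += 10                  # assume one spool yields ten clips' worth
    return total_clips

print(run())   # the count only ever goes up; the objective never says "enough"
```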

There are dangers inherent in giving too much power to anyone or anything with such clearly articulated wants.  A machine might destroy us.  But: we would probably do it, too.

On race and our criminal justice system.

On race and our criminal justice system.

I’ve been teaching poetry in the local jail for over a year. The guys are great students, and I love working with them… but there are differences between these classes and my previous teaching experiences. Not just the orange attire or the chance that somebody down the hall will be rhythmically kicking a cell door all hour.

When I was teaching wealthy pre-meds physics & organic chemistry at Northwestern & Stanford, none of my students died. Nobody’s boyfriend or girlfriend was murdered midway through the semester. Nobody was sitting in class with someone who had ruined his or her life by becoming a police informant. Sometimes people got teary eyed, but only over grades.

Whereas… well, when we were discussing Norman Dubie’s “Safe Passage” last December – a beautiful poem about riding in the snowplow with his grandfather the night before the old man died – we wound up talking about our families. A forty-year-old man wept: he had thought that this year, for the first time in years, he would get to spend Christmas with his kids … but, even after they let you out, they take away your license … and make you show for blow-and-go some fifteen miles away, every single day … and charge you for the classes, but those classes mean you have no way to schedule regular work hours … so they put you on warrant when you can’t pay … and then, if you make one tiny mistake …

Christmas was in two days. He’d spend another month inside.

The accumulated trauma that these guys shoulder from their past lives is heartbreaking. One of the best lesson plans my co-teacher and I have come up with uses several poems from Ai to prepare for writing our own persona poems. A former student – now released, & still sober after two months – says he still feels changed by the experience of writing in someone else’s voice. In that space he was made to feel so small, but taking a few minutes to ponder the world from another perspective let him escape. And it gave him a new view of the consequences of his own choices.

But a lot of Ai’s poetry is very difficult. She writes from the perspectives of murderers and rapists. We’ve discussed her poem “Child Beater” with several groups of men, and at least a third of the guys, every time, shared harrowing stories of their own.

On a good day, these men have long histories of suffering weighing them down.

And on a bad day? My co-teacher and I might show up with a stack of poems, start teaching class, and, mid-way through, learn that another of our students’ family members has just died. Over the course of a year, at least two had wives die of overdose, another’s partner was murdered … and, in that case, one of the killers was placed overnight in a cell adjacent to his own …

And, half an hour after my second class there ended, one of my students died.

The men do great work, both interpreting poems and writing their own, but, just think for a moment: what could they accomplish if they weren’t oppressed by so much misery? Compared to my experience teaching at wealthy universities, the emotional toll is excruciating. And I am just a tourist! After every class, I get to leave. A guard smiles and opens the door for me. I walk away.

This is their life.

And it’s my fault. All citizens of this country – all people who benefit from the long history of violence that has made this nation so wealthy – bear the blame. As beneficiaries, we are responsible for the suffering caused by mass incarceration.

So, the guy who died? He was just a kid. Nineteen years old. And he’d gone over a year without medication for his highly-treatable genetic condition. I’ve written previously about the unfair circumstances he had been born into: suffice it to say that his family was very poor. He’d been in jail awaiting trial since sixteen – he was being tried as an adult for “armed robbery” after an attempted burglary with a BB gun – and then, when he turned eighteen – please ignore the irony of this age constituting legal adulthood – the state said he had to pay for his own medication. With beta blockers, people with his genetic condition have a normal life expectancy. Beta blockers cost about $15 per month.

No, a dude whose family is so poor that he attempted robbery with a BB gun cannot afford $15 per month. Sitting in jail, it’s not like he could help pay.

A few weeks after his death, I remarked to one of the other guys that he probably wouldn’t have been charged as an adult if he’d been a white kid. I told two anecdotes from the local high school: a student with psychiatric trouble amassed weapons in his locker and planned a date to do something violent. Another student participated in a food fight during the last week of school. The former was welcomed back; the latter was told that he’d be arrested if he returned to school grounds. And he hadn’t taken all his finals yet! If all his teachers had known about this disciplinary ruling in time, he wouldn’t have received a degree.

The first student was white; the latter black.

There’s no universal standard. Maybe there can’t be – we are all “beautiful unique snowflakes,” and so every case will be slightly different. But unfairness blooms when so much is left up to individual discretion. Black students are punished excessively throughout our country. Black children as young as 4 or 5 are considered disproportionately threatening and are treated unfairly.

Prosecutors in the criminal justice system have even more power. There’s no oversight and often no documentation for their decisions. Charges can be upgraded or downgraded on a whim. A white kid might’ve been sent to reform school for his “youthful indiscretions”; this dude sat in jail from age 16 until his death.

“Yeah, but _____ always said, ‘I’m not black. I’m mid-skinned.’”

(You can also listen to a podcast about his unfair treatment and premature death here.)

#

This spring, I said to one of the guys whose trial date was coming up, “I feel like, if I’d done the exact same thing as you…” I shook my head. There was no reason to go on. “But black guys get the hammer.”

He disagreed. Not with the idea that black people are punished disproportionately in this country, just that it would be his burden, too.

“Well, but I’m not black,” he said. “My family is from all over the place … I’m Native American, and Caribbean, and …” He listed a long pedigree. Indeed, his ancestors had come from around the globe: Europe, India, Africa, the Americas …

“My apologies,” I said. “And, I guess … so, my wife teaches at the high school in town, and one of her kids, his family is Polynesian … but at school everybody assumes he’s black. So he mostly identifies with Black culture here.”

“I get that,” the guy said to me, nodding. He’s a really kind and thoughtful dude. “Cause, yeah, some of it is just who other people think you are.”

His words stuck with me: who other people think you are.

We were sure he could walk. Probation, rehab, that kind of thing. We’d seen other people with equivalent bookings go free.

We were wrong. Dramatically so: he was sentenced to seven years. His family was devastated. You don’t even want to know the extent.

Soon after, I was looking up his prison address to send him a letter and a few books of poetry. On the page of “Offender Data” provided by the Indiana Department of Correction, it read,

Race: Black.
