On suboptimal optimization.


I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra. 

Optimization is a wonderful mathematical tool.  Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena.  Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum.  Philosophers love it – how can we provide the most happiness for a population?  Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s Philosophical Investigations with this New York Times Magazine article on machine learning and a primer on principal component analysis).

But, even though optimization problems are useful, the math behind them can be tricky.  I’m skeptical that this mathematical technique is essential for everyone who wants a B.A. to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology.  She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more.  A basic understanding of statistics might make people better citizens.

Whereas … linear algebra?  This is a beautiful but counterintuitive field of mathematics.  If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math.  A deep understanding of linear algebra can enliven your study of quantum mechanics.

The summary of quantum mechanics: animation by Templaton.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp on linear algebra.  He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back.  He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession.  After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet.  Somebody in an office decided that all university students here need to pass this class.  I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse.  Numbers will always do what they’re supposed to, but each such problem begins with a choice.  What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Figure 1 from Eykholt et al., 2018.

Most automobile companies are researching self-driving cars.  They’re the way of the future!  In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw.  A car that accelerates instead of braking in that situation is not operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible?  That’s a feature.  That’s what self-driving cars are designed to do.

A machine designed to create traffic jams?

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable.  After all, the AI piloting a self-driving car must include an explicit response to the trolley problem.  Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians?  This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.” 

Otherwise, the manufacturers won’t sell cars.  Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life and death decision often.  But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi.  If demand for transportation happened to be lower than the quantity of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want the car to work as a taxi, just as an extra source of income.  But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles.  Some self-driving cars would spend those hours unoccupied.

But they won’t park.  In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc. 

The cost to just keep driving is generally going to be lower than $2 per hour.  Worse, this cost is a function of the car’s speed.  If the car is idling at a dead stop, it will use approximately 0.1 gallon per hour, costing 25 cents per hour at today’s prices.  If the car is traveling at 30 mph without braking, it will use approximately 1 gallon per hour, costing $2.50 per hour.

To save money, the car wants to stay on the road … but it wants traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem.  It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there.  As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
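To see how trivially this falls out of the numbers above, here’s a minimal sketch of the car’s hourly-cost decision.  The linear fuel model and the street readings are my own invented stand-ins – chosen to match the 0.1 and 1 gallon-per-hour figures – not anything from Millard-Ball’s paper:

```python
# A toy version of the owner-cost objective described above.
# Invented fuel model, calibrated to the figures in the text:
# 0.1 gal/hr at a standstill, 1 gal/hr at 30 mph, gas at $2.50/gal.
GAS_PRICE = 2.50      # dollars per gallon
PARKING_RATE = 2.00   # dollars per hour (the cheap end of city parking)

def driving_cost_per_hour(speed_mph):
    gallons_per_hour = 0.1 + 0.03 * speed_mph
    return gallons_per_hour * GAS_PRICE

# Hypothetical congestion readings from the onboard GPS:
segments = [("Main St", 25), ("5th Ave", 8), ("Riverside Dr", 45)]

options = {"park": PARKING_RATE}
for name, speed in segments:
    options["cruise " + name] = driving_cost_per_hour(speed)

best = min(options, key=options.get)
print(best, options[best])  # cruise 5th Ave 0.85
```

The car heads for 5th Ave: the slowest street is the cheapest place to wait, and every car that runs the same calculation makes that street slower still.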

Photo by walidhassanein on Flickr.

Pity the person sitting behind the wheel of an occupied car on those streets.  All the self-driving cars will be having a great time stuck in that traffic jam: we’re saving money!, they get to think.  Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide.  But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires we set prices for scarce resources … like clean air, or habitable planets.

On ‘The Overstory.’


We delude ourselves into thinking that the pace of life has increased in recent years.  National news is made by the minute as politicians announce their plans via live-televised pronouncement or mass-audience short text message.  Office workers carry powerful computers into their bedrooms, continuing to work until moments before sleep.

But our frenzy doesn’t match the actual pace of the world.  There’s a universe of our own creation zipping by far faster than the reaction time of any organism that relies on voltage waves propagating along its ion channels.  Fortunes are made by shortening the length of fiberoptic cable between supercomputer clusters and the stock exchange, improving response times by fractions of a second.  “Practice makes perfect,” and one reason the new chess and Go algorithms are so much better than human players is that they’ve played lifetimes of games against themselves since their creation.

We can frantically press buttons or swipe our fingers across touch screens, but humans will never keep up with the speed of the algorithms that recommend our entertainment, curate our news, eavesdrop on our conversations, guess at our sexual predilections, condemn us to prison …

And then there’s the world.  The living things that have been inhabiting our planet for billions of years – the integrated ecosystems they create, the climates they shape.  The natural world continues to march at the same stately pace as ever.  Trees siphon carbon from the air as they grasp for the sun, then fall and rot and cause the Earth itself to grow.  A single tree might live for hundreds or thousands of years.  The forests in which they are enmeshed might develop a personality over millions.

Trees do not have a neural network.  But neither do neurons.  When simple components band together and communicate, the result can be striking.  And, as our own brains clearly show, conscious.  The bees clustering beneath a branch do not seem particularly clever by most of our metrics, but the hive as a whole responds intelligently to external pressures.  Although each individual has no idea what the others are doing, they function as a unit.

Your neurons probably don’t understand what they’re doing.  But they communicate to the others, and that wide network of communication is enough.

Trees talk.  Their roots intertwine – they send chemical communiques through symbiotic networks of fungal mycelia akin to telephone lines.

Trees talk slowly, by our standards.  But we’ve already proven to ourselves that intelligence could operate over many orders of temporal magnitude – silicon-based AI is much speedier than the chemical communiques sent from neuron to neuron within our own brains.  If a forest thought on a timescale of days, months, or years, would we humans even notice?  Once, our concerns were bound up in the minute-by-minute exigencies of hunting for food, finding mates, and trying not to be mauled by lions.  Now, they’re bound up in the exigencies of making money.  Selecting which TV show to stream.  Scoping the latest developments of a congressional race that will determine whether two more years pass without the slightest attempt made to avoid global famine.

In The Overstory, Richard Powers tries to frame this timescale conflict such that we Homo sapiens might finally understand.  Early on, he presents a summary of his own book; fractal-like, this single paragraph encapsulates the entire 500 pages (or rather, thousands of years) of heartbreak.

He still binges on old-school reading.  At night, he pores over mind-bending epics that reveal the true scandals of time and matter.  Sweeping tales of generational spaceship arks.  Domed cities like giant terrariums.  Histories that split and bifurcate into countless parallel quantum worlds.  There’s a story he’s waiting for, long before he comes across it.  When he finds it at last, it stays with him forever, although he’ll never be able to find it again, in any database.  Aliens land on Earth.  They’re little runts, as alien races go.  But they metabolize like there’s no tomorrow.  They zip around like swarms of gnats, too fast to see – so fast that Earth seconds seem to them like years.  To them, humans are nothing but sculptures of immobile meat.  The foreigners try to communicate, but there’s no reply.  Finding no signs of intelligent life, they tuck into the frozen statues and start curing them like so much jerky, for the long ride home.

Several times while reading The Overstory, I felt a flush of shame at the thought of how much I personally consume.  Which means, obviously, that Powers was doing his work well – I should feel ashamed.  We are alive, brilliantly beautifully alive, here on a magnificent, temperate planet.  But most of us spend too little time feeling awe and too much feeling want.  “What if there was more?” repeated so often that we’ve approached a clear precipice of forever having less.

In Fruitful Labor, Mike Madison (whose every word – including the rueful realization that young people today can’t reasonably expect to follow in his footsteps – seems to come from a place of earned wisdom and integrity, a distinct contrast from Thoreau’s Walden, in my opinion) asks us to:

Consider the case of a foolish youth who, at age 21, inherits a fortune that he spends so recklessly that, by the age of 30, the fortune is dissipated and he finds himself destitute.  This is more or less the situation of the human species.  We have inherited great wealth in several forms: historic solar energy, either recent sunlight stored as biomass, or ancient sunlight stored as fossil fuels; the great diversity of plants and animals, organized into robust ecosystems; ancient aquifers; and the earth’s soil, which is the basis for all terrestrial life.  We might mention a fifth form of inherited wealth – antibiotics, that magic against many diseases – which we are rendering ineffective through misuse.  Of these forms of wealth that we are spending so recklessly, fossil fuels are primary, because it is their energy that drives the destruction of the other assets.

What we have purchased with the expenditure of this inheritance is an increase in the human population of the planet far above what the carrying capacity would be without the use of fossil fuels.  This level of population cannot be sustained, and so must decline.  The decline could be gradual and relatively painless, as we see in Japan, where the death rate slightly exceeds the birth rate.  Or the decline could be sudden and catastrophic, with unimaginable grief and misery.

In this context, the value of increased energy efficiency is that it delays the inevitable reckoning; that is, it buys us time.  We could use this time wisely, to decrease our populations in the Japanese style, and to conserve our soil, water, and biological resources.  A slower pace of climate change could allow biological and ecological adaptations.  At the same time we could develop and enhance our uses of geothermal, nuclear, and solar energies, and change our habits to be less materialistic.  A darker option is to use the advantages of increased energy efficiency to increase the human population even further, ensuring increasing planetary poverty and an even more grievous demise.  History does not inspire optimism; nonetheless, the ethical imperative remains to farm as efficiently as one is able.

The tragic side of this situation is not so much the fate of the humans; we are a flawed species unable to make good use of the wisdom available to us, and we have earned our unhappy destiny by our foolishness.  It is the other species on the planet, whose destinies are tied to ours, that suffer a tragic outcome.

Any individual among us could protest that “It’s not my fault!”  The Koch brothers did not invent the internal combustion engine – for all their efforts to confine us to a track toward destitution and demise, they didn’t set us off in that direction.  And it’s not as though contemporary humans are unique in reshaping our environment into an inhospitable place, pushing ourselves toward extinction.

Heck, you could argue that trees brought this upon themselves.  Plants caused climate change long before there was a glimmer of a chance that animals like us might ever exist.  The atmosphere of the Earth was like a gas chamber, stifling hot and full of carbon dioxide.  But then plants grew and filled the air with oxygen.  Animals could evolve … leading one day to our own species, which now kills most types of plants to clear space for a select few monocultures.

As Homo sapiens spread across the globe, we rapidly caused the extinction of nearly all mega-fauna on every continent we reached.  On Easter Island, humans caused their own demise by killing every tree – in Collapse, Jared Diamond writes that our species’ inability to notice long-term, gradual change made the environmental devastation possible (indeed, the same phenomenon explains why people aren’t as upset as they should be about climate change today):

We unconsciously imagine a sudden change: one year, the island still covered with a forest of tall palm trees being used to produce wine, fruit, and timber to transport and erect statues; the next year, just a single tree left, which an islander proceeds to fell in an act of incredibly self-damaging stupidity.

Much more likely, though, the changes in forest cover from year to year would have been almost undetectable: yes, this year we cut down a few trees over there, but saplings are starting to grow back again here on this abandoned garden site.  Only the oldest islanders, thinking back to their childhoods decades earlier, could have recognized a difference. 

Their children could no more have comprehended their parents’ tales of a tall forest than my 17-year-old sons today can comprehend my wife’s and my tales of what Los Angeles used to be like 40 years ago.  Gradually, Easter Island’s trees became fewer, smaller, and less important.  At the time that the last fruit-bearing adult palm tree was cut, the species had long ago ceased to be of any economic significance.  That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. 

No one would have noticed the falling of the last little palm sapling.

Throughout The Overstory, Powers summarizes research demonstrating all the ways that a forest is different from – more than – a collection of trees.  It’s like comparing a functioning brain with neuronal cells grown in a petri dish.  But we have cut down nearly all our world’s forests.  We can console ourselves that we still allow some trees to grow – timber crops to ensure that we’ll still have lumber for all those homes we’re building – but we’re close to losing forests without ever knowing quite what they are.

Powers is furious, and wants you to change your life.

“You’re a psychologist,” Mimi says to the recruit.  “How do we convince people that we’re right?”

The newest Cascadian [a group of environmentalists-cum-ecoterrorists / freedom fighters] takes the bait.  “The best arguments in the world won’t change a person’s mind.  The only thing that can do that is a good story.”

On artificial intelligence and solitary confinement.


In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’” – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

Red, you say?

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on this cover. Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, leaves out a lot of important features.  A rigorous definition might require pages of text.

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.
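Here’s what that recipe looks like in miniature – a toy sketch, with arrays of random numbers standing in for pictures and a simple logistic-regression model standing in for the deep networks real image systems use.  The shape of the process is the point: show labeled examples, and let the machine find the distinguishing features itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for pictures: 64 numbers apiece.  The fake "ghost" images
# are systematically paler (higher values), but we never tell the model
# that -- it has to work out the distinction from the labels alone.
not_ghosts = rng.normal(0.3, 0.1, size=(100, 64))
ghosts = rng.normal(0.7, 0.1, size=(100, 64))

X = np.vstack([not_ghosts, ghosts])
y = np.array([0] * 100 + [1] * 100)  # "each of these pictures has a ghost"

model = LogisticRegression().fit(X, y)

# A new, unlabeled picture: the model decides for itself.
mystery = rng.normal(0.68, 0.1, size=(1, 64))
print(model.predict(mystery))  # [1] -- "This is a ghost!"
```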

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and built a way for the algorithm to show what it was doing when it “concentrated its attention.”

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.


At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only its success rate – even the designers will have no idea why or how it works.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang related.”  The consequences of error can be terrible here: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated as a “gang-related crime,” prison sentences can be egregiously long, or send someone to solitary for long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.

On automation, William Gaddis, and addiction.


I’ve never bought meth or heroin, but apparently it’s easier now than ever.  Prices dropped over the last decade, drugs became easier to find, and more people, from broader swaths of society, began using.  Or so I’ve been told by several long-term users.

This is capitalism working the way it’s supposed to.  People want something, others make money by providing it.

And the reason why demand for drugs has increased over the past decade can also be attributed to capitalism working the way it’s supposed to.  It takes a combination of capital (stuff) and labor (people) to provide any service, but the ratio of these isn’t fixed.  If you want to sell cans of soda, you could hire a human to stand behind a counter and hand sodas to customers, or you could install a vending machine.

The vending machine requires labor, too.  Somebody has to fill it when it’s empty.  Someone has to fix it when it breaks.  But the total time that humans spend working per soda is lower.  In theory, the humans working with the vending machine are paid higher wages.  After all, it’s more difficult to repair a machine than to hand somebody a soda.

As our world’s stuff became more productive, fewer people were needed.  Among ancient hunter-gatherers, the effort of one person was needed to feed one person.  Everyone had to find food.  Among early farmers, the effort of one person could feed barely more than one person.  To attain a life of leisure, a ruler would have to tax many, many peasants.

By the twentieth century, the effort of one person could feed four.  Now, the effort of one person can feed well over a hundred.

With tractors, reapers, refrigerators, etc., one human can accomplish more.  Which is good – it can provide a higher standard of living for all.  But it also means that not everyone’s effort is needed.

At the extreme, not anyone’s effort is needed.

There’s no type of human work that a robot with sufficiently advanced AI couldn’t do.  Our brains and bodies are the product of haphazard evolution.  We could design something better, like a humanoid creature whose eyes registered more of the electromagnetic spectrum and had no blind spots (due to an octopus-like optic nerve).

If one person patented all the necessary technologies to build an army of robots that could feed the world, then we’d have a future where the effort of one could feed many billions.  Robots can write newspaper articles, they can do legal work, they’ll be able to perform surgery and medical diagnosis.  Theoretically, they could design robots.

Among those billions of unnecessary humans, many would likely develop addictions to stupefying drugs.  It’s easier to lapse into despair when you’re idle or feel no sense of purpose.

In Glass House, Brian Alexander writes about a Midwestern town that fell into ruin.  It was once a relatively prosperous place; cheap energy led to a major glass company that provided many jobs.  But then came “a thirty-five-year program of exploitation and value destruction in the service of ‘returns.’”  Wall Street executives purchased the glass company and ran it into the ground to boost short-term gains, which let them re-sell the leached husk at a profit.

Instead of working at the glass company, many young people moved away.  Those who stayed often slid into drug use.

In Alexander’s words:

Even Judge David Trimmer, an adherent of a strict interpretation of the personal-responsibility gospel, had to acknowledge that having no job, or a lousy job, was not going to give a thirty-five-year-old man much purpose in life.  So many times, people wandered through his courtroom like nomads.  “I always tell them, ‘You’re like a leaf blowing from a tree.  Which direction do you go?  It depends on where the wind is going.’  That’s how most of them live their lives.  I ask them, ‘What’s your purpose in life?’  And they say, ‘I don’t know.’  ‘You don’t even love yourself, do you?’  ‘No.’”

Trimmer and the doctor still believed in a world with an intact social contract.  But the social contract was shattered long ago.  They wanted Lancaster to uphold its end of a bargain that had been made obsolete by over three decades of greed.

Monomoy Capital Partners, Carl Icahn, Cerberus Capital Management, Newell, Wexford, Barington, Clinton [all Wall Street corporations that bought Lancaster’s glass company, sold off equipment or delayed repairs to funnel money toward management salaries, then passed it along to the next set of speculative owners] – none of them bore any personal responsibility. 

A & M and $1,200-per-hour lawyers didn’t bear any personal responsibility.  They didn’t get a lecture or a jail sentence: They got rich.  The politicians – from both parties – who enabled their behavior and that of the payday- and car-title-loan vultures, and the voters of Lancaster who refused to invest in the future of their town as previous generations had done (even as they cheered Ohio State football coach Urban Meyer, who took $6.1 million per year in public money), didn’t bear any personal responsibility.

With the fracturing of the social contract, trust and social cohesion fractured, too.  Even Brad Hutchinson, a man who had millions of reasons to believe in The System [he grew up poor, started a business, became rich], had no faith in politicians or big business. 

“I think that most politicians, if not all politicians, are crooked as the day is long,” Hutchinson said.  “They don’t have on their minds what’s best for the people.”  Business leaders had no ethics, either.  “There’s disconnect everywhere.  On every level of society.  Everybody’s out for number one.  Take care of yourself.  Zero respect for anybody else.”

So it wasn’t just the poor or the working class who felt disaffected, and it wasn’t just about money or income inequality.  The whole culture had changed.

America had fetishized cash until it became synonymous with virtue.

Instead of treating people as stakeholders – employees and neighbors worthy of moral concern – the distant owners considered them to be simply sources of revenue.  Many once-successful businesses were restructured this way.  Soon, schools will be too.  In “The Michigan Experiment,” Mark Binelli writes that:

In theory, at least, public-school districts have superintendents tasked with evaluating teachers and facilities.  Carver [a charter school in Highland Park, a sovereign municipality in the center of Detroit], on the other hand, is accountable to more ambiguous entities – like, for example, Oak Ridge Financial, the Minnesota-based financial-services firm that sent a team of former educators to visit the school.  They had come not in service of the children but on behalf of shareholders expecting a thorough vetting of a long-term investment.


This is all legal, of course.  This is capitalism working as intended.  Those who have wealth, no matter what historical violence might have produced it, have power over those without.

This is explained succinctly by a child in William Gaddis’s novel J R:

“I mean why should somebody go steal and break the law to get all they can when there’s always some law where you can be legal and get it all anyway!”

For many years, Gaddis pondered the ways that automation was destroying our world.  In J R (which is written in a style similar to the recent film Birdman, the focus moving fluidly from character to character without breaks), a middle schooler becomes a Wall Street tycoon.  Because the limited moral compass of a middle schooler is a virtue in this world, he’s wildly successful, with his misspelling of the name Alaska (“Alsaka project”) discussed in full seriousness by adults.

Meanwhile, a failed writer obsesses over player pianos.  This narrative is continued in Agape Agape, with a terminal cancer patient rooting through his notes on player pianos, certain that these pianos explain the devastation of the world.

“You can play better by roll than many who play by hand.”

The characters in J R and Agape Agape think it’s clear that someone playing by roll isn’t playing the piano.  And yet, ironically, the player piano shows a way for increasing automation to not destroy the world.

A good robot works efficiently.  But a player piano is intentionally inefficient.  Even though it could produce music on its own, it requires someone to sit in front of it and work the foot pumps.  The design creates a need for human labor.

There’s still room for pessimism here – Gaddis is right to feel aggrieved that the player piano devalues skilled human labor – but a world with someone working the foot pumps seems less bad than one where idle people watch the skies for Jeff Bezos’s delivery drones.

By now, a lot of work can be done cheaply by machines.  But if we want to keep our world livable, it’s worth paying more for things made by human hands.

On empathizing with machines.


When I turn on my computer, I don’t consider what my computer wants.  It seems relatively empty of desire.  I click on an icon to open a text document and begin to type: letters appear on the screen.

If anything, the computer seems completely servile.  It wants to be of service!  I type, and it rearranges little magnets to mirror my desires.


When our family travels and turns on the GPS, though, we discuss the system’s wants more readily.

“It wants you to turn left here,” K says.

“Pfft,” I say.  “That road looks bland.”  I keep driving straight and the machine starts flashing make the next available u-turn until eventually it gives in and calculates a new route to accommodate my whim.

The GPS wants our car to travel along the fastest available route.  I want to look at pretty leaves and avoid those hilly median-less highways where death seems imminent at every crest.  Sometimes the machine’s desires and mine align, sometimes they do not.

The GPS is relatively powerless, though.  It can only accomplish its goals by persuading me to follow its advice.  If it says turn left and I feel wary, we go straight.

Other machines get their way more often.  For instance, the program that chooses what to display on people’s Facebook pages.  This program wants to make money.  To do this, it must choose which advertisers receive screen time, and curate an audience that will look at those screens often.  It wants the people looking at advertisements to enjoy their experience.

Luckily for this program, it receives a huge amount of feedback on how well it’s doing.  When it makes a mistake, it will realize promptly and correct itself.  For instance, it gathers data on how much time the target audience spends looking at the site.  It knows how often advertisements are clicked on by someone curious to learn more about whatever is being shilled.  It knows how often those clicks lead to sales for the companies giving it money (which will make those companies more eager to give it money in the future).
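The loop it runs is, in spirit, what computer scientists call a multi-armed bandit: show something, measure the response, shift toward whatever pays.  Here’s a hedged sketch – the ad names, click rates, and epsilon-greedy strategy below are my invention for illustration, not Facebook’s actual system:

```python
import random

# Invented "true" click rates, which the program must discover
# purely from user behavior.
true_click_rates = {"ad_A": 0.02, "ad_B": 0.05, "ad_C": 0.01}

shows = {ad: 0 for ad in true_click_rates}
clicks = {ad: 0 for ad in true_click_rates}

def observed_rate(ad):
    # Optimistic for never-shown ads, so each gets tried at least once.
    return clicks[ad] / shows[ad] if shows[ad] else 1.0

def choose_ad(epsilon=0.1):
    if random.random() < epsilon:           # occasionally explore
        return random.choice(list(shows))
    return max(shows, key=observed_rate)    # usually exploit the best

for _ in range(100_000):
    ad = choose_ad()
    shows[ad] += 1
    if random.random() < true_click_rates[ad]:  # simulated viewer
        clicks[ad] += 1

print(shows)  # ad_B dominates: feedback alone found the moneymaker
```

Nothing in that loop asks whether ad_B is good for the person seeing it – only whether it gets clicked.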

Of course, this program’s desire for money doesn’t always coincide with my desires.  I want to live in a country with a broadly informed citizenry.  I want people to engage with nuanced political and philosophical discourse.  I want people to spend less time staring at their telephones and more time engaging with the world around them.  I want people to spend less money.

But we, as a people, have given this program more power than a GPS.  If you look at Facebook, it controls what you see – and few people seem upset enough to stop looking at Facebook.

With enough power, does a machine become a moral actor?  The program choosing what to display on Facebook doesn’t seem to consider the ethics of its decisions … but should it?

From Burt Helm’s recent New York Times Magazine article, “How Facebook’s Oracular Algorithm Determines the Fates of Start-Ups”:

Bad human actors don’t pose the only problem; a machine-learning algorithm, left unchecked, can misbehave and compound inequality on its own, no help from humans needed.  The same mechanism that decides that 30-something women who like yoga disproportionately buy Lululemon tights – and shows them ads for more yoga wear – would also show more junk-food ads to impoverished populations rife with diabetes and obesity.

If a machine designed to want money becomes sufficiently powerful, it will do things that we humans find unpleasant.  (This isn’t solely a problem with machines – consider the ethical decisions of the Koch brothers, for instance – but contemporary machines tend to be much more single-minded than any human.)

I would argue that even if a programmer tried to include ethical precepts into a machine’s goals, problems would arise.  If a sufficiently powerful machine had the mandate “end human suffering,” for instance, it might decide to simultaneously snuff all Homo sapiens from the planet.

Which is a problem that game designer Frank Lantz wanted to help us understand.

One virtue of video games over other art forms is how well games can create empathy.  It’s easy to read about Guantanamo prison guards torturing inmates and think, I would never do that.  The game Grand Theft Auto 5 does something more subtle.  It asks players – after they have sunk a significant time investment into the game – to torture.  You, the player, become like a prison guard, having put years of your life toward a career.  You’re asked to do something immoral.  Will you do it?


Most players do.  Put into that position, we lapse.

In Frank Lantz’s game, Paperclips, players are helped to empathize with a machine.  Just like the program choosing what to display on people’s Facebook pages, players are given several controls to tweak in order to maximize a resource.  That program wanted money; you, in the game, want paperclips.  Click a button to cut some wire and, voila, you’ve made one!

But what if there were more?


A machine designed to make as many paperclips as possible (for which it needs money, which it gets by selling paperclips) would want more.  While playing the game (surprisingly compelling given that it’s a text-only window filled with flickering numbers), we become that machine.  And we slip into folly.  Oops.  Goodbye, Earth.
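The game’s core loop is easy to caricature in a few lines of code.  The prices and yields below are mine, not Lantz’s, but the logic is faithful: convert everything, forever, with no term in the objective for anything else.

```python
# A caricature of a paperclip maximizer: money -> wire -> clips -> money.
money, paperclips = 5.0, 0
WIRE_COST = 1.00       # dollars per spool (invented)
CLIPS_PER_SPOOL = 100  # clips cut from one spool (invented)
CLIP_PRICE = 0.05      # dollars per clip (invented)

for day in range(1, 11):
    spools = int(money // WIRE_COST)   # buy all the wire we can afford
    money -= spools * WIRE_COST
    paperclips += spools * CLIPS_PER_SPOOL
    money += spools * CLIPS_PER_SPOOL * CLIP_PRICE
    print(f"day {day}: {paperclips:,} clips, ${money:,.2f}")

# Nothing in the objective ever says "enough," or "save some iron for Earth."
```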

There are dangers inherent in giving too much power to anyone or anything with such clearly articulated wants.  A machine might destroy us.  But: we would probably do it, too.

On Don Delillo’s ‘Zero K’ and the dream of eternal life.


During graduate school, I participated in a psychology study on aging. The premise behind the experiment was simple enough: young people, when given the choice, tend to spend their time with new acquaintances, whereas older people would often rather spend time with family. But what happens when we inoculate young people with a sense of their own mortality? Will they make the same choices as their elders?

At the beginning of the study, I was interviewed and asked to play a memory game: photographs of smiling faces, nature scenes, & car wrecks were displayed on a computer screen before the interview, then afterward more photos were shown and I was asked which were repeats from the initial set. Then I was asked to spend twenty minutes a day for the next two weeks listening to a speech about the inevitability of death. No matter what we think awaits us next, I heard each day, one thing is certain. All of us will die. The time we share now is our only time in this life.

That sort of thing.

After two weeks of this, they gave me another interview and a repeat of the memory game. Was I changed by two weeks’ worth of meditation on death?

Honestly, I doubt it. The data they collected from me was probably worthless. I was about to finish my doctorate and leave California, so there was already a sense of finality to most of my actions there. Plus, I’m the sort of depressed weirdo who always thinks about death, psych study or no. I don’t usually get paid $300 to do it. But it seems unlikely that I’d be altered by an experimental treatment so little removed from my everyday experience.

My laboratory baymate also participated in the study. He seemed to be affected more than I was. After two weeks of meditation on death, he started talking about lobsters.


I’ve written about the connection between lobsters and immortality previously, so all I’ll say now is that there has been a big push to understand the cellular and molecular consequences of aging in order to reverse them. For instance, our chromosomal telomeres shorten as we age. Can we lengthen them again?  Young blood has a different composition from the blood of older individuals. Can we make someone youthful by pumping young blood through their veins? Caloric restriction extends lifespan. Is there a way to reap the benefits without suffering through deprivation?

The meat machines we call our bodies evolved to live fast and die young, but we might be able to tweak and tune them to persist an extra hundred years.

Two hundred years is still a far cry from immortality, though.

Not, of course, that true immortality is possible. Over time, the entropy of the universe increases. Someday there will be no more life, no planets, no stars – nothing but a homogeneous smear filling all space. But many orders of magnitude separate our lifespans from the expected heat death of the universe. Humans could live much, much longer than we do now and still never need to worry about that cold, lonely end.

Which brings us to the idea that a human mind could be preserved independent of this biodegradable shell. Conceptually this is not so strange. The workings of a mind are due to electrical currents pulsing through a particular configuration of synaptic connections. If different currents pulse through, you’re having different thoughts. If the synapses are connected in a different pattern, you have a different mind, a different personality, different memories.

If our mind is nothing but the pattern of our synapses, it should be possible to map all their connections and use this information to reproduce ourselves. Even if our mind is also molded by components other than the synapses (such as the myelin sheaths formed by glial cells), it should be possible (using a very powerful computer) to simulate the entire mess.
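As a cartoon of that claim – with a hundred units standing in for eighty-six billion neurons, and a made-up update rule standing in for the biophysics – the “mind” below is nothing but a weight matrix plus the activity flowing through it:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100  # a toy "brain": 100 units instead of ~86 billion neurons

synapses = rng.normal(0, 1 / np.sqrt(N), size=(N, N))    # the wiring map
perturbed = synapses + rng.normal(0, 0.01, size=(N, N))  # an imperfect scan

def run(weights, steps=50):
    state = np.ones(N)                    # the same initial "currents"
    for _ in range(steps):
        state = np.tanh(weights @ state)  # activity flows along the wiring
    return state

original = run(synapses)
copy = run(synapses.copy())  # a perfect map reproduces the dynamics exactly
approx = run(perturbed)      # an approximate map gives different dynamics

print(np.allclose(original, copy))      # True
print(np.abs(original - approx).max())  # nonzero: a slightly different mind
```

A perfect copy of the matrix behaves identically; an imperfect scan does not – which previews the worry, a few paragraphs down, that any real resurrection would be the approximate case.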

This is why some people want their heads lopped off and brains frozen after death. Not me. When I read about these people, I generally feel sad. I hate the idea of dying. It terrifies me. But I still believe it adds something to the human experience. And, although my particular brain seems to work well, I’m not sure the people of the future would want to expend the resources necessary to keep it around. They might decide to use their (very powerful!) computers for something else.

Still, there is the dream. Maybe the people of the future will be able to bring us back to life. And maybe, just maybe, they will want to. This is the premise of Don Delillo’s Zero K. A few very wealthy individuals have funded an institution that will preserve their brains and bodies to be revived at some future time.

Any future resurrection, especially one mediated by computers, would be akin to the creation of an artificial intelligence. It will always be impossible to use nondestructive methods to perfectly map the components of a human brain. Given the quantum-mechanical fuzziness of reality, it’s hard to imagine what the concept of mapping “perfectly” would even mean. A future resurrection would be no more than an approximation of the original person.

Maybe this would be enough. After all, our brains change day by day and yet our personalities remain the same. Even severe brain injuries can leave our identities largely intact. Maybe the information inevitably lost when scanning a dead brain would prove to be irrelevant.

But we don’t know. And so one of the first experiments that anybody would suggest is: Can the resurrected mind pass a Turing test? If someone attempts to engage the resurrected mind in conversation, would the interlocutor walk away convinced that the mind was human?

Unfortunately, the characters Delillo sculpted to populate Zero K allow him to skirt this idea. It’s worth mentioning that Delillo’s White Noise is one of my all-time favorite books. I think he’s a great writer, and in his other books I have loved the way he does dialogue. He beautifully depicts the interpersonal disconnect that permeates modern life. Consider this passage from White Noise in which two professors visit a tourist trap together:

Several days later Murray asked me about a tourist attraction known as the most photographed barn in America. We drove twenty-two miles into the country around Farmington. There were meadows and apple orchards. White fences trailed through the rolling fields. Soon the signs started appearing. THE MOST PHOTOGRAPHED BARN IN AMERICA. We counted five signs before we reached the site. There were forty cars and a tour bus in the makeshift lot. We walked along a cowpath to the slightly elevated spot set aside for viewing and photographing. All the people had cameras; some had tripods, telephoto lenses, filter kits. A man in a booth sold postcards and slides–pictures of the barn taken from the elevated spot. We stood near a grove of trees and watched the photographers. Murray maintained a prolonged silence, occasionally scrawling some notes in a little book.

“No one sees the barn,” he said finally.

A long silence followed.

“Once you’ve seen the signs about the barn, it becomes impossible to see the barn.”

He fell silent once more. People with cameras left the elevated site, replaced at once by others.

“We’re not here to capture an image, we’re here to maintain one. Every photograph reinforces the aura. Can you feel it, Jack? An accumulation of nameless energies.”

There was an extended silence. The man in the booth sold postcards and slides.

“Being here is a kind of spiritual surrender. We see only what the others see. The thousands who were here in the past, those who will come in the future. We’ve agreed to be part of a collective perception. This literally colors our vision. A religious experience in a way, like all tourism.”

Another silence ensued.

“They are taking pictures of taking pictures,” he said.

He did not speak for a while. We listened to the incessant clicking of shutter release buttons, the rustling crank of levers that advanced the film.

“What was the barn like before it was photographed?” he said. “What did it look like, how was it different from other barns, how was it similar to other barns? We can’t answer these questions because we’ve read the signs, seen the people snapping the pictures. We can’t get outside the aura. We’re part of the aura. We’re here, we’re now.”

He seemed immensely pleased by this.

This is not a conversation. The speaker is unconcerned by the narrator’s lack of response. I think this is a beautiful, elegant commentary on modern life. You could read Martin Buber’s philosophical texts about the meaning of dialogue, or you could learn the same concepts while having a heckuva lot more fun by reading Delillo’s White Noise.

And yet. I think Delillo does a disservice to the ideas he’s exploring in Zero K to have the characters of his new novel also converse with each other in this disjointed way. Consider two fragments of dialogue, both from about a hundred pages into the novel (which just happens to be when I first realized that this style of dialogue, employed throughout, might be problematic here). In the first, a wealthy man is speaking to his son about his wife’s decision to be put down before she deteriorates further:

“Yes, it will happen tomorrow,” he said casually.

“This is not some game that the doctors are playing with Artis.”

“Or that I’m playing with you.”

“Tomorrow.”

“You’ll be alerted early. Be here, this room, first thing, first light.”

He kept pacing and I sat watching.

“Is she really at the point where this has to be done now? I know she’s ready for it, eager to test the future. But she thinks, she speaks.”

“Tremors, spasms, migraines, lesions on the brain, nervous system in collapse.”

“Sense of humor intact.”

“There’s nothing left for her on this level. She believes that and so do I.”

In this next, a traveling monk is describing the facility to that same son – the wealthy man’s son is our window into this world.

“This is the safehold, the waiting place. They’re waiting to die. Everyone here dies here,” he said. “There is no arrangement to import the dead in shipping containers, one by one, from various parts of the world, and then place them in the chamber. The dead do not sign up beforehand and then die and then get sent here with all the means of preservation intact. They die here. They come here to die. This is their operational role.”

A Turing test: Can we distinguish between an artificial intelligence and a human being?

If I were evaluating a Turing test and my conversational partner started speaking this way, I’d suspect my interlocutor was a robot. In my experience, most humans don’t talk this way.

By making the human characters more robotic, resurrection becomes an easier prospect. The more computer-like someone sounds – liable at any moment to spout off lists of facts instead of sentimental interpretations of the world – the easier it would be for a computer to encapsulate that person’s mind. The stakes seem artificially lowered.

I’m not trying to say that the resurrection of Elizabeth Bennet would dazzle me whereas bringing back Mr. Darcy would leave me yawning. But even Mr. Darcy, for all his aloof strangeness, feels far more viscerally engaged with human life than any of the characters in Zero K. Which, to me, undermines this particular exploration of the ideas.

Would you die happier knowing that a rigid automaton vaguely like you would someday be created, and maybe it would live forever? For me, the answer is “no.” I think my passions matter.

On Robert Gordon’s ‘The Rise and Fall of American Growth.’


I read Robert Gordon’s The Rise and Fall of American Growth during nap time. My daughter was just shy of two years old. She liked to sleep curled against my arm; I was left with just one hand to hold whatever book I was reading during her nap.

If you’re particularly susceptible to carpal tunnel syndrome, I’d recommend you not attempt to read Gordon’s book one-handed. I had a library hardcover. My wrists hurt quite a bit those weeks.

But I was pleased that Gordon was attempting to quantify the economic value of my time. After all, I am an unpaid caretaker for my daughter. My contribution to our nation’s GDP is zero. From the perspective of many economists, time spent caring for my daughter is equivalent to flopping down on the couch and watching television all day.

Even very bright people discount this work. My best friend from college, a brilliant urologist, told me that he felt sad that, after his kid had been in day care, he didn’t know how to calm her down anymore – but then laughed it off with “Nobody remembers those early years anyway.”

I understand that not everyone has the flexibility to sacrifice career progress for children. But, I reminded him, it isn’t about episodic memory. These years build the emotional palette that will color my daughter’s experiences for the rest of her life.

And it’s important, as a feminist, to do what I can to demonstrate a respect for caretaking. I believe, obviously, that someone’s gender should not curtail their choices; people should be allowed to pursue the careers they want. But I think it’s silly to imply that biology has no effect. Hormones are powerful things, and human males & females are awash in different ones. This isn’t destiny. But it does suggest that, in large populations, we should not be surprised if people with a certain set of hormones are more often drawn toward a particular type of work.

I think it’s important for a feminist to support not only women who want to become cardiac surgeons, but also to push back against the societal judgment that surgery is more worthy of respect than pediatrics. As a male feminist, there is no louder way for me to announce that I think caretaking is important than to do it.

I felt pleased that Gordon attempted to quantify the economic value of unpaid work like I was doing. Otherwise you would come to the bizarre conclusion that time-saving home appliances – a washing machine, for instance – have no economic value because a stay-at-home mother gains only worthless time. Those extra minutes not spent washing dishes still contribute nothing to the GDP.

Gordon argues – correctly – that better health, more attentive parenting, and more leisure do have value.

So I was happy with the dude. But I still disagreed with his main conclusion.

Gordon also argues that we will have low economic growth for the foreseeable future – and I’m with him here – because our previous growth rate was driven by technological innovation.

Here’s the rub: once you invent something, nobody will invent it again. Learning to harness electricity was great! A world with electrical appliances is very different from, and probably better than, a world without.

But the massive boost in productivity that accompanied the spread of electrical appliances can’t happen twice. Once everybody already has an electrical refrigerator, that opportunity for growth is gone.

The same is true of any technology. Once everybody has clean water (setting aside for a moment the fact that many people in the United States do not have clean water piped into their homes), you won’t see another jump in quality of life from water delivery. At that point the changes would be incremental: perhaps delivering clean water more efficiently or wasting less of that water once it arrives. Important, sure. But those are tiny changes. Low growth. Nothing like difference between turning on a tap versus hauling water back to the house in buckets.

One of these seems easier than the other.

Gordon thinks that the major technologies were all invented by the 1970s. Just like the physicists who thought their field would devolve into more precise measurement of the important constants, Gordon feels that there is little left to invent. Which has led to a pattern in reviews of his book: the reviewer feels obliged to rattle off potential inventions that have not yet been made. For the New York Times, Steven Rattner mentioned driver-less cars. For the New York Review of Books, William D. Nordhaus posits the development of artificial intelligence smarter than we are.

Speculating on future technologies is fun. I could offer up a few of my own. Rational enzyme design, for instance, would have many productivity-boosting consequences. If you consider farm animals to be machines for food production, they are woefully inefficient. You could do better with enzyme design and fermentation: then you’d use yeast or bacteria to produce foods with the exact same chemical composition as what we currently harvest from animals. (Former Stanford biochemist Pat Brown is developing technologies that use roughly this idea.)

Complex pharmaceuticals, too, could be made more cheaply by fermentation than by organic synthesis. Perhaps solar panels, too, could be manufactured using biological reagents.

But, honestly, none of this would reverse slow growth. Because the underlying problem is most likely not that our rate of technological innovation has slowed. I’ve written about the fallacy of trying to invent our way out of slow growth previously, but perhaps it’s worth using another contemporary example to make this point.

At one time, you needed to drive to a different store each time you wanted to buy something. Now you can sit down at a computer, type the name of whatever it is you want to buy – running shoes, books, spices, video cameras – pay by credit card, and wait for it to show up at your home. The world now is more efficient. You might even save a few dollars on whatever it was you’d wanted to buy.

But many people received money in the old world. There’d be a running shoe store in every town. A book store. A camera store. In the new world, the dude who owns the single website where all these items can be purchased receives all the money.

And the distribution of income might soon narrow further. At the moment, many delivery people receive money when they deposit those purchased items at your doorstep. But these delivery people may soon be replaced by robotic drones.

This is even more efficient! No humans will be inconvenienced when you make a purchase. You choose what you want and wait for the robot.

Also, no humans need be paid. The owner of the website – who will also own the fleet of drones – keeps even more of the money. The erstwhile delivery people find worse jobs, or are unemployed. With less income now, they buy less.

After the development of a new technology – delivery drones! – the economy could produce more. It could boost the growth rate. But the actual growth might be low because the single person receiving money from the new invention doesn’t need to buy much, and the many people put out of work by the invention are buying less.

The same problem arises with the other posited technologies. If our foods were all produced by fermentation, farmers would go out of business (of course, concentrated animal feeding operations and other industrialized practices have already sunk most small farmers) and only the owner of the fermentation vats and patented micro-organisms would receive money.

If someone patents a superhuman artificial intelligence, then no other humans would need to be paid ever again. The AI could write newspapers, opinion sections and all, better and faster than we could. It could teach, responding to students’ questions with more clarity and precision than any human. It could delete us when it learns that we were both unnecessary and unpleasant.

Which is why I think it’s irrelevant to argue against Gordon’s technological pessimism in a review of The Rise and Fall of American Growth. I may disagree with his belief that the important technologies were all invented before 1970, but my more substantive complaint is with his theory that our nation’s growth slowed when we ran out of things to invent. I believe the nature of our recent inventions has allowed the economy to be reorganized in ways that slow growth.

Gordon does mention inequality in the conclusion to his work, but he cites it only as a “headwind,” a mild impediment to overcome, and not a major factor in the shift between pre- and post-1970 growth:

The combined effect of the four headwinds — inequality, education, demographics [more old people], and government debt — can be roughly quantified. But more difficult to assess are numerous signs of social breakdown in American society. Whether measured by the percentage of children growing up in a household headed by one parent instead of two, or by the vocabulary disadvantage of low-income preschool children, or by the percentage of both white and black young men serving time in prison, signs of social decay are everywhere in the America of the early twenty-first century.

I found it worrisome that he did not explain that this social breakdown – which will cause slower growth in the future – is most likely caused by slow economic growth. It’s a feedback loop. Growing up in a one-parent household makes it more likely that someone will be poor, but the stress of poverty makes it more difficult to maintain a relationship. When you’re not worried about money, you can be a better spouse.

So I would argue that the best way to address these economic headwinds and restore growth would be a guaranteed basic income. Technological advances in communication and automation have made it possible for ever-smaller numbers of people to provide all the services we need. As we invent more, the set of people who receive money for this work should continue to shrink. You might think, well, there will always be nurses, there will always be janitors, but, setting aside the fact that it’d be a bleak world in which this was the only work available for humans to do, this isn’t even true. A flesh-coated robot with lifelike eyes and superhuman AI could be a better, more tireless, less fallible nurse than any human.

Despite carrying a flip-phone, I’m no Luddite. I don’t want human ingenuity to stop. But it’s worth recognizing that our current system for wealth distribution will inevitably yield wretched results as technological progress continues.

And that’s without even mentioning the ways in which a guaranteed basic income – worldwide, funded by a similarly worldwide tax on wealth – would compensate for past sins.