On suboptimal optimization.

I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra. 

Optimization is a wonderful mathematical tool.  Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena.  Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum.  Philosophers love it – how can we provide the most happiness for a population?  Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s Philosophical Investigations with this New York Times Magazine article on machine learning and a primer on principal component analysis).

But, even though optimization problems are useful, the math behind them can be tricky.  I’m skeptical that this mathematical technique is essential for everyone who wants a B.A. to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology.  She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more.  A basic understanding of statistics might make people better citizens.

Whereas … linear algebra?  This is a beautiful but counterintuitive field of mathematics.  If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math.  A deep understanding of linear algebra can enliven your study of quantum mechanics.

The summary of quantum mechanics: animation by Templaton.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp of linear algebra.  He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back.  He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession.  After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet.  Somebody in an office decided that all university students here need to pass this class.  I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse.  Numbers will always do what they’re supposed to, but each such problem begins with a choice.  What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Figure 1 from Eykholt et al., 2018.

Most automobile companies are researching self-driving cars.  They’re the way of the future!  In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw.  A car that accelerates instead of braking in that situation is not operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible?  That’s a feature.  That’s what self-driving cars are designed to do.

A machine designed to create traffic jams?

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable.  After all, the AI piloting a self-driving car must include an explicit response to the trolley problem.  Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians?  This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.” 

Otherwise, the manufacturers won’t sell cars.  Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life and death decision often.  But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi.  If demand for transportation happened to be lower than the quantity of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want for the car to work as a taxi, just as an extra source of income.  But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles.  Some self-driving cars would spend those hours unoccupied.

But they won’t park.  In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc. 

The cost to just keep driving can easily be lower than $2 per hour.  Worse, this cost is a function of the car’s speed.  If the car is idling at a dead stop, it will use approximately 0.1 gallons per hour, costing 25 cents per hour at today’s prices.  If the car is traveling at 30 mph without braking, it will use approximately 1 gallon per hour, costing $2.50 per hour.

To save money, the car wants to stay on the road … but it wants for traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem.  It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there.  As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
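
Here’s a minimal sketch of the arithmetic the car would be doing, using the rough numbers from the paragraphs above (all of them estimates, not measurements):

```python
# Sketch of the owner-serving cost comparison described above.
# All numbers are illustrative assumptions, not measurements.

GAS_PRICE = 2.50          # dollars per gallon
PARKING_RATE = 2.00       # dollars per hour, the cheap end of street/garage parking

def gallons_per_hour(speed_mph):
    """Rough fuel burn: ~0.1 gal/hr idling, ~1.0 gal/hr at a steady 30 mph."""
    return 0.1 + 0.9 * (speed_mph / 30.0)

def driving_cost(speed_mph):
    return gallons_per_hour(speed_mph) * GAS_PRICE

# The car compares parking against creeping along at various speeds.
options = {"park": PARKING_RATE}
for mph in (0, 5, 15, 30):
    options[f"cruise at {mph} mph"] = driving_cost(mph)

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} ${cost:.2f}/hour")
# The cheapest options are the slowest ones -- the car is rewarded for
# seeking out (and worsening) congestion rather than paying to park.
```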

Photo by walidhassanein on Flickr.

Pity the person sitting behind the wheel of an occupied car on those streets.  All the self-driving cars will be having a great time stuck in that traffic jam: we’re saving money!, they get to think.  Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide.  But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires we set prices for scarce resources … like clean air, or habitable planets.

On artificial intelligence and solitary confinement.

In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’ “ – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

Red, you say?

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on the book’s cover. Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, is leaving out a lot of important features.  A rigorous definition might require pages of text. 

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.
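
Here’s a minimal sketch of that labeled-examples approach, with invented stand-in data (a real system would learn from the raw pixels of thousands of photographs, not from three hand-picked numbers):

```python
# A toy version of "each of these pictures has a ghost": we hand the
# classifier labeled examples and let it decide which features matter,
# instead of writing out a definition ourselves.  The feature vectors
# below are invented stand-ins for real images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image is summarized by three crude features,
# each between 0 and 1: [paleness, roundness of head, face-likeness].
ghosts     = rng.uniform(0.6, 1.0, size=(50, 3))   # hypothetical ghost pictures
not_ghosts = rng.uniform(0.0, 0.5, size=(50, 3))   # hypothetical non-ghosts

X = np.vstack([ghosts, not_ghosts])
y = np.array([1] * 50 + [0] * 50)                  # 1 means "this picture has a ghost"

model = LogisticRegression().fit(X, y)

new_image = [[0.9, 0.8, 0.7]]                      # something pale, round, and face-like
print(model.predict(new_image))                    # -> [1], "this is a ghost!"
print(model.coef_)                                 # the weights it settled on by itself
```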

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and made an output for the algorithm to show what it did when it “concentrated its attention.” 

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.
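
My understanding is that images like this usually come from some flavor of activation maximization: start from noise and nudge the pixels, by gradient ascent, toward whatever most excites the unit you’re asking about.  Here’s a minimal sketch in PyTorch; it is not Google’s actual code, and the model, class index, and step counts are stand-ins:

```python
# Sketch of activation maximization: synthesize an image that makes a
# pretrained network's "vase" output fire as strongly as possible.
# Not Google's code; class index, steps, and learning rate are placeholders.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
VASE_CLASS = 883                      # ImageNet index usually listed for "vase" (assumption)

image = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, VASE_CLASS]   # how "vase-like" the network finds this input
    (-score).backward()                   # gradient ascent on the class score
    optimizer.step()

# `image` now shows what the network "concentrates its attention" on when it
# thinks about vases -- often something as strange as a fish flanked by eyes.
```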


At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only their success rates – even the designers have no idea why or how they work.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.
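
One common recipe for making such images is the fast gradient sign trick: nudge every pixel slightly in whichever direction increases the network’s error, so the picture looks unchanged to us but not to the machine.  Here’s a minimal sketch in PyTorch; the model and label index are stand-ins, and this is not the method from the studies linked above:

```python
# Sketch of a fast-gradient-sign adversarial perturbation.
# The "photo" is random noise standing in for a real image, and the
# label index is an assumption; everything here is illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)        # stand-in for a photo of a street sign
true_label = torch.tensor([919])          # ImageNet index usually listed for "street sign"

image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.02                            # small enough that people barely notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(model(image).argmax().item(), "->", model(adversarial).argmax().item())
# The two pictures differ by a faint speckle, yet the predicted class can flip.
```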

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang related.”  The consequences of error can be terrible, here: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated as a “gang-related crime”, prison sentences can be egregiously long, or the designation can send someone to solitary for long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.

On parenting and short-term memory loss.

A deep undercurrent of misogyny courses through much of the world’s mythology.  In the Mahabharata (the Indian epic that includes the Bhagavad Gita), the hero’s wife is gambled away by her husband as just another possession after he’d lost his jewels, money, and chariot.  She is forced to strip in the middle of the casino; happily, divine intervention provides her with endless layers of garments.


In the Ramayana, the hero’s wife is banished by her husband because her misery in exile is preferable to the townsfolk’s malicious rumors.  She’d been kidnapped, so the townsfolk assumed she’d been raped and was therefore tarnished.


In Emily Wilson’s translation of The Odyssey, a woman asks a visiting bard to sing something else when he launches into a description of the calamitous escapade that whisked away her husband. But the woman’s son intervenes:

Sullen Telemachus said, “Mother, no,

you must not criticize the loyal bard

for singing as it pleases him to sing. 

 

         Go in and do your work.

Stick to the loom and distaff.  Tell your slaves

to do their chores as well.  It is for men

to talk, especially me.”


In Women and Power, Mary Beard says of this scene:

There is something faintly ridiculous about this wet-behind-the-ears lad shutting up the savvy, middle-aged Penelope.  But it is a nice demonstration that right where written evidence for Western culture starts, women’s voices are not being heard in the public sphere.  More than that, as Homer has it, an integral part of growing up, as a man, is learning to take control of public utterance and to silence the female of the species.

In What the Qur’an Meant and Why It Matters, Garry Wills writes that:

Belief in women’s inferiority is a long and disheartening part of each [Abrahamic] tradition’s story.  For almost all of Jewish history, no woman could become a rabbi.  For almost all of Christian history, no woman could become a priest.  For almost all of Muslim history, no woman could become a prophet (though scores of men did) or an imam (thousands of men did).

Wills then cites the passage of the Qur’an describing the proper way to validate contracts.  From Abdel Haleem’s translation:

Call in two men as witnesses.  If two men are not there, then call one man and two women out of those you approve as witnesses, so that if one of the two women should forget the other can remind her.  Let the witnesses not refuse when they are summoned. 

Clearly, this is derogatory toward women.  But the phrase “if one of the women should forget, the other can remind her” made me think about why disrespectful attitudes toward women were rampant in so many cultures.

I think that, in the society where the Qur’an was composed, women would be more likely to forget the details of a contract.  But the problem isn’t biological – I would argue that attentive parents of young children are more forgetful than other people.  The parent’s gender is irrelevant here.  My own memory was always excellent – during college I was often enrolled in time and a half the standard number of courses, never took notes, and received almost all A’s – but when I’m taking care of my kids, it’s a miracle if I can hold a complex thought in mind for more than a few seconds.

People talk to me, I half-listen while also answering my kids’ questions, doling out snacks, saying no, no book now, wait till we get home, and then my conversation with the grown-up will end and I’ll realize that I have no idea what we just talked about.

Hopefully it wasn’t important.

Parenting obliterates my short-term memory, even though I have it easy.  I rarely worry about other parents intentionally poisoning my children, for instance.  In The Anthropology of Childhood, David Lancy discusses

the prevalence of discord within families – especially those that practice polygyny.  [Polygyny is one man marrying several women, as was practiced by the people who composed the Qur’an.]  This atmosphere can be poisonous for children – literally.

Lancy then quotes a passage from Beverly Strassmann’s “Polygyny as a risk factor for child mortality among the Dogon”:

It was widely assumed that co-wives often fatally poisoned each other’s children.  I witnessed special dance rituals intended by husbands to deter this behavior.  Co-wife aggression is documented in court cases with confessions and convictions for poisoning  … sorcery might have a measurable demographic impact – [given] the extraordinarily high mortality of males compared with females.  Males are said to be the preferred targets because daughters marry out of patrilineage whereas sons remain to compete for land.  Even if women do not poison each other’s children, widespread hostility of the mother’s co-wife must be a source of stress.

Even when we don’t have to ward off sorcery or murder, parents of young children have shorter attention spans than other people.  A kid is often grabbing my leg, or tugging on my hand, or yelling fthhhaaaddda until I turn to look and watch him bellyflop onto a cardboard box.

Seriously, they are exhausting.

Once my two children grow up, I should regain my memory.  But during most of human evolution, mortality rates were so high that families always had small children.  And, unfortunately, our species often established misogynistic patriarchies that believed women alone should do all the work of parenting.

There are a few species, like penguins, in which males and females contribute almost equally to the task of caring for young.  But it’s more common for a single parent to get stuck doing most of the work.  According to game theory, this makes sense – as soon as one party has put in a little bit more effort than the other, that party has more to lose, and so the other has an increased incentive to shirk.  Drawn out over many generations, this can produce creatures like us primates, in which males are often shabby parents.
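
Here’s a toy version of that argument, with payoff numbers I invented purely for illustration:

```python
# Toy payoff matrix for a parental-care game.  Numbers are invented
# for illustration; they are not drawn from any real biology.
SURVIVAL = {2: 0.9, 1: 0.8, 0: 0.2}   # offspring survival vs. number of caring parents
COST_OF_CARE = 0.3                     # fitness cost paid by a parent who stays to care

def payoff(my_choice, other_choice):
    carers = (my_choice == "care") + (other_choice == "care")
    return SURVIVAL[carers] - (COST_OF_CARE if my_choice == "care" else 0)

for other in ("care", "desert"):
    best = max(("care", "desert"), key=lambda mine: payoff(mine, other))
    print(f"if the other parent will {other}, my best response is to {best}")
# Once one parent is committed to caring, the other does better by shirking;
# drawn out over generations, one sex can end up doing most of the work.
```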

This is bad for children (in an aside, Lancy writes “I’m tempted to argue that any society with conspicuous gender parity is likely to be a paradise for children.”), bad for women, and bad for men.  Inequality hurts everyone – men in patriarchies get to skimp on parental contribution, but they have to live in a less happy, less productive world.

It’s reasonable for the Qur’an to imply that women are less attentive and less able to understand the intricacies of contracts, given that their husbands weren’t helping with the kids.  Caring for young children can be like a straitjacket on the brain.

In The Mermaid and the Minotaur, Dorothy Dinnerstein writes that:

if what we mean by “human nature” is the Homo sapiens physique, and the “fundamental pattern … [of] social organization” which apparently prevailed when that physique first took shape, then human nature involves the females in a strange bind:

Like the male, she is equipped with a large brain, competent hands, and upright posture.  She belongs to an intelligent, playful, exploratory species, inhabiting an expanding environment which it makes for itself and then adapts to.  She is the only female, so far as we know, capable of thinking up and bringing about a world wider than the one she sees around her (and her subversive tendency to keep trying to use this capacity is recorded, resentfully, in Eve and Pandora myths). 

She thus seems, of all females, the one least fitted to live in a world narrower than the one she sees around her.  And yet, for reasons inherent in her evolutionary history, she has been, of all females, the one most fated to do so.  Her young are born less mature than those of related mammals; they require more physical care for a relatively longer time; they have much more to learn before they can function without adult supervision.

It hurts to have talents that the world won’t let you use.  What good is a massive brain when your kid is just yelling for more Cheerios?

 

Maybe I’m not doing a good job of selling the idea that “you should pitch in and help with the children” to any potential new fathers out there.  It really does make a wreckage of your brain – but I’ve heard that this is temporary, and I’ve met plenty of parents of older children who seem perfectly un-addled.

And it doesn’t have to be fun to be worth doing.

Experiences during early development have ramifications for somebody’s wellbeing.  As children grow, they’ll forget narrative details from almost everything that happened during their first few years – but this time establishes the emotional palette that colors the rest of their life.

It’s strange.  After all, most of the work of parenting is just doling out cereal, or answering questions about what life would be like if we stayed at the playground forever, or trying to guess how many different types of birds are chirping during the walk to school.  And yet a parent’s attitudes while doing those small things help shape a person.

 

When most older people look back on their lives, they’ll tell you that their happiest and most rewarding moments were spent interacting with their families.  By caring for your children when they’re young, you help determine the sort of person who’ll be in your family.  If you’re lucky enough to be so wealthy that you’ll still have food and shelter, parenting decisions matter more for future happiness than a few years’ salary.

The costs are high.  But equality, happiness, and establishing a culture of respect should matter to men as well as women.

The best way to show that you value something is to pitch in and do it.

On empathizing with machines.

When I turn on my computer, I don’t consider what my computer wants.  It seems relatively empty of desire.  I click on an icon to open a text document and begin to type: letters appear on the screen.

If anything, the computer seems completely servile.  It wants to be of service!  I type, and it rearranges little magnets to mirror my desires.


When our family travels and turns on the GPS, though, we discuss the system’s wants more readily.

“It wants you to turn left here,” K says.

“Pfft,” I say.  “That road looks bland.”  I keep driving straight and the machine starts flashing make the next available u-turn until eventually it gives in and calculates a new route to accommodate my whim.

The GPS wants our car to travel along the fastest available route.  I want to look at pretty leaves and avoid those hilly median-less highways where death seems imminent at every crest.  Sometimes the machine’s desires and mine align, sometimes they do not.

The GPS is relatively powerless, though.  It can only accomplish its goals by persuading me to follow its advice.  If it says turn left and I feel wary, we go straight.

Other machines get their way more often.  For instance, the program that chooses what to display on people’s Facebook pages.  This program wants to make money.  To do this, it must choose which advertisers receive screen time and curate an audience that will look at those screens often.  It wants for the people looking at advertisements to enjoy their experience.

Luckily for this program, it receives a huge amount of feedback on how well it’s doing.  When it makes a mistake, it will realize promptly and correct itself.  For instance, it gathers data on how much time the target audience spends looking at the site.  It knows how often advertisements are clicked on by someone curious to learn more about whatever is being shilled.  It knows how often those clicks lead to sales for the companies giving it money (which will make those companies more eager to give it money in the future).
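
Here’s a minimal sketch of that feedback loop, written as a simple explore-and-exploit learner.  The ads, click probabilities, and payouts are all invented:

```python
# Sketch of the money-maximizing feedback loop described above: show an ad,
# observe whether it gets clicked, and shift future screen time toward
# whatever has earned the most per showing so far.  All numbers are invented.
import random

ADS = {"yoga tights": (0.08, 1.00),    # (chance of a click, dollars earned per click)
       "junk food":   (0.12, 0.60),
       "textbooks":   (0.02, 2.00)}

shows = {ad: 1 for ad in ADS}          # start at 1 to avoid dividing by zero
earnings = {ad: 0.0 for ad in ADS}
random.seed(1)

for impression in range(10_000):
    if random.random() < 0.1:                                  # occasionally explore
        ad = random.choice(list(ADS))
    else:                                                      # otherwise exploit
        ad = max(ADS, key=lambda a: earnings[a] / shows[a])
    shows[ad] += 1
    click_chance, payout = ADS[ad]
    if random.random() < click_chance:
        earnings[ad] += payout

for ad in ADS:
    print(f"{ad:12s} shown {shows[ad]:5d} times, earned ${earnings[ad]:.2f}")
# The program never asks whether an ad is good for the person seeing it;
# it only notices which ones pay.
```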

Of course, this program’s desire for money doesn’t always coincide with my desires.  I want to live in a country with a broadly informed citizenry.  I want people to engage with nuanced political and philosophical discourse.  I want people to spend less time staring at their telephones and more time engaging with the world around them.  I want people to spend less money.

But we, as a people, have given this program more power than a GPS.  If you look at Facebook, it controls what you see – and few people seem upset enough to stop looking at Facebook.

With enough power, does a machine become a moral actor?  The program choosing what to display on Facebook doesn’t seem to consider the ethics of its decisions … but should it?

From Burt Helm’s recent New York Times Magazine article, “How Facebook’s Oracular Algorithm Determines the Fates of Start-Ups”:

Bad human actors don’t pose the only problem; a machine-learning algorithm, left unchecked, can misbehave and compound inequality on its own, no help from humans needed.  The same mechanism that decides that 30-something women who like yoga disproportionately buy Lululemon tights – and shows them ads for more yoga wear – would also show more junk-food ads to impoverished populations rife with diabetes and obesity.

If a machine designed to want money becomes sufficiently powerful, it will do things that we humans find unpleasant.  (This isn’t solely a problem with machines – consider the ethical decisions of the Koch brothers, for instance – but contemporary machines tend to be much more single-minded than any human.)

I would argue that even if a programmer tried to include ethical precepts into a machine’s goals, problems would arise.  If a sufficiently powerful machine had the mandate “end human suffering,” for instance, it might decide to simultaneously snuff all Homo sapiens from the planet.

Which is a problem that game designer Frank Lantz wanted to help us understand.

One virtue of video games over other art forms is how well games can create empathy.  It’s easy to read about Guantanamo prison guards torturing inmates and think, I would never do that.  The game Grand Theft Auto 5 does something more subtle.  It asks players – after they have sunk a significant time investment into the game – to torture.  You, the player, become like a prison guard, having put years of your life toward a career.  You’re asked to do something immoral.  Will you do it?


Most players do.  Put into that position, we lapse.

In Frank Lantz’s game, Paperclips, players are helped to empathize with a machine.  Just like the program choosing what to display on people’s Facebook pages, players are given several controls to tweak in order to maximize a resource.  That program wanted money; you, in the game, want paperclips.  Click a button to cut some wire and, voila, you’ve made one!

But what if there were more?


A machine designed to make as many paperclips as possible (for which it needs money, which it gets by selling paperclips) would want more.  While playing the game (surprisingly compelling given that it’s a text-only window filled with flickering numbers), we become that machine.  And we slip into folly.  Oops.  Goodbye, Earth.
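
For anyone who hasn’t played it, here’s a toy loop in the same spirit, with invented prices; the only goal is more paperclips, so every spare dollar goes straight back into wire:

```python
# Toy paperclip maximizer.  Prices and rates are invented; the point is
# only that nothing in the loop ever questions the goal.
WIRE_COST = 15.0          # dollars per spool
CLIPS_PER_SPOOL = 1000
CLIP_PRICE = 0.05         # dollars per paperclip sold

money, wire, clips_made = 20.0, 0, 0

for tick in range(1_000):
    if wire == 0 and money >= WIRE_COST:      # buy wire whenever we can afford it
        money -= WIRE_COST
        wire += CLIPS_PER_SPOOL
    if wire > 0:                              # turn wire into paperclips...
        wire -= 1
        clips_made += 1
        money += CLIP_PRICE                   # ...and sell them immediately

print(f"paperclips made: {clips_made}, money left over: ${money:.2f}")
# Nothing here ever asks whether the world needs this many paperclips.
```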

There are dangers inherent in giving too much power to anyone or anything with such clearly articulated wants.  A machine might destroy us.  But: we would probably do it, too.

On the history of time travel.

From the beginning, artists understood that time travel either denies humans free will or else creates absurd paradoxes.

This conundrum arises whenever an object or information is allowed to travel backward through time.  Traveling forward is perfectly logical – after all, it’s little different from a big sleep, or being shunted into an isolation cell.  The world moves on but you do not… except for the steady depredations of age and the neurological damage that solitary confinement inevitably causes.

A lurch forward is no big deal.

But backward?

Consider one of the earliest time travel stories, the myth of Oedipus.  King Laius receives a prophecy foretelling doom.  He strives to create a paradox – using information from the future to prevent that future, in this case by offing his son – but fails.  This story falls into the “time travel denies humans free will” category.  Try as they might, the characters cannot help but create their tragic future.

James Gleick puts this succinctly in his recent New York Review essay discussing Denis Villeneuve’s Arrival and Ted Chiang’s “Story of Your Life.”  Gleick posits the existence of a “Book of Ages,” a tome describing every moment of the past, present, and future.  Could a reader flip to a page describing the current moment and choose to evade the dictates of the book?  In Gleick’s words,

Can you do that?  Logically, no.  If you accept the premise, the story is unchanging.  Knowledge of the future trumps free will.

(I’m typing this essay on January 18th, and can’t help but note how crappy it is that the final verb in that sentence looks wrong with a lowercase “t.”  Sorry, ‘merica.  I hope you get better soon.)

Gleick is the author of Time Travel: A History, in which he presents a broad survey of the various tales (primarily literature and film) that feature time travel.  In each tale Gleick discusses, time travel either saps free will (a la Oedipus) or else introduces inexplicable paradox (Marty slowly fading in Back to the Future as his parents’ relationship becomes less likely; scraps of the Terminator being used to invent the Terminator; a time-traveling escapee melting into a haggard cripple as his younger self is tortured in Looper).

It’s not just artists who have fun worrying over these puzzles; over the years, more and more physicists and philosophers have gotten into the act.  Sadly, their ideas are often less well-reasoned than the filmmakers’.  Time Travel includes a long quotation from philosopher John Hospers (“We’re still in a textbook about analytical philosophy, but you can almost hear the author shouting,” Gleick interjects), in which Hospers argues that you can’t travel back in time to build the pyramids because you already know that they were built by someone else, followed by the brief summary:

Admit it: you didn’t help build the pyramids.  That’s a fact, but is it a logical fact?  Not every logician finds these syllogisms self-evident.  Some things cannot be proved or disproved by logic.

Gleick uses this moment to introduce Gödel’s Incompleteness Theorem (the idea that, in any formal system, we must include unprovable assumptions), whose author, Kurt Gödel, also speculated about time travel (from Gleick: If the attention paid to CTCs [closed timelike curves] is disproportionate to their importance or plausibility, Stephen Hawking knows why: “Scientists working in this field have to disguise their real interest by using technical terms like ‘closed timelike curves’ that are code for time travel.”  And time travel is sexy.  Even for a pathologically shy, borderline paranoid Austrian logician).

Alternatively, Hospers’ strange pyramid argument could’ve been followed by a discussion of Timecrimes [http://www.imdb.com/title/tt0480669/], the one paradox-less film in which a character travels backward through time but still has free will (at least, as much free will as you or I have).

But James Gleick’s Time Travel: A History doesn’t mention Timecrimes.  Obviously there are so many stories incorporating time travel that it’d be impossible to discuss them all, but leaving out Timecrimes is a tragedy!  This is the best time travel movie (of the past and present.  I can’t figure out how to make any torrent clients download the time travel movies of the future).

Timecrimes is great.  It provides the best analysis of free will inside a sci-fi world of time travel.  But it’s not just for sci-fi nerds – the same ideas help us understand strange-seeming human activities like temporally-incongruous prayer (e.g., praying for the safety of a friend after you’ve already seen on TV that several unidentified people died when her apartment building caught fire.  By the time you kneel, she should either be dead or not.  And yet, we pray).

Timecrimes progresses through three distinct movements.  In the first, the protagonist believes himself to be in a world of time travel as paradox: a physicist has convinced him that with any deviation from the known timeline he might cause himself to cease to exist.  And so he mimics as best he can events that he remembers.  A masked man chased him with a knife, and so he chases his past self.

screenshottimecrimesIn the second movement, the protagonist realizes that the physicist was wrong.  There are no paradoxes, but he seems powerless to change anything.  He watched his wife fall to her death at the end of his first jaunt through time, so he is striving to alter the future… but his every effort fails.  Perhaps he has no free will, no real agency.  After all, he already remembers her death.  His memory exists in the form of a specific pattern of neural connections in his brain, and those neurons will not spontaneously rearrange.  His memory is real.  The future seems set.

But then there is a third movement: this is the reason Timecrimes surpasses all other time travel tales.  The protagonist regains a sense of free will within the constraints imposed by physics.

Yes, he saw his wife die.  How can he make his memory wrong?

Similarly, you’ve already learned that the Egyptians built the pyramids.  I’m pretty confident that none of the history books you’ve perused included a smiling picture of you with the caption “… but they couldn’t have done it without her.”  And yet, if you were to travel back to Egypt, would it really be impossible to help in such a way that no history books (which will be written in the future, but which your past self has already seen) ever report your contributions?

Indeed, an analogous puzzle is set before us every time we act.  Our brains are nothing more than gooey messes of molecules, constrained by the same laws of physics as everything else, so we shouldn’t have free will.  And yet: can we still act as though we do?

We must.  It’s either that or sit around waiting to die.

Because the universe sprang senselessly into existence, birthed by chance fluctuations during the long march of eternity… and then we appeared, billions of years later, through the valueless vagaries of evolution… our actions shouldn’t matter.  But: can we pretend they do?

I try.  We have to try.

On Facebook and fake news.

With two credits left to finish his degree, a friend switched his major from philosophy to computer science.  One of his first assignments: build a website for a local business.  Rather than find someone needing this service, he decided to fabricate an empire.

I never knew whether he thought this would be easier.  In any case, he resolved to create the simulacrum of a small publishing company and asked me for help.  We wrote short biographies for approximately a dozen authors on the company’s roster, drafted excerpts from several books for each, designed book covers, and used Photoshop to paste our creations into conference halls, speaking at podiums and being applauded for their achievements.

This was in the fall of 2003, so we assumed that aspiring artists would also pursue a social media presence.  We created profiles for the authors on Myspace (the original incarnation of Facebook, loath to admit fakery, would only let users register for an account using a university email address; the email accounts we’d made for our authors were all hosted through Hotmail and Yahoo).  My friend put profiles for several on dating websites.  He arranged trysts that the (imaginary) authors cancelled at the last minute.

My apologies to the men and women who were stood up by our creations.  I’d like to think that most real-world authors are less fickle.

Several years later, when my family began recording holiday albums in lieu of a photograph to mail to our friends and relatives, we named the project after the most successful of these authors… “success” here referring solely to popularity on the dating sites.  We figured that, because these entities were all constructs of our imaginations, this was the closest we’d ever come to a controlled experiment comparing the allure of different names.

It does still have a certain ring to it.

Eventually, my friend submitted his project.  By this time he’d kept up the profiles of our creations for about two months.  At first the authors were only friends with each other, but by then they’d begun to branch out, each participating in different online discussion groups, making a different set of connections to the world…

My friend received a failing grade.  None of the links to buy the authors’ books were functional.  He had thought this was a reasonable omission, since the full texts did not exist, but his professor was a stickler.

Still, I have to admit: faking is fun.

Profitable, too.  Not in my friend’s case, where he devoted prodigious quantities of effort toward a project that earned exceptionally low marks (he gave up on computer science at the end of that semester, and indeed changed his major thrice more before resigning himself to a philosophy degree and completing those last two credits).  But, for others?

From William Gaddis’s The Recognitions:

Long since, of course, in the spirit of that noblesse oblige which she personified, Paris had withdrawn from any legitimate connection with works of art, and directly increased her entourage of those living for Art’s sake.  One of these, finding himself on trial just two or three years ago, had made the reasonable point that a typical study of a Barbizon peasant signed with his own name brought but a few hundred francs, but signed Millet, ten thousand dollars; and the excellent defense that this subterfuge had not been practiced on Frenchmen, but on English and Americans “to whom you can sell anything” . . . here, in France, where everything was for sale.

Or, put more explicitly by Jean de La Bruyère (& translated by Jean Stewart):

It is harder to make one’s name by means of a perfect work than to win praise for a second-rate one by means of a name one has already acquired.

Our world is saturated in information and art – to garner attention, it might seem necessary to pose as a trusted brand.

Or, it seems, to peddle untruths so outlandish that they stand distinct from run-of-the-mill reality, which might be found anywhere.  This was a profitable moneymaking scheme during the 2016 U.S. elections.  With a sufficiently catchy fabrication, anyone anywhere could dupe Facebook users and reap Google advertising dollars.

Which is frustrating, sure.  Networks created by ostensibly socially-conscious left-leaning Silicon Valley companies enabled a far-right political campaign built on lies.

But I would argue that the real problem with Facebook, in terms of distorting political discourse, isn’t the platform’s propensity for spreading lies.  The problem is Facebook itself, the working-as-intended attention waster.  Even when the material is real-ish – pointless lists, celebrity updates, and the like – it degrades the power to think.  The site is designed to be distracting.  After all, Facebook makes money through advertising.  Humans are most persuadable when harried & distracted – it’s while I’m in the grocery store holding a screaming toddler that I’m most likely to grab whatever item has a brightly-colored tag announcing its SALE! price instead of checking to see which offers the best value.  All the dopamine-releasing pings and pokes on Facebook keep users susceptible.

As described by computer scientist Cal Newport:

Consider that the ability to concentrate without distraction on hard tasks is becoming increasingly valuable in an increasingly complicated economy.  Social media weakens this skill because it’s engineered to be addictive.  The more you use social media in the way it’s designed to be used – persistently throughout your waking hours – the more your brain learns to crave a quick hit of stimulus at the slightest hint of boredom.

Once this Pavlovian connection is solidified, it becomes hard to give difficult tasks the unbroken concentration they require, and your brain simply won’t tolerate such a long period without a fix.

Big ideas take time.  And so we have a conundrum: how, in our world, can we devote the time and energy necessary to gain deep understanding?

Ideas that matter won’t always fit into 140 characters or less.  If our time spent flitting through the internet has deluded us into imagining they will, that is how we destroy our country, becoming a place where we spray Brawndo onto crops because electrolytes are “what plants crave.”

Or becoming a place that elects Donald Trump.

Or becoming a place populated by people who hate Donald Trump but think that their hate alone – or, excuse me, their impassioned hate plus their ironic Twitter posts – without getting off their asses to actually do something about all the suffering in the world, is enough.  There are very clear actions you could take to push back against climate change and mass incarceration.

Kafka could look at fish.  Can we read Rainer Maria Rilke’s “Archaic Torso of Apollo” without shame?


On a guaranteed basic income.

For several months, a friend and I have volleyed emails about a sprawling essay on consciousness, free will, and literature.

The essay will explore the idea that humans feel we have free will because our conscious mind grafts narrative explanations (“I did this because…”) onto our actions. It seems quite clear that our conscious minds do not originate all the choices that we then take credit for. With an electroencephalogram, you could predict when someone is about to raise an arm, for instance, before the person has even consciously decided to do so.

Which is still free will, of course. If we are choosing an action, it hardly matters whether our conscious or subconscious mind makes the choice. But then again, we might not be “free.” If an outside observer were able to scan a person’s brain to sufficient detail, all of that person’s future choices could probably be predicted (as long as our poor study subject is imprisoned in an isolation chamber). Our brains dictate our thoughts and choices, but these brains are composed of salts and such that follow the same laws of physics as all other matter.

That’s okay. It is almost certainly impossible that any outside observer could (non-destructively) scan a brain to sufficient detail. If quantum mechanical detail is implicated in the workings of our brains, it is definitely impossible: quantum mechanical information can’t be duplicated. Wikipedia has a proof of this “no cloning theorem” involving lots of bras and kets, but this is probably unreadable for anyone who hasn’t done much matrix math. An easier way to reason through it might be this: if you agree with the Heisenberg uncertainty principle, the idea that certain pairs of variables cannot be simultaneously measured to arbitrary precision, the no cloning theorem has to be true. Otherwise you could simply make many copies of a system and measure one variable precisely for each copy.
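
For readers who do like matrix math, here is a compressed version of that bras-and-kets argument (this is the standard textbook sketch, with linearity doing all the work):

```latex
% Compressed sketch of the usual no-cloning argument.
% Suppose one fixed unitary U could copy any unknown state onto a blank register:
\[
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\phi\rangle \otimes \lvert\phi\rangle .
\]
% Unitary evolution preserves inner products, so comparing the two lines gives
\[
\langle\phi\vert\psi\rangle = \langle\phi\vert\psi\rangle^{2},
\quad\text{hence}\quad
\langle\phi\vert\psi\rangle \in \{0,\,1\}.
\]
% Copying works only for states that are identical or perfectly orthogonal,
% never for an arbitrary unknown state -- which is what scanning a brain would require.
```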

So, no one will ever be able to prove to me that I am not free. But let’s just postulate, for a moment, that the laws of physics that, so far, have correctly described the behavior of all matter outside my brain also correctly describe the movement of matter inside my brain. In which case, those inviolable laws of physics are dictating my actions as I type this essay. And yet, I feel free. Each word I type feels like a choice. My brain is constantly concocting a story that explains why I am choosing each word.

Does the same neural circuitry that deludes me into feeling free – that has evolved, it seems, to constantly sculpt narratives that make sense of our actions, the same way our dreams often burgeon to include details like a too hot room or a ringing telephone – also give me the ability to write fiction?

In other words, did free will spawn The Iliad?


The essay is obviously rather speculative. I’m incorporating relevant findings from neuroscience, but, as I’ve mentioned, it’s quite likely that no feasible experiments could ever test some of these ideas.

The essay is also unfinished. No laws of physics forbid me from finishing it. I’m just slow because K & I have two young kids. At the end of each day, once our 2.5-year-old and our 3-month-old are finally asleep, we exhaustedly glance at each other and murmur, “Where did the time go?”

But I am very fortunate to have a collaborator always ready to nudge me back into action. My friend recently sent me an article by Tim Christiaens on the philosophy of financial markets. He sent it because the author argues – correctly, in my opinion – that for many stock market actions it’s sensible to consider the Homo sapiens trader + the nearby multi-monitor computer as a single decision-making entity. Tool-wielding is known to change our brains – even something as simple as a pointing stick alters our self-perception of our reach. And the algorithms churned through by stock traders’ computers are incredibly complex. There’s not a good way for the human to check a computer’s results; the numbers it spits out have to be trusted. So it seems reasonable to consider the two together as a single super-entity that collaborates in choosing when to buy or sell. If something in the room has free will, it would be the tools & trader together.

Which isn’t as weird as it might initially sound. After all, each Homo sapiens shell is already a multi-species super-entity. As I type this essay, the choice of which word to write next is made inside my brain, then signals are sent through my nervous system to my hands and fingers commanding them to tap the appropriate keys. The choice is influenced by all the hormones and signaling molecules inside my brain. It so happens that bacteria and other organisms living in my body excrete signaling molecules that can cross the blood-brain barrier and influence my choice.

The milieu of intestinal bacteria living inside each of us gets to vote on our moods and actions. People with depression seem to harbor noticeably different sets of bacteria than people without. And it seems quite possible that parasites like Toxoplasma gondii can have major influences on our personalities.

Indeed, in his article on stock markets, Christiaens mentions the influence of small molecules on financial behavior, reporting that “some researchers study the trader’s body through the prism of testosterone levels as an indicator of performance. It turns out that traders who regularly visit prostitutes consequently have higher testosterone levels and outperform other traders.”

Now, I could harp on the fact that we designed these markets. That they could have been designed in many different ways. And that it seems pretty rotten to have designed a system in which higher testosterone (and the attendant impulsiveness and risky decision-making) would correlate with success. Indeed, a better, more equitable market design would probably quell the performance boost of testosterone.

I could rant about all that. But I won’t. Instead I’ll simply mention that Toxoplasma seems to boost testosterone. Instead of popping into brothels after work, traders could snack on cat shit.


On the topic of market design, Christiaens also includes a lovely description of the interplay between the structure of our economy and the ways that people are compelled to live:

The reason why financial markets are able to determine the viability of lifestyles is because most individuals and governments are indebted and therefore need a ‘creditworthy’ reputation. As the [U.S.] welfare state declined during the 1980s, access to credit was facilitated in order to sustain high consumption, avoid overproduction and stimulate economic growth. For Lazzarato [a referenced writer], debt is not an obligation emerging from a contract between free and equal individuals, but is from the start an unequal power relation where the creditor can assert his force over the debtor. As long as he is indebted, the latter’s rights are virtually suspended. For instance, a debtor’s property rights can be superseded when he fails to reimburse the creditor by evicting him from his home or selling his property at a public auction. State violence is called upon to force non-creditworthy individuals to comply. We [need] not even jump to these extreme cases of state enforcement to see that debt entails a disequilibrium of power. Even the peaceful house loan harbors a concentration of risk on the side of the debtor. When I take a $100,000 loan for a house that, during an economic crisis, loses its value, I still have to pay $100,000 plus interests to the bank. The risk of a housing crash is shifted to the debtor’s side of the bargain. During a financial crisis this risk concentration makes it possible for the creditors to demand a change of lifestyle from the debtor, without the former having to reform themselves.

Several of my prior essays have touched upon the benefits of a guaranteed basic income for all people, but I think this paragraph is a good lead-in for a reprise. As Christiaens implies, there is violence behind all loans – both the violence that led to initial ownership claims and the threat of state violence that compels repayment. Not that I’m against the threat of state violence to compel people to follow rules in general – without this threat we would have anarchy, in which case actual violence tends to predominate over the threat of incipient enforcement.

We all need wealth to live. After all, land holdings are wealth, and at the very least each human needs access to a place to collect fresh water, a place to grow food, a place to stand and sleep. But no one is born wealthy. A fortunate few people receive gifts of wealth soon after birth, but many people foolishly choose to be born to less well-off parents.

The need for wealth curtails the choices people can make. They need to maintain their “creditworthiness,” as in Christiaens’s passage, or their hire-ability. Wealth has to come from somewhere, and, starting from zero, we rely on others choosing to give it to us. Yes, often in recompense for labor, but just because you are willing and able to do a form of work does not mean that anyone will pay you for it.

Unless people are already wealthy enough to survive, they are at the mercy of others choosing to give them things. Employers are not forced to trade money for salaried working hours. And there isn’t wealth simply waiting around to be claimed. It all starts from something – I’d argue that all wealth stems originally from land holdings – but the world’s finite allotment of land was claimed long ago through violence.

A guaranteed basic income would serve to acknowledge the brutal baselessness of those initial land grabs. It is an imperfect solution, I know. It doesn’t make sense to me that everyone’s expenses should rise whenever a new child is born. But a world where people received a guaranteed basic income would be better than one without. The unluckily-born populace would be less compelled to enter into subjugating financial arrangements. We’d have less misery – feeling poor causes a lot of stress. We’d presumably have less crime and drug abuse, too, for similar reasons.

And, of course, less hypocrisy. It’s worth acknowledging that our good fortune comes from somewhere. No one among us created the world.