On artificial intelligence and solitary confinement.

In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’” – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

(Photo caption: “Red, you say?”)

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on the book’s cover.  Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, leaves out a lot of important features.  A rigorous definition might require pages of text.

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.
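(For the curious, here is roughly what that looks like in code.  This is a minimal sketch in PyTorch, with random stand-in pixels instead of real photographs and a made-up “ghost / not ghost” labeling – not anyone’s production system:)

```python
# A toy version of "each of these pictures has a ghost": show the network
# labeled examples and let it decide for itself which features matter.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 200 fake 32x32 grayscale "pictures"; the first half are labeled ghost (1)
images = torch.rand(200, 1, 32, 32)
labels = torch.cat([torch.ones(100, dtype=torch.long),
                    torch.zeros(100, dtype=torch.long)])

# A tiny convolutional network; nobody hand-writes a definition of "ghost" here
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # "this one is a ghost, this one isn't"
    loss.backward()                        # the network revises its own notion of ghost-hood
    optimizer.step()
```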

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and built a way for it to show what it was doing when it “concentrated its attention.”

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.

(Image from The New York Times article.)

At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.
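(The rough idea behind those pictures, as I understand it: start from noise and nudge the pixels, by gradient ascent, until the network’s score for some class is as high as possible.  Here’s a bare-bones sketch in PyTorch, using torchvision’s pretrained ResNet-18 as a stand-in classifier rather than Google’s actual model, and leaving out the regularization tricks that make the published images look good:)

```python
# Feature visualization, stripped to its core: optimize the *input image*
# so that the network's score for one class keeps going up.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the image gets optimized, not the network

VASE = 883  # ImageNet class index for "vase" (an assumption worth double-checking)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, VASE]
    (-score).backward()              # maximize the score by minimizing its negative
    optimizer.step()
    image.data.clamp_(0, 1)          # keep the pixels in a displayable range

# `image` is now a crude picture of what the network "concentrates its
# attention" on when it calls something a vase.
```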

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only their success rates – even the designers have no idea why or how they work.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.
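(The best-known version of this trick is the fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the classifier’s error.  A minimal sketch, where model, image, and true_label are placeholders for whatever classifier and photo you are probing:)

```python
# Fast gradient sign method: a gradient-guided nudge to the pixels that can
# flip a classifier's answer while the picture looks unchanged to a person.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the model right now?
    loss.backward()
    # step each pixel by +/- epsilon, in the direction that increases the loss
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```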

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang-related.”  The consequences of error here can be terrible: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated a “gang-related crime,” prison sentences can be egregiously long, or someone can be sent to solitary for long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.

On the history of time travel.

From the beginning, artists understood that time travel either denies humans free will or else creates absurd paradoxes.

This conundrum arises whenever an object or information is allowed to travel backward through time.  Traveling forward is perfectly logical – after all, it’s little different from a big sleep, or being shunted into an isolation cell.  The world moves on but you do not… except for the steady depredations of age and the neurological damage that solitary confinement inevitably causes.

A lurch forward is no big deal.

But backward?

Consider one of the earliest time travel stories, the myth of Oedipus.  King Laius receives a prophecy foretelling doom.  He strives to create a paradox – using information from the future to prevent that future, in this case by offing his son – but fails.  This story falls into the “time travel denies humans free will” category.  Try as they might, the characters cannot help but create their tragic future.

James Gleick puts this succinctly in his recent New York Review essay discussing Denis Villeneuve’s Arrival and Ted Chiang’s “Story of Your Life.”  Gleick posits the existence of a “Book of Ages,” a tome describing every moment of the past, present, and future.  Could a reader flip to a page describing the current moment and choose to evade the dictates of the book?  In Gleick’s words,

Can you do that?  Logically, no.  If you accept the premise, the story is unchanging.  Knowledge of the future trumps free will.

(I’m typing this essay on January 18th, and can’t help but note how crappy it is that the final verb in that sentence looks wrong with a lowercase “t.”  Sorry, ‘merica.  I hope you get better soon.)

Gleick is the author of Time Travel: A History, in which he presents a broad survey of the various tales (primarily literature and film) that feature time travel.  In each tale Gleick discusses, time travel either saps free will (à la Oedipus) or else introduces inexplicable paradox (Marty slowly fading in Back to the Future as his parents’ relationship becomes less likely; scraps of the Terminator being used to invent the Terminator; a time-traveling escapee melting into a haggard cripple as his younger self is tortured in Looper).

It’s not just artists who have fun worrying over these puzzles; over the years, more and more physicists and philosophers have gotten into the act.  Sadly, their ideas are often less well-reasoned than the filmmakers’.  Time Travel includes a long quotation from philosopher John Hospers (“We’re still in a textbook about analytical philosophy, but you can almost hear the author shouting,” Gleick interjects), in which Hospers argues that you can’t travel back in time to help build the pyramids because you already know that they were built by someone else, followed by this brief summary:

Admit it: you didn’t help build the pyramids.  That’s a fact, but is it a logical fact?  Not every logician finds these syllogisms self-evident.  Some things cannot be proved or disproved by logic.

Gleick uses this moment to introduce Gödel’s Incompleteness Theorem (the idea that any formal system rich enough to do arithmetic contains true statements it cannot prove), whose author, Kurt Gödel, also speculated about time travel.  From Gleick: “If the attention paid to CTCs [closed timelike curves] is disproportionate to their importance or plausibility, Stephen Hawking knows why: ‘Scientists working in this field have to disguise their real interest by using technical terms like “closed timelike curves” that are code for time travel.’  And time travel is sexy.  Even for a pathologically shy, borderline paranoid Austrian logician.”

Alternatively, Hospers’ strange pyramid argument could’ve been followed by a discussion of Timecrimes [http://www.imdb.com/title/tt0480669/], the one paradox-less film in which a character travels backward through time but still has free will (at least, as much free will as you or I have).

But James Gleick’s Time Travel: A History doesn’t mention Timecrimes.  Obviously there are so many stories incorporating time travel that it’d be impossible to discuss them all, but leaving out Timecrimes is a tragedy!  It’s the best time travel movie of the past and present (I can’t figure out how to make any torrent client download the time travel movies of the future).

Timecrimes is great.  It provides the best analysis of free will inside a sci-fi world of time travel.  But it’s not just for sci-fi nerds – the same ideas help us understand strange-seeming human activities like temporally-incongruous prayer (e.g., praying for the safety of a friend after you’ve already seen on TV that several unidentified people died when her apartment building caught fire.  By the time you kneel, she is already either dead or not.  And yet, we pray).

Timecrimes progresses through three distinct movements.  In the first, the protagonist believes himself to be in a world of time travel as paradox: a physicist has convinced him that with any deviation from the known timeline he might cause himself to cease to exist.  And so he mimics as best he can events that he remembers.  A masked man chased him with a knife, and so he chases his past self.

In the second movement, the protagonist realizes that the physicist was wrong.  There are no paradoxes, but he seems powerless to change anything.  He watched his wife fall to her death at the end of his first jaunt through time, so he is striving to alter the future… but his every effort fails.  Perhaps he has no free will, no real agency.  After all, he already remembers her death.  His memory exists in the form of a specific pattern of neural connections in his brain, and those neurons will not spontaneously rearrange.  His memory is real.  The future seems set.

But then there is a third movement: this is the reason Timecrimes surpasses all other time travel tales.  The protagonist regains a sense of free will within the constraints imposed by physics.

Yes, he saw his wife die.  How can he make his memory wrong?

Similarly, you’ve already learned that the Egyptians built the pyramids.  I’m pretty confident that none of the history books you’ve perused included a smiling picture of you with the caption “… but they couldn’t have done it without her.”  And yet, if you were to travel back to Egypt, would it really be impossible to help in such a way that no history books (which will be written in the future, but which your past self has already seen) ever report your contributions?

Indeed, an analogous puzzle is set before us every time we act.  Our brains are nothing more than gooey messes of molecules, constrained by the same laws of physics as everything else, so we shouldn’t have free will.  And yet: can we still act as though we do?

We must.  It’s either that or sit around waiting to die.

Because the universe sprang senselessly into existence, birthed by chance fluctuations during the long march of eternity… and then we appeared, billions of years later, through the valueless vagaries of evolution… our actions shouldn’t matter.  But: can we pretend they do?

I try.  We have to try.