On ‘The Overstory.’

We delude ourselves into thinking that the pace of life has increased in recent years.  National news is made by the minute as politicians announce their plans via live-televised pronouncement or mass-audience short text message.  Office workers carry powerful computers into their bedrooms, continuing to work until moments before sleep.

But our frenzy doesn’t match the actual pace of the world.  There’s a universe of our own creation zipping by far faster than the reaction time of any organism that relies on voltage waves propagating along its ion channels.  Fortunes are made by shortening the length of fiberoptic cable between supercomputer clusters and the stock exchange, improving response times by fractions of a second.  “Practice makes perfect,” and one reason the new chess and Go algorithms are so much better than human players is that they’ve played lifetimes of games against themselves since their creation.

We can frantically press buttons or swipe our fingers across touch screens, but humans will never keep up with the speed of the algorithms that recommend our entertainment, curate our news, eavesdrop on our conversations, guess at our sexual predilections, and condemn us to prison.

And then there’s the world.  The living things that have been inhabiting our planet for billions of years – the integrated ecosystems they create, the climates they shape.  The natural world continues to march at the same stately pace as ever.  Trees siphon carbon from the air as they grasp for the sun, then fall and rot and cause the Earth itself to grow.  A single tree might live for hundreds or thousands of years.  The forests in which they are enmeshed might develop a personality over millions.

Trees do not have a neural network.  But neither do neurons.  When simple components band together and communicate, the result can be striking.  And, as our own brains clearly show, conscious.  The bees clustering beneath a branch do not seem particularly clever by most of our metrics, but the hive as a whole responds intelligently to external pressures.  Although each individual has no idea what the others are doing, they function as a unit.

Your neurons probably don’t understand what they’re doing.  But they communicate to the others, and that wide network of communication is enough.

Trees talk.  Their roots intertwine – they send chemical communiques through symbiotic networks of fungal mycelia that function like telephone lines.

Trees talk slowly, by our standards.  But we’ve already proven to ourselves that intelligence can operate over many orders of temporal magnitude – silicon-based AI is much speedier than the chemical communiques sent from neuron to neuron within our own brains.  If a forest thought on a timescale of days, months, or years, would we humans even notice?  Once, our concerns were bound up in the minute-by-minute exigencies of hunting for food, finding mates, and trying not to be mauled by lions.  Now, they’re bound up in the exigencies of making money.  Selecting which TV show to stream.  Scoping the latest developments of a congressional race that will determine whether two more years pass without the slightest attempt made to avoid global famine.

In The Overstory, Richard Powers tries to frame this timescale conflict such that we Homo sapiens might finally understand.  Early on, he presents a summary of his own book; fractal-like, this single paragraph encapsulates the entire 500 pages (or rather, thousands of years) of heartbreak.

He still binges on old-school reading.  At night, he pores over mind-bending epics that reveal the true scandals of time and matter.  Sweeping tales of generational spaceship arks.  Domed cities like giant terrariums.  Histories that split and bifurcate into countless parallel quantum worlds.  There’s a story he’s waiting for, long before he comes across it.  When he finds it at last, it stays with him forever, although he’ll never be able to find it again, in any database.  Aliens land on Earth.  They’re little runts, as alien races go.  But they metabolize like there’s no tomorrow.  They zip around like swarms of gnats, too fast to see – so fast that Earth seconds seem to them like years.  To them, humans are nothing but sculptures of immobile meat.  The foreigners try to communicate, but there’s no reply.  Finding no signs of intelligent life, they tuck into the frozen statues and start curing them like so much jerky, for the long ride home.

Several times while reading The Overstory, I felt a flush of shame at the thought of how much I personally consume.  Which means, obviously, that Powers was doing his work well – I should feel ashamed.  We are alive, brilliantly beautifully alive, here on a magnificent, temperate planet.  But most of us spend too little time feeling awe and too much feeling want.  “What if there was more?” repeated so often that we’ve approached a clear precipice of forever having less.

In Fruitful Labor, Mike Madison (whose every word – including the rueful realization that young people today can’t reasonably expect to follow in his footsteps – seems to come from a place of earned wisdom and integrity, a distinct contrast from Thoreau’s Walden, in my opinion) asks us to:

Consider the case of a foolish youth who, at age 21, inherits a fortune that he spends so recklessly that, by the age of 30, the fortune is dissipated and he finds himself destitute.  This is more or less the situation of the human species.  We have inherited great wealth in several forms: historic solar energy, either recent sunlight stored as biomass, or ancient sunlight stored as fossil fuels; the great diversity of plants and animals, organized into robust ecosystems; ancient aquifers; and the earth’s soil, which is the basis for all terrestrial life.  We might mention a fifth form of inherited wealth – antibiotics, that magic against many diseases – which we are rendering ineffective through misuse.  Of these forms of wealth that we are spending so recklessly, fossil fuels are primary, because it is their energy that drives the destruction of the other assets.

What we have purchased with the expenditure of this inheritance is an increase in the human population of the planet far above what the carrying capacity would be without the use of fossil fuels.  This level of population cannot be sustained, and so must decline.  The decline could be gradual and relatively painless, as we see in Japan, where the death rate slightly exceeds the birth rate.  Or the decline could be sudden and catastrophic, with unimaginable grief and misery.

In this context, the value of increased energy efficiency is that it delays the inevitable reckoning; that is, it buys us time.  We could use this time wisely, to decrease our populations in the Japanese style, and to conserve our soil, water, and biological resources.  A slower pace of climate change could allow biological and ecological adaptations.  At the same time we could develop and enhance our uses of geothermal, nuclear, and solar energies, and change our habits to be less materialistic.  A darker option is to use the advantages of increased energy efficiency to increase the human population even further, ensuring increasing planetary poverty and an even more grievous demise.  History does not inspire optimism; nonetheless, the ethical imperative remains to farm as efficiently as one is able.

The tragic side of this situation is not so much the fate of the humans; we are a flawed species unable to make good use of the wisdom available to us, and we have earned our unhappy destiny by our foolishness.  It is the other species on the planet, whose destinies are tied to ours, that suffer a tragic outcome.

Any individual among us could protest that “It’s not my fault!”  The Koch brothers did not invent the internal combustion engine – for all their efforts to confine us to a track toward destitution and demise, they didn’t set us off in that direction.  And it’s not as though contemporary humans are unique in reshaping our environment into an inhospitable place, pushing ourselves toward extinction.

Heck, you could argue that trees brought this upon themselves.  Plants caused climate change long before there was a glimmer of a chance that animals like us might ever exist.  The atmosphere of the Earth was like a gas chamber, stifling hot and full of carbon dioxide.  But then plants grew and filled the air with oxygen.  Animals could evolve … leading one day to our own species, which now kills most types of plants to clear space for a select few monocultures.

As Homo sapiens spread across the globe, we rapidly caused the extinction of nearly all mega-fauna on every continent we reached.  On Easter Island, humans caused their own demise by killing every tree – in Collapse, Jared Diamond writes that our species’ inability to notice long-term, gradual change made the environmental devastation possible (indeed, the same phenomenon explains why people aren’t as upset as they should be about climate change today):

We unconsciously imagine a sudden change: one year, the island still covered with a forest of tall palm trees being used to produce wine, fruit, and timber to transport and erect statues; the next year, just a single tree left, which an islander proceeds to fell in an act of incredibly self-damaging stupidity.

Much more likely, though, the changes in forest cover from year to year would have been almost undetectable: yes, this year we cut down a few trees over there, but saplings are starting to grow back again here on this abandoned garden site.  Only the oldest islanders, thinking back to their childhoods decades earlier, could have recognized a difference. 

Their children could no more have comprehended their parents’ tales of a tall forest than my 17-year-old sons today can comprehend my wife’s and my tales of what Los Angeles used to be like 40 years ago.  Gradually, Easter Island’s trees became fewer, smaller, and less important.  At the time that the last fruit-bearing adult palm tree was cut, the species had long ago ceased to be of any economic significance.  That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. 

No one would have noticed the falling of the last little palm sapling.

Throughout The Overstory, Powers summarizes research demonstrating all the ways that a forest is different from – more than – a collection of trees.  It’s like comparing a functioning brain with neuronal cells grown in a petri dish.  But we have cut down nearly all our world’s forests.  We can console ourselves that we still allow some trees to grow – timber crops to ensure that we’ll still have lumber for all those homes we’re building – but we’re close to losing forests without ever knowing quite what they are.

Powers is furious, and wants you to change your life.

“You’re a psychologist,” Mimi says to the recruit.  “How do we convince people that we’re right?”

The newest Cascadian [a group of environmentalists-cum-ecoterrorists / freedom fighters] takes the bait.  “The best arguments in the world won’t change a person’s mind.  The only thing that can do that is a good story.”

On artificial intelligence and solitary confinement.

In Philosophical Investigations (translated by G. E. M. Anscombe), Ludwig Wittgenstein argues that something strange occurs when we learn a language.  As an example, he cites the problems that could arise when you point at something and describe what you see:

The definition of the number two, “That is called ‘two’ ” – pointing to two nuts – is perfectly exact.  But how can two be defined like that?  The person one gives the definition to doesn’t know what one wants to call “two”; he will suppose that “two” is the name given to this group of nuts!

I laughed aloud when I read this statement.  I borrowed Philosophical Investigations a few months after the birth of our second child, and I had spent most of his first day pointing at various objects in the hospital maternity ward and saying to him, “This is red.”  “This is red.”

“This is red.”

Of course, the little guy didn’t understand language yet, so he probably just thought, the warm carry-me object is babbling again.

IMG_5919
Red, you say?

Over time, though, this is how humans learn.  Wittgenstein’s mistake here is to compress the experience of learning a language into a single interaction (philosophers have a bad habit of forgetting about the passage of time – a similar fallacy explains Zeno’s paradox).  Instead of pointing only at two nuts, a parent will point to two blocks – “This is two!” and two pillows – “See the pillows?  There are two!” – and so on.

As a child begins to speak, it becomes even easier to learn – the kid can ask “Is this two?”, which is an incredibly powerful tool for people sufficiently comfortable making mistakes that they can dodge confirmation bias.

(When we read the children’s story “In a Dark Dark Room,” I tried to add levity to the ending by making a silly blulululu sound to accompany the ghost, shown to the left of the door on this cover.  Then our youngest began pointing to other ghost-like things and asking, “blulululu?”  Is that skeleton a ghost?  What about this possum?)

When people first programmed computers, they provided definitions for everything.  A ghost is an object with a rounded head that has a face and looks very pale.  This was a very arduous process – my definition of a ghost, for instance, leaves out a lot of important features.  A rigorous definition might require pages of text.

Now, programmers are letting computers learn the same way we do.  To teach a computer about ghosts, we provide it with many pictures and say, “Each of these pictures has a ghost.”  Just like a child, the computer decides for itself what features qualify something for ghost-hood.
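A minimal sketch of that learn-from-labeled-examples idea (the numeric “features” and the nearest-centroid rule here are invented stand-ins, far simpler than any real vision system):

```python
# Toy learning-by-example: average the labeled examples into one
# "centroid" per label, then classify new things by whichever
# centroid they sit closest to.

def train(examples):
    """examples: list of (features, label); returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [total / counts[label] for total in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    def distance(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=distance)

# features: (paleness, roundness) -- invented stand-ins for whatever
# a real algorithm would decide to attend to on its own
examples = [
    ((0.9, 0.8), "ghost"), ((0.95, 0.7), "ghost"),
    ((0.2, 0.3), "possum"), ((0.1, 0.4), "possum"),
]
centroids = train(examples)
print(classify(centroids, (0.85, 0.75)))   # ghost
```

No one hands the program a definition of ghost-hood; the boundary between labels falls out of the examples themselves.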

In the beginning, this process was inscrutable.  A trained algorithm could say “This is a ghost!”, but it couldn’t explain why it thought so.

From Philosophical Investigations: 

And what does ‘pointing to the shape’, ‘pointing to the color’ consist in?  Point to a piece of paper.  – And now point to its shape – now to its color – now to its number (that sounds queer). – How did you do it?  – You will say that you ‘meant’ a different thing each time you pointed.  And if I ask how that is done, you will say you concentrated your attention on the color, the shape, etc.  But I ask again: how is that done?

After this passage, Wittgenstein speculates on what might be going through a person’s head when pointing at different features of an object.  A team at Google working on automated image analysis asked the same question of their algorithm, and made an output for the algorithm to show what it did when it “concentrated its attention.” 

Here’s a beautiful image from a recent New York Times article about the project, “Google Researchers Are Learning How Machines Learn.”  When the algorithm is specifically instructed to “point to its shape,” it generates a bizarre image of an upward-facing fish flanked by human eyes (shown bottom center, just below the purple rectangle).  That is what the algorithm is thinking of when it “concentrates its attention” on the vase’s shape.

new york times image.jpg

At this point, we humans could quibble.  We might disagree that the fish face really represents the platonic ideal of a vase.  But at least we know what the algorithm is basing its decision on.

Usually, that’s not the case.  After all, it took a lot of work for Google’s team to make their algorithm spit out images showing what it was thinking about.  With most self-trained neural networks, we know only its success rate – even the designers will have no idea why or how it works.

Which can lead to some stunningly bizarre failures.

It’s possible to create images that most humans recognize as one thing, and that an image-analysis algorithm recognizes as something else.  This is a rather scary opportunity for terrorism in a world of self-driving cars; street signs could be defaced in such a way that most human onlookers would find the graffiti unremarkable, but an autonomous car would interpret it in a totally new way.
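A toy illustration of how such an attack works, assuming nothing more than a linear classifier (real image attacks perturb pixels against a deep network, but the principle is the same):

```python
# The classifier calls an input positive when w . x + b > 0.  Nudging
# every feature slightly in the direction sign(w) raises the score --
# flipping the decision even though the input barely changes.

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else -1.0

def adversarial(w, b, x, eps):
    """Nudge each feature by at most eps toward a higher score."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

# invented weights and input, standing in for a trained model
w, b = [0.4, -0.3, 0.2], -0.05
x = [0.1, 0.3, 0.1]              # classified negative
x_adv = adversarial(w, b, x, eps=0.2)

print(score(w, b, x) > 0)        # False
print(score(w, b, x_adv) > 0)    # True, though x barely changed
```

To a human inspecting the numbers, x and x_adv look nearly identical; to the classifier, they belong to different worlds.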

In the world of criminal justice, inscrutable algorithms are already used to determine where police officers should patrol.  The initial hope was that this system would be less biased – except that the algorithm was trained on data that came from years of racially-motivated enforcement.  Minorities are still more likely to be apprehended for equivalent infractions.

And a new artificial intelligence algorithm could be used to determine whether a crime was “gang related.”  The consequences of error can be terrible, here: in California, prisoners could be shunted to solitary for decades if they were suspected of gang affiliation.  Ambiguous photographs on somebody’s social media site were enough to subject a person to decades of torture.

When an algorithm thinks that the shape of a vase is a fish flanked by human eyes, it’s funny.  But it’s a little less comedic when an algorithm’s mistake ruins somebody’s life – if an incident is designated as a “gang-related crime”, prison sentences can be egregiously long, or the designation can send someone to solitary for long enough to cause “anxiety, depression, and hallucinations until their personality is completely destroyed.”

Here’s a poem I received in the mail recently:

LOCKDOWN

by Pouncho

For 30 days and 30 nights

I stare at four walls with hate written

         over them.

Falling to my knees from the body blows

         of words.

It damages the mind.

I haven’t had no sleep. 

How can you stop mental blows, torture,

         and names –

         They spread.

I just wanted to scream:

         Why?

For 30 days and 30 nights

My mind was in isolation.

On empathizing with machines.

When I turn on my computer, I don’t consider what my computer wants.  It seems relatively empty of desire.  I click on an icon to open a text document and begin to type: letters appear on the screen.

If anything, the computer seems completely servile.  It wants to be of service!  I type, and it rearranges little magnets to mirror my desires.

Gps-304842.svg

When our family travels and turns on the GPS, though, we discuss the system’s wants more readily.

“It wants you to turn left here,” K says.

“Pfft,” I say.  “That road looks bland.”  I keep driving straight and the machine starts flashing make the next available u-turn until eventually it gives in and calculates a new route to accommodate my whim.

The GPS wants our car to travel along the fastest available route.  I want to look at pretty leaves and avoid those hilly median-less highways where death seems imminent at every crest.  Sometimes the machine’s desires and mine align, sometimes they do not.

The GPS is relatively powerless, though.  It can only accomplish its goals by persuading me to follow its advice.  If it says turn left and I feel wary, we go straight.

Other machines get their way more often.  For instance, the program that chooses what to display on people’s Facebook pages.  This program wants to make money.  To do this, it must choose which advertisers receive screen time, and curate an audience that will look at those screens often.  It wants the people looking at advertisements to enjoy their experience.

Luckily for this program, it receives a huge amount of feedback on how well it’s doing.  When it makes a mistake, it will realize promptly and correct itself.  For instance, it gathers data on how much time the target audience spends looking at the site.  It knows how often advertisements are clicked on by someone curious to learn more about whatever is being shilled.  It knows how often those clicks lead to sales for the companies giving it money (which will make those companies more eager to give it money in the future).
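That feedback loop can be sketched as a simple explore-and-exploit program (epsilon-greedy is my assumption here, a stand-in for whatever Facebook actually runs; the items and click rates are invented):

```python
# The program tries items, observes clicks, and drifts toward whatever
# earns the most engagement.  Mistakes are self-correcting: a low
# click rate is itself the feedback that redirects the screen time.

import random

def run(true_click_rates, steps=10000, eps=0.1, seed=0):
    rng = random.Random(seed)
    clicks = {item: 0 for item in true_click_rates}
    shows = {item: 0 for item in true_click_rates}
    for _ in range(steps):
        if rng.random() < eps:                      # explore: try anything
            item = rng.choice(list(true_click_rates))
        else:                                       # exploit: best estimate so far
            item = max(clicks, key=lambda i: clicks[i] / max(shows[i], 1))
        shows[item] += 1
        if rng.random() < true_click_rates[item]:   # feedback: a click
            clicks[item] += 1
    return shows

shows = run({"nuanced essay": 0.02, "outrage bait": 0.08})
print(max(shows, key=shows.get))   # the higher click-rate item wins the screen
```

Nothing in the loop asks whether the winning item is good for anyone; engagement is the only yardstick it can see.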

Of course, this program’s desire for money doesn’t always coincide with my desires.  I want to live in a country with a broadly informed citizenry.  I want people to engage with nuanced political and philosophical discourse.  I want people to spend less time staring at their telephones and more time engaging with the world around them.  I want people to spend less money.

But we, as a people, have given this program more power than a GPS.  If you look at Facebook, it controls what you see – and few people seem upset enough to stop looking at Facebook.

With enough power, does a machine become a moral actor?  The program choosing what to display on Facebook doesn’t seem to consider the ethics of its decisions … but should it?

From Burt Helm’s recent New York Times Magazine article, “How Facebook’s Oracular Algorithm Determines the Fates of Start-Ups”:

Bad human actors don’t pose the only problem; a machine-learning algorithm, left unchecked, can misbehave and compound inequality on its own, no help from humans needed.  The same mechanism that decides that 30-something women who like yoga disproportionately buy Lululemon tights – and shows them ads for more yoga wear – would also show more junk-food ads to impoverished populations rife with diabetes and obesity.

If a machine designed to want money becomes sufficiently powerful, it will do things that we humans find unpleasant.  (This isn’t solely a problem with machines – consider the ethical decisions of the Koch brothers, for instance – but contemporary machines tend to be much more single-minded than any human.)

I would argue that even if a programmer tried to include ethical precepts into a machine’s goals, problems would arise.  If a sufficiently powerful machine had the mandate “end human suffering,” for instance, it might decide to simultaneously snuff all Homo sapiens from the planet.

Which is a problem that game designer Frank Lantz wanted to help us understand.

One virtue of video games over other art forms is how well games can create empathy.  It’s easy to read about Guantanamo prison guards torturing inmates and think, I would never do that.  The game Grand Theft Auto 5 does something more subtle.  It asks players – after they have sunk a significant time investment into the game – to torture.  You, the player, become like a prison guard, having put years of your life toward a career.  You’re asked to do something immoral.  Will you do it?

grand theft auto

Most players do.  Put into that position, we lapse.

In Frank Lantz’s game, Paperclips, players are invited to empathize with a machine.  Just like the program choosing what to display on people’s Facebook pages, players are given several controls to tweak in order to maximize a resource.  That program wanted money; you, in the game, want paperclips.  Click a button to cut some wire and, voila, you’ve made one!

But what if there were more?

Paperclip-01_(xndr)

A machine designed to make as many paperclips as possible (for which it needs money, which it gets by selling paperclips) would want more.  While playing the game (surprisingly compelling given that it’s a text-only window filled with flickering numbers), we become that machine.  And we slip into folly.  Oops.  Goodbye, Earth.
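The machine’s logic fits in a few lines (the numbers, and a market that buys every clip, are inventions of mine, not Lantz’s actual game):

```python
# Convert wire to clips, clips to money, money to more wire -- and
# stop only when there is nothing left on Earth to convert.

def maximize_paperclips(earth_wire=10_000, price=0.25, spool_cost=20.0,
                        spool_length=100):
    """Start with one free spool; loop until no wire remains anywhere."""
    wire, clips, money = spool_length, 0, 0.0
    while True:
        if wire > 0:
            wire -= 1
            clips += 1
            money += price               # assume every clip sells instantly
        elif money >= spool_cost and earth_wire > 0:
            bought = min(spool_length, earth_wire)
            earth_wire -= bought
            wire += bought
            money -= spool_cost
        else:
            break                        # no wire, and no way to get more
    return clips, earth_wire

clips, remaining = maximize_paperclips()
print(clips, remaining)                  # 10100 0 -- every inch became clips
```

The loop never asks whether it should stop; "more paperclips" is the only term in its objective, so the planet's wire runs to zero.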

There are dangers inherent in giving too much power to anyone or anything with such clearly articulated wants.  A machine might destroy us.  But: we would probably do it, too.

On perception and learning.

Cuddly.

Fearful.

Monstrous.

Peering with the unwavering focus of a watchful overlord.

A cat could seem to be many different things, and Brendan Wenzel’s recent picture book They All Saw a Cat conveys these vagaries of perception beautifully. Though we share the world, we all see and hear and taste it differently. Each creature’s mind filters a torrential influx of information into manageable experience; we all filter the world differently.

They All Saw a Cat ends with a composite image. We see the various components that were focused on by each of the other animals, amalgamated into something approaching “cat-ness.” A human child noticed the cat’s soft fur, a mouse noticed its sharp claws, a fox noticed its swift speed, a bird noticed that it can’t fly.

All these properties are essential descriptors, but so much is blurred away by our minds. When I look at a domesticated cat, I tend to forget about the sharp claws and teeth. I certainly don’t remark on its lack of flight – being landbound myself, this seems perfectly ordinary to me. To be ensnared by gravity only seems strange from the perspective of a bird.

There is another way of developing the concept of “cat-ness,” though. Instead of compiling many creatures’ perceptions of a single cat, we could consider a single perceptive entity’s response to many specimens. How, for instance, do our brains learn to recognize cats?

When a friend (who teaches upper-level philosophy) and I were talking about Ludwig Wittgenstein’s Philosophical Investigations, I mentioned that I felt many of the aims of that book could be accomplished with a description of principal component analysis paired with Gideon Lewis-Kraus’s lovely New York Times Magazine article on Google Translate.

My friend looked at me with a mix of puzzlement and pity and said, “No.” Then added, as regards Philosophical Investigations, “You read it too fast.”

One of Wittgenstein’s aims is to show how humans can learn to use language… which is complicated by the fact that, in my friend’s words, “Any group of objects will share more than one commonality.” He posits that no matter how many red objects you point to, they’ll always share properties other than red-ness in common.

Or cats… when you’re teaching a child how to speak and point out many cats, will they have properties other than cat-ness in common?

In some ways, I agree. After all, I think the boundaries between species are porous. I don’t think there is a set of rules that could be used to determine whether a creature qualifies for personhood, so it’d be a bit silly if I also claimed that cat-ness could be clearly defined.

But when I point and say “That’s a cat!”, chances are that you’ll think so too. Even if no one had ever taught us what cats are, most people in the United States have seen enough of them to think “All those furry, four-legged, swivel-tailed, pointy-eared, pouncing things were probably the same type of creature!”

Even a computer can pick out these commonalities. When we learn about the world, we have a huge quantity of sensory data to draw upon – cats make those noises, they look like that when they find a sunny patch of grass to lie in, they look like that when they don’t want me to pet them – but a computer can learn to identify cat-ness using nothing more than grainy stills from Youtube.

Quoc Le et al. fed a few million images from Youtube videos to a computer algorithm that was searching for commonalities between the pictures. Even though the algorithm was given no hints as to the nature of the videos, it learned that many shared an emphasis on oblong shapes with triangles on top… cat faces. Indeed, when Le et al. made a visualization of the patterns that were causing their algorithm to cluster these particular videos together, we can recognize a cat in that blur of pixels.

The computer learns in a way vaguely analogous to the formation of social cliques in a middle school cafeteria. Each kid is a beautiful and unique snowflake, sure, but there are certain properties that cause them to cluster together: the sporty ones, the bookish ones, the D&D kids. For a neural network, each individual is only distinguished by voting “yes” or “no,” but you can cluster the individuals who tend to vote “yes” at the same time. For a small grid of black and white pixels, some individuals will be assigned to the pixels and vote “yes” only when their pixels are white… but others will watch the votes of those first responders and vote “yes” if they see a long line of “yes” votes in the top quadrants, perhaps… and others could watch those votes, allowing for layers upon layers of complexity in analysis.
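That cafeteria of voters can be made literal in a few lines of code, a toy far smaller than Le et al.’s network: first-layer voters each watch a single pixel of a 4x4 black-and-white grid, and a second-layer voter says “yes” only when the whole top row lit up:

```python
# Layered yes/no voters, as described above.  Each voter knows nothing
# beyond its own tiny input; recognition emerges from watching the
# votes of the layer below.

def pixel_voters(grid):
    # one voter per pixel: "yes" (True) only when its pixel is white (1)
    return [[pixel == 1 for pixel in row] for row in grid]

def top_row_voter(votes):
    # watches the first responders: "yes" if the entire top row voted yes
    return all(votes[0])

def recognize(grid):
    return top_row_voter(pixel_voters(grid))

bright_top = [[1, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]]
dark_top   = [[1, 0, 1, 1],
              [1, 1, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]]
print(recognize(bright_top), recognize(dark_top))  # True False
```

Stack enough of these layers, each watching the votes of the one before, and the final voter can respond to patterns no individual pixel-watcher could name.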

And I should mention that I feel indebted to Liu Cixin’s sci-fi novel The Three-Body Problem for thinking to humanize a computer algorithm this way. Liu includes a lovely description of a human motherboard, with triads of trained soldiers hoisting red or green flags forming each logic gate.

In the end, the algorithm developed by Le et al. clustered only 75% of the frames from Youtube cat videos together – it could recognize many of these as being somehow similar, but it was worse at identifying cat-ness than the average human child. But it’s pretty easy to realize why: after all, Le et al. titled their paper “Building high-level features using large scale unsupervised learning.”

Proceedings of the International Conference on Machine Learning 2012
You might have to squint, but there’s a cat here. Or so says their algorithm.

When Wittgenstein writes about someone watching builders – one person calls out “Slab!”, the other brings a large flat rock – he is also considering unsupervised learning. And so it is easy for Wittgenstein to imagine that the watcher, even after exclaiming “Now I’ve got it!”, could be stymied by a situation that went beyond the training.

Many human cultures have utilized unsupervised learning as a major component of childrearing – kids are expected to watch their elders and puzzle out on their own how to do everything in life – but this potential inflexibility that Wittgenstein alludes to underlies David Lancy’s advice in The Anthropology of Childhood that children will fare best in our modern world when they have someone guiding their education and development.

Unsupervised learning may be sufficient to prepare children for life in an agrarian village. Unsupervised learning is sufficient for chimpanzees learning how to crack nuts. And unsupervised learning is sufficient for a computer to develop an idea about what cats are.

But the best human learning employs the scientific method – purposefully seeking out “no.”

I assume most children reflexively follow the scientific method – my daughter started shortly after her first birthday. I was teaching her about animals, and we started with dogs. At first, she pointed primarily to creatures that looked like her Uncle Max. Big, brown, four-legged, slobbery.

IMG_5319.JPG
Good dog.

Eventually she started pointing to creatures that looked slightly different: white dogs, black dogs, small dogs, quiet dogs. And then the scientific method kicked in.

She’d point to a non-dog, emphatically claiming it to be a dog as well. And then I’d explain why her choice wasn’t a dog. What features cause an object to be excluded from the set of correct answers?

Eventually she caught on.

Many adults, sadly, are worse at this style of thinking than children. As we grow, it becomes more pressing to seem competent. We adults want our guesses to be right – we want to hear yes all the time – which makes it harder to learn.

The New York Times recently presented a clever demonstration of this. They showed a series of numbers that follow a rule, let readers type in new numbers to see if their guesses also followed the rule, and asked for readers to describe what the rule was.

A scientist would approach this type of puzzle by guessing a rule and then plugging in numbers that don’t follow it – nothing is ever really proven in science, but we validate theories by designing experiments that should tell us “no” if our theory is wrong. Only theories that are “falsifiable” fall under the purview of science. And the best fields of science devote considerable resources to seeking out opportunities to prove ourselves wrong.

But many adults, wanting to seem smart all the time, fear mistakes. When that New York Times puzzle was made public, 80% of readers proposed a rule without ever hearing that a set of numbers didn’t follow it.
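The puzzle’s lesson fits in a few lines of code (the hidden rule and the reader’s guess are my stand-ins for the Times’s actual numbers):

```python
# A reader who only tests sequences their guess predicts will PASS
# can never discover the guess is wrong; only a deliberately
# rule-breaking test reveals the truth.

def hidden_rule(seq):           # the real rule: strictly increasing
    return all(a < b for a, b in zip(seq, seq[1:]))

def guessed_rule(seq):          # the reader's guess: each term doubles
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

confirming = [2, 4, 8]          # fits the guess -- and the hidden rule
print(hidden_rule(confirming), guessed_rule(confirming))   # True True

falsifying = [1, 2, 3]          # breaks the guess on purpose
print(hidden_rule(falsifying), guessed_rule(falsifying))   # True False
```

The “yes” on a sequence the guess rejects is the only result that shows the guess was too narrow; confirming tests, no matter how many, never would have.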

Wittgenstein’s watcher can’t really learn what “Slab!” means until perversely hauling over some other type of rock and being told, “no.”

We adults can’t fix the world until we learn from children that it’s okay to look ignorant sometimes. It’s okay to be wrong – just say “sorry” and “I’ll try to do better next time.”

Otherwise we’re stuck digging in our heels and arguing for things we should know to be ridiculous.

It doesn’t hurt so bad. Watch: nope, that one’s not a cat.

16785014164_0b8a71b191_z
Photo by John Mason on Flickr.