On perception and learning.

Cuddly.

Fearful.

Monstrous.

Peering with the unwavering focus of a watchful overlord.

A cat could seem to be many different things, and Brendan Wenzel’s recent picture book They All Saw a Cat conveys these vagaries of perception beautifully. Though we share the world, we each see and hear and taste it differently: every creature’s mind filters a torrential influx of information into manageable experience, and no two minds filter it quite the same way.

They All Saw a Cat ends with a composite image. We see the features that each of the other animals focused on, amalgamated into something approaching “cat-ness.” A human child noticed the cat’s soft fur, a mouse noticed its sharp claws, a fox noticed its speed, a bird noticed that it can’t fly.

All these properties are essential descriptors, but so much is blurred away by our minds. When I look at a domesticated cat, I tend to forget about the sharp claws and teeth. I certainly don’t remark on its lack of flight – being landbound myself, this seems perfectly ordinary to me. To be ensnared by gravity only seems strange from the perspective of a bird.

There is another way of developing the concept of “cat-ness,” though. Instead of compiling many creatures’ perceptions of a single cat, we could consider a single perceptive entity’s response to many specimens. How, for instance, do our brains learn to recognize cats?

When a friend (who teaches upper-level philosophy) and I were talking about Ludwig Wittgenstein’s Philosophical Investigations, I mentioned that I felt many of the aims of that book could be accomplished with a description of principal component analysis paired with Gideon Lewis-Kraus’s lovely New York Times Magazine article on Google Translate.

My friend looked at me with a mix of puzzlement and pity and said, “No.” Then added, as regards Philosophical Investigations, “You read it too fast.”
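For what it’s worth, here is the sort of thing I had in mind. Principal component analysis takes a pile of specimens, each described by a handful of measurements, and pulls out shared structure from the examples themselves rather than from a definition. The sketch below is only a toy illustration: the “cats,” the measurements, and the numbers are all invented.

```python
# A toy sketch of principal component analysis: the "cats" and their
# measurements are invented, and this is not a serious model of concept
# learning -- just a picture of commonality emerging from examples.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is one cat, described by four crude measurements:
# [body length (cm), tail length (cm), ear pointiness, purr volume (dB)]
cats = rng.normal(loc=[46.0, 30.0, 0.8, 55.0],
                  scale=[5.0, 4.0, 0.1, 10.0],
                  size=(200, 4))

# Center the measurements, then diagonalize their covariance matrix.
centered = cats - cats.mean(axis=0)
covariance = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(covariance)

# The mean is a crude prototype (the "average cat"), and the eigenvector with
# the largest eigenvalue is the axis along which the specimens differ most --
# structure that nobody wrote down in advance.
prototype = cats.mean(axis=0)
main_axis = eigenvectors[:, np.argmax(eigenvalues)]
print(prototype.round(1))
print(main_axis.round(2))
```

No single measurement is “cat-ness,” but the prototype and the leading components describe how the specimens hang together – which is all I meant to claim.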

One of Wittgenstein’s aims is to show how humans can learn to use language… which is complicated by the fact that, in my friend’s words, “Any group of objects will share more than one commonality.” He posits that no matter how many red objects you point to, they’ll always share properties other than red-ness in common.

Or cats… when you’re teaching a child how to speak and you point out many cats, won’t those cats always have properties other than cat-ness in common?

In some ways, I agree. After all, I think the boundaries between species are porous. I don’t think there is a set of rules that could be used to determine whether a creature qualifies for personhood, so it’d be a bit silly if I also claimed that cat-ness could be clearly defined.

But when I point and say “That’s a cat!”, chances are that you’ll think so too. Even if no one had ever taught us what cats are, most people in the United States have seen enough of them to think “All those furry, four-legged, swivel-tailed, pointy-eared, pouncing things were probably the same type of creature!”

Even a computer can pick out these commonalities. When we learn about the world, we have a huge quantity of sensory data to draw upon – cats make those noises, they look like that when they find a sunny patch of grass to lie in, they look like that when they don’t want me to pet them – but a computer can learn to identify cat-ness using nothing more than grainy stills from YouTube.

Quoc Le et al. fed a few million images from YouTube videos to a computer algorithm that was searching for commonalities between the pictures. Even though the algorithm was given no hints as to the nature of the videos, it learned that many shared an emphasis on oblong shapes with triangles on top… cat faces. Indeed, in the visualization Le et al. made of the patterns that caused their algorithm to cluster these particular videos together, you can recognize a cat in the blur of pixels.
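Le et al.’s actual system was a huge, many-layered neural network trained on millions of frames, but the flavor of unsupervised learning can be shown with something far cruder. The sketch below swaps in plain k-means clustering on tiny fake “frames” of my own invention: nobody labels anything, yet the two groups – and their average images – fall out of pixel similarity alone.

```python
# A much-simplified stand-in for the kind of unsupervised learning Le et al.
# describe -- NOT their architecture.  Plain k-means groups tiny invented
# "frames" by pixel similarity, with no labels provided at any point.
import numpy as np

rng = np.random.default_rng(1)

def make_frame(has_blob: bool, size: int = 8) -> np.ndarray:
    """Return a flattened grayscale frame: dim noise, plus (sometimes) a
    bright oblong blob standing in for a cat face."""
    frame = rng.random((size, size)) * 0.2
    if has_blob:
        frame[2:6, 1:7] += 0.8
    return frame.ravel()

frames = np.array([make_frame(i % 2 == 0) for i in range(400)])

# Start the two cluster centers at two different frames, then iterate:
# assign every frame to its nearest center, move each center to the mean
# of the frames assigned to it.
centers = np.stack([frames[0], frames[1]])
for _ in range(20):
    distances = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    centers = np.array([frames[labels == k].mean(axis=0) for k in range(2)])

# Like the visualization in the paper, each center is itself an image; one of
# them should now look like the blurry average of the blob-containing frames.
print(centers.reshape(2, 8, 8).round(2))
```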

The computer learns in a way vaguely analogous to the formation of social cliques in a middle school cafeteria. Each kid is a beautiful and unique snowflake, sure, but certain properties cause them to cluster together: the sporty ones, the bookish ones, the D&D kids. In a neural network, each individual unit can only vote “yes” or “no,” but you can group together the units that tend to vote “yes” at the same time. For a small grid of black-and-white pixels, some units might each be assigned a single pixel and vote “yes” only when that pixel is white… others might watch those first responders and vote “yes” when they see a long line of “yes” votes in the top quadrants, perhaps… and still others could watch those votes in turn, allowing layer upon layer of complexity in the analysis.
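Here is a minimal sketch of that analogy in code – the image, the thresholds, and the “top two rows” rule are all invented for illustration, not drawn from any real network:

```python
# The cafeteria analogy, literalized: a first layer of voters (one per pixel),
# a second layer watching rows of those votes, and a final voter watching the
# second layer.  All thresholds are arbitrary choices for this toy example.
import numpy as np

image = np.array([
    [1, 1, 1, 1],   # a 4x4 black-and-white image; 1 means a white pixel
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
])

# Layer 1: one voter per pixel, voting "yes" (1) when its pixel is white.
layer1_votes = (image == 1).astype(int)

# Layer 2: one voter per row, voting "yes" when at least three voters in its
# row said "yes" -- a crude "long line of white pixels" detector.
layer2_votes = (layer1_votes.sum(axis=1) >= 3).astype(int)

# Layer 3: one voter watching layer 2, voting "yes" if either of the top two
# rows fired -- something like "a long white line in the top quadrants."
layer3_vote = int(layer2_votes[:2].any())

print(layer2_votes, layer3_vote)   # [1 0 0 0] 1
```

Stack enough of these layers, and let feedback rather than hand-picking adjust the thresholds and connections, and you have the rough shape of the kind of network Le et al. trained.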

And I should mention that I feel indebted to Liu Cixin’s sci-fi novel The Three-Body Problem for the idea of humanizing a computer algorithm this way. Liu includes a lovely description of a human motherboard, with triads of trained soldiers hoisting red or green flags to form each logic gate.

In the end, the algorithm developed by Le et al. clustered only 75% of the frames from YouTube cat videos together – it could recognize many of these as being somehow similar, but it was still worse at identifying cat-ness than the average human child. It’s not hard to see why: as the title of their paper – “Building High-Level Features Using Large Scale Unsupervised Learning” – makes clear, no one ever told the algorithm what it was looking for.

Image from Le et al., “Building High-Level Features Using Large Scale Unsupervised Learning,” Proceedings of the International Conference on Machine Learning (2012). You might have to squint, but there’s a cat here. Or so says their algorithm.

When Wittgenstein writes about someone watching builders – one person calls out “Slab!”, the other brings a large flat rock – he is also considering unsupervised learning. And so it is easy for Wittgenstein to imagine that the watcher, even after exclaiming “Now I’ve got it!”, could be stymied by a situation that went beyond the training.

Many human cultures have utilized unsupervised learning as a major component of childrearing – kids are expected to watch their elders and puzzle out on their own how to do everything in life – but the potential inflexibility that Wittgenstein alludes to underlies David Lancy’s advice in The Anthropology of Childhood that children will fare best in our modern world when they have someone guiding their education and development.

Unsupervised learning may be sufficient to prepare children for life in an agrarian village. Unsupervised learning is sufficient for chimpanzees learning how to crack nuts. And unsupervised learning is sufficient for a computer to develop an idea about what cats are.

But the best human learning employs the scientific method – purposefully seeking out “no.”

I assume most children reflexively follow the scientific method – my daughter started shortly after her first birthday. I was teaching her about animals, and we started with dogs. At first, she pointed primarily to creatures that looked like her Uncle Max. Big, brown, four-legged, slobbery.

Good dog.

Eventually she started pointing to creatures that looked slightly different: white dogs, black dogs, small dogs, quiet dogs. And then the scientific method kicked in.

She’d point to a non-dog, emphatically claiming it to be a dog as well. And then I’d explain why her choice wasn’t a dog. What features cause an object to be excluded from the set of correct answers?

Eventually she caught on.

Many adults, sadly, are worse at this style of thinking than children. As we grow, it becomes more pressing to seem competent. We adults want our guesses to be right – we want to hear yes all the time – which makes it harder to learn.

The New York Times recently presented a clever demonstration of this. They showed a series of numbers that follow a rule, let readers type in new numbers to see whether their guesses also followed the rule, and then asked readers to describe what the rule was.

A scientist would approach this type of puzzle by guessing a rule and then plugging in numbers that don’t follow it – nothing is ever really proven in science, but we validate theories by designing experiments that should tell us “no” if our theory is wrong. Only theories that are “falsifiable” fall under the purview of science. And the best fields of science devote considerable resources to seeking out opportunities to prove themselves wrong.
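If I remember the puzzle right, the hidden rule was simply that each number had to be larger than the one before; the guessed rule and the test sequences below are my own inventions, meant only to show the difference between confirming a theory and trying to break it.

```python
# A sketch of confirmation-seeking versus falsification-seeking.  The hidden
# rule (as I recall the Times puzzle) and my guessed rule are illustrative.

def hidden_rule(seq):
    """The answer key: each number must be larger than the one before."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def my_guess(seq):
    """A plausible first theory after seeing 2, 4, 8: each number doubles."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

# Confirmation-seeking: only try sequences my theory says should pass.
confirming_tests = [[3, 6, 12], [5, 10, 20]]

# Falsification-seeking: also try sequences my theory says should FAIL.
falsifying_tests = [[1, 2, 3], [10, 20, 21]]

for seq in confirming_tests + falsifying_tests:
    print(seq, "my guess:", my_guess(seq), "actual rule:", hidden_rule(seq))

# The confirming tests all agree with my guess and teach me nothing new.
# The falsifying tests come back "yes" even though my theory predicted "no" --
# that mismatch is the only thing that reveals my theory is too narrow.
```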

But many adults, wanting to seem smart all the time, fear mistakes. When that New York Times puzzle was made public, 80% of readers proposed a rule without ever hearing that a set of numbers didn’t follow it.

Wittgenstein’s watcher can’t really learn what “Slab!” means until perversely hauling over some other type of rock and being told, “no.”

We adults can’t fix the world until we learn from children that it’s okay to look ignorant sometimes. It’s okay to be wrong – just say “sorry” and “I’ll try to do better next time.”

Otherwise we’re stuck digging in our heels and arguing for things we should know to be ridiculous.

It doesn’t hurt so bad. Watch: nope, that one’s not a cat.

Photo by John Mason on Flickr.

On Facebook and fake news.

With two credits left to finish his degree, a friend switched his major from philosophy to computer science.  One of his first assignments: build a website for a local business.  Rather than find someone needing this service, he decided to fabricate an empire.

I never knew whether he thought this would be easier.  In any case, he resolved to create the simulacrum of a small publishing company and asked me for help.  We wrote short biographies for approximately a dozen authors on the company’s roster, drafted excerpts from several books for each, designed book covers, and used Photoshop to paste our creations into conference halls, speaking at podiums and being applauded for their achievements.

This was in the fall of 2003, so we assumed that aspiring artists would also pursue a social media presence.  We created profiles for the authors on Myspace (the original incarnation of Facebook, loath to admit fakery, would only let users register for an account using a university email address; the email accounts we’d made for our authors were all hosted through Hotmail and Yahoo).  My friend put profiles for several on dating websites.  He arranged trysts that the (imaginary) authors cancelled at the last minute.

My apologies to the men and women who were stood up by our creations.  I’d like to think that most real-world authors are less fickle.

Several years later, when my family began recording holiday albums in lieu of a photograph to mail to our friends and relatives, we named the project after the most successful of these authors… “success” here referring solely to popularity on the dating sites.  We figured that, because these entities were all constructs of our imaginations, this was the closest we’d ever come to a controlled experiment comparing the allure of different names.

It does still have a certain ring to it.

Eventually, my friend submitted his project.  By this time he’d kept up the profiles of our creations for about two months.  At first the authors were only friends with each other, but by then they’d begun to branch out, each participating in different online discussion groups, making a different set of connections to the world…

My friend received a failing grade.  None of the links to buy the authors’ books were functional.  He had thought this was a reasonable omission, since the full texts did not exist, but his professor was a stickler.

Still, I have to admit: faking is fun.

Profitable, too.  Not in my friend’s case, where he devoted prodigious quantities of effort toward a project that earned exceptionally low marks (he gave up on computer science at the end of that semester, and indeed changed his major thrice more before resigning himself to a philosophy degree and completing those last two credits).  But, for others?

From William Gaddis’s The Recognitions:

Long since, of course, in the spirit of that noblesse oblige which she personified, Paris had withdrawn from any legitimate connection with works of art, and directly increased her entourage of those living for Art’s sake.  One of these, finding himself on trial just two or three years ago, had made the reasonable point that a typical study of a Barbizon peasant signed with his own name brought but a few hundred francs, but signed Millet, ten thousand dollars; and the excellent defense that this subterfuge had not been practiced on Frenchmen, but on English and Americans “to whom you can sell anything” . . . here, in France, where everything was for sale.

Or, put more explicitly by Jean de La Bruyère (& translated by Jean Stewart):

It is harder to make one’s name by means of a perfect work than to win praise for a second-rate one by means of a name one has already acquired.

Our world is saturated in information and art – to garner attention, it might seem necessary to pose as a trusted brand.

Or, it seems, to peddle untruths so outlandish that they stand distinct from run-of-the-mill reality, which might be found anywhere.  This was a profitable moneymaking scheme during the 2016 U.S. elections.  With a sufficiently catchy fabrication, anyone anywhere could dupe Facebook users and reap Google advertising dollars.

Which is frustrating, sure.  Networks created by ostensibly socially-conscious left-leaning Silicon Valley companies enabled a far-right political campaign built on lies.

But I would argue that the real problem with Facebook, in terms of distorting political discourse, isn’t the platform’s propensity for spreading lies.  The problem is Facebook itself, working exactly as intended: an attention waster.  Even when the material is real-ish – pointless lists, celebrity updates, and the like – it degrades the power to think.  The site is designed to be distracting.  After all, Facebook makes money through advertising.  Humans are most persuadable when harried & distracted – it’s while I’m in the grocery store holding a screaming toddler that I’m most likely to grab whatever item has a brightly-colored tag announcing its SALE! price instead of checking to see which offers the best value.  All the dopamine-releasing pings and pokes on Facebook keep users susceptible.

As described by computer scientist Cal Newport:

Consider that the ability to concentrate without distraction on hard tasks is becoming increasingly valuable in an increasingly complicated economy.  Social media weakens this skill because it’s engineered to be addictive.  The more you use social media in the way it’s designed to be used – persistently throughout your waking hours – the more your brain learns to crave a quick hit of stimulus at the slightest hint of boredom.

Once this Pavlovian connection is solidified, it becomes hard to give difficult tasks the unbroken concentration they require, and your brain simply won’t tolerate such a long period without a fix.

Big ideas take time.  And so we have a conundrum: how, in our world, can we devote the time and energy necessary to gain deep understanding?

Ideas that matter won’t always fit into 140 characters or less.  If our time spent flitting through the internet has deluded us into imagining they will, that is how we destroy our country, becoming a place where we spray Brawndo onto crops because electrolytes are “what plants crave.”

Or becoming a place that elects Donald Trump.

Or becoming a place populated by people who hate Donald Trump but think that their hate alone – or, excuse me, their impassioned hate plus their ironic Twitter posts – without getting off their asses to actually do something about all the suffering in the world, is enough.  There are very clear actions you could take to push back against climate change and mass incarceration.

Kafka could look at fish.  Can we read Rainer Maria Rilke’s “Archaic Torso of Apollo” without shame? Here:

Rainer Maria Rilke, “Archaic Torso of Apollo.”