On octopus art.

When we were in college, my roommate and I spent a train ride debating the merits of Andy Warhol’s art (she was a fan, I was not).  In the end, we not only failed to change each other’s opinions, but realized that we didn’t even agree what art was.  She double majored in Biomedical Engineering and Art Theory & Practice, and her view was much more expansive than my own.

In retrospect, I can admit that she was right.  My view of art was narrow-minded.  If I had to proffer a definition of “art” today, I might go with something like:

Art is an intentionally-created module that is designed to reshape the audience’s neural architecture.

By this standard, the big images of soup qualify.  So do the happenings.

Andy Warhol’s “Campbell’s Soup Cans,” 1962. Image by Wally Gobetz on Flickr.

I recently read a book that analyzed board games using the tools of art criticism and narratology.  Obviously, I now think that board games can be art.  They’re carefully designed; their creators often seem to have a goal for how each game should make players feel; the combined effects of text, visual components, and even rules can all work toward conveying those feelings.

One drawback to my newfound open-mindedness, though, is that I could probably be convinced that almost any designed object qualifies as art.

For a piece of art to “fail” to change your neural architecture, it would have to be mnemonically invisible – immediately after seeing it, you could look at it again and it would be as though it were the first time.  You’d never be able to recall its content or meaning.

Actually, I have read some esoteric, convoluted poetry like that.  Words that skimmed over my mind as though each synapse were coated with teflon. 

I wasn’t keen on the experience.  Minutes had passed, but, because I couldn’t remember anything that I’d read, I’d accomplished nothing.  I don’t need to actually understand a poem; I just want it to make me feel somehow different after I’ve read it.  Like Will Alexander’s “The Optic Wraith,” which triggers a mysterious sense of unease even though its meaning squirms away from me:

The Optic Wraith

Her eyes

like a swarm of dense volcano spiders

woven from cold inferno spools

contradictory

consuming

clinging to my palette

like the code from a bleak inventive ruse

now

my understanding of her scent

is condoned as general waking insomnia

as void

as a cataleptic prairie

frayed at the core

by brushstrokes of vertigo

then mazes

As Alexander’s words lure me along, I lose my grasp.  But although I might not recall any specific lines, if you asked me at the end of its six pages, “So, what did you feel?”, I’d certainly know that something inside my brain was different from what it had been five minutes before.

When I was in college, I felt strongly that art needed to be beautiful.  I was wrong.  But I still believe that art works better when it’s aesthetically pleasing, because this allows it to more readily infiltrate someone’s mind.  If two paintings are both intended to convey the same ideas, but one is more pleasurable to look at, then we can assume that it will be looked at more, and thereby convey the idea more.  A charming form helps the piece achieve its function of spreading the creator’s intended message.

And, in terms of judging the quality of art, I obviously still think that the quality of message is important.

For instance, a chair.  Every chair you’ve ever sat in was designed by somebody.  If you wanted to argue that the chair is a piece of art, I suppose I’d agree with you.  And maybe it’s a very good chair: comfortable to sit in, perfectly balanced, pleasing to see when the rising sun illuminates it in the morning.  But that doesn’t mean it’s good art.

Joseph Kosuth’s “One and Three Chairs,” 1965. Photo by Kenneth Lu on Flickr.

Indeed, a chair that is bad at being a chair is more likely to be a good artwork.  A chair that’s too small or too large, conveying the discomfort of trying to make your way in a world that is primarily concerned with the comfort of bodies unlike your own.  Or a gigantic bronze throne that affords you the chance to perch in Baphomet’s lap; it would be an unpleasant place to sit, but perhaps you’d reflect more on Lucifer’s ethic of “speaking truth to power, even at great personal cost.”

When we humans make art, we try to engage the emotions of our audience.  Emotionally-charged situations are more memorable; while feeling awe, or anger, or joy, human minds are most likely to change.

And human art is almost always made for a human audience.  Our brains evolved both from and for gossip; our prodigious intellect began as a tool to track convoluted social relationships.  We’re driven to seek narrative explanations, both because a coherent story makes gossip easier to understand, and because our consciousness spins stories to rationalize our actions after we perform them.

If we consider the world’s most intelligent animal species – like humans, dolphins, crows, elephants, and chimpanzees – we find that most evolved to gossip.  Large brains gave our ancestors a selective advantage because they were able to track and manipulate their societies’ complex social relationships in a way that bolstered survival and breeding opportunities.  Indeed, the average elephant probably has more emotional intelligence than the average human, judging from neuron counts in the relevant areas of each species’ brains.

Elephants at a sanctuary. Image by Gilda on Flickr.

And so, if an elephant were given the freedom to paint (without a trainer tugging on her ears!), I imagine that she’d create art with the intention that another elephant would be the audience.  When a chimpanzee starts drumming, any aesthetic message is probably intended for other chimpanzees.

But what about octopus art?

Octopuses and humans last shared a common ancestor more than half a billion years ago.  Octopuses are extremely intelligent, but their intelligence arose along a very different pathway from that of most other animals.  Unlike the world’s brilliant birds and mammals, octopuses do not gossip.

Octopuses tend to be antisocial unless it’s mating season (or they’ve been dosed with ecstasy / MDMA).  Most of the time, they just use their prodigious intellect to solve puzzles, like how best to escape cages, or find food, or keep from being killed.

Octopus hiding in two shells. Image by Nick Hobgood on Wikipedia.

Humans have something termed “theory of mind”: we think a lot about what others are thinking.  Many types of animals do this.  For instance, if a crow knows that another crow watched it hide food, it will then come back and move the food to a new hiding spot as soon as the second crow isn’t looking.

When we make art, we’re indirectly demonstrating a theory of mind – if we want an audience to appreciate the things we make, we have to anticipate what they’ll think.

Octopuses also seem to have a “theory of mind,” but they’re not deeply invested in the thoughts of other octopuses.  They care more about the thoughts of animals that might eat them.  And they know how to be deceptive; that’s why an octopus might collect coconut shells and use one to cover itself as it slinks across the ocean floor.

A coconut octopus. Image by Christian Gloor on Wikimedia.

Human art is for humans, and bird art for birds, but octopus art is probably intended for a non-octopus audience.  That might require even more intelligence to create: it’s easy for me to write something that a reader like me would enjoy, whereas an octopus artist would have to empathize with creatures radically different from itself.

If octopuses weren’t stuck with such short lifespans, living in the nightmarishly dangerous ocean depths, I bet their outward focus would lead them to become better people than we are.  The harder we work to empathize with others different from ourselves, the better our world will be.

On Ann Leckie’s ‘The Raven Tower.’

At the beginning of Genesis, God said, Let there be light: and there was light.

“Creation” by Suus Wansink on Flickr.

In her magisterial new novel The Raven Tower, Ann Leckie continues with this simple premise: a god is an entity whose words are true.

A god might say, “The sky is green.”  Well, personally I remember it being blue, but I am not a god.  Within the world of The Raven Tower, after the god announces that the sky is green, the sky will become green.  If the god is sufficiently powerful, that is.  If the god is too weak, then the sky will stay blue, which means the statement is not true, which means that the thing who said “The sky is green” is not a god.  It was a god, sure, but now it’s dead.

Poof!

And so the deities learn to be very cautious with their language, enumerating cases and provisions with the precision of a contemporary lawyer drafting contractual agreements (like the many “individual arbitration” agreements that you’ve no doubt assented to, which allow corporations to strip away your legal rights as a citizen of this country.  But, hey, I’m not trying to judge – I have signed those lousy documents, too.  It’s difficult to navigate the modern world without stumbling across them).

A careless sentence could doom a god.

But if a god were sufficiently powerful, it could say anything, trusting that its words would reshape the fabric of the universe.  And so the gods yearn to become stronger — for their own safety in addition to all the other reasons that people seek power.

In The Raven Tower, the only way for gods to gain strength is through human faith.  When a human prays or conducts a ritual sacrifice, a deity grows stronger.  But human attention is finite (which is true in our own world, too, as demonstrated so painfully by our attention-sapping telephones and our attention-monopolizing president).

Image from svgsilh.com.

And so, like pre-monopoly corporations vying for market share, the gods battle.  By conquering vast kingdoms, a dominant god could receive the prayers of more people, allowing it to grow even stronger … and so be able to speak more freely, inured to the risk that it will not have enough power to make its statements true.

If you haven’t yet read The Raven Tower, you should.  The theological underpinnings are brilliant, the characters compelling, and the plot so craftily constructed that both my spouse and I stayed awake much, much too late while reading it.

#

In The Raven Tower, only human faith feeds gods.  The rest of the natural world is both treated with reverence – after all, that bird, or rock, or snake might be a god – and yet also objectified.  There is little difference between a bird and a rock, either of which might provide a fitting receptacle for a god but neither of which can consciously pray to empower a god.

Image by Stephencdickson on Wikimedia Commons.

Although our own world hosts several species that communicate in ways that resemble human language, in The Raven Tower the boundary between human and non-human is absolute.  Within The Raven Tower, this distinction feels totally sensible – after all, that entire world was conjured through Ann Leckie’s assiduous use of human language.

But many people mistakenly believe that they are living in that fantasy world.

In the recent philosophical treatise Thinking and Being, for example, Irad Kimhi attempts to describe what is special about thought, particularly thoughts expressed in a metaphorical language like English, German, or Greek.  (Kimhi neglects mathematical languages, which is at times unfortunate.  I’ve written previously about how hard it is to translate certain concepts from mathematics into metaphorical languages like the ones we speak, and Kimhi fills many pages attempting to precisely define the concept of “complements” from set theory, which you could probably understand within moments by glancing at a Wikipedia page.)

Kimhi does use English assiduously, but I’m dubious that a metaphorical language was the optimal tool for the task he set himself.  And his approach was further undermined by flawed assumptions.  Kimhi begins with a “Law of Contradiction,” in which he asserts, following Aristotle, that it is impossible for a thing simultaneously to be and not to be, and that no one can simultaneously believe a thing to be and not to be.

Maybe these assumptions seemed reasonable during the time of Aristotle, but we now know that they are false.

Many research findings in quantum mechanics have shown that it is possible for a thing simultaneously to be and not to be.  An electron can have both up spin and down spin at the same moment, even though these two spin states are mutually exclusive (the states are “absolute complements” in the terminology of set theory).  This seemingly contradictory state of both being and not being is what allows quantum computing to solve certain types of problems much faster than standard computers.
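For readers who want to see what “both up spin and down spin” looks like on paper: in the standard bra-ket notation of quantum mechanics, such a state is written as a superposition, roughly like the textbook expression below (a generic illustration of the idea, not a formula from any particular experiment):

```latex
% A generic superposed spin state (textbook illustration only):
\[
  \lvert \psi \rangle \;=\; \alpha \lvert \uparrow \rangle + \beta \lvert \downarrow \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 ,
\]
% Measuring the spin yields "up" with probability |alpha|^2 and "down" with |beta|^2.
```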

And, as a rebuttal for the psychological formulation, we have the case of free will.  Our brains, which generate consciousness, are composed of ordinary matter.  Ordinary matter evolves through time according to a set of known, predictable rules.  If the matter composing your brain was non-destructively scanned at sufficient resolution, your future behavior could be predicted.  Accurate prediction would demonstrate that you do not have free will.

And yet it feels impossible not to believe in the existence of free will.  After all, we make decisions.  I perceive myself to be choosing the words that I type.

I sincerely, simultaneously believe that humans both do and do not have free will.  And I assume that most other scientists who have pondered this question hold the same pair of seemingly contradictory beliefs.

The “Law of Contradiction” is not a great assumption to begin with.  Kimhi also objectifies nearly all conscious life upon our planet:

The consciousness of one’s thinking must involve the identification of its syncategorematic difference, and hence is essentially tied up with the use of language.

A human thinker is also a determinable being.  This book presents us with the task of trying to understand our being, the being of human beings, as that of determinable thinkers.

The Raven Tower is a fantasy novel.  Within that world, it was reasonable that there would be a sharp border separating humans from all other animals.  There are also warring gods, magical spells, and sacred objects like a spear that never misses or an amulet that makes people invisible.

But Kimhi purports to be writing about our world.

In Mama’s Last Hug, biologist Frans de Waal discusses many more instances of human thinkers brazenly touting their uniqueness.  If I jabbed a sharp piece of metal through your cheek, it would hurt.  But many humans claimed that this wouldn’t hurt a fish. 

The fish will bleed.  And writhe.  Its body will produce stress hormones.  But humans claimed that the fish was not actually in pain.

They were wrong.

Image by Catherine Matassa.

de Waal writes that:

The consensus view is now that fish do feel pain.

Readers may well ask why it has taken so long to reach this conclusion, but a parallel case is even more baffling.  For the longest time, science felt the same about human babies.  Infants were considered sub-human organisms that produced “random sounds,” smiles simply as a result of “gas,” and couldn’t feel pain. 

Serious scientists conducted torturous experiments on human infants with needle pricks, hot and cold water, and head restraints, to make the point that they feel nothing.  The babies’ reactions were considered emotion-free reflexes.  As a result, doctors routinely hurt infants (such as during circumcision or invasive surgery) without the benefit of pain-killing anesthesia.  They only gave them curare, a muscle relaxant, which conveniently kept the infants from resisting what was being done to them. 

Only in the 1980s did medical procedures change, when it was revealed that babies have a full-blown pain response with grimacing and crying.  Today we read about these experiments with disbelief.  One wonders if their pain response couldn’t have been noticed earlier!

Scientific skepticism about pain applies not just to animals, therefore, but to any organism that fails to talk.  It is as if science pays attention to feelings only if they come with an explicit verbal statement, such as “I felt a sharp pain when you did that!”  The importance we attach to language is just ridiculous.  It has given us more than a century of agnosticism with regard to wordless pain and consciousness.

As a parent, I found it extremely difficult to read the lecture de Waal cites, David Chamberlain’s “Babies Don’t Feel Pain: A Century of Denial in Medicine.”

From this lecture, I also learned that I was probably circumcised without anesthesia as a newborn.  Luckily, I don’t remember this procedure, but some people do.  Chamberlain describes several such patients, and, with my own kids, I too have been surprised by how commonly they’ve remembered and asked about things that happened before they had learned to talk.

Vaccination is painful, too, but there’s a difference – vaccination has a clear medical benefit, both for the individual and a community.  Our children have been fully vaccinated for their ages.  They cried for a moment, but we comforted them right away.

But we didn’t subject them to any elective surgical procedures, anesthesia or no.

In our world, even creatures that don’t speak with metaphorical language have feelings.

But Leckie does include a bridge between the world of The Raven Tower and our own.  Although language does not re-shape reality, words can create empathy.  We validate other lives as meaningful when we listen to their stories. 

The narrator of The Raven Tower chooses to speak in the second person to a character in the book, a man who was born with a body that did not match his mind.  Although human thinkers have not always recognized this truth, he too has a story worth sharing.

On animals that speak, including humans.

When prairie dogs speak, they seem to use nouns – hawk, human, wooden cut-out – adjectives – red, blue – and adverbs – moving quickly, slowly.  They might use other parts of speech as well.  Prairie dogs chitter at each other constantly, making many sounds that no humans have yet decoded.

Ever wonder about the evolutionary origin of human intelligence?  The leading theory is that, over many generations, our ancestors became brilliant … in order to gossip better.  It takes a lot of working memory to keep track of the plot of a good soap opera, and our ancestors’ lives were soap operas.  But Carl knows that Shelly doesn’t know that Terrance and Uma are sleeping together, so …

Tool use is pretty cool.  So’s a symbolic understanding of the world – who doesn’t love cave art?  But gossip probably made us who we are.  All those juicy stories begged for a language to be shared.

Many types of birds, such as parrots and crows, spend their lives gossiping.  These busybodies also happen to be some of the smartest species (according to human metrics).  Each seems to have a unique name – through speech, the birds can reference particular individuals.  They clearly remember and can probably describe past events.  Crows can learn about dangerous humans from their fellows.

When I walk around town, squirrels sometimes tsk angrily at me.  But I’ve definitively observed only a single species using its capacity for speech to denounce all other animals.  From Tom Wolfe’s The Kingdom of Speech:

There is a cardinal distinction between man and animal, a sheerly dividing line as abrupt and immovable as a cliff: namely, speech.

Without speech the human beast couldn’t have created any other artifacts, not the crudest club or the simplest hoe, not the wheel or the Atlas rocket, not dance, not music, not even hummed tunes, in fact not tunes at all, not even drumbeats, not rhythm of any kind, not even keeping time with his hands.

This claim is obviously false.  Several different species do create artifacts – either speech is unnecessary for this task, or else other species of animals can speak.  Or both.  In any case, this claim is so easily rebutted – all you’d need is an example of chimpanzees drumming, let alone cooking – that it seems a strange conclusion for Wolfe to make.

Don’t get me wrong: humans are pretty great at thinking.  I’m more impressed by mathematical than emotional intelligence, which makes it easy for me to think that the average human is way brighter than the average elephant.

In all likelihood, though, humans have been pretty great at thinking for hundreds of thousands of years.  The cultural evolution that produced the Atlas rocket and skyscrapers was a very sudden development.  For most of the time that humans have been on the planet, our behavior probably didn’t look so different from the behavior of orcas, chimps, or parrots.

Throughout The Kingdom of Speech, Wolfe mocks the various theories about human language presented by Noam Chomsky.  (I’m ignoring Wolfe’s claims about evolution, which he says can’t be tested, replicated, or used to elucidate otherwise inexplicable phenomena – in his words, “sincere, but sheer, literature.”  Here and here are two of many recent experiments tracking evolution in progress.)

I often found myself nodding in agreement with Wolfe.  For instance, I’d hope that a linguist making broad claims about human language would learn as many languages as possible.  I think that contradictory evidence from the real world holds more weight than pretty theories.  From Wolfe’s Kingdom of Speech:

In the heading of the [2007 New Yorker] article [“The Interpreter: Has a Remote Amazonian Tribe Upended Our Understanding of Language?”] was a photograph, reprinted many times since, of [Dan] Everett submerged up to his neck in the Maici River.  Only his smiling face is visible.  Right near him but above him is a thirty-five-or-so-year-old Piraha sitting in a canoe in his gym shorts.  It became the image that distinguished Everett from Chomsky.  Immersed! – up to his very neck, Everett is … immersed in the lives of a tribe of hitherto unknown na – er – indigenous peoples in the Amazon’s uncivilized northwest.  No linguist could help but contrast that with everybody’s mental picture of Chomsky sitting up high, very high, in an armchair in an air-conditioned office at MIT, spic-and-span … he never looks down, only inward.  He never leaves the building except to go to the airport to fly to other campuses to receive honorary degrees … more than forty at last count … and remain unmuddied by the Maici or any of the other muck of life down below.

But Chomsky being wrong doesn’t make Wolfe right.

In Why Only Us, authors Robert Berwick and Noam Chomsky make some suspicious claims.  They argue that human language stems from an innate neurological process that they’ve dubbed “merge,” akin to the combination of two sets to produce a single, indivisible result.  {A} merged to {B} yields {C}, where {C} contains all the elements of {A} and {B}.

This sounds pretty abstract, so an example might help.  Berwick & Chomsky think that a verb and a direct object would be combined into a single “verb phrase” that is treated as a single unit by our brain.  Or, even more complexly, the word “that” leading into a subordinate clause would produce a whole slew of words that is treated as a single unit by our brain.  (In the preceding sentence, the phrase “that is treated as a single unit by our brain” would be one object.)

Berwick & Chomsky’s idea is that complex sentences can be built either by listing the final units in a row or by applying that hierarchical “merge” operation again, i.e. putting a verb phrase inside a subordinate clause, or one subordinate clause inside another – leading eventually to the tangled, twisty syntax of Marcel Proust.
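If it helps to see the idea outside of prose, here is a minimal sketch of “merge” in Python, treating each merged unit as a pair that can itself be merged again.  The tuple representation and the example sentence are my own illustration, not anything taken from Berwick & Chomsky’s book:

```python
# A toy model of "merge": combine two syntactic objects into one new unit,
# which can then be fed back into merge, producing hierarchy rather than a flat list.
# (Illustrative only - the representation and example are mine, not the authors'.)

def merge(a, b):
    """Combine two units into a single, indivisible unit (a binary tree node)."""
    return (a, b)

noun_phrase = merge("the", "cat")           # ("the", "cat")
verb_phrase = merge("chased", noun_phrase)  # ("chased", ("the", "cat"))
sentence = merge("dogs", verb_phrase)       # ("dogs", ("chased", ("the", "cat")))

print(sentence)
```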

But as far as I could tell (their book has a lot of jargon, and I read it while walking laps of the YMCA track with a sleeping baby strapped to my chest, so it’s possible I missed something), they don’t discuss the difference between two ideas being placed at the same level of interpretation, such as two independent clauses joined by an “and” or “or,” versus a dependent clause adjoined to an independent clause with “but,” “which,” “that,” or what have you.  I couldn’t identify a feature of their argument that suggested why some adjacent words would be processed by a human brain in this special way but others would not.  I could certainly address the way this happens in English, but an evolutionary argument ought to apply to all human language, and I know so little about most others that my opinions seem unhelpful here.

Some of Berwick & Chomsky’s ideas don’t seem to hold even in English, though.  For instance, they claim that:

there is no room in this picture for any precursors to language – say a language-like system with only short sentences.  There is no rationale for positing such a system: to go from seven-word sentences to the discrete infinity of human language requires emergence of the same recursive procedure as to go from zero to infinity, and there is of course no direct evidence for such “protolanguages.”  Similar observations hold for language acquisition, despite appearances, a matter that we put to the side here.

But we’re very confident that spoken language arose long before written language, and the process they describe isn’t how many humans interact with spoken language.  There are definite limits to how many clauses most people can keep in mind at any one time – indeed, much of Why Only Us would sound incomprehensible if read aloud.

Is it reasonable to compare human written language with the spoken language of other animals?  The former is decidedly more complex.  Sure.  But the language actually used by most humans, most of the time, seems much simpler.

When I write, I can strangle syntax as well as any other pedant.  But when I actually talk with people, most of what I say is pretty straightforward.  I get confused if somebody says something to me with too many embedded clauses, or if words intended to operate together on a “structural” level aren’t in close proximity – Berwick & Chomsky spend a while writing about the phrase “instinctively birds that fly swim,” which sounds like gibberish to me.  Just say either “birds that fly instinctively can swim” or “birds that fly can swim instinctively” and you won’t get as many funny looks.  Except for the fact that I don’t think this is true, in either phrasing.  Syntactically, though, you’d be all set!

In any case, all you’d need to show to demonstrate a linguistically equivalent behavior in other animals would be two parrots discussing the beliefs of a third.  This would be the same recursion that Berwick & Chomsky claim produces the “infinity of human language.”

Given that other social animals understand the (false) beliefs of their compatriots, I’d be shocked if they didn’t talk about it.  We just haven’t learned how to listen.

Humans are great.  We’ve accomplished a lot, especially in these last few thousand years (which is incredibly fast compared to evolutionary timescales).  The world has changed even in the short time that I’ve been alive.  But the unfounded claims in both The Kingdom of Speech and Why Only Us made me feel sad: with so much to be proud of, why should we humans also strive to distinguish ourselves with supremacist arrogance?

On naked mole-rats.

When Radiohead first toured, their audiences just wanted to hear “Creep.”  They were invited to play a show in Israel – everyone just wanted to hear “Creep.”  They were invited to tour America – everyone just wanted to hear “Creep.”  At festivals, people walked away after they played it.  By then the song was several years old.  The dudes in Radiohead were sick of it.

To be fair, Pablo Honey was a pretty weak album.  “You” is a fine song, but the proffered singles – “Anyone Can Play Guitar” (more ironic in retrospect than it was at the time) and “Stop Whispering” – aren’t very compelling.  At the time, nobody knew their new material.

Now, of course, Radiohead is many people’s favorite band – mine too (tied with The Marshall Cloud and anything else my brother makes).

The essayist Eliot Weinberger has also toured on the strength of a hit single.  From Christopher Byrd’s 2016 profile in The New Yorker:

In person, Weinberger is genial and self-contained; he smiles frequently and is prone to wisecracks.  When I asked him about the essay [“Naked Mole-Rats,” from his 2001 collection, Karmic Traces], he said “In Germany, I’m sort of like one of those bands that had one hit record, and so I give readings and people ask me to read ‘Nacktmull,’ which is the naked mole-rat.  It’s their favorite one.  This pretty girl said, ‘Last night, I was in bed reading it to my boyfriend.’  And I said, ‘Don’t you have anything better to read?’”

Yet, like Radiohead, Weinberger has released new work every few years – he seems to have been writing constantly ever since he dropped out of college circa 1970 and began translating the poetry of Octavio Paz – and much of it is better than the hit everybody knows.  Over the past two months, I’ve had the pleasure of reading all his books – many are stunning.  The Ghosts of Birds discusses Adam & Eve, the dreams of ancient Chinese poets, and the authorial voice of George W. Bush’s “autobiography.”  I’ve written previously about What Happened Here, a collection of Weinberger’s essays about the Bush years.  And Weinberger has written extensively about the political value of poetry.  From “The T’ang” (in Oranges and Peanuts for Sale):

…[I]n the last years of the dynasty, warlords ravished the country.  One of them, Huang Ch’ao, a salt merchant who had failed the civil service exams, captured Ch’ang-an in 881.  A satiric poem was posted on the wall of a government building, criticizing the new regime.  (As, eleven hundred years later, the Democracy Movement would begin with the poems that Bei Dao and other young poets glued to the walls in their capital, Beijing.)  Huang Ch’ao issued orders that everyone capable of writing such a poem be put to death.  Three thousand were killed.

When dudes ask what we’re doing teaching a poetry class in jail, it’s great to have stories like this to relate … or to toss out a quote from Norman Dubie, my co-teacher’s advisor, who says, “If Stalin feared poetry, so should you.”

And yet, I have to admit: Weinberger’s “Naked Mole-Rats” really is a lovely essay.

#

During the 1970s, evolutionary biologist Richard Alexander gave a series of lectures describing conditions that might spawn eusocial vertebrates.  Alexander was a bug guy – the term “eusocial” refers to bees, ants, and termites, where individuals are extremely self-sacrificing for the good of the colony, including an abundance of non-breeding members helping with childcare.

Alexander proposed that a eusocial species of mammal could evolve if they lived in relatively safe underground burrows that could be expanded easily and defended by a small percentage of the colony.  The animals would need to be small compared to their food sources, so that a stroke of good luck by one worker could feed many.

An audience member at one of Alexander’s lectures mentioned that this “hypothetical eusocial mammal” sounded a lot like the naked mole-rat and connected Alexander with Jennifer Jarvis, who’d studied the biology of these critters but hadn’t yet investigated their social structure.  The collaboration between Alexander and Jarvis led to the textbook The Biology of the Naked Mole-Rat.

Eliot Weinberger combed through this 500-plus page textbook to produce his 3-page essay.  In Weinberger’s words:

As many as three hundred inhabit a colony, moving a ton of dirt every month.  They have a caste system

The medium-sized are the warriors, who try to fend off the rufous-beaked snakes, the file snakes, the white-lipped snakes, and the sand boas that sometimes find their way in.  When, by chance, two colonies of naked mole-rats tunnel into each other, their warriors fight to the death.

Interbred for so long, they are virtually clones.  One dead-end branch of the tunnel is their toilet: they wallow there in the soaked earth so that all will smell alike.  They are nearly always touching each other, rubbing noses, pawing, nuzzling.


Like us, naked mole-rats are both good and bad.  They are cooperative.  They are affectionate.  They are always touching.  Encountering outsiders, they fight to the death.  When a breeding female dies, many other females regain fertility and the colony erupts into civil war.

Naked mole-rats care for others.  Naked mole-rats are callous toward others.

[The breeding female, of which each colony has only one] has four or five litters a year of a dozen pups.  The babies have transparent skin through which their internal organs are clearly visible.  Only a few survive, and they live long lives, twenty years or more.  The dead babies are eaten, except for their heads.  At times the live ones are eaten too.

These details are drawn from innumerable experimental observations.  We humans have spent decades investigating the naked mole-rats.  But Weinberger ends his essay with the reverse.  Naked mole-rats observe us, too:

Sometimes a naked mole-rat will suddenly stop, stand on its hind-legs, and remain motionless, its head pressed against the roof of the tunnel.  Above its head is the civil war in Somalia.  Their hearing is acute.

#

Naked mole-rats “are continually cruel in small ways.”  But they are outdone by naked apes.  After all, the cruelty of naked mole-rats is invariably directed to others of their own kind.  Our cruelty embraces ourselves as well as them.

For a research paper published in 2008, Park et al. discovered that being pinched by tweezers causes naked mole-rats pain, but the injection of caustic acid does not:

We tested naked mole-rats in standard behavioral models of acute pain including tests for mechanical (pinch), thermal, and chemical pain.  We found that for noxious pinch and heat, the mole-rats responded similarly to mice.

In contrast to the results using mechanical and thermal stimuli, there was a striking difference in responses to strong chemical irritants known to excite nociceptors [these are sensory receptors that detect noxious inputs, like pain].  Indeed, the two chemicals used – capsaicin and low-pH saline solution – normally evoke very intense pain in humans and other animals.  Injection of either irritant into the skin rapidly evoked intense licking and guarding behaviors in mice.

(In case you’re worried that acid-resistant naked mole-rats might conquer the world: a form of kryptonite exists.  Injection of an 11-amino-acid signaling peptide allows acid to hurt naked mole-rats just as much as it hurts mice.  Half a dozen animals were subjected to each treatment.)

So, naked mole-rats are selectively resistant to pain.  This has inspired some envy in human researchers – after all, chronic pain is miserable, and most of our strategies to dampen pain have a few unwanted side-effects.

But what really gets us humans jealous is that naked mole-rats seem not to age.

#

Naked mole-rats almost never develop cancer.  They should get cancer.  After all, their cells, like ours, copy themselves.  Over time, each copy is a copy of a copy of a copy… any errors are compounded.  And some errors are particularly deadly.  Our cells are supposed to stop growing when they touch each other, and they are supposed to commit suicide when their usefulness has run its course.  But the instructions telling our cells when and how to kill themselves can be lost, just like any other information.  Too many rounds of cell division is like making photocopies of photocopies… eventually the letters melt into static and become unreadable.

So I don’t quite understand why naked mole-rats don’t get cancer … but, in my defense, no one else does either.  Tian et al. found that naked mole-rats fill the space between their cells with a particular sugar that acts as an anti-clumping agent.  This contributes to their cancer resistance, because cells that can’t clump can’t form tumors… but, although many types of deadly human cancers form tumors, others, like leukemia, do not.

Of course, “cancer” cells – mutant versions of ourselves that would kill us if they could – appear all the time.  Usually, our immune system destroys them.  Most chemotherapy agents do not kill cancer.  Chemotherapy involves pumping the body full of general poisons that stop all cells from reproducing, with the hope being that, if the spread of cancer can be slowed, a patient’s immune system will sop up the bad cells already there.

In addition to anti-clumping sugars, naked mole-rats must have other (currently unknown) virtues that enable their remarkable tenacity.

And, although the little critters seem not to age – they have “no age-related increase in mortality” and remain fertile until death – they do die.  The oldest naked mole-rat lived for 27 years in captivity, and seems to have been at least a year old when first captured, based on his size.

He was rutting and eating normally until April, 2002… but then, seemingly without cause, he died.  Writing for Scientific American shortly after this duder’s death, David Stipp described him (and naked mole-rats in general) as “a little buck-toothed burrower [who] ages like a demigod.”

But it’s worth noting that he had aged.  He had accumulated extensive oxidative damage in his lipids, proteins, and, presumably, his DNA… which is to say, his cells were noticeably rusted and falling apart.  He just didn’t let it slow him down.  Not until he keeled over.

They live with gusto, the naked mole-rats.

For as long as they have energy, that is.  Several researchers have proposed that naked mole-rats have all these powers because they starve often in the wild.

Caloric restriction – which means, roughly, intentional starvation – is known to extend lifespan in a wide variety of species.  It’s been tested in monkeys, mice, flies, and worms.  Between two- and ten-fold increases in lifespan have been observed.  There are some unpleasant side effects.  Hunger, for instance.  Caloric-restricted mice spend a lot of time staring at their empty food bowls.

Many humans who attempt caloric restriction on their own find it difficult.  Hunger hurts, especially when there’s food nearby.  Plus, it’s a rare diet that provides adequate nutrition while still limiting calories.  Malnutrition makes people die younger, which defeats the point… unless your goal is simply to make God uncomfortable.  Maybe you’ll get a wish!

But naked mole-rats have no choice.  Workers tunnel outward, searching for tuberous roots.  When they find one, they’ll gnaw it carefully, attempting to keep the plant alive as long as possible, but the colony invariably consumes roots faster than a plant can grow.  Although naked mole-rats try to be good stewards of their environment – they are compulsive recyclers, eating their own excrement to make sure no nutrients are lost – their colonies plunge repeatedly into famine.

And they sleep in mounds, hundreds of bodies respiring underground.  Anyone sleeping near the center probably runs out of oxygen.

But they survive.

We would not.  Most mammals, deprived of oxygen, can no longer fuel their brains.  Our brains are expensive.  Even at rest our brains demand a constant influx of energy or else the neurons “depolarize” – we fall apart.  This is apparently an unpleasant experience.  It’s brief, though.  At Stanford, my desk was adjacent to a well-trafficked gas chamber.  A mouse, or a Chinese-food takeout container with several mice, was dropped in; a valve for carbon dioxide was opened; within seconds, the mice inside lost consciousness; they shat; they died.

A naked mole-rat would live.  Unless a very determined researcher left the carbon dioxide flowing for half an hour.  Or so found Park et al. – a graph from their recent Science paper is shown below.  Somewhere between three and twelve animals were used for every time point; all the mice would’ve been dead within a minute, but perhaps as few as three naked mole-rats died in this experiment.

survival curves

Human brains are like hummingbirds – our brains drink up sugar and give us nothing but a fleeting bit of beauty in return.  And our brains are very persnickety in their taste for sugar.  We are fueled exclusively by glucose.

Naked mole-rats are less fussy than we are – their minds will slurp fructose to keep from dying.

#

Naked mole-rats: the most cooperative of all mammals.  Resistant to cancer.  Unperturbed by acid.  Aging with the libidinous gracelessness of Hugh Hefner.  Able to withstand the horrors of a gas chamber.

And yet, for all those superpowers, quite easily tormented by human researchers.

On elephants.

During springtime each year, my spouse tells a lot of people that high school prom is a blast … as long as you’re not a high schooler. Many teachers attend, nominally as chaperones, and they don’t have to worry about who they’ll leave with or what they’ll be doing afterward. (Shucking earplugs and going to sleep.)


We go to the local high school prom most years. My spouse greets her students and compliments their attire: you clean up well! The boys on the cross country and track teams shake my hand and compliment my attire: you clean up well, coach!

The most magical night of our lives … every year.

At times, briefly, I am allowed to dance. (My only formal dance training was in preparation for the South Asian Students’ Association spring show during college – I was part of a Dandiya Raas set to “Chale Chalo” from Lagaan – and my preferred style of dancing still involves a lot of leaping.)

Yep.

Each year’s prom is themed, with decorations prepared by junior members of the student council. My favorite was 2012’s “prom-apocalypse,” with fake flames and wreckage. Coincidentally, I prepared the same style of decoration for a fundraiser when I was my high school’s National Honor Society president. The kids here were inspired by the end of the Mayan calendar; our dance was held in December, 1999, when the newspapers were rife with reports of people hoarding cans or turning blue-ish from ingesting too much anti-microbial silver.

I also convinced a d.j. buddy to put together some music for the event, like a track splicing Britney Spears’s “…Baby One More Time” with Marilyn Manson’s “Sweet Dreams (Are Made of This).”

Despite having hated being in high school, I love the corny tropes involved. Like, okay, film noir about drug deals gone bad? Eh, seen it. But set that same noir in high school, you get Brick, with charming lines like “She knows where I eat lunch.”

This year, though, prom is circus-themed.

“Oh, cool,” I said. “Like Cirque du Soleil?”

“No. Like, elephants in cages.”

We won’t attend. It seems an especially bizarre choice of theme now, when even Ringling Brothers, after 145 years of torturing elephants, has announced that they’ll stop. They will, of course, continue to torture other species.

Ringling Bros. and Barnum & Bailey Circus, with Gunther Gebel-Williams, 1969.

#

As humans have learned more about animal cognition, we have steadily revised our claims as to the features of our brains that make us special. Once upon a time, we claimed that our superiority came simply from our very large brains; we contrasted ourselves to dinosaurs, whom we claimed (erroneously) had brains no bigger than walnuts.

Elephants have the largest brains of any land animal.

Later, we realized that sheer brain bulk does not equate with intelligence – actual neuron counts would be far more informative.

Elephants have three times as many neurons as humans.

We once posited that “tool use” separated humans from other animals, until we learned that chimpanzees, crows, and others use tools too.

We claimed that only humans understand death. Touting that no other species buries their dead, we claimed that only Homo sapiens have the emotional intelligence necessary to understand narrative. Other animals are trapped inside an eternal now.

This, too, is false.

Three elephants’ curly kisses.

In elephants, the hippocampus – the brain region implicated in processing narrative emotional memory – is enlarged relative to humans. They routinely visit sites where friends or relatives died. They caress the bones of their lost. After violent encounters with a brutal species of hairless ape, elephants can suffer post-traumatic stress disorder for years. Their children require the guidance of elders to learn behavioral norms.

Like human children, young elephant males who grow up in broken communities run wild.


#

We humans have treated elephants abysmally, not in spite of their magnificence, but because of it. When a small, flamboyantly-dressed circus tamer can break an elephant’s will so completely that the creature will perform in the center of a jeering crowd, we receive proof of just how powerful humans are.

Elena Passarello writes of our dominance over nature in her essay “Jumbo II,” which interlaces two histories: that of elephants brought to the United States, and our ability to harness electricity.

From the beginning, the elephants were tortured: placed in small zoo enclosures (Passarello: They gave “Old Chief” to the Cincinnati Zoo, which shot him by the end of the decade. Two days after, Cincinnati’s Palace Restaurant added “elephant loin” to its dinner menu.), beaten by circus trainers until they learned to do “tricks,” condemned to death for unexpectedly dangerous behavior during musth.

As our technological prowess grew, electricity was put to ever new uses. Electricity could light our streets! It could power our factories! It could execute the condemned!

The histories of elephants and electricity in America merge in 1903. In Passarello’s words:

[Electrocuting an Elephant] is a minute-long, live short of the first elephant – and the second female of any species on the planet – to be condemned to electrocution for her crimes.

In the yards around Coney Island’s Luna Park, the condemned elephant places each foot onto a copper plate. Once ignited with over 6,000 volts of alternating current, they smoke beneath her planted feet. The smoke rises around her body, her trunk goes rigid, and all five tons of her list forward.

And, from Ciaran Berry’s poem, “Electrocuting an Elephant:”

…though it changes nothing,
I want to explain how, when the elephant falls, she falls
like a cropped elm. First the shudder, then the toppling
as the surge ripples through each nerve and vein,
and she drops in silence and a fit of steam to lie there
prone, one eye opened that I wish I could close.

I could not bring myself to watch the video footage to verify this description, but I am glad her eye was open. We humans behave better when we believe we are watched. And our behavior, in the past, was not good enough.

Even now, we make mistakes. If we want a world with elephants, the money from ecotourism is not enough. Those who have been born to wealthy nations – beneficiaries of a long history of exploitation and violence – should devote funds to repairing some of the damage we’ve inherited.


On perception and learning.

Cuddly.

Fearful.

Monstrous.

Peering with the unwavering focus of a watchful overlord.

A cat could seem to be many different things, and Brendan Wenzel’s recent picture book They All Saw a Cat conveys these vagaries of perception beautifully. Though we share the world, we all see and hear and taste it differently. Each creature’s mind filters a torrential influx of information into manageable experience; we all filter the world differently.

They All Saw a Cat ends with a composite image. We see the various components that were focused on by each of the other animals, amalgamated into something approaching “cat-ness.” A human child noticed the cat’s soft fur, a mouse noticed its sharp claws, a fox noticed its swift speed, a bird noticed that it can’t fly.

All these properties are essential descriptors, but so much is blurred away by our minds. When I look at a domesticated cat, I tend to forget about the sharp claws and teeth. I certainly don’t remark on its lack of flight – being landbound myself, this seems perfectly ordinary to me. To be ensnared by gravity only seems strange from the perspective of a bird.

There is another way of developing the concept of “cat-ness,” though. Instead of compiling many creatures’ perceptions of a single cat, we could consider a single perceptive entity’s response to many specimens. How, for instance, do our brains learn to recognize cats?

When a friend (who teaches upper-level philosophy) and I were talking about Ludwig Wittgenstein’s Philosophical Investigations, I mentioned that I felt many of the aims of that book could be accomplished with a description of principal component analysis paired with Gideon Lewis-Kraus’s lovely New York Times Magazine article on Google Translate.

My friend looked at me with a mix of puzzlement and pity and said, “No.” Then added, as regards Philosophical Investigations, “You read it too fast.”

One of Wittgenstein’s aims is to show how humans can learn to use language… which is complicated by the fact that, in my friend’s words, “Any group of objects will share more than one commonality.” He posits that no matter how many red objects you point to, they’ll always share properties other than red-ness in common.

Or cats… when you’re teaching a child how to speak and point out many cats, will they have properties other than cat-ness in common?

In some ways, I agree. After all, I think the boundaries between species are porous. I don’t think there is a set of rules that could be used to determine whether a creature qualifies for personhood, so it’d be a bit silly if I also claimed that cat-ness could be clearly defined.

But when I point and say “That’s a cat!”, chances are that you’ll think so too. Even if no one had ever taught us what cats are, most people in the United States have seen enough of them to think “All those furry, four-legged, swivel-tailed, pointy-eared, pouncing things were probably the same type of creature!”

Even a computer can pick out these commonalities. When we learn about the world, we have a huge quantity of sensory data to draw upon – cats make those noises, they look like that when they find a sunny patch of grass to lie in, they look like that when they don’t want me to pet them – but a computer can learn to identify cat-ness using nothing more than grainy stills from Youtube.

Quoc Le et al. fed a few million images from Youtube videos to a computer algorithm that was searching for commonalities between the pictures. Even though the algorithm was given no hints as to the nature of the videos, it learned that many shared an emphasis on oblong shapes with triangles on top… cat faces. Indeed, when Le et al. made a visualization of the patterns that were causing their algorithm to cluster these particular videos together, we can recognize a cat in that blur of pixels.

The computer learns in a way vaguely analogous to the formation of social cliques in a middle school cafeteria. Each kid is a beautiful and unique snowflake, sure, but there are certain properties that cause them to cluster together: the sporty ones, the bookish ones, the D&D kids. For a neural network, each individual is only distinguished by voting “yes” or “no,” but you can cluster the individuals who tend to vote “yes” at the same time. For a small grid of black and white pixels, some individuals will be assigned to the pixels and vote “yes” only when their pixels are white… but others will watch the votes of those first responders and vote “yes” if they see a long line of “yes” votes in the top quadrants, perhaps… and others could watch those votes, allowing for layers upon layers of complexity in analysis.
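Here’s a toy sketch of that cafeteria picture in Python – one “voter” per pixel, and a second-layer voter that only says yes when it sees enough yes votes in the top half of the grid. This is my own illustration of the layering idea, not the architecture Le et al. actually used:

```python
# Toy "voters watching voters": layer one votes on raw pixels, layer two votes on
# layer one's votes.  (My own illustration; Le et al.'s network was far larger.)

import numpy as np

def first_layer(image):
    """One voter per pixel: vote 1 ("yes") if the pixel is white, else 0."""
    return (image > 0.5).astype(int)

def second_layer(votes, threshold=3):
    """A higher-level voter: "yes" if the top half of the grid holds enough yes votes."""
    return int(votes[:2, :].sum() >= threshold)

image = np.array([[1, 1, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0]], dtype=float)

print(second_layer(first_layer(image)))   # 1 - a long line of "yes" votes up top
```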

And I should mention that I feel indebted to Liu Cixin’s sci-fi novel The Three-Body Problem for thinking to humanize a computer algorithm this way. Liu includes a lovely description of a human motherboard, with triads of trained soldiers hoisting red or green flags forming each logic gate.

In the end, the algorithm developed by Le et al. clustered only 75% of the frames from Youtube cat videos together – it could recognize many of these as being somehow similar, but it was worse at identifying cat-ness than the average human child. But it’s pretty easy to realize why: after all, Le et al. titled their paper “Building high-level features using large scale unsupervised learning.”

Proceedings of the International Conference on Machine Learning 2012
You might have to squint, but there’s a cat here. Or so says their algorithm.

When Wittgenstein writes about someone watching builders – one person calls out “Slab!”, the other brings a large flat rock – he is also considering unsupervised learning. And so it is easy for Wittgenstein to imagine that the watcher, even after exclaiming “Now I’ve got it!”, could be stymied by a situation that went beyond the training.

Many human cultures have utilized unsupervised learning as a major component of childrearing – kids are expected to watch their elders and puzzle out on their own how to do everything in life – but this potential inflexibility that Wittgenstein alludes to underlies David Lancy’s advice in The Anthropology of Childhood that children will fare best in our modern world when they have someone guiding their education and development.

Unsupervised learning may be sufficient to prepare children for life in an agrarian village. Unsupervised learning is sufficient for chimpanzees learning how to crack nuts. And unsupervised learning is sufficient for a computer to develop an idea about what cats are.

But the best human learning employs the scientific method – purposefully seeking out “no.”

I assume most children reflexively follow the scientific method – my daughter started shortly after her first birthday. I was teaching her about animals, and we started with dogs. At first, she pointed primarily to creatures that looked like her Uncle Max. Big, brown, four-legged, slobbery.

Good dog.

Eventually she started pointing to creatures that looked slightly different: white dogs, black dogs, small dogs, quiet dogs. And then the scientific method kicked in.

She’d point to a non-dog, emphatically claiming it to be a dog as well. And then I’d explain why her choice wasn’t a dog. What features cause an object to be excluded from the set of correct answers?

Eventually she caught on.

Many adults, sadly, are worse at this style of thinking than children. As we grow, it becomes more pressing to seem competent. We adults want our guesses to be right – we want to hear yes all the time – which makes it harder to learn.

The New York Times recently presented a clever demonstration of this. They showed a series of numbers that follow a rule, let readers type in new numbers to see if their guesses also followed the rule, and asked for readers to describe what the rule was.

A scientist would approach this type of puzzle by guessing a rule and then plugging in numbers that don’t follow it – nothing is ever really proven in science, but we validate theories by designing experiments that should tell us “no” if our theory is wrong. Only theories that are “falsifiable” fall under the purview of science. And the best fields of science devote considerable resources to seeking out opportunities to prove ourselves wrong.
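As a concrete sketch of that habit – in Python, with a hidden rule that I’ve invented for illustration, since the essay doesn’t reproduce the Times’s actual rule:

```python
# Falsification in miniature: instead of feeding in sequences that confirm my theory,
# I feed in one designed to break it.  The hidden rule below is a stand-in I made up.

def hidden_rule(seq):
    """The puzzle-setter's secret rule (pretend we can't peek at this)."""
    return all(b > a for a, b in zip(seq, seq[1:]))   # strictly increasing

def my_theory(seq):
    """My first guess: each number is double the one before it."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

probe = [1, 3, 9]            # if my theory were right, the puzzle should reject this
print(hidden_rule(probe))    # True  -> the puzzle says "yes"...
print(my_theory(probe))      # False -> ...so the doubling theory is falsified
```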

But many adults, wanting to seem smart all the time, fear mistakes. When that New York Times puzzle was made public, 80% of readers proposed a rule without ever hearing that a set of numbers didn’t follow it.

Wittgenstein’s watcher can’t really learn what “Slab!” means until perversely hauling over some other type of rock and being told, “no.”

We adults can’t fix the world until we learn from children that it’s okay to look ignorant sometimes. It’s okay to be wrong – just say “sorry” and “I’ll try to do better next time.”

Otherwise we’re stuck digging in our heels and arguing for things we should know to be ridiculous.

It doesn’t hurt so bad. Watch: nope, that one’s not a cat.

Photo by John Mason on Flickr.

On human uniqueness and invasive species.

On human uniqueness and invasive species.

We like to see ourselves as special.  “I am a beautiful and unique snowflake,” we’re taught to intone.

Most of the time, this is lovely.  Other than the U.S. Supreme Court, hardly anyone thinks you should be punished for being special.  Of course, the Court’s opinion does matter, since the ignorant claims of five old rich white men have inordinate sway in determining how U.S. citizens will be allowed to live.  And they, the conservative predecessors of our lockstep quartet (soon to return to a quintet) of hate machines, oft feel that the beautiful snowflakes should melt in prison.  In McCleskey v. Kemp, the Court decided that statistical evidence of systemic injustice wasn’t enough to prove discrimination; they would only consider documentation of deliberate bias in individual cases.

Unique when you are on trial, now orange & a number.  Photo by Joel Franusic on Flickr.

Which means, for instance, that if a police force decides to systematically harass black drivers, and winds up stopping hundreds of black drivers and zero white drivers each month, they’re in the clear as long as each black driver stopped was violating some portion of the traffic code.  At that point, each black driver is a unique individual lawbreaker, and the court sees no reason why their experiences should be lumped together as statistical evidence of racial injustice.  Adolph Lyons, after being nearly choked to death by an L.A. police officer, could not convince the courts that the L.A. police should stop choking innocuous black drivers.

Lovely, eh?

So it can hurt if others see us as being too special.  Too distinct for our collective identity to matter.

At other times, we humans might not feel special enough.  That’s when the baseless claims get bandied about.  For instance, K recently received a letter from Stanford’s Graduate School of Education pontificating that “Only humans teach.”  A specious example is given, followed by the reiteration that “Only humans look to see if their pupils are learning.”  Which simply isn’t true.

But people feel such a burning desire to be special – as individuals, as fans of a particular sports team, as people with a particular skin color, or as people who follow a particular set of religious credos – that an ostensibly very-educated someone needed to write this letter.

That’s why the occasional correctives always make me smile.  For instance, research findings showing that other animal species have some of the skills that our sapiens chauvinists oft claim as uniquely human, or other data indicating that humans are not as exceptional as we at times believe.

Consider our brains.  For many years, we thought our brains were anomalously large for the size of our bodies.  The basic rationale for this metric was that more brain power would be needed to control a larger body – this seems tenuous if you compare to robots we’ve created, but so it goes.  Recently, a research group directed by Suzana Herculano-Houzel counted how many actual neurons are in brains of different sizes.  Again comparing to human creations, computer scientists would argue that more neurons allow for more patterns of connections and thus more brainpower, somewhat comparable to the total number of transistors inside a computer.

As it happens, no one knew how many neurons were in different creatures’ brains, because brains are very inhomogeneous.  But they can be homogenized – rather easily, as it happens.  I did this (unfortunately!) with cow brains.  These arrived frozen and bloodied; I’d smash them with a hammer then puree them in a blender till they looked rather like strawberry daiquiri.  For my work I’d then spin the soupy slushy muck so fast that all the cell nuclei pelleted on the bottom of centrifuge tubes, ready to be thrown away.

After a spin in the blender, all brains look the same. Photo from Wikipedia.

Alternatively, one could take a sample of the soup and simply count.  How many nuclei are here?  Then stain an equivalent sample with antibodies that recognize proteins expressed in neurons but not the other cell types present in a brain: what fraction of the nuclei were neurons?  And, voila, you have your answer!
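To give a sense of the arithmetic (every number below is invented, and the choice of a neuron-specific antibody like NeuN is my assumption), it’s just scaling up from the little sample:

```python
# Back-of-the-envelope version of the count described above.
# Every number here is made up for illustration.

soup_volume_ml = 40.0       # the whole homogenized brain, as "soup"
sample_ml = 0.001           # the tiny aliquot actually examined
nuclei_in_sample = 4200     # nuclei counted in that aliquot
neuron_fraction = 0.35      # fraction of nuclei stained by the neuron-specific antibody

total_nuclei = nuclei_in_sample / sample_ml * soup_volume_ml
total_neurons = total_nuclei * neuron_fraction

print(f"roughly {total_nuclei:.1e} cells, of which {total_neurons:.1e} are neurons")
```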

Gabi et al. did roughly this, publishing their findings with the subtly anti-exceptionalist title “No relative expansion of the number of prefrontal neurons in primate and human evolution.”  We have more neurons than smaller primates, but only as many as you’d expect based on our increased size.

(Perhaps this leaves you wondering why gorillas rarely best us on human-designed IQ tests – as it happens, the other great apes are outliers, with fewer neurons than you would expect based on the primate trends.  Some of this data was presented in a paper I discussed in my essay about the link between “origin of fire” and “origin of knowledge” myths.  In brief, the idea is that the caloric requirements of human-like brainpower demanded cooked food.  The evolutionary precursors to gorillas instead progressed toward smaller brains – which happens.  The evolutionary precursors to starfish also jettisoned their brains, making themselves rather more like zombies.)

Perhaps all these brain musings are an insufficient corrective.  After all, humans are very smart – I’m trusting that you’re getting more out of this essay than the average hamster would, even if I translated these words into squeaks.

So let’s close with one more piece of humility-inducing (humiliating) research: archaeologists have long studied the migration of early humans, trying to learn when Homo sapiens first reached various areas and what happened after they arrived.  Sadly, “what happened” was often the same: rapid extinction of all other varieties of humans, first, then most other species of large animals.

All the Neanderthals disappeared shortly after Homo sapiens forayed into Europe.  There are reasons why someone might quibble with the timeline, but it seems that Homo erectus disappeared from Asia shortly after Homo sapiens arrived.  The arrival of Homo sapiens in Australia brought the extinction of all large animals other than kangaroos.  The arrival of Homo sapiens in South America presaged, again, a huge megafaunal extinction.

On evolutionary timescales, we are a slow-moving meaty wrecking ball.

Bad as we are, we can always get worse. My country! Picture by DonkeyHotey on Flickr.

And our spread, apparently, resembles that of all other invasive species.  This is slightly less derogatory than the summation given in The Matrix – “[humans] move to an area and … multiply and multiply until every natural resource is consumed and the only way [they] can survive is to spread to another area.  There is another organism on this planet that follows the same pattern.  Do you know what it is?  A virus.  Human beings are a disease, a cancer of this planet.” – but only slightly.

Upon the arrival of Homo sapiens in South America, we quickly filled the entire continent to its carrying capacity, and then, after the invention of sedentary agriculture – which boosts food production sufficiently for an area to support more human farmers than hunter-gatherers – resumed exponential population growth.  Although the switch to an agricultural lifestyle may have been rotten for the individual actors – the strength needed to push plows makes human sexual dimorphism more important, which is why the spread of agriculture heralded the oppression of & violence against women throughout human history – it’s certainly a great technology if our goal is to fill the world with as many miserable humans as possible.
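Here’s a toy version of that trajectory: logistic growth toward a carrying capacity that jumps upward when agriculture arrives. Every parameter is invented; the point is only the shape of the curve.

```python
# Toy model with invented parameters: per-generation logistic growth,
# with a carrying capacity that rises when agriculture boosts food production.

def step(pop, rate, capacity):
    # Growth slows as the population approaches the current carrying capacity.
    return pop + rate * pop * (1 - pop / capacity)

pop, rate = 1_000.0, 0.05
for gen in range(400):
    capacity = 100_000 if gen < 200 else 1_000_000   # agriculture raises the ceiling
    pop = step(pop, rate, capacity)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: ~{pop:,.0f} people")
```

The population plateaus near the first ceiling, then resumes climbing the moment the ceiling moves.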

We’ll be passing eight billion soon, a population inconceivable without modern farming technologies.  And likely unsustainable even with.

Not, again, that this makes us unique.  Plenty of species are willing to breed themselves into misery & extinction if given half the chance.  Almost any species that follows r-type population growth (this jargon signifies “quantity over quality”) – which oft seems to include Homo sapiens – is likely to do so.  My home town, wolf-less, is currently riddled with starving, sickly deer.