On human uniqueness and invasive species.

We like to see ourselves as special.  “I am a beautiful and unique snowflake,” we’re taught to intone.

Most of the time, this is lovely.  Other than the U.S. Supreme Court, hardly anyone thinks you should be punished for being special.  Of course, the Court’s opinion does matter, since the ignorant claims of five old rich white men have an inordinate sway in determining how U.S. citizens will be allowed to live.  And they, the conservative predecessors of our lockstep quartet (soon to return to a quintet) of hate machines, oft feel that the beautiful snowflakes should melt in prison.  In McCleskey v. Kemp, the Court decided that statistical evidence of systemic injustice was not enough to prove discrimination; they would only consider documentation of deliberate bias in individual cases.

Unique when you are on trial, now orange & a number.  Photo by Joel Franusic on Flickr.

Which means, for instance, that if a police force decides to systematically harass black drivers, and winds up stopping hundreds of black drivers and zero white drivers each month, they’re in the clear as long as each black driver stopped was violating some portion of the traffic code.  At that point, each black driver is a unique individual lawbreaker, and the court sees no reason why their experiences should be lumped together as statistical evidence of racial injustice.  Adolph Lyons, after being nearly choked to death by an L.A. police officer, could not convince the courts that the L.A. police should stop choking innocuous black drivers.

Lovely, eh?

So it can hurt if others see us as being too special.  Too distinct for our collective identity to matter.

At other times, we humans might not feel special enough.  That’s when the baseless claims get bandied about.  For instance, K recently received a letter from Stanford’s Graduate School of Education pontificating that “Only humans teach.”  A specious example is given, followed by the reiteration that “Only humans look to see if their pupils are learning.”  Which simply isn’t true.

But people feel such a burning desire to be special – as individuals, as fans of a particular sports team, as people with a particular skin color, or as people who follow a particular set of religious credos – that an ostensibly very-educated someone needed to write this letter.

That’s why the occasional correctives always make me smile.  For instance, research findings showing that other animal species have some of the skills that our sapiens chauvinists oft claim as uniquely human, or other data indicating that humans are not as exceptional as we at times believe.

Consider our brains.  For many years, we thought our brains were anomalously large for the size of our bodies.  The basic rationale for this metric was that more brain power would be needed to control a larger body – this seems tenuous if you compare to robots we’ve created, but so it goes.  Recently, a research group directed by Suzana Herculano-Houzel counted how many actual neurons are in brains of different sizes.  Again comparing to human creations, computer scientists would argue that more neurons allow for more patterns of connections and thus more brainpower, somewhat comparable to the total number of transistors inside a computer.

As it happens, no one knew how many neurons were in different creatures’ brains, because brains are very inhomogeneous.  But they can be homogenized – rather easily, as it happens.  I did this (unfortunately!) with cow brains.  These arrived frozen and bloodied; I’d smash them with a hammer then puree them in a blender till they looked rather like strawberry daiquiri.  For my work I’d then spin the soupy slushy muck so fast that all the cell nuclei pelleted on the bottom of centrifuge tubes, ready to be thrown away.

After a spin in the blender, all brains look the same. Photo from Wikipedia.

Alternatively, one could take a sample of the soup and simply count.  How many nuclei are here?  Then stain an equivalent sample with antibodies that recognize proteins expressed in neurons but not the other cell types present in a brain: what fraction of the nuclei were neurons?  And, voila, you have your answer!
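
A minimal sketch of that arithmetic, with every number invented for illustration (the real protocol, Herculano-Houzel’s isotropic fractionator, involves careful sampling and staining, but the estimate itself is just multiplication):

```python
# Toy version of the "count a sample, then scale up" estimate described above.
# All numbers are invented for illustration, not real measurements.

suspension_volume_ml = 1200.0   # total volume of the homogenized brain "soup"
aliquot_volume_ml = 0.01        # the tiny sample actually examined
nuclei_in_aliquot = 1500        # nuclei counted in that sample
neuron_fraction = 0.55          # fraction of nuclei labeled by a neuron-specific stain

nuclei_per_ml = nuclei_in_aliquot / aliquot_volume_ml
total_cells = nuclei_per_ml * suspension_volume_ml
total_neurons = total_cells * neuron_fraction

print(f"Estimated total cells:   {total_cells:.2e}")
print(f"Estimated total neurons: {total_neurons:.2e}")
```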

Gabi et al. did roughly this, publishing their findings with the subtly anti-exceptionalist title “No relative expansion of the number of prefrontal neurons in primate and human evolution.”  We have more neurons than smaller primates, but only as many as you’d expect based on our increased size.

(Perhaps this leaves you wondering why gorillas rarely best us on human-designed IQ tests – as it happens, the other great apes are outliers, with fewer neurons than you would expect based on the primate trends.  Some of this data was presented in a paper I discussed in my essay about the link between “origin of fire” and “origin of knowledge” myths.  In brief, the idea is that the caloric requirements of human-like brainpower demanded cooked food.  The evolutionary precursors to gorillas instead progressed toward smaller brains – which happens.  The evolutionary precursors to starfish also jettisoned their brains, making themselves rather more like zombies.)

Perhaps all these brain musings are an insufficient corrective.  After all, humans are very smart – I’m trusting that you’re getting more out of this essay than the average hamster would, even if I translated these words into squeaks.

So let’s close with one more piece of humility-inducing (humiliating) research: archaeologists have long studied the migration of early humans, trying to learn when Homo sapiens first reached various areas and what happened after they arrived.  Sadly, “what happened” was often the same: rapid extinction of all other varieties of humans first, then of most other species of large animals.

All the Neanderthals disappeared shortly after Homo sapiens forayed into Europe.  There are reasons why someone might quibble with the timeline, but it seems that Homo erectus disappeared from Asia shortly after Homo sapiens arrived.  The arrival of Homo sapiens in Australia brought the extinction of all large animals other than kangaroos.  The arrival of Homo sapiens in South America presaged, again, a huge megafaunal extinction.

On evolutionary timescales, we are a slow-moving meaty wrecking ball.

Bad as we are, we can always get worse. My country! Picture by DonkeyHotey on Flickr.

And our spread, apparently, resembles that of all other invasive species.  This is slightly less derogatory than the summation given in The Matrix – “[humans] move to an area and … multiply and multiply until every natural resource is consumed and the only way [they] can survive is to spread to another area.  There is another organism on this planet that follows the same pattern.  Do you know what it is?  A virus.  Human beings are a disease, a cancer of this planet.” – but only slightly.

Upon the arrival of Homo sapiens in South America, we quickly filled the entire continent to its carrying capacity, and then, after the invention of sedentary agriculture – which boosts food production sufficiently for an area to support more human farmers than hunter gatherers – resumed exponential population growth.  Although the switch to an agricultural lifestyle may have been rotten for the individual actors – the strength needed to push plows makes human sexual dimorphism more important, which is why the spread of agriculture heralded the oppression of & violence against women throughout human history – it’s certainly a great technology if our goal is to fill the world with as many miserable humans as possible.
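
The population dynamic described above – saturation at one carrying capacity, then renewed growth once agriculture raises it – is easy to sketch with a toy logistic model.  Every number below is invented for illustration:

```python
import numpy as np

# Toy logistic growth with a carrying capacity that jumps when farming begins.
# Every parameter here is invented for illustration.
def simulate(years=2000, r=0.01, k_forager=1e6, k_farmer=1e7, farming_starts=1000):
    pop = np.empty(years)
    pop[0] = 1e4
    for t in range(1, years):
        k = k_forager if t < farming_starts else k_farmer
        pop[t] = pop[t - 1] + r * pop[t - 1] * (1 - pop[t - 1] / k)
    return pop

trajectory = simulate()
print(f"Year  999: {trajectory[999]:.2e}")   # stalled near the hunter-gatherer ceiling
print(f"Year 1999: {trajectory[1999]:.2e}")  # near the tenfold-higher farming ceiling
```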

We’ll be passing eight billion soon, a population inconceivable without modern farming technologies.  And likely unsustainable even with.

Not, again, that this makes us unique.  Plenty of species are willing to breed themselves into misery & extinction if given half the chance.  Almost any species that follows an r-selected reproductive strategy (this jargon signifies “quantity over quality”) – which oft seems to include Homo sapiens – is likely to do so.  My home town, wolf-less, is currently riddled with starving, sickly deer.

On fish (and their similarities to us).

I had a pet leopard gecko when I was growing up – he lived with me from fourth grade until I graduated from high school.  After that, my father took care of him, but I’d visit several times a year.  He would sit on my chest, occasionally skittering up to hide between my chin & neck, or buried in my underarm, while I lay on my back reading a book.

In all his glory (circa 2002).

He was a good friend to me, Mr. Lizard was (it took me almost an entire year to name him, and this was the name all that cogitation produced).  We had similar interests, mostly involving lying down in warm places to think.  I assume he was thinking.  But I have no idea what he was thinking about.  He rarely spoke – only twice that I remember – and, when he did, he made an irate chirping sound.  We didn’t have a great way to communicate.

But we, as humans, are moving closer to being able to understand some of the thoughts of other animals.  With some species, this is manageable for laypeople.  Dogs, for instance, co-evolved with humans (during which time both their brains & ours shrank as each species sloughed certain tasks off onto the other).  Most humans are pretty good at guessing what a dog is thinking, especially when the dog’s thoughts involve wanting the human to scoop kibble or go on a walk.

Fish, though?  I find fish inscrutable.  Mr. Lizard ate crickets, and the cricket bin at the local pet store was kept in the middle of the fish room, so I spent a lot of time peering into the various aquaria while their inhabitants blurbled lackadaisically about.  I always liked seeing the velvety black goldfish with eyes telescoped outward like hammerhead sharks.  I even bought a few to put into the pond in our backyard, but they swim slowly.  Within a week the raccoons had caught them all.

Jonathan Balcombe thinks I’ve been unfair, ignoring the thoughts of fish.  In What a Fish Knows, he combs through many decades of research into fish cognition in order to give blithely naive readers like myself some insight into their world.

It would be remiss of me to proceed with this essay without mentioning that, for many years, numerous scientists have argued that fish lack consciousness.  The crux of this argument is that fish brains are very different from human brains.  Indeed, fish lack the brain region that most humans use to process the experience of pain.  But that’s okay – recent animal cognition research has found that very different brain regions can be used for the same tasks in different species, as with parrots learning to sing and human children learning to speak.  And we’ve recently learned that blind humans, who use the brain regions most of us devote to sight for other purposes, are able to rewire their minds if suddenly granted vision.  Thank you Project Prakash.

So I’m not convinced by most of the arguments against fish feeling pain.  Throughout history, we’ve argued over and over again that perceived others don’t feel the way we do.  Descartes claimed that animals were nothing but automata.  White people in the United States often think that black people feel less pain.  That last sentence – I’m not just writing about the horrific way African Americans were treated long ago.  This is about how black people in the U.S. are treated today by highly-educated medical doctors.  Belief in bizarre racial stereotypes is widespread, and one consequence is that doctors offer less treatment for black people in pain than they would for equivalent white patients.

So I’m suspicious of any claims that the way we think, or feel, or suffer, is special.  As is Jonathan Balcombe.  In his words:

Thanks to breakthroughs in ethology, sociobiology, neurobiology, and ecology, we can now better understand what the world looks like to a fish, how they perceive, feel, and experience the world.

What this book explores is a simple possibility with a profound implication.  The simple possibility is that fishes are individual beings whose lives have intrinsic value — that is, value to themselves quite apart from any utilitarian value they might have to us, for example as a source of profit, or of entertainment.  The profound implication is that this would qualify them for inclusion in our circle of moral concern.

Not only is scientific consensus squarely behind consciousness and pain in fishes, consciousness probably evolved first in fishes.  Why?  Because fishes were the first vertebrates, because they had been evolving for well over 100 million years before the ancestors of today’s mammals and birds set foot on land, and because those ancestors would have greatly benefited from having some modicum of wherewithal by the time they started colonizing such dramatically new terrain.

Despite claiming that fish are extremely different from us, scientists have used fish to study human mental conditions for many years.  Since the 1950s, researchers have tried dosing fish with LSD, finding that, like humans, most fish seem to enjoy low doses of psychedelic drugs but are terrified by high doses.  Even today, antisocial cave fish are being investigated as a model to test drugs for autism and schizophrenia.  It is illogical to simultaneously claim that fish may be useful models to understand our own brains and that their brains are so different from ours that they cannot feel pain.

Of course, there probably are very significant differences between our minds and those of fish.  I’ve discussed some of these ideas in two prior speculative essays on octopus literature.  I stumbled across another lovely insight into fish brains in Sean Carroll’s The Big Picture.  He suggests, quite reasonably, that fish brains probably operate faster than our own, with less tendency toward meditative rumination.  His argument is based on the behavior of light in water versus air; in his words:

As far as stimulating new avenues of thought is concerned, the most important feature of their new environment was simply the ability to see a lot farther.  If you’ve spent much time swimming or diving, you know that you can’t see as far underwater as you can in air.  The attenuation length – the distance past which light is mostly absorbed by the medium you are looking through – is tens of meters through clear water, while in air it’s practically infinite. (We have no trouble seeing the moon, or distant objects on our horizon.)

What you see has a dramatic effect on how you think.  If you’re a fish, you move through the water at a meter or two per second, and you see some tens of meters in front of you.  Every few seconds you are entering a new perceptual environment.  As something new looms into your view, you have only a very brief amount of time in which to evaluate how to react to it.  Is it friendly, fearsome, or foodlike?

Under these conditions, there is enormous evolutionary pressure to think fast.  See something, respond almost immediately.  A fish brain is going to be optimized to do just that.  Quick reaction, not leisurely contemplation, is the name of the game.

Now imagine you’ve climbed up onto land.  Suddenly your sensory horizon expands enormously.  Surrounded by clear air, you can see for kilometers – much farther than you can travel in a couple of seconds.  At first, there wasn’t much to see, since there weren’t any other animals up there with you.  But there is food of different varieties, obstacles like rocks and trees, not to mention the occasional geological eruption.  And before you know it, you are joined by other kinds of locomotive creatures.  Some friendly, some tasty, some simply to be avoided.

Now the selection pressures have shifted dramatically.  Being simple-minded and reactive might be okay in some circumstances, but it’s not the best strategy on land.  When you can see what’s coming long before you are forced to react, you have the time to contemplate different possible actions, and weigh the pros and cons of each.  You can even be ingenious, putting some of your cognitive resources into inventing plans of action other than those that are immediately obvious.

Out in the clear air, it pays to use your imagination.

(An aside, added later, not about fish: dolphin sonar & whale songs often travel farther in water than visible light does near the Earth’s surface, perhaps inclining whales & dolphins to be more imaginative and introspective than land animals.  I neglected this thought when I first posted the essay because it’s hard to avoid favoring our own forms of perception.)

Human brains are amazing.  I think that goes without saying, especially because my ability to type the words “I think that goes without saying” is already a dramatic demonstration of our mental capacity.  As is your ability to read those words and understand roughly what I meant.

And yet.  Our brains are sufficiently remarkable that I think there’s no need to denigrate the cognitive abilities of other animals.  They can feel.  They can think.  They almost certainly have their own wants and desires.

Recognizing their value shouldn’t make us feel bad about our own minds, though.

We’ve come a long way.  We still have more, as a species, to do.  That’s glaringly obvious to anyone who so much as glances at the news.  Still, I’d like to think that the average person is doing a better job of recognizing the concerns of others than was common in our past.  There is dramatically less (but non-zero) slavery in the modern world than in the past.  And we treat non-human animals far more kindly than we used to.

From Frans de Waal’s Are We Smart Enough to Know How Smart Animals Are?:

Desmond Morris once told me an amusing story to drive this point home.  At the time Desmond was working at the London Zoo, which still held tea parties in the ape house with the public looking on.  Gathered on chairs around a table, the apes had been trained to use bowls, spoons, cups, and a teapot.  Naturally, this equipment posed no problem for these tool-using animals.  Unfortunately, over time the apes became too polished and their performance too perfect for the English public, for whom high tea constitutes the peak of civilization.  When the public tea parties began to threaten the human ego, something had to be done.  The apes were retrained to spill the tea, throw food around, drink from the teapot’s spout, and pop the cups into the bowl as soon as the keeper turned his back.  The public loved it!  The apes were wild and naughty, as they were supposed to be.

On octopus literature, a reprise: what would books be like if we didn’t love gossip?

A few months ago, I lost several days reading about the structure of octopus brains.  A fascinating subject — they are incredibly intelligent creatures despite sharing little evolutionary history with any other intelligent species.  And their minds are organized differently from our own.

Human minds are highly centralized — we can’t do much without our head being involved.  Whereas octopus minds seem to be distributed throughout their bodies.  It’s difficult to address how this might feel for an octopus, but researchers have studied the behavior of hacked-off octopus tentacles.  An octopus tentacle can behave intelligently even when it’s not connected to the rest of the body.  Each limb may have something akin to a mind of its own.

Which seems fascinating from the perspective of narrative.  The way human minds seem to work is, first our subconscious makes a decision, then a signal is sent to our muscles.  We speak, or press a button, or pull our hand away from something hot. And then, last, our conscious mind begins rationalizing why we made that choice.

The temporal sequencing is wacky, sure. But for the purpose of this essay, the important concept is that a centralized brain makes all the choices and constructs a coherent narrative for why each choice was made.

An octopus might find it more difficult to construct a single unifying narrative to explain its actions in a way that we humans would consider logical.  There are hints that octopus tentacles have characteristics akin to personalities — some behave as though shy, some as though bold, some aggressive, some curious.  If one tentacle is trying to hide while another is trying to attack, there might not be a single internal narrative that describes the creature’s self-sabotage.

And what might your personality be? Shy? Bold? Inquisitive? Photo by Jaula De Ardilla.

From our perspective, octopus consciousness might be like trying to explain in one sweep the behavior of an entire rambunctious dysfunctional family.  Sure, some calamities would affect them all together, but moment by moment each family member might have his or her own distinct interests.  A daughter who wants to stay out late, a mother who wants her daughter home by nine, a father who wants somebody to play catch in the yard, a son who just wants to be left alone…

It’s not that the collective is inexplicable, it’s just that we humans are unaccustomed to thinking of collectives like that as representing a single consciousness.  We look for logical motivations on a smaller scale — centralized minds — than an octopus might embrace as its worldview.

Anyway, I thought this might have a big impact on the way octopus literature would be structured.  Once, you know, they develop a language, start spinning myths, etc.

(To the best of my knowledge, there is no octopus language.  If they have one that’s chemical- or color-based, I’m not sure I would even notice.  Someone else probably would’ve, though.)

While reading Sy Montgomery’s The Soul of an Octopus, I learned that there would probably be another major difference between octopus literature and our own.  Their literature might seem chaotic to human readers, yes.  But also, our literature is often character-driven.  Our brains evolved to gossip, and the books that human readers love most feature charming, striking individuals.  I love The Idiot largely because of the dynamic between Myshkin and Rogozhin, In Search of Lost Time for the vicarious misery of watching Marcel’s crumbling relationship with Albertine.  Readers of Game of Thrones are immersed in a rich world of political intrigue, tracking everyone’s motives as they push against each other.

Octopus readers might not care about any of that.  From Montgomery’s book:

Belonging to a group is one of humankind’s deepest desires.  We’re a social species, like our primate ancestors.  Evolutionary biologists suggest that keeping track of our many social relationships over our long lives was one of the factors driving the evolution of the human brain.  In fact, intelligence itself is most often associated with similarly social and long-lived creatures, like chimps, elephants, parrots, and whales.

But octopuses represent the opposite end of this spectrum.  They are famously short-lived, and most do not appear to be social.  There are intriguing exceptions: Male and female lesser Pacific striped octopuses, for instance, sometimes cohabit in pairs, sharing a single den.  Groups of these octopuses may live in associations of forty or more animals — a fact so unexpected that it was disbelieved and unpublished for thirty years, until Richard Ross of the Steinhart Aquarium recently raised the long-forgotten species in his home lab.  But the giant Pacific, at least, is thought to seek company only at the end of its life, to mate.  And even that is an iffy proposition, as one known outcome is the literal dinner date, when one octopus eats the other.  If not to interact with fellow octopuses, what is their intelligence for?  If octopuses don’t interact with each other, why would they want to interact with us?

Jennifer, the octopus psychologist, says, “The same thing that got them their smarts isn’t the same thing that got us our smarts.”  Octopus and human intelligence evolved separately and for different reasons.  She believes the event driving the octopus toward intelligence was the loss of the ancestral shell.  Losing the shell freed the animal for mobility.  An octopus, unlike a clam, does not have to wait for food to find it; the octopus can hunt like a tiger.  And while most octopuses love crab best, a single octopus may hunt many dozens of different prey species, each of which demands a different hunting strategy, a different skill set, a different set of decisions to make and modify.  Will you camouflage yourself for a stalk-and-ambush attack?  Shoot through the sea with your siphon for a quick chase?  Crawl out of the water to capture escaping prey?

Come to think of it, the mammalian Auntie Ferret would also enjoy reading “The Loner’s Guide to Building Fabulous Underwater Contraptions”

All of which made me realize, an octopus reader would probably be indifferent to well-crafted characters with rich inner lives.  An octopus would probably care far more about the plot than the characters.  My assumption is that an ideal octopus novel would be a thriller: action-packed, crammed full of facts, and weaving together numerous barely-integrated narratives.

Indeed, octopus readers might not like Montgomery’s book, since she devotes so much space to the tangled lives and interactions of the humans who love and study them.  The Soul of an Octopus is clearly intended for a human audience.

I’d be curious to read a book written specifically for an octopus someday… although it’s probable that, like music composed specifically for tamarin monkeys, octopus literature would seem awful to me.

On Gerry Alanguilan’s “ELMER,” his author bio, and animal cognition.

I was talking to a runner about graphic novels, once again recommending Andy Hartzell’s Fox Bunny Funny (which I imagine would be exceptionally treasured by a young person questioning their gender identity or sexuality, but is still great for anybody who feels they don’t quite fit in), when he recommended Gerry Alanguilan’s ELMER.  An excellent recommendation — I thoroughly enjoyed it.

elmer
The comic’s premise is that chickens suddenly gain intelligence roughly equivalent to that of humans.  Then they fight against murder, oppression, and prejudice in ways reminiscent of the U.S. civil rights movement.  The beginning of the book is horrifying, first with scenes depicting chickens coming into awareness while hanging by their feet in a slaughterhouse, then with the violent reprisals they exact against humans.

Alanguilan is a great artist and clearly a very empathetic man.

But that’s why I thought it was so strange that two out of four sentences of his short bio on the back cover read, “Gerry really likes chicken adobo, Psych, Mr. Belvedere, Titanic, Doctor Who, dogs, video blogging and specially Century Gothic. Transformed.”  For a moment I thought the first clause might be ironic because his author photograph for ELMER was taken in front of a busy bulletin board & one sheet of paper was a diet guide that appeared to have the vegan “v” logo at the bottom — maybe Gerry is making a point about what he gave up! — but with some squinting I realized it was a “Diet Guide for High Cholesterol Patients,” the symbol at the bottom merely a checkmark.

Why, then, would Alanguilan want to punctuate his work with the statement that he eats chickens, as though that is a defining feature of his life?

It’s commonly assumed among people who study animal cognition that other species are less aware of the world than humans are.  That humans perceive more acutely, our immense brainpower ensuring that our feelings cut deep.

The differences are matters of degree, though.  It’s also widely acknowledged that humans exist on the same continuum as other animals, with no clear boundaries — genetic, physiological, or cognitive — demarcating us from them.  I thought this was phrased well by Frans de Waal in his editorial on Homo naledi and teleological misconceptions about evolution:

The problem is that we keep assuming that there is a point at which we became human.  This is about as unlikely as there being a precise wavelength at which the color spectrum turns from orange into red.  The typical proposition of how this happened is that of a mental breakthrough — a miraculous spark — that made us radically different.  But if we have learned anything from more than 50 years of research on chimpanzees and other intelligent animals, it is that the wall between human and animal cognition is like a Swiss cheese.

This is why, after reading Alanguilan’s brief biography, I began to wonder how little human-like awareness chickens would need to have for their treatment in slaughterhouses, or the conveyor belt & macerator (grinder) used to expunge male chicks, or their confinement in dismal laying operations, to seem acceptable.

In ELMER, Alanguilan makes clear that their treatment would be unacceptable if the average chicken had one hundred percent of the cognitive capacity of the average human.  But then, below what percentage of cognition does their treatment become okay?  Eighty percent?  Ten?  One?  Point one?

I think that’s an important question to ask, especially of an artist capable of creating such powerful work.

(And I should make clear that my own moral decisions exist in the same grey zone that I find curious in Alanguilan’s author bio.  I support abortion rights, an implicit declaration that the fractional cognition of a fetus is insufficient to outweigh the interests of the mother.  It’s more complicated than that, but it’s worth making clear that I’m not purporting to be morally pure.)

It’s true that humans are heterotrophs.  It’s impossible for us to live without harming — it irks me when vegetarians claim, for instance, that plants have no feelings.  They clearly do, they have wants and desires, they have rudimentary means of communication.  You could argue that eating fruit is ethically simple because fruit represents a pact between flowering plants and animal life, which co-evolved.  A plant expends energy to create fruit as a gift to animals, and animals in accepting that gift spread the plant’s seeds.

But anyone who eats vegetables (where “vegetable” means something like kale or broccoli or carrots — Supreme Court justices are not scientists) harms other perceiving entities by eating.

Which is fine. I eat, too!  Our first concern, given that we are perceiving entities, is to take care of ourselves.  If you didn’t care for your own well-being, what would motivate you to care for someone else’s?  Beyond that, I don’t think there’s a simple way to identify what or whom else is sufficiently self-like to merit our concern.  Personally, I care much more about my family than I do other humans — I devote the majority of my time and energy to helping them.  And I care much more about the well-being of the average human than I do the average cow, say, or lion.

Moral philosophers like Peter Singer would describe this as “speciesist.”  I think that’s a silly-sounding word for a silly concept.  I don’t care about other humans because we have similar sequences in our DNA, or even because they resemble what I see when I look into a mirror.  I care about their well-being because of their internal mental life — I can imagine what it might feel like to be another human and so their plights sadden me.

Sure, I can imagine what it might feel like to be a chicken… but less well.  Other animals don’t perceive the world the same way we do.  And they seem to think less well.  I’d rather they not suffer.  But if somebody has to suffer, I’d rather that somebody be a Gallus gallus than a Homo sapiens.  I’d rather many chickens suffer than one human — I weigh chickens’ interests at only a small fraction of my concern for other humans.

Humans can talk to me.  They can share their travails with words, or gestures, or interpretative dance, or facial expressions.  And that matters a lot to me.

But integrity matters, too.  For instance, it seemed strange to me that David Duchovny could both write the book Holy Cow, in which he depicts farmed animals attempting to escape their doom, and still announce that he is “a very lazy vegetarian, which means I will look for the vegetarian meal, but I will also give up.”

My main objection isn’t to people eating meat.  It isn’t even to people who understand that animals can think (with differences in degree from human cognition, not differences in kind) eating meat.  Not everyone lives where I do, within a short walk of several grocery stores that all offer excellent nutrition from plants alone.  It’d be extremely difficult (and expensive) for humans living near the arctic to stay healthy without eating fish.  Those people’s well-being matters to me far more than the well-being of fish they catch.

And, for people living in close proximity to large, dangerous carnivores? Yes, obviously it’s reasonable for them to kill the animals terrorizing their villages.  I wish humans bred a little more slowly so that there’d still be space in our world for those large carnivores, but given that the at-risk humans already exist, I’d rather they be safe.  I can imagine how they feel.  I wouldn’t want my own daughter to be in danger.  I ruthlessly smash any mosquitos that go near her, and they are far less deadly than lions.

I simply find it upsetting when people who seem to believe that animal thought matters won’t take minor steps toward hurting them less.  It’s when confronted with stories about people who understand the moral implications of animal cognition, and who live in a place where it’s easy to be healthy eating vegetables alone, but don’t, that I feel sad.

If you had the chance to make your life consistent with your values, why wouldn’t you?


On attempts to see the world through other eyes.

Most writers spend a lot of time thinking about how others see the world.  Hopefully most non-writers spend time thinking about this too.  It’s easier to feel empathy for the plights of others if you imagine seeing through their eyes.

So I thought it was pretty cool that the New York Times published an article about processing images to represent how they might appear to other species.

The algorithm shifts the color distribution of images to highlight which objects appear most distinct for an animal with different photoreceptors.  I thought it was cool even though the processing they describe fails in many ways to convey how differently various animals perceive the world.
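
I don’t know the exact transformation the researchers used, but the general idea can be sketched as a channel-mixing step: remap each pixel’s colors through a matrix meant to emphasize the contrasts another species’ photoreceptors would pick up.  The matrix below is made up for illustration, not fit to any real animal’s eyes:

```python
import numpy as np

def remap_colors(image, mixing_matrix):
    """Remap an H x W x 3 image (values in [0, 1]) through a 3 x 3 channel-mixing matrix."""
    return np.clip(image @ mixing_matrix.T, 0.0, 1.0)

# Invented weights that exaggerate the blue end of the spectrum --
# a stand-in for a matrix fit to, say, bee photoreceptor sensitivities.
FAKE_BEE_MATRIX = np.array([
    [0.1, 0.2, 0.7],   # output "red" channel, as weights on input (R, G, B)
    [0.0, 0.4, 0.6],   # output "green" channel
    [0.0, 0.1, 0.9],   # output "blue" channel
])

photo = np.random.rand(480, 640, 3)          # stand-in for a real photograph
bee_ish_view = remap_colors(photo, FAKE_BEE_MATRIX)
```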

For one thing, image processing can only affect visuals.  Another species may rely more on sound, scent, taste (although perhaps it’s cheating to list both scent and taste — they are essentially the same sense, chemodetection, with the difference being that humans respond more sensitively, and to a wider variety of chemicals, with our noses than our tongues), touch, sensing magnetic fields, etc.

If we assume that other animals will also place maximal trust in the detection of inbound electromagnetic radiation from the narrow band we’ve deemed “the visual spectrum,” we can fool ourselves regarding their most likely interpretations.  For an example, you could read my previous post about why rattlesnakes might assume that humans employ chameleon-like camouflage (underlying idea courtesy of Jesus Rivas & Gordon Burghardt).

The second problem with assuming that an image with shifted colors represents how another animal would view the world is on the level of neurological processing.  When a neurotypical human looks at an image and something resembles a face, that portion of the image will immediately dominate the viewer’s attention; a huge amount of human brainpower is devoted to processing faces.  Similarly, some dogs, if another dog enters their visual field, have trouble seeing anything else.  And bees: yes, they see more blues & ultraviolets than we do, but it’s also likely that flowers dominate their attention. I imagine it’s something like the image below, taken with N and her Uncle Max on a recent walk. Although, depending on your personality, you might have some dog-style neurological processing, too.

[Photo: N and her Uncle Max on a recent walk]

Even amongst humans this type of perceptual difference exists.  A friend of mine who does construction (ranked the second-best apprentice pipefitter in the nation the year he finished his training, despite being out at a buddy’s bachelor party, i.e. not sleeping, all night before the competition), when he walks into a room, immediately notices all exposed ductwork, piping, etc.  Most people care so little about these features as to render them effectively invisible.  And I, after three weeks of frantic itching and a full course of methylprednisolone, could glance at any landscape in northern California and immediately point out all the poison oak.  My daughter can spot a picture or statue of an owl from disconcertingly far away and won’t stop yelling “owww woo!” until I see it too.

The color processing written up in the New York Times, though, was automated.  Given the current state of computerized image recognition, you probably can’t write a script that would magnify dogs or flowers or poison oak effectively.  Maybe in a few years.

There’s one last big problem, though.  And the last problem is about the colors alone.  There is simply no way to re-color images so that a dichromatic (colloquially, “colorblind”) human would see the world like a trichromat.

(A brief aside: Shortly after I wrote the above sentence, I read an article about glasses marketed to colorblind people to let them see color.  And the basic idea is clever, but I don’t think it invalidates my claim.


Here’s how it works: most colorblind people are dichromats, meaning they have two different flavors of color receptors.  Colored light stimulates these receptors differentially: green light stimulates green receptors a lot and blue receptors a little.  Blue light stimulates blue receptors a lot and green receptors a little.  The brain processes the ratio of receptor stimulation to say, “Ah ha!  That object is blue!”

A typical human, however, is a trichromat.  This means that the brain uses three datapoints to determine an object’s color instead of two.  The red and green receptors absorb maximally near the same part of the spectrum, though… the red vs. blue & green vs. blue ratios are generally very similar.  So the third receptor type mostly helps a trichromat distinguish between red and green.

This means a dichromat will have a narrower range of the electromagnetic spectrum that they are good at distinguishing color within.  For a dichromat, reds and greens both will be characterized by “green receptor stimulated a lot, blue receptor only a little.”

Now, if you imagine that the visual spectrum is a number line that runs from 0 to 100, a dichromat would be good at distinguishing colors in the first 0 to 50 segment, and not good at distinguishing color beyond that point — everything with green wavelength, ca. 500 nanometers, and longer, would appear to be green.

But you could take that 0 to 100 number line and just divide everything by 2.  Then every color would look “wrong” — no object would appear to be the same color as it was before you put on the wacky glasses — and you’d be less able to distinguish between close shades — if two colors needed to be 15 nanometers apart to seem different, now they’d need to be 30 nanometers apart — but a dichromat could distinguish between colors over the same full visual spectrum as trichromats.

That’s roughly how the glasses should work — inbound light is shifted such that all colors are made blue & greenish, and the visual spectrum is condensed).
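
Here is the number-line analogy in code, a minimal sketch using the made-up 0-to-100 scale from the paragraphs above rather than real nanometers:

```python
# The made-up 0-to-100 "visual spectrum" from the analogy above.

def dichromat_sees(color):
    """A dichromat resolves color only below 50; everything beyond collapses to 'green'."""
    return color if color <= 50 else 50

def through_glasses(color):
    """The glasses compress the whole scale into the lower half:
    the full range is preserved, but neighboring shades get squeezed together."""
    return color / 2

for color in (10, 40, 60, 90):
    print(color,
          "| bare eye ->", dichromat_sees(color),
          "| with glasses ->", dichromat_sees(through_glasses(color)))
```

Without the glasses, 60 and 90 both collapse to the same “green”; with them, they land at 30 and 45 – distinguishable, just packed closer together than a trichromat would see them.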

Of course, you can’t change an image in a way that will allow you (I’m assuming that you, dear reader, are a trichromat.  But my assumption has a 10% chance of being wrong.  My apologies!  I care about you, too, dichromatic reader!) and a dichromatic friend to see it the same way.  But you can change your friend.  You can inject a DNA-delivering retrovirus into your friend’s eyeball, and after a short neurological training period, you and your friend will see colors the same way!

Only in the eyeball!

It’s possible that your friend won’t like you any more if you do this.  But here’s how it works: the retrovirus encodes for the flavor of photoreceptor that none of your friend’s cone cells were expressing.  Upon infection, the virus will initiate production of that receptor… so now a subpopulation of cone cells will be sending new signals to the brain.  They’ll be stimulated by different wavelengths of light than they were before.  And brains, magically plastic things that they are, rapidly rewire themselves to incorporate any new data they have access to.

(If you’re interested in this sort of thing, you should look up biohacking.  Like implanting magnets in your fingers to “feel” electric or magnetic fields.  But I’m not going to link to anything.  Wrestling your friend to the ground in order to inject recombinant DNA into his eyeball?  That makes me smile.  But slicing open your own fingertips to put magnets under the skin?  That’s too creepy for me).

If a brain is suddenly receiving different signals after exposure to red versus green light, it’ll use that information.  Which means: Color vision achieved!  Unfortunately, viral DNA integrates randomly, so a weird eye cancer might’ve been achieved as well.  You win some, you lose some.

What we call “color vision,” though, is still only trichromatic.  With three flavors of cone cells, humans can do a pretty good job distinguishing colors from about 400 to 700 nanometers.  But some species have more flavors of cone cells, which means they can distinguish the world’s colors more precisely.  Even some humans are tetrachromats, although their fourth cone cell flavor is maximally stimulated by light midway between red and green, a part of the electromagnetic spectrum that trichromatic humans are already good at parsing.  And tetrachromatic humans are rare: to the best of my knowledge no languages have a word for that secret color between red and green.  I don’t know any words for it, at least, but maybe this too is a secret guarded by those who see it.

Still, no amount of image processing would allow you, dear reader, even if you’re one of those rare tetrachromatic individuals, to see the world in all the spangled glory seen by a starling or a peacock.  This graph shows the stimulation of each flavor of cone cell receptor by different wavelengths of light.

[Graph: spectral sensitivities of human and starling photoreceptors]

And even the splendorous beauty seen by birds pales in comparison to the way we once thought mantis shrimps perceive the world.  Because mantis shrimps, see, have twelve flavors of photoreceptors, which means that if their brains processed colors the same way ours do, by considering the ratio of cone cell flavors that are stimulated by incident light, they’d be exquisitely sensitive to color.  Here: compare the spectral sensitivity graph for humans and starlings, shown above, to the equivalent graph for mantis shrimps.  This makes humans look pathetic!

[Graph: mantis shrimp spectral sensitivities]

If you haven’t seen it, you should definitely read this cartoon about mantis shrimp perception from The Oatmeal.

It’s possible that mantis shrimps process color differently from humans, though.  Instead of computing ratios of cone-flavor activation to determine the color of an object, they might decide that an object is the color of whatever single cone flavor is most stimulated.  In other words, while humans use stimulation ratios from our mere three flavors of cone cells to identify thousands of hues, a species with a dozen photoreceptor flavors might regard every object as being one of those dozen discrete colors.

Indeed, that’s what a recent study from Thoen et al. (“A Different Form of Color Vision in Mantis Shrimp”) suggests.  They trained mantis shrimps to attack a particular color of light in order to win a treat, then tested how well they could distinguish that color from nearby wavelengths.  In their hands, the shrimps needed approximately 50 nanometers separating two colors to distinguish them, whereas humans, with our meager three flavors of photoreceptors, can often distinguish colors as close as 1 or 2 nanometers apart.
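
A toy way to see the difference between the two decoding schemes – our ratio-of-stimulation code versus the proposed “whichever receptor fires hardest” code – using invented Gaussian sensitivity curves rather than real measurements:

```python
import numpy as np

def receptor_responses(wavelength, peaks, width=40.0):
    """Invented Gaussian sensitivity curves: response of each receptor type to a wavelength (nm)."""
    peaks = np.asarray(peaks, dtype=float)
    return np.exp(-((wavelength - peaks) ** 2) / (2 * width ** 2))

HUMAN_PEAKS = [420, 530, 560]               # three cone flavors
SHRIMP_PEAKS = np.linspace(400, 700, 12)    # twelve receptor flavors

def ratio_code(wavelength, peaks):
    """Human-style decoding: the full pattern of relative stimulation."""
    r = receptor_responses(wavelength, peaks)
    return r / r.sum()

def winner_take_all(wavelength, peaks):
    """Proposed shrimp-style decoding: just the identity of the most stimulated receptor."""
    return int(np.argmax(receptor_responses(wavelength, peaks)))

# 500 nm and 510 nm produce noticeably different ratio codes for the three-cone model,
# but can fall into the same winner-take-all bin even with twelve receptor types.
print(ratio_code(500, HUMAN_PEAKS), ratio_code(510, HUMAN_PEAKS))
print(winner_take_all(500, SHRIMP_PEAKS), winner_take_all(510, SHRIMP_PEAKS))
```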

Still, it’s hard to know exactly what a shrimp is thinking.  Testing human cognition and perception is easier because we can, you know, talk to each other.  Describe what we see.

With humans, the biggest barrier to empathy is that sometimes we forget to listen.

On watchful gods, trust, and how academic scientists undermined their own credibility.

Despite my disagreements with a lot of its details, I thoroughly enjoyed Ara Norenzayan’s Big Gods.  The book posits an explanation for the current global dominance of the big three Abrahamic religions: Christianity, Islam, and Judaism.

Instead of the “quirks of history & dumb luck” explanation offered in Jared Diamond’s Guns, Germs, and Steel, Norenzayan suggests that the Abrahamic religions have so many adherents today because beneficial economic behaviors were made possible by belief in those religions.

Here’s a rough summary of the argument: Economies function best in a culture of trust.  People are more trustworthy when they’re being watched.  If people think they’re being watched, that’s just as good.  Adherents to the Abrahamic faiths think they are always being watched by God.  And, because anybody could claim to believe in an omnipresent, ever-watchful god, it was worthwhile for believers to practice costly rituals (church attendance, dietary restrictions, sexual moderation, risk of murder by those who hate their faith) in order to signal that they were genuine, trustworthy, God-fearing individuals.

A clever argument.  To me, it calls to mind the trustworthiness passage of Daniel Dennett’s Freedom Evolves:

When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so.  Wearing a cap that says “I’m a cooperator” is not going to take you far in a world of other rational agents on the lookout for ploys.  According to [Robert] Frank, over evolutionary time we “learned” how to harness our emotions to the task of keeping us from being too rational, and–just as important–earning us a reputation for not being too rational.  It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to “offers we can’t refuse,” as the Godfather says.  Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.

I think that’s a beautiful passage — the logic goes down so easily that I hardly notice the inaccuracies beneath the surface.  It makes a lot of sense unless you consider that many other species, including relatively non-cooperative species, have emotional lives very similar to our own, and will, like us, act in irrational ways to stay true to those emotions (I still love this clip of an aggrieved monkey rejecting its cucumber slice).

Maybe that doesn’t seem important to Dennett, who shrugs off decades of research indicating the cognitive similarities between humans and other animals when he asserts that only we humans have meaningful free will, but that kind of detail matters to me.

You know, accuracy or truth or whatever.

Similarly, I think Norenzayan’s argument is elegant, even though I don’t agree.  One problem is that he supports his claims with results from social psychology experiments, many of which are not credible.  But that’s not entirely his fault.  Arguments do sound more convincing when there’s experimental data to back them up, and surely there are a few tolerably accurate social psychology results tucked away in the scientific literature. The problem is that the basic methodology of modern academic science produces a lot of inaccurate garbage (References? Here & here & here & here... I could go on, but I already have a half-written post on the reasons why the scientific method is not a good persuasive tool, so I’ll elaborate on this idea later).

For instance, many of the experiments Norenzayan cites are based on “priming.”  Study subjects are unconsciously inoculated with an idea: will they behave differently?

Naturally, Norenzayan includes a flattering description of the first priming experiment, the Bargh et al. study (“Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action”) in which subjects walked more slowly down a hallway after being unconsciously exposed to words about old age.  But this study is terrible!  It’s a classic in the field, sure, and its “success” has resulted in many other laboratories copying the technique, but it almost certainly isn’t meaningful.

Look at the actual data from the Bargh paper: they’ve drawn a bar graph that suggests a big effect, but that’s just because they picked an arbitrary starting point for their axis.  There are no error bars.  The work couldn’t be replicated (unless a research assistant was “primed” to know what the data “should” look like in advance).

[Bar graph from the Bargh et al. paper]

The author of the original priming study also published a few apoplectic screeds denouncing the researchers who attempted to replicate his work — here’s a quote from Ed Yong’s analysis:

Bargh also directs personal attacks at the authors of the paper (“incompetent or ill-informed”), at PLoS (“does not receive the usual high scientific journal standards of peer-review scrutiny”), and at me (“superficial online science journalism”).  The entire post is entitled “Nothing in their heads”.

Personally, I am extremely skeptical of any work based on the “priming” methodology.  You might expect the methodology to be sound because it’s been used in so many subsequent studies.  I don’t think so.  Scientific publishing is sufficiently broken that unsound methodologies could be used to prove all sorts of untrue things, including precognition.

If you’re interested in the failings of modern academic science and don’t want to wait for my full post on the topic, you should check out Simmons et al.’s “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.”  This paper demonstrates that listening to the Beatles will make you chronologically younger.

Wait.  No.  That can’t be right.


The Simmons et al. paper actually demonstrates why so many contemporary scientific results are false, a nice experimental supplement to the theoretical Ioannidis model (“Why Most Published Research Findings Are False”).  The paper pre-emptively rebuts empty rationalizations such as those given in Lisa Feldman Barrett’s New York Times editorial (“Psychology Is not in Crisis,” in which she incorrectly argues that it’s no big deal that most findings cannot be replicated).

Academia rewards researchers who can successfully hunt for publishable results.  But the optimal strategy for obtaining something publishable (collect lots of data, analyze it repeatedly using different mathematical formulas, discard all the data that look “wrong”) is very different from the optimal strategy for uncovering truth.
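
A sketch of how that publishable-results strategy plays out, assuming nothing fancier than repeated t-tests on pure noise (so any “significant” difference is false by construction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_trial(start_n=20, step=10, max_n=200, alpha=0.05):
    """Keep collecting data and re-testing; 'publish' the moment p dips below alpha."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))   # same distribution as a: no real effect exists
    while len(a) <= max_n:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False

trials = 1000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
print("Nominal false-positive rate: 5%")
print(f"With optional stopping: {100 * false_positives / trials:.0f}%")
```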


Here’s one way to understand why much of modern academic publishing isn’t really science: in general, results are publishable only if they are positive (i.e. a treatment causes a change, as opposed to a treatment having no effect) and significant (i.e. you would see the result only 1 out of 20 times if the claim were not actually true).  But that means that if twenty labs decide to test the same false idea, 19 of them will get negative results and be unable to publish their findings, whereas 1 of them will see a false positive and publish.  Newspapers will announce that the finding is real, and there will be a published record of only the incorrect lab’s result.
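
The arithmetic behind the twenty-labs scenario is short enough to write out:

```python
# If twenty labs independently test the same false idea at the usual 5% threshold,
# the chance that at least one of them gets a publishable false positive is:
labs = 20
alpha = 0.05
print(f"{1 - (1 - alpha) ** labs:.0%}")   # roughly 64%
```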

Because academic training is set up like a pyramid scheme, we have a huge glut of researchers.  For any scientific question, there are probably enough laboratories studying it to nearly guarantee that significance testing will provide one of them an untrue publishable result.

And that’s even if everyone involved were 100% ethical.  Even then, a huge quantity of published research would be incorrect.  In our world, where many researchers are not ethical, the situation is even worse.

Norenzayan even documents this sort of unscientific over-analysis of data in his book.  One example appears in his chapter on anti-atheist prejudice:

In addition to assessing demographic information and individual religious beliefs, we asked [American] participants to rate the degree to which they viewed both atheists and gays with either distrust or with disgust.

. . .

It is possible that, for whatever reason, people may have felt similarly toward both atheists and gays, but felt more comfortable openly voicing distrust of atheists than of gays.  In addition, our sample consisted of American adults, overall a quite religious group.  To address these concerns, we performed additional studies in a population with considerable variability in religious involvement, but overall far less religious on the whole than most Americans.  We studied the attitudes of university students in Vancouver, Canada.  To circumvent any possible artifacts that result from overtly asking people about their prejudices, we designed studies that included more covert ways of measuring distrust.

When I see an explanation like that, it suggests that the researchers first conducted their study using the same methodology for both populations, obtained data that did not agree with their hypothesis, then collected more data for only one group in order to build a consistent, publishable story (if you’re interested, you can see their final paper here).

Because researchers can (and do!) collect data until they see what they want — until they have results that agree with a pet hypothesis, perhaps one they’ve built their career around — it’s not hard to obtain publishable data that appear to support any claim.  Doesn’t matter whether the claim is true or not.  And that, in essence, is why the practices that masquerade as the scientific method in the hands of modern researchers are not convincing persuasive tools.

I think it’s unfair to denounce people for not believing scientific results about climate change, for instance.  Because modern scientific results simply are not believable.

Which is a shame.  The scientific method, used correctly, is the best way to understand the world.  And many scientists are very bright, ethical people.  And we should act upon certain research findings.

For instance, even if the reality underlying most climate change studies is a little less dire than some papers would lead you to believe, our world will be better off — more ecological diversity, less asthma, less terrorism, and, yes, less climate destabilization — if we pretend the results are real.

So it’s tragic, in my opinion, that a toxic publishing culture has undermined the authority of academic scientists.

And that’s one downside to Norenzayan’s book.  He supports his argument with a lot of data that I’m disinclined to believe.

The other problem is that he barely addresses historical information that doesn’t agree with his hypothesis.  For instance, several cultures developed long-range trust-based commerce without believing in omnipresent, watchful, morality-enforcing gods, including ancient Kanesh, China, the pre-Christian Greco-Roman empires, and some regions of Polynesia.

There’s also historical data demonstrating that trust is separable from religion (and not just in contemporary secular societies, where Norenzayan would argue that a god-like role is played by the police… didn’t sound so scary the way he wrote it).  The most heart-wrenching example of this, in my opinion, is presented in Nunn & Wantchekon’s paper, “The Slave Trade and the Origins of Mistrust in Africa.”  They suggest a causal relationship between kidnapping & treachery during the transatlantic slave trade and contemporary mistrust in the plundered regions.  Which would mean that slavery in the United States created a drag on many African nations’ economies that persists to this day.

That legacy of mistrust persists despite the once-plundered nations (untrusting, with high economic transaction costs to show for it) & their neighbors (trusting, with greater prosperity) having similar proportions of believers in the Abrahamic faiths.

Is it so wrong to wish Norenzayan had addressed some of these issues?  I’ll admit that complexity might’ve sullied his clever logic.  But, all apologies to Keats, sometimes it’s necessary to introduce some inelegance in the pursuit of truth.

Still, the book was pleasurable to read.  Definitely gave me a lot to think about, and the writing is far more lucid and accessible than I’d expected.  Check out this passage on the evolutionary flux — replete with dead ends — that the world’s religions have gone through:

This cultural winnowing of religions over time is evident throughout history and is occurring every day.  It is easy to miss this dynamic process, because the enduring religious movements are all that we often see in the present.  However, this would be an error.  It is called survivor bias.  When groups, entities, or persons undergo a process of competition and selective retention, we see abundant cases of those that “survived” the competition process; the cases that did not survive and flourish are buried in the dark recesses of the past, and are overlooked.  To understand how religions propagate, we of course want to put the successful religions under the microscope, but we do not want to forget the unsuccessful ones that did not make it — the reasons for their failures can be equally instructive.
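To make that survivor-bias point concrete, here’s a toy simulation of my own (it’s not from Norenzayan’s book, and every name and number in it is invented): spawn a few thousand small movements with randomly varying knacks for growth, let the unlucky and unfit ones wink out, and then compare the survivors to the full cast.

```python
import random

# Toy survivor-bias demo: every movement's fate is partly luck, partly an
# underlying "growth knack."  If we study only the movements that survived,
# their average knack comes out well above the average across every movement
# that ever existed -- the doomed ones simply drop out of the sample.

random.seed(1)

def simulate_movement(years=500):
    """Start with 10 adherents; membership drifts randomly each year."""
    members = 10.0
    knack = random.gauss(1.0, 0.05)        # underlying growth tendency
    for _ in range(years):
        members *= knack * random.uniform(0.9, 1.1)
        if members < 1:
            return None, knack              # extinct, and soon forgotten
    return members, knack

results = [simulate_movement() for _ in range(5000)]
survivor_knacks = [k for size, k in results if size is not None]
all_knacks = [k for _, k in results]

print(f"movements that survived: {len(survivor_knacks)} of {len(results)}")
print(f"average knack, survivors only: {sum(survivor_knacks) / len(survivor_knacks):.3f}")
print(f"average knack, every movement: {sum(all_knacks) / len(all_knacks):.3f}")
```

Judge religions (or companies, or mutual funds) only by the ones still standing and you’ll systematically overrate whatever traits they happen to share; the failures, as Norenzayan says, are buried in the dark recesses of the past.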

This idea, that the histories we know preserve only a lucky few voices & occurrences, is also beautifully alluded to in Jurgen Osterhammel’s The Transformation of the World (trans. Patrick Camiller).  The first clause here just slays me:

The teeth of time gnaw selectively: the industrial architecture of the nineteenth century has worn away more quickly than many monuments from the Middle Ages.  Scarcely anywhere is it still possible to gain a sensory impression of what the Industrial “Revolution” meant–of the sudden appearance of a huge factory in a narrow valley, or of tall smokestacks in a world where nothing had risen higher than the church tower.

Indeed, Norenzayan is currently looking for a way to numerically analyze oft-overlooked facets of history.  So, who knows?  Perhaps, given more data, and a more thorough consideration of data that don’t slot nicely into his favored hypothesis, he could convince me yet.

On mental architecture and octopus literature.

I might spend too much time thinking about how brains work.  Less than some people, sure — everybody working on digital replication of human thought must devote more energy than I do to the topic, and they’re doing it in a more rigorous way — but for a dude with no professional connection to cognitive science or neurobiology or what-have-you, I spend an unreasonable amount of time obsessing over ’em.

What can I say?  Brains are cool.  That they function at all is pretty amazing, and that they do it in a way that gives us either free will or at least the illusion of having it is even better.

Most of my “obsessing over brains” time is devoted to thinking about how humans work, but studies on animal cognition always floor me as well.  A major focus of these studies, though, is often how similar human minds are to those of other animals… for instance, my recent hamsters & poverty essay was about the common response of most mammalian species to unfair, unrectifiable circumstance, and I’m planning a piece on the (mild) similarities between prairie dog language and our own.

The only post I’ve slapped up lately on differences between human and animal cognition was about potential rattlesnake misconceptions, but even that piece hinged upon a difference in the way they see, not the way they think.

Today’s post, though, will be about octopi.

A baby octopus (Graneledone verrucosa) moves across the seafloor as ROV Deep Discoverer (D2) explores Veatch Canyon.

A study on octopus evolution was recently published in Nature (Albertin et al., “The octopus genome and the evolution of cephalopod neural and morphological novelties”), and the main thing I learned from that paper & some background reading is that octopus brains are wicked cool.

Honestly, if we asked Superman to spin our planet backward some twenty billion times in order to re-run evolution, I think cephalopods could give apes a run for their money on potential planetary dominance.  Cephalopods are quite intelligent, adept problem solvers, have tentacles sufficiently agile for tool use, and can communicate by changing colors (although with much less finesse than the octospiders in Arthur C. Clarke’s Rama series, who used a language based on shifting striations of color displayed on their skin).


The biggest obstacle holding octopi back from world domination is how hard it is for a water-dwelling species to harness fire or electricity.  But octopi can make brief sojourns onto dry land… and even land-dwelling apes took something like 20 million years to discover fire and some 22 million for electricity.

Sure, that’s faster than octopi — they’ve had a hundred million years already and still no fire — but once Superman spins the planet (first he fought crime!  Now he’ll muck up our timeline to investigate evolution!), there’ll be a chance for him to stop that asteroid and save the dinosaurs.  I imagine that living in constant terror of T-Rex & friends would slow the apes down a little.

I’ve never had to work under that kind of pressure, but it’s probably much more difficult to discover fire if you’re worried that a dinosaur will stomp by, demolish your laboratory, and eat you.

Octopi ingenuity might be similarly stymied by pervasive fear of giant monsters: sharks, dolphins, sea lions, seals, eels, and, yes, those ostensibly land-bound hairless apes.  Voracious, vicious predators all… especially those apes.


And yet.  Despite the fear, octopi are extremely clever.  They have a massive genome, too.  In itself, genome size is not a measure of complexity, in part because faulty cell division machinery sometimes results in the duplication of entire genomes — no matter how many copies of Fuzzy Bee & Friends you staple together, even if you create a 1,000+ page monstrosity, you won’t create a narrative with the complexity of The Odyssey.

That’s what researchers thought had happened with the octopus genome.  Sure, they have more genes than us, but they’re probably all duplicates!  Albertin et al. were the first to actually test that hypothesis, though… and it turns out to be wrong.  The octopus genome underwent massive expansion specifically in genes for neural proteins & in regulatory regions.  Which suggests that their huge genome is not dreck, that it is actually the product of intense selection for greater mental capacities.  It isn’t proof, but it’s definitely consistent with that story.

There isn’t any octopus literature yet, but evolution isn’t done.  As long as octopus survival & mating success are bolstered by intelligence, there’s a chance the species will continue to slowly “improve.”

(I am biased in favor of smart creatures, but more brainpower is not necessarily better in an evolutionary sense.  For an example, here’s my essay on starfish zombies.)


But even if a species derived from contemporary octopi eventually gains cognitive capacities equivalent to our own, we may never grasp the way they perceive the world.  Their brains are organized very differently from our own.  Our minds are highly centralized — our actions result from decisions passed down from on high.

For most human actions, it seems that the mind subconsciously initiates movement, firing off instructions to the appropriate muscles, and then the conscious mind notices what’s going on and concocts a story to rationalize that action.  For instance, if you touch something hot, nociceptors (pain receptors) in your hand send an “Ouch!” signal to your brain, your brain relays back “Pull yer damn hand away!”, then the conscious mind types up a report, “I decided to pull my hand away because that was too hot.”

(Some people have argued that this sequence of timing indicates that we lack free will, by the way.  Which seems silly.  Our freedom doesn’t need to be at the level of conscious decision-making to be worthwhile.  Indeed, your subconscious is as much you as your consciousness.  Your subconscious reflexes reflect who you are, and with concerted effort you can modify most if not all of them.)

Octopi minds are different.  They seem to be much more decentralized.  Each tentacle has a significant neural network and can act independently.  Octopus tentacles can still move and make minor decisions even if cleaved away… like the zombie movie trope where a severed arm continues to strangle someone.

Since we have no good way to communicate with octopi, we don’t know whether their minds are wired for storytelling the way ours are.  Whether they also construct elaborate internal rationalizations for every action (does this help explain why I’m so fascinated by free will?  Even if our freedom is illusory, the ability to maintain that illusion underpins our ability to tell stories).

But if octopi do explain their world with stories, the types of stories they tell would presumably seem highly chaotic to us humans.  Our brains are building explanations for decisions made internally, whereas an octopus would be constructing a narrative from the actions of eight independently-acting entities.

Who knows?  Someday, many many years from now, if octopi undergo further selection for brain power & communication, we might find octopus literature to be exceptionally rambunctious.  Brimming with arbitrary twists & turns.  If their minds also tend toward narrative storytelling (and it’s worth mentioning that octopi also process time in a cascade of short-term and long-term memory the way mammals do), their stories would likely veer inexorably toward the inexplicable.

Toward, that is, actions & consequences that a human reader would perceive to be inexplicable.

Octopi might likewise condemn our own classics as overly regimented.  Lifeless, stilted, formulaic.  And it’d be devilishly hard to explain to an octopus why I think In Search of Lost Time is so good.


*******************

p.s. I should offer a brief mea culpa for having listed different lengths of time that apes & octopi have had in which to discover fire.  All known life uses the same genetic code, so it’s extremely likely that we all share a common ancestor.  Everything alive today — bacteria, birds, octopi, humans — has had the same length of time to evolve.

This is part of why it sounds so silly when people refer to contemporary bacteria as being “lower” life forms or somehow less evolved.  Current bacteria have had just as long to perfect themselves for their environments as we have, and they simply pursued a different strategy for survival than humans did.  (For more on this topic, feel free to read this previous post.)

I listed different numbers, though… mostly because it seemed funny to imagine a lineage of octopi racing the apes in that “descent of man” cartoon.  Who will conquer the planet first?!

I chose my times based on the divergence of great apes from their closest relatives (gibbons, whom we’ve rudely declared to be “lesser apes”) and the divergence of octopi from theirs (squids, ca. 135 million years ago).  The numbers themselves are pretty accurate, but the choice of those particular numbers was arbitrary.  You could easily rationalize instead starting the clock for apes in their quest for fire as soon as the first primates appeared, ca. 65 million years ago… then octopi don’t look so bad.  Perhaps only two-fold slower than us.  Or you could start the apes’ clock at the appearance of the very first mammals… in which case octopi might beat us yet.

On hunting.

I saw many posts on the internet from people upset about hunting, specifically hunting lions.  And eventually I watched the Jimmy Kimmel spot where he repeatedly maligns the Minnesota hunter for shooting that lion, and even appears to choke up near the end while plugging a wildlife research fund that you could donate money to.

And, look, I don’t really like hunting.  I’m an animal lover, so I’m not keen on the critters being shot, and I’m a runner who likes being out and about in our local state parks.  Between my loping stride and long hair, I look like a woodland creature.  I’m always nervous, thinking somebody might accidentally shoot me.  Yeah, I wear orange during the big seasons, but I still worry.

But I thought Jimmy Kimmel’s segment was silly.

For one thing, he’s a big barbecue fan — you can watch him driving through Austin searching for the best — and pigs are a far sight smarter than lions.  Plus, most of the lions that people hunt had a chance to live (this isn’t always true — there are horror stories out there about zoos auctioning off their excess animals to hunters, which means they go from a tiny zoo enclosure to a hunting preserve to dead — but in the case of Cecil it clearly was.  He was a wild animal who got to experience life in ways that CAFO-raised pigs could hardly dream of).  Yes, Cecil suffered a drawn-out death, but that seems far preferable to a life consistently horrific from first moment to last.

Most people eat meat.  And humans are heterotrophs.  We aren’t obligate carnivores the way cats are, but a human can’t survive without hurting things — it bothers me when vegetarians pretend that their lives have reached some ethical ideal or other.  Especially because there are so many ways you could conceptualize being good.  I have some friends who raise their own animals, for instance, and they could easily argue that their extreme local eating harms the world less than my reliance on vegetables shipped across the country.

I think it’s good to consider the ramifications of our actions, and I personally strive to be kind and contribute more to the world than I take from it, but I think it’s most important to live thoughtfully.  To think about what we’re doing before we do it.  Our first priority should be taking care of ourselves and those we love.  I don’t think there’s any reasonable argument you can make to ask people to value the lives of other animals without also valuing their own.

That said, if people are going to eat meat, I’d rather they hunt.  We live in southern Indiana.  Lots of people here hunt.  In general, those people also seem less wasteful — hunters are more cognizant of the value of their meals than the people who buy under-priced grocery store cuts of meat but don’t want to know about CAFOs or slaughterhouses.

Hunters often care more about the environment than other people.  They don’t want to eat animals that’ve been grazing on trash.  Ducks Unlimited, a hunting organization, has made huge efforts to ensure that we still have wetlands for ducks and many other creatures to live in.

To the best of my knowledge, Tyson Foods hasn’t been saving any wetlands lately.

Hunters generally don’t kill off entire populations.  And they don’t pump animals full of antibiotics (which is super evil, honestly.  Antibiotics are miracle drugs.  It’s amazing that we can survive infections without amputation.  And the idea that we would still those compounds’ magic by feeding constant low levels to overcrowded animals, which is roughly what you would do if you were intentionally trying to create bacteria that would shrug off the drugs, is heartbreaking.  There are virtually no medical discoveries we could possibly make that would counterbalance the shame we should feel if we bestow a world without antibiotics on our children’s generation.  See more I’ve written about antibiotics here).

"Cecil the lion at Hwange National Park (4516560206)" by Daughter#3 - Cecil. Licensed under CC BY-SA 2.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Cecil_the_lion_at_Hwange_National_Park_(4516560206).jpg

Sure, Cecil wasn’t shot for food.  I would rather people not hunt lions.  But lions are terrifying, and they stir something primal in most humans — you could learn more about this by reading either Goodwell Nzou’s New York Times editorial or Barbara Ehrenreich’s Blood Rites: Origins and History of the Passions of War, in which she argues that humanity’s fear of predators like lions gave rise to our propensity for violence (a thesis I don’t agree with — you can see my essay here — but Ehrenreich does a lovely job of evoking some of the terror that protohumans must have felt living weak and hairless amongst lions and other giant betoothed beclawed beasts).

The money paid to shoot Cecil isn’t irrelevant, either.  It’s a bit unnerving to think of ethics being for sale — that it’s not okay to kill a majestic creature unless you slap down $50,000 first — but let’s not kid ourselves.  Money buys a wide variety of ethical exemptions.  The rich in our country are allowed to steal millions of dollars and clear their names by paying back a portion of those spoils in fines, whereas the poor can be jailed for years for thefts well under a thousand dollars and typically pay back far more than they ever took.

The money that hunters pay seems to change a lot of host countries for the better.  Trophy hunting often occurs in places where $50,000 means a lot more than it does in the United States, and that money helps prevent poaching and promote habitat maintenance.  Unless a huge amount of economic aid is given to those countries (aid that they are owed, honestly, for the abuses committed against them in the past), the wild animals will be killed anyway, either by poachers or by settlers who have nowhere else to live.  So, sure, I dislike hunting, but hunters are providing some of the only economic support for those animals.

And, look, if you think about all of that and you still want to rail against hunters, go ahead.  But if you’re going to denounce them, I hope you’re doing more than they are for conservation.  And I hope you’re living in a way that doesn’t reveal embarrassing hypocrisies — I’m sure any one of those pigs Jimmy Kimmel eats would’ve loved to experience a small fraction of Cecil’s unfettered life.

***************

Food at our house (photo by Jessika).

p.s. If you happen to be one of those people who can’t imagine living happily without eating meat, you should let me know and I’ll try to invite you to dinner sometime.  I love food, and I’m a pretty good cook.  I should be honest — it is a little bit more work to make life delicious if you’re only eating vegetables, but it definitely can be done.

Excerpts from some other book: our heroic annelid makes a daring escape.

Image from Soil-Net.com at Cranfield University, UK, 2015.

We were in Louisville over the weekend, visiting a pregnant friend.  She had given us many baby clothes before the birth of our daughter; we were returning them.  Her son is now nearly three years old, so we spent part of the afternoon standing in the yard watching him dig with a plastic shovel.  He found a worm, triumphantly showed it to us, then moved it to a safe spot near their sprouting peas.

That’s when my friend and I started talking about worms.

“Moles are their worst enemies,” she told me.  “They hunt worms and store them in their burrows.  But moles have to keep the worms fresh.  If they kill them, worms dry up.  So moles bite off their heads, which means they can’t dig out to escape.”

I grimaced slightly while slurping my pink strawberry smoothie through a straw.

“That doesn’t kill them.  And, actually, if you wait long enough, the worms can regenerate their heads.”

“Huh,” I said, nodding.  “So it’s a race?”

“Guess where this dirt goes, mommy.”

“In the pile?”

“Yes!  In the pile!”  And another plastic shovel’s worth of dirt was added to the small mound he’d made beside their flower bed.

I went on, imagining this could be the seed of a compelling suspense or horror story.  “Because once the mole leaves, the worm would be racing, frantically trying to regrow its head so that it could escape.  Seems way more intense than all those movies where a tied-up hostage is struggling with the ropes.”

“And this dirt?”

“In the pile?”

“It goes in the pile!”

“Except, wait… worms can think, right?” I asked her.  I wasn’t sure, being unaware, for instance, of Charles Darwin’s 1881 study to test whether worms could solve small puzzles, like choosing which objects could best be used to plug a burrow.  And the question felt important; it’d be hard to write a compelling story when working with the drab emotional palette and unreflective inner life of a jellyfish.  Jellyfish, see, have no brains.

“They do, I think,” she told me.  “But I don’t think they’re very cephalated.”

“Oh,” I said, thinking the idea of an in-between state, brain-bearing yet decentralized-decision-making, sounded perfectly reasonable.  After all, that organizational scheme has led to considerable success for terrorist organizations like al Qaeda, if “success” means propagation despite environmental adversity, so why not believe that evolution could’ve stumbled onto the same schema biologically?  “But then, what would the worm feel?”

“Worm!  Where is my worm?”

“You set it over there, honey?”

He scampered over to the peas and peered.  No worm, apparently, was found.

“Worm went away!”

“That’s what they do.  They dig.  Now the worm is underground.”

“Underground,” he mused.  And set a dirt-flecked hand upon his chin, philosophically.

At the time I worried that an uncephalated worm (i.e., cognitive function was never fully localized to the head, as opposed to our decephalated hero post-encounter with the nemesis mole) would make a lousy protagonist.  Being a brain-in-head-type fellow, I am somewhat biased toward the emotional experiences of my own kind.  Now, though, I’m not so sure.  Because head-centered cognition might well result in a worse, emotionally flattened story; the most dramatic action occurs while our protagonist’s head is missing, after all.

And I’m still concerned about my original question: what would a worm feel?  If I’m going through all the bother of writing a story, I’d like for people to enjoy it.  And I’ve seen many reviews that criticize human male writers, say, for attempting to inhabit the inner voice of a woman in fiction, or an iPhone.  Although those perspectives both seem easier to project myself into than that of a worm.  The life of an iPhone seems so similar to my own.  Talk to people; look up facts; draw maps; listen to snippets of music and try to guess the song; spend aggravatingly long periods of time thinking, thinking, thinking, with no apparent progress visible from the outside.  Or perhaps that last one is not what you think of when you contemplate such devices, but my younger brother has one and he also has a tendency to drop things, and to forget things in his pants’ pockets when he puts them in the wash (you may have read previously his très bourgeois tragicomedy, “Another Bagful of Rice”).  His phone spends as much time as I do staring idly into space, unresponsive.

But, a worm?  How would I write a worm?


NOTES:

Some of the information above as relayed by the narrator and his friend is not true.  Earthworms will not, for instance, regrow their heads.  An earthworm can regenerate some fraction of the lower half of its body, but not the top half.  It’s possible that the narrator’s friend was thinking of planaria, in which a fraction of tail can in fact be used to regenerate an entire animal, and whose nervous system includes a concentrated, brain-like mass of neurons in the head, though not what you’d call a true central nervous system.

Her slight error does not invalidate the story, however; according to A. C. Evans’ article “The Identity of Earthworms Stored by Moles,” it would seem that our heroic earthworm might not require a whole new head.  To quote Evans regarding the potential status of our hero, “The earthworms could not burrow their way out of the holes because the anterior three to five segments had been bitten off or at least mutilated.”

The worms whose heads were bitten off?  They are doomed.  They will not regenerate their heads and will eventually be eaten (unless some larger predator finds the mole, in which case they’ll die fruitlessly… although even then they’ll still be eaten, I suppose, as long as you’re willing to use the verb “eat” to describe decomposition effected by bacteria).  But if our hero was simply mutilated, then there is still a chance!  Come on, little buddy!  You can do it!  Escape, escape!

And, in case you’re curious about earthworm cognition, Eileen Crist wrote a lovely article describing Charles Darwin’s experiments; it was published in Bekoff, Allen, and Burghardt’s The Cognitive Animal and is very accessible (I even convinced K to have her high school biology class read it one year) and, to my mind, very fun.  Well worth a read, even if you don’t yet care about worm thoughts.  But you will!  Just you wait.