On octopus literature, a reprise: what would books be like if we didn’t love gossip?

A few months ago, I lost several days reading about the structure of octopus brains.  A fascinating subject — they are incredibly intelligent creatures despite sharing little evolutionary history with any other intelligent species.  And their minds are organized differently from our own.

Human minds are highly centralized — we can’t do much without our head being involved.  Whereas octopus minds seem to be distributed throughout their bodies.  It’s difficult to say how this might feel for an octopus, but researchers have studied the behavior of hacked-off octopus tentacles.  An octopus tentacle can behave intelligently even when it’s not connected to the rest of the body.  Each limb may have something akin to a mind of its own.

Which seems fascinating from the perspective of narrative.  The way human minds seem to work is, first our subconscious makes a decision, then a signal is sent to our muscles.  We speak, or press a button, or pull our hand away from something hot. And then, last, our conscious mind begins rationalizing why we made that choice.

The temporal sequencing is wacky, sure. But for the purpose of this essay, the important concept is that a centralized brain makes all the choices and constructs a coherent narrative for why each choice was made.

An octopus might find it more difficult to construct a single unifying narrative to explain its actions in a way that we humans would consider logical.  There are hints that octopus tentacles have characteristics akin to personalities — some behave as though shy, some as though bold, some aggressive, some curious.  If one tentacle is trying to hide while another is trying to attack, there might not be a single internal narrative that describes the creature’s self-sabotage.

And what might your personality be? Shy? Bold? Inquisitive? Photo by Jaula De Ardilla.

From our perspective, octopus consciousness might be like trying to explain in one sweep the behavior of an entire rambunctious dysfunctional family.  Sure, some calamities would affect them all together, but moment by moment each family member might have his or her own distinct interests.  A daughter who wants to stay out late, a mother who wants her daughter home by nine, a father who wants somebody to play catch in the yard, a son who just wants to be left alone…

It’s not that the collective is inexplicable, it’s just that we humans are unaccustomed to thinking of collectives like that as representing a single consciousness.  We look for logical motivations on a smaller scale — centralized minds — than an octopus might embrace as its worldview.

Anyway, I thought this might have a big impact on the way octopus literature would be structured.  Once, you know, they develop a language, start spinning myths, etc.

(To the best of my knowledge, there is no octopus language.  If they have one that’s chemical- or color-based, I’m not sure I would even notice.  Someone else probably would’ve, though.)

While reading Sy Montgomery’s The Soul of an Octopus, I learned that there would probably be another major difference between octopus literature and our own.  Their literature might seem chaotic to human readers, yes.  But also, our literature is often character-driven.  Our brains evolved to gossip, and the books that human readers love most feature charming, striking individuals.  I love The Idiot largely because of the dynamic between Myshkin and Rogozhin, In Search of Lost Time for the vicarious misery of watching Marcel’s crumbling relationship with Albertine.  Readers of Game of Thrones are immersed in a rich world of political intrigue, tracking everyone’s motives as they push against each other.

Octopus readers might not care about any of that.  From Montgomery’s book:

Belonging to a group is one of humankind’s deepest desires.  We’re a social species, like our primate ancestors.  Evolutionary biologists suggest that keeping track of our many social relationships over our long lives was one of the factors driving the evolution of the human brain.  In fact, intelligence itself is most often associated with similarly social and long-lived creatures, like chimps, elephants, parrots, and whales.

But octopuses represent the opposite end of this spectrum.  They are famously short-lived, and most do not appear to be social.  There are intriguing exceptions: Male and female lesser Pacific striped octopuses, for instance, sometimes cohabit in pairs, sharing a single den.  Groups of these octopuses may live in associations of forty or more animals — a fact so unexpected that it was disbelieved and unpublished for thirty years, until Richard Ross of the Steinhart Aquarium recently raised the long-forgotten species in his home lab.  But the giant Pacific, at least, is thought to seek company only at the end of its life, to mate.  And even that is an iffy proposition, as one known outcome is the literal dinner date, when one octopus eats the other.  If not to interact with fellow octopuses, what is their intelligence for?  If octopuses don’t interact with each other, why would they want to interact with us?

Jennifer, the octopus psychologist, says, “The same thing that got them their smarts isn’t the same thing that got us our smarts.”  Octopus and human intelligence evolved separately and for different reasons.  She believes the event driving the octopus toward intelligence was the loss of the ancestral shell.  Losing the shell freed the animal for mobility.  An octopus, unlike a clam, does not have to wait for food to find it; the octopus can hunt like a tiger.  And while most octopuses love crab best, a single octopus may hunt many dozens of different prey species, each of which demands a different hunting strategy, a different skill set, a different set of decisions to make and modify.  Will you camouflage yourself for a stalk-and-ambush attack?  Shoot through the sea with your siphon for a quick chase?  Crawl out of the water to capture escaping prey?

Come to think of it, the mammalian Auntie Ferret would also enjoy reading “The Loner’s Guide to Building Fabulous Underwater Contraptions”

All of which made me realize: an octopus reader would probably be indifferent to well-crafted characters with rich inner lives.  An octopus would probably care far more about the plot than the characters.  My assumption is that an ideal octopus novel would be a thriller: action-packed, crammed with facts, weaving together numerous barely integrated narratives.

Indeed, octopus readers might not like Montgomery’s book, since she devotes so much space to the tangled lives and interactions of the humans who love and study them.  The Soul of an Octopus is clearly intended for a human audience.

I’d be curious to read a book written specifically for an octopus someday… although it’s probable that, like music composed specifically for tamarin monkeys, octopus literature would seem awful to me.

On Gerry Alanguilan’s “ELMER,” his author bio, and animal cognition.

I was talking to a runner about graphic novels, once again recommending Andy Hartzell’s Fox Bunny Funny (which I imagine would be exceptionally treasured by a young person questioning their gender identity or sexuality, but is still great for anybody who feels they don’t quite fit in), when he recommended Gerry Alanguilan’s ELMER.  An excellent recommendation — I thoroughly enjoyed it.

The comic’s premise is that chickens suddenly gain intelligence roughly equivalent to humans’.  Then they fight against murder, oppression, and prejudice in ways reminiscent of the U.S. civil rights movement.  The beginning of the book is horrifying, first with scenes depicting chickens coming into awareness while hanging by their feet in a slaughterhouse, then the violent reprisal they effect against humans.

Alanguilan is a great artist and clearly a very empathetic man.

But that’s why I thought it was so strange that two out of four sentences of his short bio on the back cover read, “Gerry really likes chicken adobo, Psych, Mr. Belvedere, Titanic, Doctor Who, dogs, video blogging and specially Century Gothic. Transformed.”  For a moment I thought the first clause might be ironic because his author photograph for ELMER was taken in front of a busy bulletin board & one sheet of paper was a diet guide that appeared to have the vegan “v” logo at the bottom — maybe Gerry is making a point about what he gave up! — but with some squinting I realized it was a “Diet Guide for High Cholesterol Patients,” the symbol at the bottom merely a checkmark.

Why, then, would Alanguilan want to punctuate his work with the statement that he eats chickens, as though that is a defining feature of his life?

It’s commonly assumed among people who study animal cognition that other species are less aware of the world than humans are.  That humans perceive more acutely, our immense brainpower ensuring that our feelings cut deep.

The differences are matters of degree, though. It’s also widely acknowledged that humans exist on the same continuum as other animals, with no clear boundaries — genetic, physiological, or cognitive — demarcating us from them.  I thought this was phrased well by Frans de Waal in his editorial on Homo naledi and teleological misconceptions about evolution:

The problem is that we keep assuming that there is a point at which we became human.  This is about as unlikely as there being a precise wavelength at which the color spectrum turns from orange into red.  The typical proposition of how this happened is that of a mental breakthrough — a miraculous spark — that made us radically different.  But if we have learned anything from more than 50 years of research on chimpanzees and other intelligent animals, it is that the wall between human and animal cognition is like a Swiss cheese.

This is why, after reading Alanguilan’s brief biography, I began to wonder: what percentage of human-like awareness would chickens need to have for their treatment in slaughterhouses, or the conveyor belt & macerator (grinder) used to cull male chicks, or their confinement in dismal laying operations, to seem acceptable?

In ELMER, Alanguilan makes clear that their treatment would be unacceptable if the average chicken had one hundred percent of the cognitive capacity of the average human.  But below what percentage of cognition does their treatment become okay?  Eighty percent?  Ten?  One?  Point one?

I think that’s an important question to ask, especially of an artist capable of creating such powerful work.

(And I should make clear that my own moral decisions exist in the same grey zone that I find curious in Alanguilan’s author bio.  I support abortion rights, an implicit declaration that the fractional cognition of a fetus is insufficient to outweigh the interests of the mother.  It’s more complicated than that, but it’s worth making clear that I’m not purporting to be morally pure.)

It’s true that humans are heterotrophs.  It’s impossible for us to live without harming — it irks me when vegetarians claim, for instance, that plants have no feelings.  They clearly do, they have wants and desires, they have rudimentary means of communication.  You could argue that eating fruit is ethically simple because fruit represents a pact between flowering plants and animal life, which co-evolved.  A plant expends energy to create fruit as a gift to animals, and animals in accepting that gift spread the plant’s seeds.

But anyone who eats vegetables (where “vegetable” means something like kale or broccoli or carrots — Supreme Court justices are not scientists) harms other perceiving entities by eating.

Which is fine. I eat, too!  Our first concern, given that we are perceiving entities, is to take care of ourselves.  If you didn’t care for your own well-being, what would motivate you to care for someone else’s?  Beyond that, I don’t think there’s a simple way to identify what or whom else is sufficiently self-like to merit our concern.  Personally, I care much more about my family than I do other humans — I devote the majority of my time and energy to helping them.  And I care much more about the well-being of the average human than I do the average cow, say, or lion.

Moral philosophers like Peter Singer would describe this as “speciesist.”  I think that’s a silly-sounding word for a silly concept.  I don’t care about other humans because we have similar sequences in our DNA, or even because they resemble what I see when I look into a mirror.  I care about their well-being because of their internal mental life — I can imagine what it might feel like to be another human and so their plights sadden me.

Sure, I can imagine what it might feel like to be a chicken… but less well.  Other animals don’t perceive the world the same way we do.  And they seem to think less well.  I’d rather they not suffer.  But if somebody has to suffer, I’d rather that somebody be a Gallus gallus than a Homo sapiens.  I’d rather many chickens suffer than one human — I weigh chickens’ interests at only a small fraction of my concern for other humans.

Humans can talk to me.  They can share their travails with words, or gestures, or interpretative dance, or facial expressions.  And that matters a lot to me.

But integrity matters, too.  For instance, it seemed strange to me that David Duchovny could both write the book Holy Cow, in which he depicts farmed animals attempting to escape their doom, and still announce that he is “a very lazy vegetarian, which means I will look for the vegetarian meal, but I will also give up.”

My main objection isn’t to people eating meat.  It isn’t even to people who understand that animals can think (with differences in degree from human cognition, not differences in kind) eating meat.  Not everyone lives where I do, within a short walk of several grocery stores that all offer excellent nutrition from plants alone.  It’d be extremely difficult (and expensive) for humans living near the arctic to stay healthy without eating fish.  Those people’s well-being matters to me far more than the well-being of fish they catch.

And, for people living in close proximity to large, dangerous carnivores? Yes, obviously it’s reasonable for them to kill the animals terrorizing their villages.  I wish humans bred a little more slowly so that there’d still be space in our world for those large carnivores, but given that the at-risk humans already exist, I’d rather they be safe.  I can imagine how they feel.  I wouldn’t want my own daughter to be in danger.  I ruthlessly smash any mosquitos that go near her, and they are far less deadly than lions.

I simply find it upsetting when people who seem to believe that animal thought matters won’t take minor steps toward hurting them less.  It’s when confronted with stories about people who understand the moral implications of animal cognition, and who live in a place where it’s easy to be healthy eating vegetables alone, but don’t, that I feel sad.

If you had the chance to make your life consistent with your values, why wouldn’t you?


On attempts to see the world through other eyes.


Most writers spend a lot of time thinking about how others see the world.  Hopefully most non-writers spend time thinking about this too.  It’s easier to feel empathy for the plights of others if you imagine seeing through their eyes.

So I thought it was pretty cool that the New York Times published an article about processing images to represent how they might appear to other species.

The algorithm shifts the color distribution of images to highlight which objects appear most distinct for an animal with different photoreceptors.  I thought it was cool even though the processing they describe fails in many ways to convey how differently various animals perceive the world.

For one thing, image processing can only affect visuals.  Another species may rely more on sound, scent, taste (although perhaps it’s cheating to list both scent and taste — they are essentially the same sense, chemodetection, with the difference being that humans respond more sensitively, and to a wider variety of chemicals, with our noses than our tongues), touch, sensing magnetic fields, etc.

If we assume that other animals will also place maximal trust in the detection of inbound electromagnetic radiation from the narrow band we’ve deemed “the visual spectrum,” we can fool ourselves regarding their most likely interpretations.  For an example, you could read my previous post about why rattlesnakes might assume that humans employ chameleon-like camouflage (underlying idea courtesy of Jesus Rivas & Gordon Burghardt).

The second problem with assuming that an image with shifted colors represents how another animal would view the world is on the level of neurological processing.  When a neurotypical human looks at an image and something resembles a face, that portion of the image will immediately dominate the viewer’s attention; a huge amount of human brainpower is devoted to processing faces.  Similarly, some dogs, if another dog enters their visual field, have trouble seeing anything else.  And bees: yes, they see more blues & ultraviolets than we do, but it’s also likely that flowers dominate their attention. I imagine it’s something like the image below, taken with N and her Uncle Max on a recent walk. Although, depending on your personality, you might have some dog-style neurological processing, too.


Even amongst humans this type of perceptual difference exists.  A friend of mine who does construction (ranked the second-best apprentice pipefitter in the nation the year he finished his training, despite being out at a buddy’s bachelor party, i.e. not sleeping, all night before the competition), when he walks into a room, immediately notices all exposed ductwork, piping, etc.  Most people care so little about these features as to render them effectively invisible.  And I, after three weeks of frantic itching and a full course of methylprednisolone, could glance at any landscape in northern California and immediately point out all the poison oak.  My daughter can spot a picture or statue of an owl from disconcertingly far away and won’t stop yelling “owww woo!” until I see it too.

The color processing written up in the New York Times, though, was automated.  Given the current state of computerized image recognition, you probably can’t write a script that would magnify dogs or flowers or poison oak effectively.  Maybe in a few years.

There’s one last big problem, though.  And the last problem is about the colors alone.  There is simply no way to re-color images so that a dichromatic (colloquially, “colorblind”) human would see the world like a trichromat.

(A brief aside: Shortly after I wrote the above sentence, I read an article about glasses marketed to colorblind people to let them see color.  And the basic idea is clever, but I don’t think it invalidates my claim.


Here’s how it works: most colorblind people are dichromats, meaning they have two different flavors of color receptors.  Colored light stimulates these receptors differentially: green light stimulates green receptors a lot and blue receptors a little.  Blue light stimulates blue receptors a lot and green receptors a little.  The brain processes the ratio of receptor stimulation to say, “Ah ha!  That object is blue!”

A typical human, however, is a trichromat.  This means that the brain uses three datapoints to determine an object’s color instead of two.  The red and green receptors absorb maximally near the same part of the spectrum, though… the red vs. blue & green vs. blue ratios are generally very similar.  So the third receptor type mostly helps a trichromat distinguish between red and green.

This means a dichromat will have a narrower range of the electromagnetic spectrum that they are good at distinguishing color within.  For a dichromat, reds and greens both will be characterized by “green receptor stimulated a lot, blue receptor only a little.”

Now, if you imagine that the visual spectrum is a number line that runs from 0 to 100, a dichromat would be good at distinguishing colors in the 0 to 50 segment, and not good at distinguishing color beyond that point — everything with a wavelength of green, ca. 500 nanometers, or longer would appear to be green.

But you could take that 0 to 100 number line and just divide everything by 2.  Then every color would look “wrong” — no object would appear to be the same color as it was before you put on the wacky glasses — and you’d be less able to distinguish between close shades (if two colors needed to be 15 nanometers apart to seem different, now they’d need to be 30 nanometers apart), but a dichromat could distinguish between colors over the same full visual spectrum as trichromats.

That’s roughly how the glasses should work — inbound light is shifted such that all colors are made blue & greenish, and the visual spectrum is condensed).
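That number-line arithmetic is easy to make concrete.  Here is a toy sketch (entirely my own illustration; the 400 to 700 nanometer range and the 550 nanometer cutoff are round-number assumptions, not the specifications of any real product):

```python
# Toy model of the "spectrum-compressing glasses" idea described above.
# Assumptions (mine): the visible band runs 400-700 nm, and a dichromat
# only distinguishes colors well below ~550 nm.

VISIBLE_MIN, VISIBLE_MAX = 400.0, 700.0  # nm
DISCRIMINABLE_MAX = 550.0                # upper edge of the useful range

def compress(wavelength_nm: float) -> float:
    """Linearly remap the full visible band into the discriminable band."""
    span_in = VISIBLE_MAX - VISIBLE_MIN
    span_out = DISCRIMINABLE_MAX - VISIBLE_MIN
    return VISIBLE_MIN + (wavelength_nm - VISIBLE_MIN) * span_out / span_in

# Every color now lands somewhere the wearer can tell apart,
# at the cost of halving the separation between nearby shades.
print(compress(400.0))  # 400.0, blue stays blue-ish
print(compress(700.0))  # 550.0, deep red is squeezed toward green
```

Every input wavelength gets a unique output, so no two distinguishable colors collapse together; they just sit closer to one another than before, which is exactly the trade-off described above.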

Of course, there’s no way to change an image that will allow you (I’m assuming that you, dear reader, are a trichromat.  But my assumption has a 10% chance of being wrong.  My apologies!  I care about you, too, dichromatic reader!) and a dichromatic friend to see it the same way.  But you can change your friend.  You can inject a DNA-delivering retrovirus into your friend’s eyeball, and after a short neurological training period, you and your friend will see colors the same way!

Only in the eyeball!

It’s possible that your friend won’t like you any more if you do this.  But here’s how it works: the retrovirus encodes for the flavor of photoreceptor that none of your friend’s cone cells were expressing.  Upon infection, the virus will initiate production of that receptor… so now a subpopulation of cone cells will be sending new signals to the brain.  They’ll be stimulated by different wavelengths of light than they were before.  And brains, magically plastic things that they are, rapidly rewire themselves to incorporate any new data they have access to.

(If you’re interested in this sort of thing, you should look up biohacking.  Like implanting magnets in your fingers to “feel” electric or magnetic fields.  But I’m not going to link to anything.  Wrestling your friend to the ground in order to inject recombinant DNA into his eyeball?  That makes me smile.  But slicing open your own fingertips to put magnets under the skin?  That’s too creepy for me).

If a brain is suddenly receiving different signals after exposure to red versus green light, it’ll use that information.  Which means: Color vision achieved!  Unfortunately, viral DNA integrates randomly, so a weird eye cancer might’ve been achieved as well.  You win some, you lose some.

What we call “color vision,” though, is still only trichromatic.  With three flavors of cone cells, humans can do a pretty good job distinguishing colors from about 400 to 700 nanometers.  But some species have more flavors of cone cells, which means they can distinguish the world’s colors more precisely.  Even some humans are tetrachromats, although their fourth cone cell flavor is maximally stimulated by light midway between red and green, a part of the electromagnetic spectrum that trichromatic humans are already good at parsing.  And tetrachromatic humans are rare: to the best of my knowledge no languages have a word for that secret color between red and green.  I don’t know any words for it, at least, but maybe this too is a secret guarded by those who see it.

Still, no amount of image processing would allow you, dear reader, even if you’re one of those rare tetrachromatic individuals, to see the world in all the spangled glory seen by a starling or a peacock.  This graph shows the stimulation of each flavor of cone cell receptor by different wavelengths of light.

bird eyes

And even the splendorous beauty seen by birds pales in comparison to the way we once thought mantis shrimps perceived the world.  Because mantis shrimps, see, have twelve flavors of photoreceptors, which means that if their brains processed colors the same way ours do, by considering the ratio of cone cell flavors that are stimulated by incident light, they’d be exquisitely sensitive to color.  Here: compare the spectral sensitivity graph for humans and starlings, shown above, to the equivalent graph for mantis shrimps.  This makes humans look pathetic!

mantis shrimp spectral sensitivity

If you haven’t seen it, you should definitely read this cartoon about mantis shrimp perception from The Oatmeal.

It’s possible that mantis shrimps process color differently from humans, though.  Instead of computing ratios of cone-flavor activation to determine the color of an object, they might decide that an object is the color of whatever single cone flavor is most stimulated.  In other words, while humans use stimulation ratios from our mere three flavors of cone cells to identify thousands of hues, a species with a dozen photoreceptor flavors might regard every object as being one of those dozen discrete colors.

Indeed, that’s what a recent study from Thoen et al. (“A Different Form of Color Vision in Mantis Shrimp”) suggests.  They trained mantis shrimps to attack a particular color of light in order to win a treat, then tested how well they could distinguish that color from nearby wavelengths.  In their hands, the shrimps needed approximately 50 nanometers separating two colors to distinguish them, whereas humans, with our meager three flavors of photoreceptors, can often distinguish colors as close as 1 or 2 nanometers apart.
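To see why winner-take-all decoding is so much coarser than ratio decoding, here is a toy model (my own sketch: the Gaussian sensitivity curves, the peak wavelengths, and the even 25-nanometer spacing are illustrative stand-ins, not measured values):

```python
import math

def sensitivity(peak_nm, wavelength_nm, width=40.0):
    """Toy Gaussian sensitivity curve for one receptor flavor (illustrative)."""
    return math.exp(-((wavelength_nm - peak_nm) / width) ** 2)

HUMAN_PEAKS = [420, 530, 560]                     # roughly S, M, L cones
SHRIMP_PEAKS = [400 + 25 * i for i in range(12)]  # twelve evenly spaced flavors

def ratio_code(peaks, wavelength_nm):
    """Human-style decoding: the full vector of relative stimulations.
    Tiny wavelength shifts change the ratios smoothly, so nearby hues
    stay distinguishable."""
    responses = [sensitivity(p, wavelength_nm) for p in peaks]
    total = sum(responses)
    return [r / total for r in responses]

def winner_take_all(peaks, wavelength_nm):
    """Shrimp-style decoding: report only which single receptor fired
    hardest, one of len(peaks) discrete bins."""
    responses = [sensitivity(p, wavelength_nm) for p in peaks]
    return responses.index(max(responses))

# 515 nm and 530 nm give a human two clearly different ratio codes...
print(ratio_code(HUMAN_PEAKS, 515))
print(ratio_code(HUMAN_PEAKS, 530))
# ...but land in the same discrete bin under winner-take-all decoding.
print(winner_take_all(SHRIMP_PEAKS, 515), winner_take_all(SHRIMP_PEAKS, 530))
```

With a dozen evenly spaced receptors, winner-take-all can only ever report a dozen colors, and any two wavelengths that fall into the same bin are indistinguishable, consistent in spirit with the coarse, tens-of-nanometers discrimination Thoen et al. measured.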

Still, it’s hard to know exactly what a shrimp is thinking.  Testing human cognition and perception is easier because we can, you know, talk to each other.  Describe what we see.

With humans, the biggest barrier to empathy is that sometimes we forget to listen.

On watchful gods, trust, and how academic scientists undermined their own credibility.

Despite my disagreements with a lot of its details, I thoroughly enjoyed Ara Norenzayan’s Big Gods.  The book posits an explanation for the current global dominance of the big three Abrahamic religions: Christianity, Islam, and Judaism.

Instead of the “quirks of history & dumb luck” explanation offered in Jared Diamond’s Guns, Germs, and Steel, Norenzayan suggests that the Abrahamic religions have so many adherents today because beneficial economic behaviors were made possible by belief in those religions.

Here’s a rough summary of the argument: Economies function best in a culture of trust.  People are more trustworthy when they’re being watched.  If people think they’re being watched, that’s just as good.  Adherents to the Abrahamic faiths think they are always being watched by God.  And, because anybody could claim to believe in an omnipresent, ever-watchful god, it was worthwhile for believers to practice costly rituals (church attendance, dietary restrictions, sexual moderation, risk of murder by those who hate their faith) in order to signal that they were genuine, trustworthy, God-fearing individuals.

A clever argument.  To me, it calls to mind the trustworthiness passage of Daniel Dennett’s Freedom Evolves:

When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so.  Wearing a cap that says “I’m a cooperator” is not going to take you far in a world of other rational agents on the lookout for ploys.  According to [Robert] Frank, over evolutionary time we “learned” how to harness our emotions to the task of keeping us from being too rational, and–just as important–earning us a reputation for not being too rational.  It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to “offers we can’t refuse,” as the Godfather says.  Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.

I think that’s a beautiful passage — the logic goes down so easily that I hardly notice the inaccuracies beneath the surface.  It makes a lot of sense unless you consider that many other species, including relatively non-cooperative species, have emotional lives very similar to our own, and will like us act in irrational ways to stay true to those emotions (I still love this clip of an aggrieved monkey rejecting its cucumber slice).

Maybe that doesn’t seem important to Dennett, who shrugs off decades of research indicating the cognitive similarities between humans and other animals when he asserts that only we humans have meaningful free will, but that kind of detail matters to me.

You know, accuracy or truth or whatever.

Similarly, I think Norenzayan’s argument is elegant, even though I don’t agree.  One problem is that he supports his claims with results from social psychology experiments, many of which are not credible.  But that’s not entirely his fault.  Arguments do sound more convincing when there’s experimental data to back them up, and surely there are a few tolerably accurate social psychology results tucked away in the scientific literature. The problem is that the basic methodology of modern academic science produces a lot of inaccurate garbage (I could cite reference after reference here, but I already have a half-written post on the reasons why the scientific method is not a good persuasive tool, so I’ll elaborate on this idea later).

For instance, many of the experiments Norenzayan cites are based on “priming.”  Study subjects are unconsciously inoculated with an idea: will they behave differently?

Naturally, Norenzayan includes a flattering description of the first priming experiment, the Bargh et al. study (“Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action”) in which subjects walked more slowly down a hallway after being unconsciously exposed to words about old age.  But this study is terrible!  It’s a classic in the field, sure, and its “success” has resulted in many other laboratories copying the technique, but it almost certainly isn’t meaningful.

Look at the actual data from the Bargh paper: they’ve drawn a bar graph that suggests a big effect, but that’s just because they picked an arbitrary starting point for their axis.  There are no error bars.  The work couldn’t be replicated (unless a research assistant was “primed” to know what the data “should” look like in advance).


The author of the original priming study also published a few apoplectic screeds denouncing the researchers who attempted to replicate his work — here’s a quote from Ed Yong’s analysis:

Bargh also directs personal attacks at the authors of the paper (“incompetent or ill-informed”), at PLoS (“does not receive the usual high scientific journal standards of peer-review scrutiny”), and at me (“superficial online science journalism”).  The entire post is entitled “Nothing in their heads”.

Personally, I am extremely skeptical of any work based on the “priming” methodology.  You might expect the methodology to be sound because it’s been used in so many subsequent studies.  I don’t think so.  Scientific publishing is sufficiently broken that unsound methodologies could be used to prove all sorts of untrue things, including precognition.

If you’re interested in the failings of modern academic science and don’t want to wait for my full post on the topic, you should check out Simmons et al.’s “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.”  This paper demonstrates that listening to the Beatles will make you chronologically younger.

Wait.  No.  That can’t be right.


The Simmons et al. paper actually demonstrates why so many contemporary scientific results are false, a nice experimental supplement to the theoretical Ioannidis model (“Why Most Published Research Findings Are False”).  The paper pre-emptively rebuts empty rationalizations such as those given in Lisa Feldman Barrett’s New York Times editorial (“Psychology Is Not in Crisis,” in which she incorrectly argues that it’s no big deal that most findings cannot be replicated).

Academia rewards researchers who can successfully hunt for publishable results.  But the optimal strategy for obtaining something publishable (collect lots of data, analyze it repeatedly using different mathematical formulas, discard all the data that look “wrong”) is very different from the optimal strategy for uncovering truth.
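That strategy is easy to demonstrate.  Here’s a toy Python simulation (my own illustration, not from any of the papers discussed) of one trick Simmons et al. describe: re-testing your data after every new batch of subjects and stopping the moment the result looks “significant.”  Every experiment below measures a true effect of exactly zero, yet far more than the nominal 5% of them end up “publishable”:

```python
import random

random.seed(1)

def optional_stopping(max_n=100, batch=10, threshold=1.96):
    """Run a null experiment (true effect = 0), but re-test after every
    batch of subjects and stop as soon as the result looks 'significant'."""
    data = []
    while len(data) < max_n:
        data.extend(random.gauss(0, 1) for _ in range(batch))
        n = len(data)
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)
        z = mean / (var / n) ** 0.5  # rough z-test against a true mean of 0
        if abs(z) > threshold:       # nominally p < .05
            return True              # stop collecting; write the paper!
    return False                     # ran out of budget, nothing to publish

experiments = 1000
hits = sum(optional_stopping() for _ in range(experiments))
print(f"'Significant' null results: {hits / experiments:.1%}")
```

The more often you peek at the data, the worse the inflation gets; that is why fixing your sample size in advance matters so much.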


Here’s one way to understand why much of modern academic publishing isn’t really science: in general, results are publishable only if they are positive (i.e. a treatment causes a change, as opposed to a treatment having no effect) and significant (i.e. you would see the result only 1 out of 20 times if the claim were not actually true).  But that means that if twenty labs decide to test the same false idea, on average 19 of them will get negative results and be unable to publish their findings, whereas 1 will see a false positive and publish.  Newspapers will announce that the finding is real, and there will be a published record of only the incorrect lab’s result.
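If you’d like to see that arithmetic in action, here’s a small Python sketch (again my own illustration, not anyone’s actual study) in which twenty simulated labs each test the same false hypothesis at the conventional 5% threshold:

```python
import random

random.seed(0)

def lab_runs_study(n=30, alpha=0.05, trials=200):
    """One lab tests a false hypothesis: both groups are drawn from the
    SAME distribution, so any 'significant' difference is a false positive."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(sum(group_a) / n - sum(group_b) / n)
    # Permutation test: how often does shuffling the group labels produce
    # a difference at least as large as the one observed?
    pooled = group_a + group_b
    extreme = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
        if diff >= observed:
            extreme += 1
    return (extreme / trials) < alpha  # "significant" at the 5% level

labs = 20
positives = sum(lab_runs_study() for _ in range(labs))
print(f"{positives} of {labs} labs found a 'significant' effect that isn't there")
```

On a typical run, a lab or two clears the significance bar purely by chance, and under current incentives those are the only results that get published.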

Because academic training is set up like a pyramid scheme, we have a huge glut of researchers.  For any scientific question, there are probably enough laboratories studying it to nearly guarantee that significance testing will provide one of them an untrue publishable result.

And that’s even if everyone involved were 100% ethical.  Even then, a huge quantity of published research would be incorrect.  In our world, where many researchers are not ethical, the situation is even worse.

Norenzayan even documents this sort of unscientific over-analysis of data in his book.  One example appears in his chapter on anti-atheist prejudice:

In addition to assessing demographic information and individual religious beliefs, we asked [American] participants to rate the degree to which they viewed both atheists and gays with either distrust or with disgust.

. . .

It is possible that, for whatever reason, people may have felt similarly toward both atheists and gays, but felt more comfortable openly voicing distrust of atheists than of gays.  In addition, our sample consisted of American adults, overall a quite religious group.  To address these concerns, we performed additional studies in a population with considerable variability in religious involvement, but overall far less religious on the whole than most Americans.  We studied the attitudes of university students in Vancouver, Canada.  To circumvent any possible artifacts that result from overtly asking people about their prejudices, we designed studies that included more covert ways of measuring distrust.

When I see an explanation like that, it suggests that the researchers first conducted their study using the same methodology for both populations, obtained data that did not agree with their hypothesis, then collected more data for only one group in order to build a consistent, publishable story (if you’re interested, you can see their final paper here).

Because researchers can (and do!) collect data until they see what they want — until they have results that agree with a pet hypothesis, perhaps one they’ve built their career around — it’s not hard to obtain publishable data that appear to support any claim.  Doesn’t matter whether the claim is true or not.  And that, in essence, is why the practices that masquerade as the scientific method in the hands of modern researchers are not persuasive tools.

I think it’s unfair to denounce people for not believing scientific results about climate change, for instance.  Because modern scientific results simply are not believable.

Which is a shame.  The scientific method, used correctly, is the best way to understand the world.  And many scientists are very bright, ethical people.  And we should act upon certain research findings.

For instance, even if the reality underlying most climate change studies is a little less dire than some papers would lead you to believe, our world will be better off — more ecological diversity, less asthma, less terrorism, and, yes, less climate destabilization — if we pretend the results are real.

So it’s tragic, in my opinion, that a toxic publishing culture has undermined the authority of academic scientists.

And that’s one downside to Norenzayan’s book.  He supports his argument with a lot of data that I’m disinclined to believe.

The other problem is that he barely addresses historical information that doesn’t agree with his hypothesis.  For instance, several cultures developed long-range trust-based commerce without believing in omnipresent, watchful, morality-enforcing gods, including ancient Kanesh, China, the pre-Christian Greco-Roman empires, and some regions of Polynesia.

There’s also historical data demonstrating that trust is separable from religion (and not just in contemporary secular societies, where Norenzayan would argue that a god-like role is played by the police… didn’t sound so scary the way he wrote it).  The most heart-wrenching example of this, in my opinion, is presented in Nunn & Wantchekon’s paper, “The Slave Trade and the Origins of Mistrust in Africa.” They suggest a causal relationship between kidnapping & treachery during the transatlantic slave trade and contemporary mistrust in the plundered regions.  Which would mean that slavery in the United States created a drag on many African nations’ economies that persists to this day.

That legacy of mistrust persists despite the once-plundered nations (untrusting, with high economic transaction costs to show for it) & their neighbors (trusting, with greater prosperity) having similar proportions of believers in the Abrahamic faiths.

Is it so wrong to wish Norenzayan had addressed some of these issues?  I’ll admit that complexity might’ve sullied his clever logic.  But, all apologies to Keats, sometimes it’s necessary to introduce some inelegance in the pursuit of truth.

Still, the book was pleasurable to read.  Definitely gave me a lot to think about, and the writing is far more lucid and accessible than I’d expected.  Check out this passage on the evolutionary flux — replete with dead ends — that the world’s religions have gone through:

This cultural winnowing of religions over time is evident throughout history and is occurring every day.  It is easy to miss this dynamic process, because the enduring religious movements are all that we often see in the present.  However, this would be an error.  It is called survivor bias.  When groups, entities, or persons undergo a process of competition and selective retention, we see abundant cases of those that “survived” the competition process; the cases that did not survive and flourish are buried in the dark recesses of the past, and are overlooked.  To understand how religions propagate, we of course want to put the successful religions under the microscope, but we do not want to forget the unsuccessful ones that did not make it — the reasons for their failures can be equally instructive.

This idea, that the histories we know preserve only a lucky few voices & occurrences, is also beautifully alluded to in Jurgen Osterhammel’s The Transformation of the World (trans. Patrick Camiller).  The first clause here just slays me:

The teeth of time gnaw selectively: the industrial architecture of the nineteenth century has worn away more quickly than many monuments from the Middle Ages.  Scarcely anywhere is it still possible to gain a sensory impression of what the Industrial “Revolution” meant–of the sudden appearance of a huge factory in a narrow valley, or of tall smokestacks in a world where nothing had risen higher than the church tower.

Indeed, Norenzayan is currently looking for a way to numerically analyze oft-overlooked facets of history.  So, who knows?  Perhaps, given more data, and a more thorough consideration of data that don’t slot nicely into his favored hypothesis, he could convince me yet.

On mental architecture and octopus literature.

I might spend too much time thinking about how brains work.  Less than some people, sure — everybody working on digital replication of human thought must devote more energy than I do to the topic, and they’re doing it in a more rigorous way — but for a dude with no professional connection to cognitive science or neurobiology or what-have-you, I spend an unreasonable amount of time obsessing over ’em.

What can I say?  Brains are cool.  That they function at all is pretty amazing, and that they do it in a way that gives us either free will or at least the illusion of having it is even better.

Most of my “obsessing over brains” time is devoted to thinking about how humans work, but studies on animal cognition always floor me as well.  A major focus of these studies, though, is often how similar human minds are to those of other animals… for instance, my recent hamsters & poverty essay was about the common response of most mammalian species to unfair, unrectifiable circumstance, and I’m planning a piece on the (mild) similarities between prairie dog language and our own.

The only post I’ve slapped up lately on differences between human and animal cognition was about potential rattlesnake misconceptions, but even that piece hinged upon a difference in the way they see, not the way they think.

Today’s post, though, will be about octopi.

A baby octopus (Graneledone verrucosa) moves across the seafloor as ROV Deep Discoverer (D2) explores Veatch Canyon.

A study on octopus evolution was recently published in Nature (Albertin et al., “The octopus genome and the evolution of cephalopod neural and morphological novelties”), and the main thing I learned from that paper & some background reading is that octopus brains are wicked cool.

Honestly, if we asked Superman to spin our planet backward some twenty billion times in order to re-run evolution, I think cephalopods could give apes a run for their money on potential planetary dominance.  Cephalopods are quite intelligent, adept problem solvers, have tentacles sufficiently agile for tool use, and can communicate by changing colors (although with much less finesse than the octospiders in Arthur C. Clarke’s Rama series. The octospiders used a language based on shifting striations of color displayed on their skin).


The biggest obstacle holding octopi back from world domination is how difficult it is for a water-dwelling species to harness fire or electricity.  But octopi can make brief sojourns onto dry land… and even land-dwelling apes took something like 20 million years to discover fire and some 22 million for electricity.

Sure, that’s faster than octopi — they’ve had a hundred million years already and still no fire — but once Superman spins the planet (first he fought crime!  Now he’ll muck up our timeline to investigate evolution!), there’ll be a chance for him to stop that asteroid and save the dinosaurs.  I imagine that living in constant terror of T-Rex & friends would slow the apes down a little.

I’ve never had to work under that kind of pressure, but it’s probably much more difficult to discover fire if you’re worried that a dinosaur will stomp by, demolish your laboratory, and eat you.

Octopi ingenuity might be similarly stymied by pervasive fear of giant monsters: sharks, dolphins, sea lions, seals, eels, and, yes, those ostensibly land-bound hairless apes.  Voracious, vicious predators all… especially those apes.


And yet.  Despite the fear, octopi are extremely clever.  They have a massive genome, too.  In itself, genome size is not a measure of complexity, in part because faulty cell division machinery sometimes results in the duplication of entire genomes — no matter how many copies of Fuzzy Bee & Friends you staple together, even if you create a 1,000+ page monstrosity, you won’t create a narrative with the complexity of The Odyssey.

That’s what researchers thought had happened with the octopus genome.  Sure, they have more genes than us, but they’re probably all duplicates!  Albertin et al. were the first to actually test that hypothesis, though… and it turns out to be wrong.  The octopus genome underwent massive expansion specifically for neural proteins & regulatory regions.  Which suggests that their huge genome is not dreck, that it is actually the product of intense selection for cognitive performance.  It isn’t proof, but it’s definitely consistent with selection for greater mental capacities.

There isn’t any octopus literature yet, but evolution isn’t done.  As long as octopus survival & mating success is bolstered by intelligence, there’s a chance the species will continue to slowly “improve.”

(I am biased in favor of smart creatures, but more brainpower is not necessarily better in an evolutionary sense.  For an example, here’s my essay on starfish zombies.)


But even if a species derived from contemporary octopi eventually gains cognitive capacities equivalent to our own, we may never grasp the way they perceive the world.  Their brains are organized very differently from our own.  Our minds are highly centralized — our actions result from decisions passed down from on high.

For most human actions, it seems that the mind subconsciously initiates movement, firing off instructions to the appropriate muscles, and then the conscious mind notices what’s going on and concocts a story to rationalize that action.  For instance, if you touch something hot, nociceptors (pain receptors) in your hand send an “Ouch!” signal to your brain, your brain relays back “Pull yer damn hand away!”, then the conscious mind types up a report, “I decided to pull my hand away because that was too hot.”

(Some people have argued that this sequence of timing indicates that we lack free will, by the way.  Which seems silly.  Our freedom doesn’t need to be at the level of conscious decision-making to be worthwhile.  Indeed, your subconscious is as much you as your consciousness.  Your subconscious reflexes reflect who you are, and with concerted effort you can modify most if not all of them.)

Octopi minds are different.  They seem to be much more decentralized.  Each tentacle has a significant neural network and can act independently.  Octopus tentacles can still move and make minor decisions even if cleaved away… like the zombie movie trope where a severed arm continues to strangle someone.

Since we have no good way to communicate with octopi, we don’t know whether their minds are wired for storytelling the way ours are.  Whether they also construct elaborate internal rationalizations for every action (does this help explain why I’m so fascinated by free will?  Even if our freedom is illusory, the ability to maintain that illusion underpins our ability to tell stories).

But if octopi do explain their world with stories, the types of stories they tell would presumably seem highly chaotic to us humans.  Our brains are building explanations for decisions made internally, whereas an octopus would be constructing a narrative from the actions of eight independently-acting entities.

Who knows?  Someday, many many years from now, if octopi undergo further selection for brain power & communication, we might find octopus literature to be exceptionally rambunctious.  Brimming with arbitrary twists & turns.  If their minds also tend toward narrative storytelling (and it’s worth mentioning that octopi also process time in a cascade of short-term and long-term memory the way mammals do), their stories would likely veer inexorably toward the inexplicable.

Toward, that is, actions & consequences that a human reader would perceive to be inexplicable.

Octopi might likewise condemn our own classics as overly regimented.  Lifeless, stilted, formulaic.  And it’d be devilishly hard to explain to an octopus why I think In Search of Lost Time is so good.



p.s. I should offer a brief mea culpa for having listed different lengths of time that apes & octopi have had with which to discover fire.  All known life uses the same genetic code, so it’s extremely likely that we all share a common ancestor.  Everything alive today — bacteria, birds, octopi, humans — has had the same length of time to evolve.

This is part of why it sounds so silly when people refer to contemporary bacteria as being “lower” life forms or somehow less evolved.  Current bacteria have had just as long to perfect themselves for their environments as we have, and they simply pursued a different strategy for survival than humans did.  (For more on this topic, feel free to read this previous post.)

I listed different numbers, though… mostly because it seemed funny to imagine a lineage of octopi racing the apes in that “descent of man” cartoon.  Who will conquer the planet first?!

I chose my times based on the divergence of great apes from their nearest common ancestor (gibbons, whom we’ve rudely declared to be “lesser apes”) and the divergence of octopi from theirs (squids, ca. 135 million years ago).  The numbers themselves are pretty accurate, but the choice of those particular numbers was arbitrary.  You could easily rationalize instead starting the clock for apes in their quest for fire as soon as the first primates appeared, ca. 65 million years ago… then octopi don’t look so bad.  Perhaps only two-fold slower than us.  Or you could start the apes’ clock at the appearance of the very first mammals… in which case octopi might beat us yet.

On hunting.

I saw many posts on the internet from people upset about hunting, specifically hunting lions.  And eventually I watched the Jimmy Kimmel spot where he repeatedly maligns the Minnesota hunter for shooting that lion, and even appears to choke up near the end while plugging a wildlife research fund that you could donate money to.

And, look, I don’t really like hunting.  I’m an animal lover, so I’m not keen on the critters being shot, and I’m a runner who likes being out and about in our local state parks.  Between my loping stride and long hair, I look like a woodland creature.  I’m always nervous, thinking somebody might accidentally shoot me.  Yeah, I wear orange during the big seasons, but I still worry.

But I thought Jimmy Kimmel’s segment was silly.

For one thing, he’s a big barbecue fan — you can watch him driving through Austin searching for the best — and pigs are a far sight smarter than lions.  Plus, most of the lions that people hunt had a chance to live (this isn’t always true — there are horror stories out there about zoos auctioning off their excess animals to hunters, which means they go from a tiny zoo enclosure to a hunting preserve to dead — but in the case of Cecil it clearly was.  He was a wild animal who got to experience life in ways that CAFO-raised pigs could hardly dream of).  Yes, Cecil suffered a drawn-out death, but that seems far preferable to a life consistently horrific from first moment to last.

Most people eat meat.  And humans are heterotrophs.  We aren’t obligate carnivores the way cats are, but a human can’t survive without hurting things — it bothers me when vegetarians pretend that their lives have reached some ethical ideal or other.  Especially because there are so many ways you could conceptualize being good.  I have some friends who raise their own animals, for instance, and they could easily argue that their extreme local eating harms the world less than my reliance on vegetables shipped across the country.

I think it’s good to consider the ramifications of our actions, and I personally strive to be kind and contribute more to the world than I take from it, but I think it’s most important to live thoughtfully.  To think about what we’re doing before we do it.  Our first priority should be taking care of ourselves and those we love.  I don’t think there’s any reasonable argument you can make to ask people to value the lives of other animals without also valuing their own.

That said, if people are going to eat meat, I’d rather they hunt.  We live in southern Indiana.  Lots of people here hunt.  In general, those people also seem less wasteful — hunters are more cognizant of the value of their meals than the people who buy under-priced grocery store cuts of meat but don’t want to know about CAFOs or slaughterhouses.

Hunters often care more about the environment than other people.  They don’t want to eat animals that’ve been grazing on trash.  Ducks Unlimited, a hunting organization, has made huge efforts to ensure that we still have wetlands for ducks and many other creatures to live in.

To the best of my knowledge, Tyson Foods hasn’t been saving any wetlands lately.

Hunters generally don’t kill off entire populations.  And they don’t pump animals full of antibiotics (which is super evil, honestly.  Antibiotics are miracle drugs.  It’s amazing that we can survive infections without amputation.  And the idea that we would squander those compounds’ magic by feeding constant low levels to overcrowded animals, which is roughly what you would do if you were intentionally trying to create bacteria that would shrug off the drugs, is heartbreaking.  There are virtually no medical discoveries we could possibly make that would counterbalance the shame we should feel if we bestow a world without antibiotics on our children’s generation.  See more I’ve written about antibiotics here).

"Cecil the lion at Hwange National Park (4516560206)" by Daughter#3 - Cecil. Licensed under CC BY-SA 2.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Cecil_the_lion_at_Hwange_National_Park_(4516560206).jpg

Sure, Cecil wasn’t shot for food.  I would rather people not hunt lions.  But lions are terrifying, and they stir something primal in most humans — you could learn more about this by reading either Goodwell Nzou’s New York Times editorial or Barbara Ehrenreich’s Blood Rites: Origins and History of the Passions of War, in which she argues that humanity’s fear of predators like lions gave rise to our propensity for violence (a thesis I don’t agree with — you can see my essay here — but Ehrenreich does a lovely job of evoking some of the terror that protohumans must have felt living weak and hairless amongst lions and other giant betoothed beclawed beasts).

The money paid to shoot Cecil isn’t irrelevant, either.  It’s a bit unnerving to think of ethics being for sale — that it’s not okay to kill a majestic creature unless you slap down $50,000 first — but let’s not kid ourselves.  Money buys a wide variety of ethical exemptions.  The rich in our country are allowed to steal millions of dollars and clear their names by paying back a portion of those spoils in fines, whereas the poor can be jailed for years for thefts well under a thousand dollars and typically pay back far more than they ever took.

The money that hunters pay seems to change a lot of host countries for the better.  Trophy hunting often occurs in places where $50,000 means a lot more than it does in the United States, and that money helps prevent poaching and promote habitat maintenance.  Unless a huge amount of economic aid is given to those countries (aid that they are owed, honestly, for the abuses committed against them in the past), the wild animals will be killed anyway, either by poachers or by settlers who have nowhere else to live.  So, sure, I dislike hunting, but hunters are providing some of the only economic support for those animals.

And, look, if you think about all of that and you still want to rail against hunters, go ahead.  But if you’re going to denounce them, I hope you’re doing more than they are for conservation.  And I hope you’re living in a way that doesn’t reveal embarrassing hypocrisies — I’m sure any one of those pigs Jimmy Kimmel eats would’ve loved to experience a small fraction of Cecil’s unfettered life.


Food at our house (taken by Jessika).

p.s. If you happen to be one of those people who can’t imagine living happily without eating meat, you should let me know and I’ll try to invite you to dinner sometime.  I love food, and I’m a pretty good cook.  I should be honest — it is a little bit more work to make life delicious if you’re only eating vegetables, but it definitely can be done.

Excerpts from some other book: our heroic annelid makes a daring escape.

Image from Soil-Net.com at Cranfield University, UK, 2015.

We were in Louisville over the weekend, visiting a pregnant friend.  She had given us many baby clothes before the birth of our daughter; we were returning them.  Her son is now nearly three years old, so we spent part of the afternoon standing in the yard watching him dig with a plastic shovel.  He found a worm, triumphantly showed it to us, then moved it to a safe spot near their sprouting peas.

That’s when my friend and I started talking about worms.

“Moles are their worst enemies,” she told me.  “They hunt worms and store them in their burrows.  But moles have to keep the worms fresh.  If they kill them, worms dry up.  So moles bite off their heads, which means they can’t dig out to escape.”

I grimaced slightly while slurping my pink strawberry smoothie through a straw.

“That doesn’t kill them.  And, actually, if you wait long enough, the worms can regenerate their heads.”

“Huh,” I said, nodding.  “So it’s a race?”

“Guess where this dirt goes, mommy.”

“In the pile?”

“Yes!  In the pile!”  And another plastic shovel’s worth of dirt was added to the small mound he’d made beside their flower bed.

I went on, imagining this could be the seed of a compelling suspense or horror story.  “Because once the mole leaves, the worm would be racing, frantically trying to regrow its head so that it could escape.  Seems way more intense than all those movies where a tied-up hostage is struggling with the ropes.”

“And this dirt?”

“In the pile?”

“It goes in the pile!”

“Except, wait… worms can think, right?” I asked her.  I wasn’t sure, being unaware, for instance, of Charles Darwin’s 1881 study to test whether worms could solve small puzzles, like choosing which objects could best be used to plug a burrow.  And the question felt important; it’d be hard to write a compelling story when working with the drab emotional palette and unreflective inner life of a jellyfish.  Jellyfish, see, have no brains.

“They do, I think,” she told me.  “But I don’t think they’re very cephalated.”

“Oh,” I said, thinking the idea of an in-between state, brain-bearing yet decentralized-decision-making, sounded perfectly reasonable.  After all, that organizational scheme has led to considerable success for terrorist organizations like al Qaeda, if “success” means propagation despite environmental adversity, so why not believe that evolution could’ve stumbled into the same schema, employed biologically?  “But then, what would the worm feel?”

“Worm!  Where is my worm?”

“You set it over there, honey?”

He scampered over to the peas and peered.  No worm, apparently, was found.

“Worm went away!”

“That’s what they do.  They dig.  Now the worm is underground.”

“Underground,” he mused.  And set a dirt-flecked hand upon his chin, philosophically.

At the time I worried that an uncephalated worm (i.e. cognitive function was never fully localized to the head, as opposed to our decephalated hero post-encounter with the nemesis mole) would make a lousy protagonist.  Being a brain-in-head-type fellow, I am somewhat biased toward the emotional experiences of my own kind.  Now, though, I’m not so sure.  Because head-centered cognition might well result in a worse, emotionally flattened story; the most dramatic action occurs while our protagonist’s head is missing, after all.

And I’m still concerned about my original question, what would a worm feel?  If I’m going through all the bother of writing a story, I’d like for people to enjoy it.  And I’ve seen many reviews that criticize human male writers, say, for attempting to inhabit the inner voice of a woman in fiction, or an iPhone.  Although those perspectives both seem easier to project myself into than that of a worm.  The life of an iPhone seems so similar to my own.  Talk to people; look up facts; draw maps; listen to snippets of music and try to guess the song; spend aggravatingly long periods of time thinking, thinking, thinking, with no apparent progress visible from the outside.  Or perhaps that last one is not what you think of when you contemplate such devices, but my younger brother has one and he also has a tendency toward dropping things, and toward forgetting things in his pants’ pockets when he puts them in the wash (you may have read previously his très bourgeois tragicomedy, “Another Bagful of Rice”).  His phone spends as much time as I do staring idly into space, unresponsive.

But, a worm?  How would I write a worm?


Some of the information above as relayed by the narrator and his friend is not true.  Earthworms will not, for instance, regrow their heads.  An earthworm can regenerate some fraction of the lower half of its body, but not the top half.  It’s possible that the narrator’s friend was thinking of planaria, in which a fraction of tail can in fact be used to regenerate an entire animal, and whose nervous system includes a brain-like concentration of neurons in the head, though arguably not a true centralized brain.

Her slight error does not invalidate the story, however; according to A. C. Evans’ article “The Identity of Earthworms Stored by Moles,” it would seem that our heroic earthworm might not require a whole new head.  To quote Evans regarding the potential status of our hero, “The earthworms could not burrow their way out of the holes because the anterior three to five segments had been bitten off or at least mutilated.”

The worms whose heads were bitten off?  They are doomed.  They will not regenerate their heads and will eventually be eaten (unless some larger predator finds the mole, in which case they’ll die fruitlessly… although even then they’ll still be eaten, I suppose, as long as you’re willing to use the verb “eat” to describe decomposition effected by bacteria).  But if our hero was simply mutilated, then there is still a chance!  Come on, little buddy!  You can do it!  Escape, escape!

And, in case you’re curious about earthworm cognition, Eileen Crist wrote a lovely article describing Charles Darwin’s experiments; it was published in Bekoff, Allen, and Burghardt’s The Cognitive Animal and is very accessible (I even convinced K to have her high school biology class read it one year) and, to my mind, very fun.  Well worth a read, even if you don’t yet care about worm thoughts.  But you will!  Just you wait.