Blanket octopuses also show extreme sexual dimorphism – a female’s webbed arms can span seven feet, whereas the males are smaller than an inch.
But, wait, there’s more! In a 1963 article for Science magazine, marine biologist Everet Jones speculated that blanket octopuses might use jellyfish stingers as weapons.
While on a research cruise, Jones installed a night-light station to investigate the local fish.
“Among the frequent visitors to the submerged light were a number of immature female blanket octopuses. I dip-netted one of these from the water and lifted it by hand out of the net. I experienced sudden and severe pain and involuntarily threw the octopus back into the water.”
“To determine the mechanism responsible for this sensation, 10 or 12 small octopuses were captured and I purposely placed each one on the tender areas of my hands. The severe pain occurred each time, but careful observation indicated that I was not being bitten.
“The pain and resulting inflammation, which lasted several days, resembled the stings of the Portuguese man-of-war jellyfish, which was quite abundant in the area.”
tl;dr – “It really hurt! So I did it again.”
My spouse teaches high school biology. An important part of her class is addressing misconceptions about what science is.
Every so often, newspapers will send a reporter to interview my father about his research. Each time, they ask him to put on a lab coat and pipette something:
I mean, look at that – clearly, SCIENCE is happening here.
But it’s important to realize that this isn’t always what science looks like. Most of the time, academic researchers aren’t wearing lab coats. And most of the time, science isn’t done in a laboratory.
Careful observation of the natural world. Repeated tests to discover: if I do this, what will happen next? These are important parts of science, and they were practiced by our ancestors for thousands of years, long before anyone had laboratories. Indigenous people around the world know so much about their local varieties of medicinal plants – knowledge that can only be acquired through scientific practice.
A nine month old who keeps pushing blocks off the edge of the high chair tray to see, will this block fall down, too? That’s science!
And this octopus article, published in the world’s most prestigious research journal? The experiment was to scoop up octopuses by hand and see how much it hurt.
It hurt a lot.
The article that I linked to earlier, the Scientific American blog post that my friend had sent me, includes a video clip at the bottom. Here’s a direct link to the video:
I should warn you, though. The first section of the video shows a blanket octopus streaming gracefully through the ocean. She’s beautiful. But then the clip continues with footage of a huge school of fish.
Obviously, I was hoping that they’d show the octopus lurch forward, wielding those jellyfish stingers like electrified nun-chucks to incapacitate the fish. I mean, yes, I’m vegan. I don’t want the fish to die. But an octopus has to eat. And, if the octopus is going to practice wicked cool tool-using martial arts, then I obviously want to see it.
But I can’t. Our oceans are big, and deep, and dark. We’re still making new discoveries when we send cameras down there. So far, nobody has ever filmed a blanket octopus catching fish this way.
Every time I learn something new about octopuses, I think about family reunions.
About twenty years ago, I attended a family reunion in upstate New York. My grandparents were celebrating their fiftieth wedding anniversary. Many people were there whom I’d never met before, and whom I haven’t seen since. But most of us shared ancestors, often four or five or even six generations back.
And we all shared ancestors at some point, even the people who’d married in. From the beginning of life on Earth until 150,000 years ago, you could draw a single lineage – _____ begat ______ who begat ______ – that leads up to every single human alive today. We have an ancestor in common who lived 150,000 years ago, and so every lineage that leads to her will be shared by us all.
There’s also an ancestor that all humans alive today share with all octopuses alive today. So we could host a family reunion for all of her descendants – we humans would be invited, and blanket octopuses would be, too.
I would love to meet a blanket octopus. They’re brilliant creatures. If we could find a way to communicate, I’m sure there’d be lots to talk about.
But there’s a problem. You see, not everyone invited to this family reunion would be a scintillating conversationalist.
That ancestor we share? Here’s a drawing of her from Jian Han et al.’s Nature article.
She was about the size of a grain of rice.
And, yes, some of her descendants are brilliant. Octopuses. Dolphins. Crows. Chimpanzees. Us.
But this family reunion would also include a bunch of worms, moles, snails, and bugs. A lot of bugs. Almost every animal would’ve been invited, excluding only jellyfish and sponges. Many of the guests would want to lay eggs in the potato salad.
So, sure, it’d be cool to get to meet up with the octopuses, our long-lost undersea cousins. But we might end up seated next to an earthworm instead.
I’m sure that worms are very nice. Charles Darwin was fascinated by the intelligence of earthworms. Still, it’s hard to have a conversation with somebody when you don’t have a lot of common interests.
I recently borrowed my local library’s copy of Tao Lin’s Trip. I read ten pages before a business card fell out. I didn’t find the other until about a hundred pages later. The cards were really crammed in there – I often read at nap- and bedtime, lying on my back, with little feet kicking my books, belly, neck, etc. I’m surprised the second card wasn’t ejected earlier.
In Trip, Lin writes about drugs and some of the people who frequently ingest them. For instance, Lin spent several months reading the oeuvre of Terence McKenna, a passionate advocate for the legalization of psychedelic drugs (which I support) who argued that his chemically induced visions (language elves, fractal time) represented tangible features of our universe (which I think is asinine). At other times, McKenna self-described as a “psychonaut,” which I think is a better term – compounds that perturb the workings of a mind do reveal truths about that mind.
That’s the essence of the scientific method, after all. First, formulate a predictive model about how something works. Then, perturb your system. If your prediction holds up, try to think of a different test you could make to try to prove yourself wrong. If your prediction is off, try to think of a new model. Repeat ad infinitum (physicus usque ad mortem).
In an undergrad-designed psychology experiment, the perturbation might be to compel a study subject to think about death by mixing a lot of photographs of car wrecks into a slide show. Does a person exposed to these images seem more inclined to spend time with close family members (based on the results of a 30-question survey) than equivalent study subjects who were instead shown photographs of puppies?
(A man who has been attending my poetry class for the past few months also self-describes as a Buddhist psychonaut – his favorite psychedelic is LSD, but he also struggles with a nagging impulse to shoot heroin. He’s a vegetarian and has been writing poetry for twenty years, ever since his first friend died of overdose. The only way for him to avoid prison time is to enroll at a court-mandated Christian-faith-based rehabilitation clinic where everyone works daily at the Perdue Meats slaughterhouse. He’s just waiting on a bed before they ship him out there. Personally, I think that having a recovering addict decapitate hundreds of turkeys daily would be an unhealthy perturbation of the mind.)
As Lin researched pharmacology, he realized that he’d made the same error in thinking about his body that our society has made in thinking about our environment, especially the oceans. He’d assumed that his body was so large, and each drug molecule so small, that he’d be relatively unchanged as the pills he swallowed were metabolized away. But he was wrong. He’d turned his own body into a degraded environment that felt terrible to live inside.
He realized that corporations shouldn’t have free license to destroy the world that we all share. And he realized that he needed to practice better stewardship of his body, his own personal environs. He changed his diet and his lifestyle and no longer felt like garbage all the time.
Lin also provides some useful information about this country’s War on Drugs. If someone was looking for an accessible way to learn more about this, I can see myself recommending either Trip (for the dudes in jail) or Ayelet Waldman’s A Really Good Day (for the harried parents working alongside me in the YMCA snack room).
And those business cards? They made convenient bookmarks. Verdant green, the front advertised a local hydroponics supply store, the back listed the store manager’s name and telephone number.
This seemed like a great advertising strategy. Much more precise (and less evil) than Facebook’s targeted ads.
I won’t be buying any hydroponics supplies, but I’ll probably put those business cards back before I return the book.
Most of what I’ve found in books has been less directly relevant to the subject matter. I felt dismayed to find a business card for a local artist / writer / model / actor – the front showed her in pinup-style undergarments with the cord for a video game controller entwining one stockinged leg – inside a library copy of Against Our Will by Susan Brownmiller.
When I flipped through one of Deepak Chopra’s new-age self-help books (that I pulled off the secondhand inventory shelf at Pages to Prisoners to mail to someone who’d requested stuff about UFOs, Wicca, and conspiracies), I found a Valentine’s Day note (written by a small child in crayon) and a polaroid of a tired-looking bare-breasted woman staring at the camera from atop a camper’s bed. MWPP totally would’ve gotten dinged if I’d mailed the book with that picture still inside.
And I’ve written previously about the time I found an acceptance letter from Best of Photojournalism inside a previous year’s edition of the book as I selected books to mail to a prisoner interested in photography.
But I didn’t mention that I visited the university library to find the accepted photograph (of a stretch of highway closed for the emergency landing of a small plane in distress) …
… or that I then put together a package of books to send to that photographer, because it turned out that he was also in prison after murdering his son-in-law.
The impression I got from news reports was that this man had a daughter whom he’d raised alone. When his daughter was 13 years old, she fell in love with an abusive, oft-unemployed 19-year-old. She soon became pregnant. As it happens, this boyfriend took too many drugs. I’ve met many men in jail who are totally charming while sober but (“allegedly!”) wail on women when they’re not. Some are quite frequently not sober.
During this man’s trial, several witnesses testified to the violent physical abuse his daughter was subject to. His daughter’s boyfriend “would grab ____, jerk her by the face, force her to go places, cuss her out if she didn’t do the right thing … “
Not that this is a reason to shoot somebody.
Still, I wondered how a book from the man’s personal library had wound up in the inventory of the Pages to Prisoners bookstore. The murder occurred in August of 2012. Mid-autumn, 2015, his book was on our shelves.
I like to imagine that his daughter made the donation. That perhaps, by then, she’d forgiven her father. That she’d realized how miserable U.S. incarceration can be and wanted to do a little something to make it better.
I certainly hope that his book helped people at the prison where I sent it.
The scientific method is the best way to investigate the world.
Do you want to know how something works? Start by making a guess, consider the implications of your guess, and then take action. Muck something up and see if it responds the way you expect it to. If not, make a new guess and repeat the whole process.
This is slow and arduous, however. If your goal is not to understand the world, but rather to convince other people that you do, the scientific method is a bad bet. Instead you should muck something up, see how it responds, and then make your guess. When you know the outcome in advance, you can appear to be much more clever.
A large proportion of biomedical science publications are inaccurate because researchers follow the second strategy. Given our incentives, this is reasonable. Yes, it’s nice to be right. It’d be cool to understand all the nuances of how cells work, for instance. But it’s more urgent to build a career.
Both labs I worked in at Stanford cheerfully published bad science. Unfortunately, it would be nearly impossible for an outsider to notice the flaws because primary data aren’t published.
A colleague of mine obtained data by varying several parameters simultaneously, but then graphed his findings against only one of these. As it happens, his observations were caused by the variable he left out of his charts. Whoops!
(Nobel laureate Arieh Warshel quickly responded that my colleague’s conclusions probably weren’t correct. Unfortunately, Warshel’s argument was based on unrealistic simulations – in his model, a key molecule spins in unnatural ways. This next sentence is pretty wonky, so feel free to skip it, but … to show the error in my colleague’s paper, Warshel should have modeled multiple molecules entering the enzyme active site, not molecules entering backward. Whoops!)
Another colleague of mine published his findings about unusual behavior from a human protein. But then his collaborator realized that they’d accidentally purified and studied a similarly-sized bacterial protein, and were attempting to map its location in cells with an antibody that didn’t work. Whoops!
No apologies or corrections were ever given. They rarely are, especially not from researchers at our nation’s fanciest universities. When somebody with impressive credentials claims a thing is true, people often feel ready to believe.
Indeed, for my own thesis work, we wanted to test whether two proteins are in the same place inside cells. You can do this by staining with light-up antibodies for each. If one antibody is green and the other is red, you’ll know how often the proteins are in the same place based on how much yellow light you see.
Before conducting the experiment, I wrote a computer program that would assess the data. My program could identify various cellular structures and check the fraction that were each color.
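The kind of analysis that program performed can be sketched in a few lines. This is only an illustrative reconstruction, not the actual thesis code: the function name, the thresholding approach, and the toy images are all my own assumptions about what "check the fraction that were each color" might look like in practice.

```python
import numpy as np

def colocalization_fraction(red, green, threshold=0.5):
    """Estimate how often two stained proteins occupy the same spots.

    red, green: 2-D arrays of fluorescence intensity, scaled to [0, 1].
    A pixel counts as "on" in a channel if it exceeds the threshold;
    the returned value is the fraction of red-positive pixels that are
    also green-positive (i.e. would appear yellow in the merged image).
    """
    red_on = red > threshold
    green_on = green > threshold
    if red_on.sum() == 0:
        return 0.0
    return float((red_on & green_on).sum() / red_on.sum())

# Toy example: two channels whose signals overlap at a single pixel.
red = np.zeros((4, 4));   red[0:2, 0:2] = 1.0
green = np.zeros((4, 4)); green[1:3, 1:3] = 1.0
print(colocalization_fraction(red, green))  # 1 of 4 red pixels is also green -> 0.25
```

The appeal of automating this is exactly the point of the story: a program applies the same criterion to every image, which is much harder to nudge toward a desired answer than counting by hand.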
As it happened, I didn’t get the results we wanted. My data suggested that our guess was wrong.
But we couldn’t publish that. And so my advisor told me to count again, by hand, claiming that I should be counting things of a different size. And then she continued to revise her instructions until we could plausibly claim that we’d seen what we expected. We made a graph and published the paper.
This is crummy. It’s falsehood with the veneer of truth. But it’s also tragically routine.
One of these nightmares is driven by the perverse incentives facing early neurosurgeons. Perhaps you noticed, above, that an essential step of the scientific method involves mucking things up. You can’t tell whether your guesses are correct until you perform an experiment. Dittrich provides a lovely summary of this idea:
The broken illuminate the unbroken.
An underdeveloped dwarf with misfiring adrenal glands might shine a light on the functional purpose of these glands. An impulsive man with rod-obliterated frontal lobes [Phineas Gage] might provide clues to what intact frontal lobes do.
This history of modern brain science has been particularly reliant on broken brains, and almost every significant step forward in our understanding of cerebral localization – that is, discovering what functions rely on which parts of the brain – has relied on breakthroughs provided by the study of individuals who lacked some portion of their gray matter.
. . .
While the therapeutic value of the lobotomy remained murky, its scientific potential was clear: Human beings were no longer off-limits as test subjects in brain-lesioning experiments. This was a fundamental shift. Broken men like Phineas Gage and Monsieur Tan may have always illuminated the unbroken, but in the past they had always become broken by accident. No longer. By the middle of the twentieth century, the breaking of human brains was intentional, premeditated, clinical.
Dittrich was dismayed to learn that his own grandfather had participated in this sort of research, intentionally wrecking at least one human brain in order to study the effects of his meddling.
Lacking a specific target in a specific hemisphere of Henry’s medial temporal lobes, my grandfather had decided to destroy both.
This decision was the riskiest possible one for Henry. Whatever the functions of the medial temporal lobe structures were – and, again, nobody at the time had any idea what they were – my grandfather would be eliminating them. The risks to Henry were as inarguable as they were unimaginable.
The risks to my grandfather, on the other hand, were not.
At that moment, the riskiest possible option for his patient was the one with the most potential rewards for him.
By destroying part of a brain, Dittrich’s grandfather could create a valuable research subject. Yes, there was a chance of curing the patient – Henry agreed to surgery because he was suffering from epileptic seizures. But Henry didn’t understand what the proposed “cure” would be. This cure was very likely to be devastating.
At other times, devastation was the intent. During an interview with one of his grandfather’s former colleagues, Dittrich is told that his grandmother was strapped to the operating table as well.
“It was a different era,” he said. “And he did what at the time he thought was okay: He lobotomized his wife. And she became much more tractable. And so he succeeded in getting what he wanted: a tractable wife.”
Compared to slicing up a brain so that its bearer might better conform to our society’s misogynistic expectations of female behavior, a bit of scientific fraud probably doesn’t sound so bad. Which is a shame. I love science. I’ve written previously about the manifold virtues of the scientific method. And we need truth to save the world.
Which is precisely why those who purport to search for truth need to live clean. In the cut-throat world of modern academia, they often don’t.
Dittrich investigated the rest of Henry’s life: after part of his brain was destroyed, Henry became a famous study subject. He unwittingly enabled the career of a striving scientist, Suzanne Corkin.
Dittrich writes that
Unlike Teuber’s patients, most of the research subjects Corkin had worked with were not “accidents of nature” [a bullet to the brain, for instance] but instead the willful products of surgery, and one of them, Patient H.M., was already clearly among the most important lesion patients in history. There was a word that scientists had begun using to describe him. They called him pure. The purity in question didn’t have anything to do with morals or hygiene. It was entirely anatomical. My grandfather’s resection had produced a living, breathing test subject whose lesioned brain provided an opportunity to probe the neurological underpinnings of memory in unprecedented ways. The unlikelihood that a patient like Henry could ever have come to be without an act of surgery was important.
. . .
By hiring Corkin, Teuber was acquiring not only a first-rate scientist practiced in his beloved lesion method but also by extension the world’s premier lesion patient.
. . .
According to [Howard] Eichenbaum, [a colleague at MIT,] Corkin’s fierceness as a gatekeeper was understandable. After all, he said, “her career is based on having that exclusive access.”
Because Corkin had (coercively) gained exclusive access to this patient, most of her claims about the workings of memory would be difficult to contradict. No one could conduct the experiments needed to rebut her.
Which makes me very skeptical of her claims.
Like most scientists, Corkin stumbled across occasional data that seemed to contradict the models she’d built her career around. And so she reacted in the same way as the professors I’ve worked with: she hid the data.
Dittrich: Right. And what’s going to happen to the files themselves?
She paused for several seconds.
Dittrich: Shredded? Why would they be shredded?
Corkin: Nobody’s gonna look at them.
Dittrich: Really? I can’t imagine shredding the files of the most important research subject in history. Why would you do that?
. . .
Corkin: Well, the things that aren’t published are, you know, experiments that just didn’t … [another long pause] go right.
A cat could seem to be many different things, and Brendan Wenzel’s recent picture book They All Saw a Cat conveys these vagaries of perception beautifully. Though we share the world, we all see and hear and taste it differently. Each creature’s mind filters a torrential influx of information into manageable experience; we all filter the world differently.
They All Saw a Cat ends with a composite image. We see the various components that each animal focused on, amalgamated into something approaching “cat-ness.” A human child noticed the cat’s soft fur, a mouse noticed its sharp claws, a fox noticed its swift speed, a bird noticed that it can’t fly.
All these properties are essential descriptors, but so much is blurred away by our minds. When I look at a domesticated cat, I tend to forget about the sharp claws and teeth. I certainly don’t remark on its lack of flight – being landbound myself, this seems perfectly ordinary to me. To be ensnared by gravity only seems strange from the perspective of a bird.
There is another way of developing the concept of “cat-ness,” though. Instead of compiling many creatures’ perceptions of a single cat, we could consider a single perceptive entity’s response to many specimens. How, for instance, do our brains learn to recognize cats?
My friend looked at me with a mix of puzzlement and pity and said, “No.” Then added, as regards Philosophical Investigations, “You read it too fast.”
One of Wittgenstein’s aims is to show how humans can learn to use language… which is complicated by the fact that, in my friend’s words, “Any group of objects will share more than one commonality.” He posits that no matter how many red objects you point to, they’ll always share properties other than red-ness in common.
Or cats… when you’re teaching a child how to speak and point out many cats, will they have properties other than cat-ness in common?
In some ways, I agree. After all, I think the boundaries between species are porous. I don’t think there is a set of rules that could be used to determine whether a creature qualifies for personhood, so it’d be a bit silly if I also claimed that cat-ness could be clearly defined.
But when I point and say “That’s a cat!”, chances are that you’ll think so too. Even if no one had ever taught us what cats are, most people in the United States have seen enough of them to think “All those furry, four-legged, swivel-tailed, pointy-eared, pouncing things were probably the same type of creature!”
Even a computer can pick out these commonalities. When we learn about the world, we have a huge quantity of sensory data to draw upon – cats make those noises, they look like that when they find a sunny patch of grass to lie in, they look like that when they don’t want me to pet them – but a computer can learn to identify cat-ness using nothing more than grainy stills from Youtube.
Quoc Le et al. fed a few million images from Youtube videos to a computer algorithm that was searching for commonalities between the pictures. Even though the algorithm was given no hints as to the nature of the videos, it learned that many shared an emphasis on oblong shapes with triangles on top… cat faces. Indeed, when Le et al. made a visualization of the patterns that were causing their algorithm to cluster these particular videos together, we can recognize a cat in that blur of pixels.
The computer learns in a way vaguely analogous to the formation of social cliques in a middle school cafeteria. Each kid is a beautiful and unique snowflake, sure, but there are certain properties that cause them to cluster together: the sporty ones, the bookish ones, the D&D kids. For a neural network, each individual is only distinguished by voting “yes” or “no,” but you can cluster the individuals who tend to vote “yes” at the same time. For a small grid of black and white pixels, some individuals will be assigned to the pixels and vote “yes” only when their pixels are white… but others will watch the votes of those first responders and vote “yes” if they see a long line of “yes” votes in the top quadrants, perhaps… and others could watch those votes, allowing for layers upon layers of complexity in analysis.
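That layered-voters picture can be made concrete with a toy two-layer network. This is a cartoon of the cafeteria analogy, not of Le et al.'s actual architecture: the function names are mine, and the "detector" here looks for a solid horizontal line of white pixels rather than anything as rich as a cat face.

```python
# First-layer voters each watch one pixel and say "yes" when it's white.
# A second-layer voter watches a whole row of those votes and says "yes"
# only if every voter in that row did -- i.e. the image contains a solid
# horizontal line at that position.

def first_layer(image):
    # One voter per pixel: True when the pixel is white (1).
    return [[pixel == 1 for pixel in row] for row in image]

def row_detector(votes, row_index):
    # Second layer: unanimous "yes" along one row of first-layer votes.
    return all(votes[row_index])

image = [
    [0, 1, 0],
    [1, 1, 1],   # a solid horizontal line
    [0, 1, 0],
]
votes = first_layer(image)
print(row_detector(votes, 1))  # True: the middle row is all white
print(row_detector(votes, 0))  # False
```

Stacking more such layers – detectors watching detectors – is what lets a real network assemble pixel votes into edges, edges into shapes, and shapes into something like "oblong with triangles on top."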
And I should mention that I feel indebted to Liu Cixin’s sci-fi novel The Three-Body Problem for thinking to humanize a computer algorithm this way. Liu includes a lovely description of a human motherboard, with triads of trained soldiers hoisting red or green flags forming each logic gate.
In the end, the algorithm developed by Le et al. clustered only 75% of the frames from Youtube cat videos together – it could recognize many of these as being somehow similar, but it was worse at identifying cat-ness than the average human child. But it’s pretty easy to realize why: after all, Le et al. titled their paper “Building high-level features using large scale unsupervised learning.”
When Wittgenstein writes about someone watching builders – one person calls out “Slab!”, the other brings a large flat rock – he is also considering unsupervised learning. And so it is easy for Wittgenstein to imagine that the watcher, even after exclaiming “Now I’ve got it!”, could be stymied by a situation that went beyond the training.
Unsupervised learning may be sufficient to prepare children for life in an agrarian village. Unsupervised learning is sufficient for chimpanzees learning how to crack nuts. And unsupervised learning is sufficient for a computer to develop an idea about what cats are.
But the best human learning employs the scientific method – purposefully seeking out “no.”
I assume most children reflexively follow the scientific method – my daughter started shortly after her first birthday. I was teaching her about animals, and we started with dogs. At first, she pointed primarily to creatures that looked like her Uncle Max. Big, brown, four-legged, slobbery.
Eventually she started pointing to creatures that looked slightly different: white dogs, black dogs, small dogs, quiet dogs. And then the scientific method kicked in.
She’d point to a non-dog, emphatically claiming it to be a dog as well. And then I’d explain why her choice wasn’t a dog. What features cause an object to be excluded from the set of correct answers?
Eventually she caught on.
Seems toddler & I will just have to agree to disagree whether certain animals are Canis lupus (“Daa!”) or Sus scrofa (“Naw, that’s a pig!”).
Many adults, sadly, are worse at this style of thinking than children. As we grow, it becomes more pressing to seem competent. We adults want our guesses to be right – we want to hear yes all the time – which makes it harder to learn.
The New York Times recently presented a clever demonstration of this. They showed a series of numbers that followed a rule, let readers type in new sequences to see whether those also followed the rule, and then asked readers to describe what the rule was.
A scientist would approach this type of puzzle by guessing a rule and then plugging in numbers that don’t follow it – nothing is ever really proven in science, but we validate theories by designing experiments that should tell us “no” if our theory is wrong. Only theories that are “falsifiable” fall under the purview of science. And the best fields of science devote considerable resources to seeking out opportunities to prove ourselves wrong.
But many adults, wanting to seem smart all the time, fear mistakes. When that New York Times puzzle was made public, 80% of readers proposed a rule without ever hearing that a set of numbers didn’t follow it.
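The puzzle's trap is easy to sketch. The hidden rule was, reportedly, nothing more than "each number is larger than the last," which is my summary of the published puzzle rather than a quote from it; seeing the sequence 2, 4, 8, most readers guessed something like doubling and then only tried sequences they expected to pass.

```python
def follows_rule(seq):
    # The hidden rule, as reported: the numbers strictly increase.
    return all(a < b for a, b in zip(seq, seq[1:]))

# A confirming test: consistent with "doubling", so it teaches you nothing.
print(follows_rule([3, 6, 12]))   # True

# A falsifying test: this sequence breaks "doubling" but still passes,
# which proves the doubling hypothesis wrong. That's the useful experiment.
print(follows_rule([1, 2, 3]))    # True
print(follows_rule([8, 4, 2]))    # False
```

Typing in `[1, 2, 3]` and hoping to hear "no" is the whole scientific method in miniature; typing in `[3, 6, 12]` and hearing "yes" is just flattery.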
Wittgenstein’s watcher can’t really learn what “Slab!” means until perversely hauling over some other type of rock and being told, “no.”
We adults can’t fix the world until we learn from children that it’s okay to look ignorant sometimes. It’s okay to be wrong – just say “sorry” and “I’ll try to do better next time.”
Despite my disagreements with a lot of its details, I thoroughly enjoyed Ara Norenzayan’s Big Gods. The book posits an explanation for the current global dominance of the big three Abrahamic religions: Christianity, Islam, and Judaism.
Instead of the “quirks of history & dumb luck” explanation offered in Jared Diamond’s Guns, Germs, and Steel, Norenzayan suggests that the Abrahamic religions have so many adherents today because beneficial economic behaviors were made possible by belief in those religions.
Here’s a rough summary of the argument: Economies function best in a culture of trust. People are more trustworthy when they’re being watched. If people think they’re being watched, that’s just as good. Adherents to the Abrahamic faiths think they are always being watched by God. And, because anybody could claim to believe in an omnipresent, ever-watchful god, it was worthwhile for believers to practice costly rituals (church attendance, dietary restrictions, sexual moderation, risk of murder by those who hate their faith) in order to signal that they were genuine, trustworthy, God-fearing individuals.
When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so. Wearing a cap that says “I’m a cooperator” is not going to take you far in a world of other rational agents on the lookout for ploys. According to [Robert] Frank, over evolutionary time we “learned” how to harness our emotions to the task of keeping us from being too rational, and–just as important–earning us a reputation for not being too rational. It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to “offers we can’t refuse,” as the Godfather says. Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.
I think that’s a beautiful passage — the logic goes down so easily that I hardly notice the inaccuracies beneath the surface. It makes a lot of sense unless you consider that many other species, including relatively non-cooperative species, have emotional lives very similar to our own, and will, like us, act in irrational ways to stay true to those emotions (I still love this clip of an aggrieved monkey rejecting its cucumber slice).
Maybe that doesn’t seem important to Dennett, who shrugs off decades of research indicating the cognitive similarities between humans and other animals when he asserts that only we humans have meaningful free will, but that kind of detail matters to me.
You know, accuracy or truth or whatever.
Similarly, I think Norenzayan’s argument is elegant, even though I don’t agree. One problem is that he supports his claims with results from social psychology experiments, many of which are not credible. But that’s not entirely his fault. Arguments do sound more convincing when there’s experimental data to back them up, and surely there are a few tolerably accurate social psychology results tucked away in the scientific literature. The problem is that the basic methodology of modern academic science produces a lot of inaccurate garbage (References? Here & here & here & here... I could go on, but I already have a half-written post on the reasons why the scientific method is not a good persuasive tool, so I’ll elaborate on this idea later).
For instance, many of the experiments Norenzayan cites are based on “priming.” Study subjects are unconsciously inoculated with an idea: will they behave differently?
The author of the original priming study also published a few apoplectic screeds denouncing the researchers who attempted to replicate his work — here’s a quote from Ed Yong’s analysis:
Bargh also directs personal attacks at the authors of the paper (“incompetent or ill-informed”), at PLoS (“does not receive the usual high scientific journal standards of peer-review scrutiny”), and at me (“superficial online science journalism”). The entire post is entitled “Nothing in their heads”.
Personally, I am extremely skeptical of any work based on the “priming” methodology. You might expect the methodology to be sound because it’s been used in so many subsequent studies. I don’t think so. Scientific publishing is sufficiently broken that unsound methodologies could be used to prove all sorts of untrue things, including precognition.
Academia rewards researchers who can successfully hunt for publishable results. But the optimal strategy for obtaining something publishable (collect lots of data, analyze it repeatedly using different mathematical formulas, discard all the data that look “wrong”) is very different from the optimal strategy for uncovering truth.
Here’s one way to understand why much of modern academic publishing isn’t really science: in general, results are publishable only if they are positive (i.e. a treatment causes a change, as opposed to a treatment having no effect) and significant (i.e. you would see the result only 1 out of 20 times if the claim were not actually true). But that means that if twenty labs decide to test the same false idea, on average nineteen of them will get negative results and be unable to publish their findings, while one will see a false positive and publish. Newspapers will announce that the finding is real, and there will be a published record of only the incorrect lab’s result.
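The arithmetic here is easy to check for yourself. Below is a toy simulation (my own sketch, not anything from Norenzayan or the papers discussed) using the numbers from the paragraph above: twenty labs, each testing a false hypothesis at the conventional 1-in-20 significance threshold.

```python
import random

random.seed(0)

ALPHA = 0.05   # conventional significance threshold (1 in 20)
LABS = 20      # twenty labs testing the same false idea
TRIALS = 10_000

def lab_gets_false_positive():
    """One lab tests a hypothesis with no real effect.

    By construction, a 'significant' result appears with
    probability ALPHA even though the claim is untrue.
    """
    return random.random() < ALPHA

# Fraction of worlds in which at least one of the twenty labs
# obtains a publishable (but false) positive result.
at_least_one = sum(
    any(lab_gets_false_positive() for _ in range(LABS))
    for _ in range(TRIALS)
) / TRIALS

analytic = 1 - (1 - ALPHA) ** LABS  # = 1 - 0.95**20, about 0.64
print(f"simulated: {at_least_one:.2f}, analytic: {analytic:.2f}")
```

So even before any sloppiness or bad faith enters the picture, roughly two times out of three *somebody* publishes the false result — and the nineteen quiet negative results never appear anywhere.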
Because academic training is set up like a pyramid scheme, we have a huge glut of researchers. For any scientific question, there are probably enough laboratories studying it to nearly guarantee that significance testing will provide one of them an untrue publishable result.
And that’s even if everyone involved were 100% ethical. Even then, a huge quantity of published research would be incorrect. In our world, where many researchers are not ethical, the situation is even worse.
Norenzayan even documents this sort of unscientific over-analysis of data in his book. One example appears in his chapter on anti-atheist prejudice:
In addition to assessing demographic information and individual religious beliefs, we asked [American] participants to rate the degree to which they viewed both atheists and gays with either distrust or with disgust.
. . .
It is possible that, for whatever reason, people may have felt similarly toward both atheists and gays, but felt more comfortable openly voicing distrust of atheists than of gays. In addition, our sample consisted of American adults, overall a quite religious group. To address these concerns, we performed additional studies in a population with considerable variability in religious involvement, but overall far less religious on the whole than most Americans. We studied the attitudes of university students in Vancouver, Canada. To circumvent any possible artifacts that result from overtly asking people about their prejudices, we designed studies that included more covert ways of measuring distrust.
When I see an explanation like that, it suggests that the researchers first conducted their study using the same methodology for both populations, obtained data that did not agree with their hypothesis, then collected more data for only one group in order to build a consistent, publishable story (if you’re interested, you can see their final paper here).
Because researchers can (and do!) collect data until they see what they want — until they have results that agree with a pet hypothesis, perhaps one they’ve built their career around — it’s not hard to obtain publishable data that appear to support any claim. Doesn’t matter whether the claim is true or not. And that, in essence, is why the practices that masquerade as the scientific method in the hands of modern researchers are not convincing persuasive tools.
I think it’s unfair to denounce people for not believing scientific results about climate change, for instance. Because modern scientific results simply are not believable.
Which is a shame. The scientific method, used correctly, is the best way to understand the world. And many scientists are very bright, ethical people. And we should act upon certain research findings.
For instance, even if the reality underlying most climate change studies is a little less dire than some papers would lead you to believe, our world will be better off — more ecological diversity, less asthma, less terrorism, and, yes, less climate destabilization — if we pretend the results are real.
So it’s tragic, in my opinion, that a toxic publishing culture has undermined the authority of academic scientists.
And that’s one downside to Norenzayan’s book. He supports his argument with a lot of data that I’m disinclined to believe.
The other problem is that he barely addresses historical information that doesn’t agree with his hypothesis. For instance, several cultures developed long-range trust-based commerce without believing in omnipresent, watchful, morality-enforcing gods, including ancient Kanesh, China, the pre-Christian Greco-Roman empires, and some regions of Polynesia.
There’s also historical data demonstrating that trust is separable from religion (and not just in contemporary secular societies, where Norenzayan would argue that a god-like role is played by the police… didn’t sound so scary the way he wrote it). The most heart-wrenching example of this, in my opinion, is presented in Nunn & Wantchekon’s paper, “The Slave Trade and the Origins of Mistrust in Africa.” They suggest a causal relationship between kidnapping & treachery during the transatlantic slave trade and contemporary mistrust in the plundered regions. Which would mean that slavery in the United States created a drag on many African nations’ economies that persists to this day.
Is it so wrong to wish Norenzayan had addressed some of these issues? I’ll admit that complexity might’ve sullied his clever logic. But, all apologies to Keats, sometimes it’s necessary to introduce some inelegance in the pursuit of truth.
Still, the book was pleasurable to read. Definitely gave me a lot to think about, and the writing is far more lucid and accessible than I’d expected. Check out this passage on the evolutionary flux — replete with dead ends — that the world’s religions have gone through:
This cultural winnowing of religions over time is evident throughout history and is occurring every day. It is easy to miss this dynamic process, because the enduring religious movements are all that we often see in the present. However, this would be an error. It is called survivor bias. When groups, entities, or persons undergo a process of competition and selective retention, we see abundant cases of those that “survived” the competition process; the cases that did not survive and flourish are buried in the dark recesses of the past, and are overlooked. To understand how religions propagate, we of course want to put the successful religions under the microscope, but we do not want to forget the unsuccessful ones that did not make it — the reasons for their failures can be equally instructive.
This idea, that the histories we know preserve only a lucky few voices & occurrences, is also beautifully alluded to in Jurgen Osterhammel’s The Transformation of the World (trans. Patrick Camiller). The first clause here just slays me:
The teeth of time gnaw selectively: the industrial architecture of the nineteenth century has worn away more quickly than many monuments from the Middle Ages. Scarcely anywhere is it still possible to gain a sensory impression of what the Industrial “Revolution” meant–of the sudden appearance of a huge factory in a narrow valley, or of tall smokestacks in a world where nothing had risen higher than the church tower.