On happiness and mind control.

Many of us would like to feel happier.

In the United States, people are having sex less often.  And between alcohol, marijuana, recreational painkillers – not to mention anti-depressants and anti-anxiety medication – we take a lot of drugs. 

Many of us work long hours at jobs we dislike so that we can afford to buy things that promise to fill some of the emptiness inside.  Some of the world’s most lucrative businesses are advertising companies … one of which, Facebook, is designed to make you feel worse so that you’ll be more susceptible to its ads.

The suicide rate has been rising.

From Dan Diamond’s Forbes blog post, “Stopping The Growing Risk Of Suicide: How You Can Help.”

It might seem as though we don’t know how to make people happier.  But, actually, we do.

Now, I know that I’ve written previously with bad medical advice, such as the suggestion that intentionally infecting yourself with the brain parasite Toxoplasma gondii could make you happier.  This parasite boosts dopamine levels in your brain (dopamine is a neurotransmitter that conveys feelings of pleasure and mirth) and makes you feel bolder (in controlled laboratory experiments, infected mice show less stress when making risky decisions, and observational data suggests the same to be true for infected humans).  You also might become more attractive (infected rodents have more sex, and portrait photographs of infected human men are perceived as more dominant and masculine).

There are drawbacks to Toxoplasma infection, of course.  Infected rodents are more likely to be killed by cats.  Infected humans may become slower as well, both physically and intellectually, because Toxoplasma forms cysts in your brain.  It might increase the chance of developing schizophrenia.  It can kill you if you’re immunocompromised.  And the surest way to contract toxoplasmosis, if incidental exposure hasn’t already done it for you, is by eating cat excrement.

My advice today is different.  No feces required! 

And I’m not suggesting anything illegal.  I mentioned, above, that people in the United States take a lot of drugs.  Several of these boost dopamine levels in your brain.  Cocaine, for instance, is a “dopamine re-uptake inhibitor,” ensuring that any momentary sensation of pleasure will linger, allowing you to feel happy longer.

But cocaine has a nasty side effect of leading to incarceration, especially if the local law enforcement officers decide that your epidermal melanin concentration is too high.  And jail is not a happy place.

Instead, you could make yourself happier with a bit of at-home trepanation, followed by the insertion of an electrode into the nucleus accumbens of your brain.  Now, I know that sounds risky, what with the nucleus accumbens being way down near the base of your brain.  But your brain is rather squishy – although you’ll shear some cells as you cram a length of conductive wire into your cranium, the hope is that many neurons will be pushed out of the way.

The nucleus accumbens tends to show high activity during pleasure.  For instance, cocaine stimulates activity in this part of your brain.  So does money — tell research subjects that they’ve won a prize and you’ll see this region light up.  If rats are implanted with an electrode that lets them jolt their own nucleus accumbens by pushing a lever, they’ll do it over and over.  Pressing that lever makes them happier than eating, or drinking water, or having sex.  They’ll blissfully self-stimulate until they collapse.  From James Olds’s Science paper, “Self-Stimulation of the Brain”:

If animals with electrodes in the hypothalamus were run for 24 hours or 48 hours consecutively, they continued to respond as long as physiological endurance permitted.

Setup for Olds’s experiment.

Perhaps I should have warned you – amateur brain modification would carry some risks.  Even if you have the tools needed to drill into your own skull without contracting a horrible infection, you don’t want to boost your mood just to die of dehydration.

After all, happiness might have some purpose.  There might be reasons why certain activities – like eating, drinking water, having sex … to say nothing of strolling outdoors, or volunteering to help others – make us feel happy.  After discussing several case studies in their research article “How Happy Is Too Happy,” Matthis Synofzik, Thomas Schlaepfer, and Joseph Fins write that using deep brain stimulation for the “induction of chronic euphoria could also impair the person’s cognitive capacity to respond to reasons about which volitions and preferences are in his or her best interests.”

When an activity makes us feel happy, we’re likely to do it again.  That’s how people manage to dedicate their lives to service.  Or get addicted to drugs.

And it’s how brain stimulation could be used for mind control.

If you show me a syringe, I’ll feel nervous.  I don’t particularly like needles.  But if you display that same syringe to an intravenous drug user, you’ll trigger some of the rush of actually shooting up.  The men in my poetry classes have said that they feel all tingly if they even see the word “needle” written in a poem.

For months or years, needles presaged a sudden flush of pleasure.  That linkage was enough for their brains to develop a fondness for the needles themselves.

If you wanted to develop a taste for an unpalatable food, you could do the same thing.  Like bittermelon – I enjoy bittermelons, which have a flavor that’s totally different from anything else I’ve ever eaten, but lots of people loathe them.

Still, if you used deep brain stimulation to trigger pleasure every time a person ate bittermelon, that person would soon enjoy it.

Bittermelon. Image by [cipher] in Tokyo, Japan on Wikimedia.

Or you could make someone fall in love. 

Far more effective than any witch’s potion, that.  Each time your quarry encounters the future beloved, crank up the voltage.  The beloved’s presence will soon be associated with a sense of comfort and pleasure.  And that sensation – stretched out for long enough that the pair can build a set of shared memories – is much of what love is.

Of course, it probably sounds like I’m joking.  You wouldn’t really send jolts of electricity into the core of somebody’s brain so that he’d fall in love with somebody new … right?

Fifty years passed between the discovery of pleasure-inducing deep brain stimulation and its current use as a treatment for depression … precisely because one of the pioneering researchers decided that it was reasonable to use the electrodes as a love potion.

In 1972, Charles Moan and Robert Heath published a scientific paper titled “Septal stimulation for the initiation of heterosexual behavior in a homosexual male.”  Their study subject was a 24-year-old man who had been discharged from the military for homosexuality.  Moan and Heath postulated that the right regimen of electrode stimulation – jolted while watching pornography, or while straddled by a female prostitute whom Moan and Heath hired to visit their lab – might lead this young man to desire physical intimacy with women.

Moan and Heath’s paper is surprisingly salacious:

After about 20 min of such interaction she begun [sic] to mount him, and though he was somewhat reticent he did achieve penetration.  Active intercourse followed during which she had an orgasm that he was apparently able to sense.  He became very excited at this and suggested that they turn over in order that he might assume the initiative.  In this position he often paused to delay orgasm and to increase the duration of the pleasurable experience.  Then, despite the milieu [inside a lab, romping under the appraising eyes of multiple fully-clothed scientists] and the encumbrance of the electrode wires, he successfully ejaculated.  Subsequently, he expressed how much he had enjoyed her and how he hoped that he would have sex with her again in the near future.

The science writer Lone Frank recently published The Pleasure Shock, a meticulously researched book in which she concludes that Heath was unfairly maligned because most people in the 1970s were reluctant to believe that consciousness arose from the interaction of perfectly ordinary matter inside our skulls.  Changing a person’s mood with electricity sounds creepy, especially if you think that a mind is an ethereal, inviolable thing.

But it isn’t.

The mind, that is. The mind isn’t an ethereal, inviolable thing.

Zapping new thoughts into somebody’s brain, though, is definitely still understood (by me, at least) to be creepy.

Discussing the contemporary resurgence of electrical brain modification, Frank writes that:

In 2013, economist Ernst Fehr of Zurich University experimented with transcranial direct current stimulation, which sends a weak current through the cranium and is able to influence activity in areas of the brain that lie closest to the skull. 

Fehr had sixty-three research subjects available.  They played a money game in which they each were given a sum and had to take a position on how much they wanted to give an anonymous partner.  In the first round, there were no sanctions from the partner, but in the second series of experiments, the person in question could protest and punish the subject. 

There were two opposing forces at play.  A cultural norm for sharing fairly – that is, equally – and a selfish interest in getting as much as possible for oneself.  Fehr and his people found that the tug of war could be influenced by the right lateral prefrontal cortex.  When the stimulation increased the brain activity, the subjects followed the fairness norm to a higher degree, while they were more inclined to act selfishly when the activity was diminished.

Perhaps the most thought-provoking thing was that the research subjects did not themselves feel any difference.  When they were asked about it, they said their idea of fairness had not changed, while the selfishness of their behavior had changed. 

Apparently, you can fiddle with subtle moral parameters in a person without the person who is manipulated being any the wiser.

The human brain evolved to create elaborate narratives that rationalize our own actions.  As far as our consciousness is concerned, there’s no difference between telling a just-so story about a decision we made unaided, versus explaining a “choice” that we were guided toward by external current.

Frank believes that Heath was a brilliant doctor who sincerely wanted to help patients. 

When bioethicist Carl Elliott reviewed The Pleasure Shock for the New York Review of Books, however, he pointed out that even – perhaps especially – brilliant doctors who sincerely want to help patients can stumble into rampantly unethical behavior.

The problem isn’t just that Heath pulsed electricity into the brain of a homosexual man so that he could ejaculate while fooling around with a woman.  Many of Heath’s patients – who, it’s worth acknowledging, had previously been confined to nightmarish asylums – developed infections from their electrode implantations and died.  Also, Heath knowingly promoted fraudulent research findings because he’d staked his reputation on a particular theory and was loath to admit that he’d been wrong (not that Heath has been the only professor to perpetuate falsehoods this way).

Elliott concludes that:

Heath was a physician in love with his ideas. 

Psychiatry has seen many men like this.  Heath’s contemporaries include Ewen Cameron, the CIA-funded psychiatrist behind the infamous “psychic driving” studies at McGill University, in which patients were drugged into comas and subjected to repetitive messages or sounds for long periods, and Walter Freeman, the inventor of the icepick lobotomy and its most fervent evangelist.

These men may well have started with the best of intentions.  But in medical research, good intentions can lead to the embalming table.  All it takes is a powerful researcher with a surplus of self-confidence, a supportive institution, and a ready supply of vulnerable subjects.

Heath had them all.

It’s true that using an electrode to stimulate the nucleus accumbens inside your brain can probably make you feel happier.  By way of contrast, reading essays like this one makes most people feel less happy.

Sometimes it’s good to feel bad, though.

As Elliott reminds us, a lot of vulnerable people were abused in this research.  A lot of vulnerable people are still treated with cavalier disregard, especially when folks with psychiatric issues are snared by our country’s criminal justice system.  And the torments that we dole upon non-human animals are even worse.

Consider this passage from Frans De Waal’s Mama’s Last Hug, discussing empathy:

[University of Chicago researcher Inbal Ben-Ami Bartal] placed one rat in an enclosure, where it encountered a small transparent container, a bit like a jelly jar.  Squeezed inside it was another rat, locked up, wriggling in distress. 

Not only did the free rat learn how to open a little door to liberate the other, but she was remarkably eager to do so.  Never trained on it, she did so spontaneously. 

Then Bartal challenged her motivation by giving her a choice between two containers, one with chocolate chips – a favorite food that they could easily smell – and another with a trapped companion.  The free rat often rescued her companion first, suggesting that reducing her distress counted more than delicious food.

Is it possible that these rats liberated their companions for companionship?  While one rat is locked up, the other has no chance to play, mate, or groom.  Do they just want to make contact?  While the original study failed to address this question, a different study created a situation where rats could rescue each other without any chance of further interaction.  That they still did so confirmed that the driving force is not a desire to be social. 

Bartal believes it is emotional contagion: rats become distressed when noticing the other’s distress, which spurs them into action. 

Conversely, when Bartal gave her rats an anxiety-reducing drug, turning them into happy hippies, they still knew how to open the little door to reach the chocolate chips, but in their tranquil state, they had no interest in the trapped rat.  They couldn’t care less, showing the sort of emotional blunting of people on Prozac or pain-killers. 

The rats became insensitive to the other’s agony and ceased helping. 

You could feel happier.  We know enough to be able to reach into your mind and change it.  A minuscule flow of electrons is enough to trigger bliss.

But should we do it?  Or use our unhappiness as fuel to change the world instead?

On stuttering.

During his first year of graduate school at Harvard, a friend of mine was trying to pick a research advisor.  This is a pretty big deal — barring disaster, whoever you choose will have a great deal of control over your life for the next five to eight years.

My friend found someone who seemed reasonable.  The dude was conducting research in an exciting field.  He seemed personable.  Or, well, he seemed human, which can be what passes for personable among research professors at top-tier universities.  But while my friend and the putative advisor-to-be were talking, they got onto the topic of molecular dynamics simulations.

My friend mentioned that his schoolmate’s father studies simulations of cellular membranes.  And that guy, the father, is incredibly intelligent and very friendly — when I showed up at a wedding too broke for a hotel, he let me sleep on the floor of the room he’d booked for himself and his wife.

But the putative advisor corrected my friend when he mentioned the guy’s name.  “Oh, you mean duh, duh, duh, duh, Doctor ________.”  And smiled, as though my friend was going to chuckle too.

That’s when my friend realized, okay, I don’t wanna talk to you no more.  He found a different advisor.  He never regretted his choice.

Well, no, that’s not true.  All graduate students regret their choice of advisor sometimes.  But my friend never wished he’d worked for the jerk.

Yes, some people, with a huge amount of effort and probably an equal measure of luck, are able to get over stuttering.  But most can’t.  So it’s crummy that even well-educated, ostensibly sophisticated people would feel entitled to mock somebody for a stutter.  Presumably even that jerk would’ve refrained from an equivalent comment if my friend’s schoolmate’s father was blind or confined to a wheelchair.

But stuttering, along with a few other conditions like depression and obsessive compulsive disorder, still gets treated like a moral failing.  Like a sufferer should be able to try harder and just get over it.

That attitude is especially bad as regards stuttering, because mockery and castigation seems to make the condition worse.  There are genetic factors that confer a predilection toward stuttering, but (unpublished, evil) work from Dr. Wendell Johnson showed that sufficiently vituperative abuse can cause children of any genetic background to become stutterers.

You’ve read about the “monster” study, right?  Dr. Johnson stuttered, and he had a theory that his stuttering had been exacerbated by people’s well-meaning attempts to cure him.  His parents would correct his speech, draw attention to his mistakes, exhort him to be more mindful when talking.  Dr. Johnson thought that the undue attention placed on his speech patterns made him more likely to freeze up and stutter.  And, once that cycle had begun, his brain dug itself into a rut.  He began to castigate himself for his mistakes, perpetuating the condition.

Of course, that was just a theory.  To test it, you’d want to show two things.  First, that by not paying attention to the mistakes of an incipient stutterer, you can help that person evade or cure the condition.  And, second, that you could cause well-spoken people to develop stutters by convincing them and their interlocutors that they already were stuttering, and castigating them for it.

It’s totally ethical to conduct the first experiment.  The process itself would cause no harm, and the intention is to improve someone’s life.  If you can help someone get over a stutter, you’ll smooth future social interactions.  Stave off some mockery from colleagues at Harvard.  That sort of thing.

But the second experiment?  The process is miserable for the study subjects — you’re cutting them off all the time, criticizing them, forcing them to say things over and over until their thoughts are expressed perfectly.  And, worse, if you succeed, you’ve saddled them with burdens they’ll have to deal with for the rest of their lives.  Let the mockery commence!

Dr. Johnson made one of his students conduct that second experiment on six orphaned children.  In the end, none of the children developed the syllabic repetition typical of most stutterers, but they became extremely self-conscious and reluctant to speak — symptoms that stayed with them for the rest of their lives.

Indeed, the symptoms triggered in those children are equivalent to the symptoms monitored for a stuttering model in mice.  One of the genetic factors associated with stuttering was recreated in mice, and those mice exhibited a condition somewhat analogous to human stuttering.

Dr. Dolittle did not participate in this new study, which made matters much more difficult for Barnes & colleagues.  If you don’t know what a mouse is saying, how do you know whether it’s stuttering?  They did measure variance from one vocalization to the next — in humans, repeating the initial syllable of a word lowers total syllabic variance — and saw that their mice with the stuttering gene repeated sounds more often.

Their best measurements, though, were the rate of squeaking, and the length of pauses between squeaks.  Like an oft-badgered child, the mice with the stuttering gene talked less and spent more time waiting, maybe thinking, between statements.
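That variance metric is easy to picture with a toy example.  The sketch below is my own construction, not Barnes et al.’s actual analysis: treat each syllable as a duration in milliseconds, and note how stuttered repetition of the initial syllable adds several near-identical values, dragging the overall variance down.

```python
import statistics

# Hypothetical syllable durations (ms) for one word.
fluent = [180, 240, 310]                 # "po-ta-to"
stuttered = [180, 180, 180, 240, 310]    # "po-po-po-ta-to"

# Repeated identical syllables cluster near one value,
# so the population variance of the stuttered utterance is lower.
print(statistics.pvariance(fluent))      # spread across distinct syllables
print(statistics.pvariance(stuttered))   # repeats pull the variance down
```

The numbers are invented; the point is only the direction of the effect, which is the same signal the researchers looked for in mouse squeaks.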

And it pleases me, given my pre-existing biases, to see more data showing that, if somebody stutters, it’s not that person’s fault.  Genetic predilection certainly isn’t the same thing as destiny, but it’s a nice corrective to the mocking jerks.  Sure, you can speak fine, Mister Mockingpants, but are you fighting against the current of a lysosomal targeting mutation?

(Oh, right, sorry, my mistake. Doctor Mockingpants. You jerk.)

*************

p.s. As it happens, the mutation Barnes et al. introduced into mice is involved in the pathway I studied for my thesis work.  They introduced a mutation in the Gnptab gene (trust me, you don’t want me to write out the full name that Gnptab stands for), which is supposed to produce a protein that links a targeting signal onto lysosomal enzymes.  In less formal terms, Gnptab is supposed to slap shipping labels onto machinery destined for the cell’s recycling plants.  Without Gnptab function, bottles & cans & old televisions pile up in the recycling plant.  The machinery to process them never arrives.

Which does seem a little strange to me… stuttering is a very specific phenotype, and that is such a general cellular function.  Lysosomal targeting is needed for all cells, not just neurons in speech areas of the brain.  It’s a sufficiently common function that biologists often refer to Gnptab as a “housekeeping” gene.  And proper lysosome function is sufficiently important that problems typically cause major neurodegeneration, seizures, blindness, and death, often at a very young age.  Compared to that litany of disasters, stuttering doesn’t sound so bad.

On watchful gods, trust, and how academic scientists undermined their own credibility.

Despite my disagreements with a lot of its details, I thoroughly enjoyed Ara Norenzayan’s Big Gods.  The book posits an explanation for the current global dominance of the big three Abrahamic religions: Christianity, Islam, and Judaism.

Instead of the “quirks of history & dumb luck” explanation offered in Jared Diamond’s Guns, Germs, and Steel, Norenzayan suggests that the Abrahamic religions have so many adherents today because beneficial economic behaviors were made possible by belief in those religions.

Here’s a rough summary of the argument: Economies function best in a culture of trust.  People are more trustworthy when they’re being watched.  If people think they’re being watched, that’s just as good.  Adherents to the Abrahamic faiths think they are always being watched by God.  And, because anybody could claim to believe in an omnipresent, ever-watchful god, it was worthwhile for believers to practice costly rituals (church attendance, dietary restrictions, sexual moderation, risk of murder by those who hate their faith) in order to signal that they were genuine, trustworthy, God-fearing individuals.

A clever argument.  To me, it calls to mind the trustworthiness passage of Daniel Dennett’s Freedom Evolves:

When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so.  Wearing a cap that says “I’m a cooperator” is not going to take you far in a world of other rational agents on the lookout for ploys.  According to [Robert] Frank, over evolutionary time we “learned” how to harness our emotions to the task of keeping us from being too rational, and–just as important–earning us a reputation for not being too rational.  It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to “offers we can’t refuse,” as the Godfather says.  Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.

I think that’s a beautiful passage — the logic goes down so easily that I hardly notice the inaccuracies beneath the surface.  It makes a lot of sense unless you consider that many other species, including relatively non-cooperative species, have emotional lives very similar to our own, and will, like us, act in irrational ways to stay true to those emotions (I still love this clip of an aggrieved monkey rejecting its cucumber slice).

Maybe that doesn’t seem important to Dennett, who shrugs off decades of research indicating the cognitive similarities between humans and other animals when he asserts that only we humans have meaningful free will, but that kind of detail matters to me.

You know, accuracy or truth or whatever.

Similarly, I think Norenzayan’s argument is elegant, even though I don’t agree.  One problem is that he supports his claims with results from social psychology experiments, many of which are not credible.  But that’s not entirely his fault.  Arguments do sound more convincing when there’s experimental data to back them up, and surely there are a few tolerably accurate social psychology results tucked away in the scientific literature. The problem is that the basic methodology of modern academic science produces a lot of inaccurate garbage (References? Here & here & here & here... I could go on, but I already have a half-written post on the reasons why the scientific method is not a good persuasive tool, so I’ll elaborate on this idea later).

For instance, many of the experiments Norenzayan cites are based on “priming.”  Study subjects are unconsciously inoculated with an idea: will they behave differently?

Naturally, Norenzayan includes a flattering description of the first priming experiment, the Bargh et al. study (“Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action”) in which subjects walked more slowly down a hallway after being unconsciously exposed to words about old age.  But this study is terrible!  It’s a classic in the field, sure, and its “success” has resulted in many other laboratories copying the technique, but it almost certainly isn’t meaningful.

Look at the actual data from the Bargh paper: they’ve drawn a bar graph that suggests a big effect, but that’s just because they picked an arbitrary starting point for their axis.  There are no error bars.  The work couldn’t be replicated (unless a research assistant was “primed” to know what the data “should” look like in advance).

The author of the original priming study also published a few apoplectic screeds denouncing the researchers who attempted to replicate his work — here’s a quote from Ed Yong’s analysis:

Bargh also directs personal attacks at the authors of the paper (“incompetent or ill-informed”), at PLoS (“does not receive the usual high scientific journal standards of peer-review scrutiny”), and at me (“superficial online science journalism”).  The entire post is entitled “Nothing in their heads”.

Personally, I am extremely skeptical of any work based on the “priming” methodology.  You might expect the methodology to be sound because it’s been used in so many subsequent studies.  I don’t think so.  Scientific publishing is sufficiently broken that unsound methodologies could be used to prove all sorts of untrue things, including precognition.

If you’re interested in the failings of modern academic science and don’t want to wait for my full post on the topic, you should check out Simmons et al.’s “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.”  This paper demonstrates that listening to the Beatles will make you chronologically younger.

Wait.  No.  That can’t be right.

The Simmons et al. paper actually demonstrates why so many contemporary scientific results are false, a nice experimental supplement to the theoretical Ioannidis model (“Why Most Published Research Findings Are False”).  The paper pre-emptively rebuts empty rationalizations such as those given in Lisa Feldman Barrett’s New York Times editorial (“Psychology Is Not in Crisis,” in which she incorrectly argues that it’s no big deal that most findings cannot be replicated).

Academia rewards researchers who can successfully hunt for publishable results.  But the optimal strategy for obtaining something publishable (collect lots of data, analyze it repeatedly using different mathematical formulas, discard all the data that look “wrong”) is very different from the optimal strategy for uncovering truth.

Here’s one way to understand why much of modern academic publishing isn’t really science: in general, results are publishable only if they are positive (i.e. a treatment causes a change, as opposed to a treatment having no effect) and significant (i.e. you would see the result only 1 out of 20 times if the claim were not actually true).  But that means that if twenty labs decide to test the same false idea, 19 of them will get negative results and be unable to publish their findings, whereas 1 of them will see a false positive and publish.  Newspapers will announce that the finding is real, and there will be a published record of only the incorrect lab’s result.

Because academic training is set up like a pyramid scheme, we have a huge glut of researchers.  For any scientific question, there are probably enough laboratories studying it to nearly guarantee that significance testing will provide one of them an untrue publishable result.
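The arithmetic behind that near-guarantee is worth making concrete.  Here is a minimal simulation, my own illustration rather than anything from the papers cited above: when an effect is truly absent, each lab’s p-value is uniform on [0, 1], so among twenty independent labs the chance that at least one clears p < 0.05 is 1 - 0.95**20, roughly 64%.

```python
import random

random.seed(0)

ALPHA = 0.05       # conventional significance threshold
N_LABS = 20        # independent labs testing the same false hypothesis
N_WORLDS = 10_000  # simulated "literatures"

worlds_with_a_publishable_positive = 0
for _ in range(N_WORLDS):
    # Under a true null, each lab's p-value is a uniform random draw.
    p_values = [random.random() for _ in range(N_LABS)]
    if any(p < ALPHA for p in p_values):
        worlds_with_a_publishable_positive += 1

# Analytically: 1 - 0.95**20, about 0.64.  Roughly two literatures in
# three end up with at least one publishable "discovery" of a false effect.
print(worlds_with_a_publishable_positive / N_WORLDS)
```

And since only that one positive result gets published, the printed record shows a clean “discovery” with the nineteen disconfirmations sitting in file drawers.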

And that’s even if everyone involved were 100% ethical.  Even then, a huge quantity of published research would be incorrect.  In our world, where many researchers are not ethical, the situation is even worse.

Norenzayan even documents this sort of unscientific over-analysis of data in his book.  One example appears in his chapter on anti-atheist prejudice:

In addition to assessing demographic information and individual religious beliefs, we asked [American] participants to rate the degree to which they viewed both atheists and gays with either distrust or with disgust.

. . .

It is possible that, for whatever reason, people may have felt similarly toward both atheists and gays, but felt more comfortable openly voicing distrust of atheists than of gays.  In addition, our sample consisted of American adults, overall a quite religious group.  To address these concerns, we performed additional studies in a population with considerable variability in religious involvement, but overall far less religious on the whole than most Americans.  We studied the attitudes of university students in Vancouver, Canada.  To circumvent any possible artifacts that result from overtly asking people about their prejudices, we designed studies that included more covert ways of measuring distrust.

When I see an explanation like that, it suggests that the researchers first conducted their study using the same methodology for both populations, obtained data that did not agree with their hypothesis, then collected more data for only one group in order to build a consistent, publishable story (if you’re interested, you can see their final paper here).

Because researchers can (and do!) collect data until they see what they want — until they have results that agree with a pet hypothesis, perhaps one they’ve built their career around — it’s not hard to obtain publishable data that appear to support any claim.  Doesn’t matter whether the claim is true or not.  And that, in essence, is why the practices that masquerade as the scientific method in the hands of modern researchers are not convincing persuasive tools.
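That “collect data until you see what you want” strategy can be quantified.  The sketch below is my own toy, in the spirit of the Simmons et al. simulations: it tests pure noise for a nonzero mean, but peeks at the data at several sample sizes and stops as soon as the test comes out “significant.”  The nominal false-positive rate is 5%; optional stopping inflates it well beyond that.

```python
import random
import statistics

random.seed(1)

def looks_significant(xs):
    # Crude two-sided z-test of "mean differs from zero" (illustrative only).
    se = statistics.stdev(xs) / len(xs) ** 0.5
    return abs(statistics.mean(xs)) / se > 1.96  # roughly p < 0.05

def optional_stopping_trial(checkpoints=(20, 30, 40, 50)):
    # The data are pure noise: there is no real effect to find.
    xs = []
    for n in checkpoints:
        while len(xs) < n:
            xs.append(random.gauss(0, 1))
        if looks_significant(xs):
            return True   # "significant" -- stop collecting and publish
    return False          # give up (and file the study in a drawer)

rate = sum(optional_stopping_trial() for _ in range(5_000)) / 5_000
print(rate)  # comfortably above the nominal 5% false-positive rate
```

Four peeks at the data, each individually held to p < 0.05, and the overall false-positive rate more than doubles — without anyone consciously cheating.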

I think it’s unfair to denounce people for not believing scientific results about climate change, for instance.  Because modern scientific results simply are not believable.

Which is a shame.  The scientific method, used correctly, is the best way to understand the world.  And many scientists are very bright, ethical people.  And we should act upon certain research findings.

For instance, even if the reality underlying most climate change studies is a little less dire than some papers would lead you to believe, our world will be better off — more ecological diversity, less asthma, less terrorism, and, yes, less climate destabilization — if we pretend the results are real.

So it’s tragic, in my opinion, that a toxic publishing culture has undermined the authority of academic scientists.

And that’s one downside to Norenzayan’s book.  He supports his argument with a lot of data that I’m disinclined to believe.

The other problem is that he barely addresses historical information that doesn’t agree with his hypothesis.  For instance, several cultures developed long-range trust-based commerce without believing in omnipresent, watchful, morality-enforcing gods, including ancient Kanesh, China, the pre-Christian Greco-Roman empires, and some regions of Polynesia.

There’s also historical data demonstrating that trust is separable from religion (and not just in contemporary secular societies, where Norenzayan would argue that a god-like role is played by the police… which didn’t sound so scary the way he wrote it).  The most heart-wrenching example of this, in my opinion, is presented in Nunn & Wantchekon’s paper, “The Slave Trade and the Origins of Mistrust in Africa.” They suggest a causal relationship between kidnapping & treachery during the transatlantic slave trade and contemporary mistrust in the plundered regions.  Which would mean that slavery in the United States created a drag on many African nations’ economies that persists to this day.

That legacy of mistrust persists despite the once-plundered nations (untrusting, with high economic transaction costs to show for it) & their neighbors (trusting, with greater prosperity) having similar proportions of believers in the Abrahamic faiths.

Is it so wrong to wish Norenzayan had addressed some of these issues?  I’ll admit that complexity might’ve sullied his clever logic.  But, all apologies to Keats, sometimes it’s necessary to introduce some inelegance in the pursuit of truth.

Still, the book was pleasurable to read.  Definitely gave me a lot to think about, and the writing is far more lucid and accessible than I’d expected.  Check out this passage on the evolutionary flux — replete with dead ends — that the world’s religions have gone through:

This cultural winnowing of religions over time is evident throughout history and is occurring every day.  It is easy to miss this dynamic process, because the enduring religious movements are all that we often see in the present.  However, this would be an error.  It is called survivor bias.  When groups, entities, or persons undergo a process of competition and selective retention, we see abundant cases of those that “survived” the competition process; the cases that did not survive and flourish are buried in the dark recesses of the past, and are overlooked.  To understand how religions propagate, we of course want to put the successful religions under the microscope, but we do not want to forget the unsuccessful ones that did not make it — the reasons for their failures can be equally instructive.

This idea, that the histories we know preserve only a lucky few voices & occurrences, is also beautifully alluded to in Jürgen Osterhammel’s The Transformation of the World (trans. Patrick Camiller).  The first clause here just slays me:

The teeth of time gnaw selectively: the industrial architecture of the nineteenth century has worn away more quickly than many monuments from the Middle Ages.  Scarcely anywhere is it still possible to gain a sensory impression of what the Industrial “Revolution” meant–of the sudden appearance of a huge factory in a narrow valley, or of tall smokestacks in a world where nothing had risen higher than the church tower.

Indeed, Norenzayan is currently looking for a way to numerically analyze oft-overlooked facets of history.  So, who knows?  Perhaps, given more data, and a more thorough consideration of data that don’t slot nicely into his favored hypothesis, he could convince me yet.