On Darwin and free love.

For the moment, let’s set aside the question of why I was reading a review titled “Plants Neither Possess nor Require Consciousness.”  Instead, I’d like to share a passage from the end of the article:

Plant neurobiologists are hardly the first biologists to ascribe consciousness, feelings, and intentionality to plants.

Erasmus Darwin, [Charles] Darwin’s grandfather and a believer in free love, was so taken with the Linnaean sexual system of classification that he wrote an epic poem, The Loves of Plants, in which he personified stamens and pistils as ‘swains’ and ‘virgins’ cavorting on their flower beds in various polygamous and polyandrous relationships.

Maybe you were startled, just now, to learn about the existence of risqué plant poetry.  Do some people log onto Literotica to read about daffodils or ferns?

But what caught my attention was Erasmus Darwin’s designation as a believer in free love. 

In a flash, an entire essay composed itself in my mind.  Charles Darwin’s grandfather was a polyamorist!  Suddenly, the origin of On the Origin of Species made so much more sense!  After all, exposure to polyamory could help someone notice evolution by natural selection.  An essential component of polyamory is freedom of choice – during the 1700s, when nobody had access to effective birth control, people might wind up having children with any of their partners, not just the one with whom they were bound in a legally-recognized and church-sanctioned marriage.

Evolution occurs because some individuals produce more offspring than others, and then their offspring produce more offspring, and so on.  Each lineage is constantly tested by nature – those that are less fit, or less fecund, will dwindle to a smaller and smaller portion of the total population.
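That dynamic of differential reproduction is simple enough to simulate.  Here’s a minimal sketch – a toy model of my own, not anything from either Darwin – in which two lineages compete, one leaving 10% more offspring per generation, and each new generation is sampled in proportion to fitness:

```python
import random

random.seed(0)  # make the toy run reproducible

def generation(population, fitness):
    """Sample the next generation: each lineage contributes offspring
    in proportion to (its fitness) x (its current head count)."""
    return random.choices(
        list(fitness),
        weights=[fitness[k] * population[k] for k in fitness],
        k=sum(population.values()),  # total population size stays fixed
    )

# Two lineages: B leaves 10% more offspring per generation than A.
fitness = {"A": 1.0, "B": 1.1}
population = {"A": 500, "B": 500}

for _ in range(200):
    offspring = generation(population, fitness)
    population = {k: offspring.count(k) for k in fitness}

print(population)  # after 200 generations, B has swelled to dominate
```

Even a modest, consistent edge in offspring count is enough: run the loop and lineage A dwindles toward extinction, exactly the “smaller and smaller portion of the total population” described above.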

Similarly, in relationships where choice is not confined by religious proscription, the partners are under constant selective pressure if they hope to breed.  When people have options, they must stay in each other’s good graces.  They must practice constant kindness, rather than treating physical affection as their just deserts.

I felt proud of this analogy.  To my mind, Erasmus Darwin’s belief in free love had striking parallels with his grandson’s theory.

And it’s such a pleasure when essays basically write themselves.  All I’d need to do was skim a few biographies.  Maybe collect some spicy quotes from Erasmus himself.  And I’d try to think of a clever way to explain evolution to a lay audience.  So that my readers could understand why, once I’d learned this juicy tidbit about Erasmus, his connection to Charles Darwin’s theory seemed, in retrospect, so obvious.


My essay failed.

I wish it hadn’t, obviously.  It was going to be so fun to write!  I was ready to compose some sultry plant poetry of my own.

And I feel happy every time there’s another chance to explain evolution.  Because I live in a part of the United States where so many people deny basic findings from science, I talk about this stuff in casual conversations often.  We regularly discuss evolutionary biology during my poetry classes in jail.

But my essay wasn’t going to work out.  Because the underlying claim – Erasmus Darwin believed in free love! – simply isn’t true.


Maybe you have lofty ideals about the practice of science.  On the children’s record Science Is for Me, Emmy Brockman sings:

I am a scientist

I explore high and low

I question what I know

Emmy is great. Find her at emmybrockman.com.

That’s the goal.  A good scientist considers all the possibilities.  It’s hard work, making sure that confirmation bias doesn’t cause you to overlook alternative explanations.

But scientists are human.  Just like anybody else, we sometimes repeat things we’ve heard without considering whether any evidence ever justified it.

In The Human Advantage, neuroscientist Suzana Herculano-Houzel describes how baffled she felt when she began reading scientific papers about the composition of our brains. 

Although the literature held many studies on the volume and surface area of the brain of different species, and various papers on the densities of neurons in the cerebral cortex, estimates of numbers of neurons were scant.  In particular, I could find no original source to the much-repeated “100 billion neurons in the human brain.”

I later ran into Eric Kandel himself, whose textbook Principles of Neural Science, a veritable bible in the field, proffered that number, along with the complement “and 10-50 times more glial cells.”  When I asked Eric where he got those numbers, he blamed it on his coauthor Tom Jessel, who had been responsible for the chapter in which they appeared, but I was never able to ask Jessel himself.

It was 2004, and no one really knew how many neurons could be found on average in the human brain.

Unsatisfied with the oft-repeated numbers, Herculano-Houzel liquified whole brains in order to actually count the cells.  As it happens, human brains have about 86 billion neurons and an equal number of glial cells.

Or, consider the psychology experiments on behavioral priming.  When researchers “prime” a subject, they inoculate that person’s mind with a concept.

The basic idea here is relatively uncontroversial.  It’s the principle behind advertising and paid product placement – our brains remember exposure while forgetting context.  That’s why political advertisements try to minimize the use of opponents’ names.  When people hear or see a candidate’s name often, they’re more likely to vote for that candidate.

Facebook has also demonstrated again and again that minor tweaks to the inputs that your brain receives can alter your behavior.  One shade of blue makes you more likely to click a button; there’s a size threshold below which people are unlikely to notice advertisements; the emotional tenor of information you’re exposed to will alter your mood.

When research psychologists use priming, though, they’re interested in more tenuous mental links.  Study subjects might be primed with ideas about economic scarcity, then assessed to see how racist they seem.

The first study of this sort tested whether subconsciously thinking about the elderly could make you behave more like an elderly person.  The researchers required thirty undergraduate psychology students to look at lists of five words and then use four of these words to construct a simple sentence.  For fifteen of these students, the extra word was (loosely) associated with elderly people, like “Florida,” “worried,” “rigid,” or “gullible.”  For the other fifteen, the words were deemed unrelated to the elderly, like “thirsty,” “clean,” or “private.”

(Is a stereotypical elderly person more gullible than private? After reading dozens of Mr. Putter and Tabby books — in which the elderly characters live alone — I’d assume that “private” was the priming word if I had to choose between these two.)

After completing this quiz, students were directed toward an elevator.  The students were timed while walking down the hallway, and the study’s authors claimed that students who saw the elderly-associated words walked more slowly.

There’s even a graph!

This conclusion is almost certainly false.  The graph is terrible – there are no error bars, and the y axis spans a tiny range in order to make the differences look bigger than they are.  Even aside from the visual misrepresentation, the data aren’t real.  I believe that a researcher probably did use a stopwatch to time those thirty students and obtain those numbers.  Researchers probably also timed many more students whose data weren’t included because they didn’t agree with this result.  Selective publication allows you to manipulate data sets in ways that many scientists foolishly believe to be ethical.
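A bit of arithmetic shows how much a truncated axis can exaggerate a result.  The walking times below are made-up numbers chosen only to illustrate the trick, not the paper’s data:

```python
# Two hypothetical mean walking times, in seconds:
control, primed = 7.3, 8.3

# Plotted honestly, with bars starting at zero, the visual difference
# matches the real one -- the primed bar is about 14% taller:
honest_ratio = primed / control

# Start the y axis at 7.0 instead, and each bar's visible height is
# its value minus the axis floor -- now one bar looks over four times
# taller than the other:
floor = 7.0
truncated_ratio = (primed - floor) / (control - floor)

print(round(honest_ratio, 2), round(truncated_ratio, 2))
```

Same data, but the truncated axis turns a 14% difference into what looks like a 330% one.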

If you were to conduct this study again, it’s very unlikely that you’d see this result.

Some scientists are unconcerned that the original result might not be true.   After all, who really cares whether subconscious exposure to words vaguely associated with old people can make undergraduates walk slowly?

UCLA psychology professor Matthew Lieberman wrote,

What we care about is whether priming-induced automatic behavior in general is a real phenomenon.  Does priming a concept verbally cause us to act as if we embody the concept within ourselves?  The answer is a resounding yes.  This was a shocking finding when first discovered in 1996.

Lieberman bases this conclusion on the fact that “Hundreds of studies followed showing that people primed with a stereotype embodied it themselves.”  Continued success with the technique is assumed to validate the initial finding.

Unfortunately, many if not most of those subsequent studies are flawed in the same way as the original.  Publication biases and lax journal standards allow you to design studies that prove that certain music unwinds time (a study whose authors were proving a point) or that future studying will improve your performance on tests today (a study whose author was apparently sincere).

Twenty years of mistaken belief has given the walking speed study – and its general methodology – an undeserved veneer of truth.


Erasmus Darwin didn’t believe in free love.  But he did have some “radical” political beliefs that people were unhappy about.  And so, to undermine his reputation, his enemies claimed that he believed in free love.

Other people repeated this slander so often that Erasmus Darwin is now blithely described as a polyamorist in scientific review articles.


So, why did conservative writers feel the need to slander Erasmus Darwin?  What exactly were his “radical” beliefs?

Erasmus Darwin thought that the most abject mistreatment of black people was wrong.  He seems to have found lesser mistreatment acceptable – nowhere in his writings did he advocate for equality – but he was opposed to the most ruthless forms of torture.

Somewhat.  His opposition didn’t run so deep that he’d deny himself the sugar that was procured through black people’s forced labor.

And, when Erasmus Darwin sired children out of wedlock – which many upper-class British men did – he scandalously provided for his children.

In British society, plenty of people had affairs.  Not because they believed in free love, but because they viewed marriage as a fundamentally economic transaction and couldn’t get a divorce.  But good British men were supposed to keep up appearances.  If a servant’s child happened to look a great deal like you, you were supposed to feign ignorance. 

Even worse, the illegitimate children that Erasmus Darwin provided for were female.  Not only did Darwin allow them to become educated – which was already pretty bad, because education made women less malleable spouses – but he also helped them to establish a boarding school for girls.  The contagion of educated women would spread even further!

This was all too much for Britain’s social conservatives.  After all, look at what happened in France.  The French were unduly tolerant of liberal beliefs, and then, all of a sudden, there was murderous revolution!

And so Erasmus Darwin had to be stopped.  Not that Darwin had done terribly much.  He was nationally known because he’d written some (mediocre) poetry.  The poetry was described as pornographic.  It isn’t.  Certain passages anthropomorphize flowers in which there are unequal numbers of pistils and stamens.  It’s not very titillating, unless you get all hot and bothered by the thought of forced rhymes, clunky couplets, and grandiloquent diction.  For hundreds of pages.


While reading about Erasmus Darwin, I learned that some people also believe that he was the actual originator of his grandson’s evolutionary theories.  In a stray sentence, Erasmus Darwin did write that “The final cause of this contest between males seems to be, that the strongest and most active animal should propagate the species which should thus be improved.”  This does sound rather like evolution by natural selection.  But not quite – that word “improved” hints at his actual beliefs.

Erasmus Darwin did believe all life had originated only once and that the beautiful variety of creatures extant today developed over time.  But he thought that life changed from simple to complex out of a teleological impulse.  In his conception, creatures were not becoming better suited to their environment (which is natural selection), but objectively better (which isn’t).

I’m not arguing that Charles Darwin had to be some kind of super genius to write On the Origin of Species.  But when Charles Darwin described evolution, he included an actual mechanism to rationalize why creatures exist in their current forms.  Things that are best able to persist and make copies of themselves eventually become more abundant.

That’s it.  Kind of trivial, but there’s a concrete theory backed up by observation.

Erasmus Darwin’s belief that life continually changed for the better was not unique, nor did it have much explanatory power. 

In the biography Erasmus Darwin, Patricia Fara writes that,

By the end of the eighteenth century, the notion of change was no longer in itself especially scandalous.  For several decades, the word ‘evolution’ had been in use for living beings, and there were several strands of evidence arguing against a literal interpretation of the Bible.  Giant fossils – such as mammoths and giant elks – suggested that the world had once been inhabited by distant relatives, now extinct, of familiar creatures. 

Animal breeders reinforced particular traits to induce changes carried down through the generations – stalwart bulldogs, athletic greyhounds, ladies’ lapdogs.  Geological data was also accumulating: seashells on mountain peaks, earthquakes, strata lacking fossil remains – and the most sensible resolution for such puzzles was to stretch out the age of the Earth and assume that it is constantly altering.

Charles Darwin thought deeply about why populations of animals changed in the particular way that they did.  Erasmus Darwin did not.  He declaimed “Everything from shells!” and resumed writing terrible poetry.  Like:

IMMORTAL LOVE! who ere the morn of Time,

On wings outstretch’d, o’er Chaos hung sublime;

Warm’d into life the bursting egg of Night,

And gave young Nature to admiring Light!

*

Erasmus Darwin didn’t develop the theory of evolution.  You could call him an abolitionist, maybe, but he was a pretty half-hearted one, if that.  By the standards of his time, he was a feminist.  By our standards, he was not.

He seems like a nice enough fellow, though.  As a doctor, he treated his patients well.  And he constantly celebrated the achievements of his friends.

Patricia Fara writes that,

After several years of immersion in [Erasmus] Darwin’s writing, I still have a low opinion of his poetic skills.  On the other hand, I have come to admire his passionate commitment to making the world a better place.


And, who knows?  If Erasmus Darwin were alive today, maybe he would be a polyamorist.  Who’s to say what secret desires lay hidden in a long-dead person’s soul?

But did Darwin, during his own lifetime, advocate for free love?  Nope.  He did not.  No matter what his political opponents – or our own era’s oblivious scientists – would have you believe.

Header image from the Melbourne Museum. Taken by Ruth Ellison on Flickr.

On CRISPR and the future of humanity.


I think most laypeople understand that academic scientists, in order to keep their jobs, have to publish new findings.  I assume most people also intuitively understand that not all venues for publication are equal.  Not to malign my hometown newspaper, but it’s less impressive to write an editorial for Bloomington’s Herald Times than the New York Times.

In the research world, journals are ranked by “impact factor.”  At the top of the heap are journals like Cell, Nature, and Science; these have “impact factors” in the 30s.  The Journal of Cell Biology, where I published my thesis work, has an impact factor around 10.

And the Journal of Assisted Reproduction & Genetics?  Its impact factor is slightly below 2.  My local university’s medical library doesn’t even subscribe.

So I was puzzled: why did the research paper with one of the flashiest single-sentence summaries land in J Assist Reprod Genet?

 

Research journals: tiny nudges to the frontier of human knowledge, & a whole lotta people who got to keep their jobs.

Here’s the summary, in case you missed it: a new genome editing technique was used to insert an HIV-resistance gene into human IVF embryos.

To my mind, that’s a pretty big deal.  It’s not that genetically-modified organisms are anything new.  The big difference is that the technique this group used, CRISPR, makes the whole process incredibly fast, precise, and cheap.  The difference is that sculpting the genome of a human embryo will be easy soon.

A charming schematic of CRISPR from Wikipedia. To use CRISPR for a new gene modification, only the short blue / orange targeting strand in the schematic above needs to be synthesized. Eazy-peezy, right?

At the moment, nobody understands the human genome well enough to propose the sort of editing that shows up routinely in science fiction movies — probably the best way to convince you quickly, without getting into too much detail, is to slap up the title of a recent paper: “Most reported genetic associations with general intelligence are probably false.”  We know that many aspects of human physiology and personality are partially controlled by genetics, but we haven’t yet decoded which genes in which combination give any particular effect.

Not all differences are detriments.  Clever image by sircle on Deviantart.

I don’t think we even understand fully the trade-offs inherent in human personality.  We’ve recently begun to understand that many traits designated “mental illnesses” exist on a spectrum and that the challenges are inextricably linked to good qualities — creativity and schizophrenia, puzzle solving and autism, awareness and ADHD.  It’s unlikely that any recipe for a “perfect” human brain exists.

Still, there are traits that parents prefer.  Male height.  Facial symmetry.  Disease resistance.  We’ll soon know which genes modulate these.

Which was why, I assume, Xiangjin Kang et al. wrote their paper, “Introducing precise genetic modifications into human embryos by CRISPR/Cas-mediated genome editing.”  They may have felt a moral imperative to draw attention to these issues.

As far as I can tell, this is also the explanation for why their super-flashy experiment landed in a low impact factor journal.  It’s not a typical research paper.  They wrote an opinion piece about scientific ethics with a somewhat-unsuccessful experiment grafted on in order to get the thing published.

I don’t mean that as criticism.  I think they’ve done the right thing.  If anything, the problem is with scientific publishing; I assume their paper was rejected by a higher impact factor journal.  This paper, with its focus on ethics, is not what fancy journals typically publish.

For instance, the reason why their experiment was somewhat unsuccessful?  Kang et al. were using CRISPR to introduce HIV resistance into a human embryo.  But, because they think that using CRISPR on human embryos is unethical, they specifically chose polyploid embryos — these are non-viable cells produced when two sperm fuse with a single egg.  They have too much DNA and can’t possibly become people.

Because CRISPR uses a DNA-reading guide strand to direct a DNA-modifying enzyme to a particular location, and because the experiment would be “successful” only if all copies of a gene were modified, using a polyploid embryo with more copies of each gene increases the chance of “failure.”  In basketball, making three free throws in a row is obviously more difficult than making two in a row.  That’s what they were trying to do.
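The free-throw arithmetic is easy to make concrete.  Under the simplifying assumption that each allele is edited independently with the same per-copy success rate (and the rate below is hypothetical, chosen only for illustration), the chance of modifying every copy falls geometrically with ploidy:

```python
def all_copies_edited(p, n_copies):
    """Probability that every allele in a cell gets edited, assuming
    each copy is edited independently with per-copy success rate p."""
    return p ** n_copies

p = 0.5  # hypothetical per-allele editing efficiency
print(all_copies_edited(p, 2))  # diploid embryo:  0.25
print(all_copies_edited(p, 3))  # triploid embryo: 0.125
```

So by choosing triploid embryos, Kang et al. handicapped themselves: whatever the per-allele efficiency, an extra copy of every gene multiplies in another factor of p.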

Which is why, even though the typical way to read a research paper is to look at the pictures, then read the captions, then maybe read the results section — to wit, ignoring the bulk of the text — the most important part of Kang et al.’s paper is the discussion section.  From their paper:

Because human in vitro fertilization methods are well established and site-specific nuclease technologies are readily available, it is foreseeable that a genetically modified human could be generated.  We believe that any attempt to generate genetically modified humans through the modification of early embryos needs to be strictly prohibited until we can resolve both ethical and scientific issues.

That’s a sentiment a lot of people probably agree with.  But I think it carries more weight in a paper that demonstrates just how easy this process is.

And, sure, they did not sequence the full genomes of their modified embryos.  One risk with CRISPR genome editing is that you’ll have “off target effects” — you might change more of the genome than you were intending.  But there are plenty of very smart people working to make the technology more precise.  Within five years, I’d guess, you’ll be able to change single target genes reliably.

Gattaca chillingly illustrates the dystopia of unregulated genetic manipulation, but even that film understates what we’ll soon be capable of.  The premise of Gattaca is that, by sequencing IVF embryos, parents can choose what sort of child they want.  From hundreds of options, parents pick one.

Scary, sure.  But not this scary.  CRISPR could let parents sculpt the child they want.

Not that you’d want this, but it wouldn’t be that hard to make your kid glow in the dark.  Maybe you’d want your progeny to be eight-feet tall and brilliant, too.  You could do it.  But, should you?

On watchful gods, trust, and how academic scientists undermined their own credibility.


Despite my disagreements with a lot of its details, I thoroughly enjoyed Ara Norenzayan’s Big Gods.  The book posits an explanation for the current global dominance of the big three Abrahamic religions: Christianity, Islam, and Judaism.

Instead of the “quirks of history & dumb luck” explanation offered in Jared Diamond’s Guns, Germs, and Steel, Norenzayan suggests that the Abrahamic religions have so many adherents today because beneficial economic behaviors were made possible by belief in those religions.

Here’s a rough summary of the argument: Economies function best in a culture of trust.  People are more trustworthy when they’re being watched.  If people think they’re being watched, that’s just as good.  Adherents to the Abrahamic faiths think they are always being watched by God.  And, because anybody could claim to believe in an omnipresent, ever-watchful god, it was worthwhile for believers to practice costly rituals (church attendance, dietary restrictions, sexual moderation, risk of murder by those who hate their faith) in order to signal that they were genuine, trustworthy, God-fearing individuals.

A clever argument.  To me, it calls to mind the trustworthiness passage of Daniel Dennett’s Freedom Evolves:

When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so.  Wearing a cap that says “I’m a cooperator” is not going to take you far in a world of other rational agents on the lookout for ploys.  According to [Robert] Frank, over evolutionary time we “learned” how to harness our emotions to the task of keeping us from being too rational, and–just as important–earning us a reputation for not being too rational.  It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to “offers we can’t refuse,” as the Godfather says.  Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.

I think that’s a beautiful passage — the logic goes down so easily that I hardly notice the inaccuracies beneath the surface.  It makes a lot of sense unless you consider that many other species, including relatively non-cooperative species, have emotional lives very similar to our own, and will, like us, act in irrational ways to stay true to those emotions (I still love this clip of an aggrieved monkey rejecting its cucumber slice).

Maybe that doesn’t seem important to Dennett, who shrugs off decades of research indicating the cognitive similarities between humans and other animals when he asserts that only we humans have meaningful free will, but that kind of detail matters to me.

You know, accuracy or truth or whatever.

Similarly, I think Norenzayan’s argument is elegant, even though I don’t agree.  One problem is that he supports his claims with results from social psychology experiments, many of which are not credible.  But that’s not entirely his fault.  Arguments do sound more convincing when there’s experimental data to back them up, and surely there are a few tolerably accurate social psychology results tucked away in the scientific literature. The problem is that the basic methodology of modern academic science produces a lot of inaccurate garbage (References? Here & here & here & here... I could go on, but I already have a half-written post on the reasons why the scientific method is not a good persuasive tool, so I’ll elaborate on this idea later).

For instance, many of the experiments Norenzayan cites are based on “priming.”  Study subjects are unconsciously inoculated with an idea: will they behave differently?

Naturally, Norenzayan includes a flattering description of the first priming experiment, the Bargh et al. study (“Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action”) in which subjects walked more slowly down a hallway after being unconsciously exposed to words about old age.  But this study is terrible!  It’s a classic in the field, sure, and its “success” has resulted in many other laboratories copying the technique, but it almost certainly isn’t meaningful.

Look at the actual data from the Bargh paper: they’ve drawn a bar graph that suggests a big effect, but that’s just because they picked an arbitrary starting point for their axis.  There are no error bars.  The work couldn’t be replicated (unless a research assistant was “primed” to know what the data “should” look like in advance).


The author of the original priming study also published a few apoplectic screeds denouncing the researchers who attempted to replicate his work — here’s a quote from Ed Yong’s analysis:

Bargh also directs personal attacks at the authors of the paper (“incompetent or ill-informed”), at PLoS (“does not receive the usual high scientific journal standards of peer-review scrutiny”), and at me (“superficial online science journalism”).  The entire post is entitled “Nothing in their heads”.

Personally, I am extremely skeptical of any work based on the “priming” methodology.  You might expect the methodology to be sound because it’s been used in so many subsequent studies.  I don’t think so.  Scientific publishing is sufficiently broken that unsound methodologies could be used to prove all sorts of untrue things, including precognition.

If you’re interested in the failings of modern academic science and don’t want to wait for my full post on the topic, you should check out Simmons et al.’s “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.”  This paper demonstrates that listening to the Beatles will make you chronologically younger.

Wait.  No.  That can’t be right.


The Simmons et al. paper actually demonstrates why so many contemporary scientific results are false, a nice experimental supplement to the theoretical Ioannidis model (“Why Most Published Research Findings Are False”).  The paper pre-emptively rebuts empty rationalizations such as those given in Lisa Feldman Barrett’s New York Times editorial (“Psychology Is Not in Crisis,” in which she incorrectly argues that it’s no big deal that most findings cannot be replicated).

Academia rewards researchers who can successfully hunt for publishable results.  But the optimal strategy for obtaining something publishable (collect lots of data, analyze it repeatedly using different mathematical formulas, discard all the data that look “wrong”) is very different from the optimal strategy for uncovering truth.


Here’s one way to understand why much of modern academic publishing isn’t really science: in general, results are publishable only if they are positive (i.e. a treatment causes a change, as opposed to a treatment having no effect) and significant (i.e. you would see the result only 1 out of 20 times if the claim were not actually true).  But that means that if twenty labs decide to test the same false idea, 19 of them will get negative results and be unable to publish their findings, whereas 1 of them will see a false positive and publish.  Newspapers will announce that the finding is real, and there will be a published record of only the incorrect lab’s result.
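You can check that arithmetic with a quick simulation.  Under the null hypothesis a p-value is uniformly distributed on [0, 1], so each lab reaches p < 0.05 with probability 0.05; this toy model just counts how often at least one of twenty labs gets a publishable false positive:

```python
import random

random.seed(0)  # make the simulation reproducible

def chance_of_a_published_finding(n_labs=20, alpha=0.05, n_trials=10_000):
    """Simulate many rounds of n_labs testing the same false idea.
    Under the null, each lab's p-value is uniform on [0, 1], so a lab
    'succeeds' (p < alpha) with probability alpha.  Return how often
    at least one lab per round gets a publishable result."""
    hits = 0
    for _ in range(n_trials):
        if any(random.random() < alpha for _ in range(n_labs)):
            hits += 1
    return hits / n_trials

rate = chance_of_a_published_finding()
print(rate)  # close to the analytic value 1 - 0.95**20, about 0.64
```

With twenty labs, a false idea gets a published “confirmation” nearly two times out of three, and the nineteen null results stay in file drawers.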

Because academic training is set up like a pyramid scheme, we have a huge glut of researchers.  For any scientific question, there are probably enough laboratories studying it to nearly guarantee that significance testing will provide one of them an untrue publishable result.

And that’s even if everyone involved were 100% ethical.  Even then, a huge quantity of published research would be incorrect.  In our world, where many researchers are not ethical, the situation is even worse.

Norenzayan even documents this sort of unscientific over-analysis of data in his book.  One example appears in his chapter on anti-atheist prejudice:

In addition to assessing demographic information and individual religious beliefs, we asked [American] participants to rate the degree to which they viewed both atheists and gays with either distrust or with disgust.

. . .

It is possible that, for whatever reason, people may have felt similarly toward both atheists and gays, but felt more comfortable openly voicing distrust of atheists than of gays.  In addition, our sample consisted of American adults, overall a quite religious group.  To address these concerns, we performed additional studies in a population with considerable variability in religious involvement, but overall far less religious on the whole than most Americans.  We studied the attitudes of university students in Vancouver, Canada.  To circumvent any possible artifacts that result from overtly asking people about their prejudices, we designed studies that included more covert ways of measuring distrust.

When I see an explanation like that, it suggests that the researchers first conducted their study using the same methodology for both populations, obtained data that did not agree with their hypothesis, then collected more data for only one group in order to build a consistent, publishable story (if you’re interested, you can see their final paper here).

Because researchers can (and do!) collect data until they see what they want — until they have results that agree with a pet hypothesis, perhaps one they’ve built their career around — it’s not hard to obtain publishable data that appear to support any claim.  Doesn’t matter whether the claim is true or not.  And that, in essence, is why the practices that masquerade as the scientific method in the hands of modern researchers are not convincing persuasive tools.
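The simplest version of this is “optional stopping”: keep adding subjects, peek at the p-value after each one, and stop the moment the result looks significant.  Here’s a sketch of why that inflates the false positive rate, testing pure noise with a z test (|z| > 1.96 standing in for two-sided p < 0.05):

```python
import math
import random

random.seed(0)  # make the simulation reproducible

def peek_until_significant(max_n=100, start_n=10):
    """Simulate one study of a nonexistent effect: draw subjects from
    pure noise, run a z test after every new subject, and stop the
    moment |z| > 1.96 (two-sided p < 0.05)."""
    data = [random.gauss(0, 1) for _ in range(start_n)]
    while len(data) <= max_n:
        z = (sum(data) / len(data)) * math.sqrt(len(data))
        if abs(z) > 1.96:
            return True   # "significant" -- stop collecting and publish
        data.append(random.gauss(0, 1))
    return False          # never looked significant; file-drawer it

trials = 2000
false_positive_rate = sum(peek_until_significant() for _ in range(trials)) / trials
print(false_positive_rate)  # well above the nominal 0.05
```

Even though there is no effect at all, peeking after every subject lets a substantial fraction of studies cross the significance threshold at some point, several times the advertised 5% error rate.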

I think it’s unfair to denounce people for not believing scientific results about climate change, for instance.  Because modern scientific results simply are not believable.

Which is a shame.  The scientific method, used correctly, is the best way to understand the world.  And many scientists are very bright, ethical people.  And we should act upon certain research findings.

For instance, even if the reality underlying most climate change studies is a little less dire than some papers would lead you to believe, our world will be better off — more ecological diversity, less asthma, less terrorism, and, yes, less climate destabilization — if we pretend the results are real.

So it’s tragic, in my opinion, that a toxic publishing culture has undermined the authority of academic scientists.

And that’s one downside to Norenzayan’s book.  He supports his argument with a lot of data that I’m disinclined to believe.

The other problem is that he barely addresses historical information that doesn’t agree with his hypothesis.  For instance, several cultures developed long-range trust-based commerce without believing in omnipresent, watchful, morality-enforcing gods, including ancient Kanesh, China, the pre-Christian Greco-Roman empires, and some regions of Polynesia.

There’s also historical data demonstrating that trust is separable from religion (and not just in contemporary secular societies, where Norenzayan would argue that a god-like role is played by the police… didn’t sound so scary the way he wrote it).  The most heart-wrenching example of this, in my opinion, is presented in Nunn & Wantchekon’s paper, “The Slave Trade and the Origins of Mistrust in Africa.”  They suggest a causal relationship between kidnapping & treachery during the transatlantic slave trade and contemporary mistrust in the plundered regions.  Which would mean that slavery in the United States created a drag on many African nations’ economies that persists to this day.

That legacy of mistrust persists despite the once-plundered nations (untrusting, with high economic transaction costs to show for it) & their neighbors (trusting, with greater prosperity) having similar proportions of believers in the Abrahamic faiths.

Is it so wrong to wish Norenzayan had addressed some of these issues?  I’ll admit that complexity might’ve sullied his clever logic.  But, all apologies to Keats, sometimes it’s necessary to introduce some inelegance in the pursuit of truth.

Still, the book was pleasurable to read.  Definitely gave me a lot to think about, and the writing is far more lucid and accessible than I’d expected.  Check out this passage on the evolutionary flux — replete with dead ends — that the world’s religions have gone through:

This cultural winnowing of religions over time is evident throughout history and is occurring every day.  It is easy to miss this dynamic process, because the enduring religious movements are all that we often see in the present.  However, this would be an error.  It is called survivor bias.  When groups, entities, or persons undergo a process of competition and selective retention, we see abundant cases of those that “survived” the competition process; the cases that did not survive and flourish are buried in the dark recesses of the past, and are overlooked.  To understand how religions propagate, we of course want to put the successful religions under the microscope, but we do not want to forget the unsuccessful ones that did not make it — the reasons for their failures can be equally instructive.

This idea, that the histories we know preserve only a lucky few voices & occurrences, is also beautifully alluded to in Jurgen Osterhammel’s The Transformation of the World (trans. Patrick Camiller).  The first clause here just slays me:

The teeth of time gnaw selectively: the industrial architecture of the nineteenth century has worn away more quickly than many monuments from the Middle Ages.  Scarcely anywhere is it still possible to gain a sensory impression of what the Industrial “Revolution” meant–of the sudden appearance of a huge factory in a narrow valley, or of tall smokestacks in a world where nothing had risen higher than the church tower.

Indeed, Norenzayan is currently looking for a way to numerically analyze oft-overlooked facets of history.  So, who knows?  Perhaps, given more data, and a more thorough consideration of data that don’t slot nicely into his favored hypothesis, he could convince me yet.