In Jason Shiga’s Demon, the protagonist attempts to commit suicide. Again and again. Death never seems to take – each time, he wakes intact and offs himself again.
Eventually, the character realizes that he is cursed … or, rather, that he is a curse. Whenever his current body dies, his spirit takes possession of the next available shell. Each individual body can be snuffed, but every time that happens, his wants and desires leap into a new home.
We incarcerate drug dealers. But we make little effort to change the world enough to staunch demand. People’s lives are still broken. Impoverished, addicted, they’ll buy. When one dealer is locked up, the job leaps to someone else.
Child molesters receive less sympathy than anyone else in jail or prison. When somebody wants to complain about sentencing, he’ll say “I’m looking at seven years, and that cho-mo got out in two!” When gangs inside want to look tough, they find friendless child molesters and murder them – these murders might go unpunished. Many child molesters spend their time in solitary for their own protection, but solitary confinement is itself a form of torture.
Child molesters were often abused as children. In I Will Find You, Joanna Connors realizes that her rapist was probably re-enacting abuse that he had experienced in prison.
The demon leaps from one shell to the next.
During a university commencement address, J.K. Rowling said that “There is an expiry date on blaming your parents for steering you in the wrong direction; the moment you are old enough to take the wheel, responsibility lies with you.” Perhaps this is helpful for privileged college graduates to hear, but this attitude ignores how brains work. When we have a thought, the synapses that allowed that thought grow stronger. We become better at doing things that we’ve already done.
Bad parenting makes certain choices come easier than others. And then, each time a bad choice is made, it becomes easier to make again. After a long history of bad choices, it’s difficult to do anything else. But the initial mistakes were made by a child. Then these mistakes perpetuated themselves.
We as a society could have helped that child’s parents more – we did not. We could have helped the child more, perhaps through education, or nutrition, or providing stable work for the parents – we did not. We could have helped the young adult more, perhaps, at this point, through rehabilitative jails – we did not.
After all our failures to intervene, we must accept some responsibility for the ensuing criminality.
If buying in to the illusion of agency helps you get your work done, go for it. I too believe in free will. But we have no idea what it feels like inside someone else’s brain. If born into someone else’s circumstances, with that person’s genetics, prenatal nutrition, and entire lifetime of experiences, would you have steered to a better course?
In ancient Tibetan Buddhist mythology, crimes and addiction are the province of demons. A person has been possessed – the demon is influencing choices.
This perspective does not deny free will to the afflicted. It simply implies – correctly – that some decisions will be easier to make than others. This idea was tested in an experiment asking right-handed people to touch a button near the center of a computer screen. Study subjects were not told which hand to use, and most used their right. After a powerful magnetic pulse, people could still choose either hand to touch the button … but pressing it with the left hand suddenly seemed easier, and so that’s what many people did.
Addiction makes choosing not to use drugs more difficult. Either option is available, but the demon is constantly pushing toward one.
In most mythologies, a demon can be exorcised. In Jason Shiga’s Demon, the protagonist can die permanently only if his body is killed at a time when the nearest available Homo sapiens shell is already possessed.
Existence, for this demon, is a form of torment. A villain was thrilled to find Shiga’s protagonist … not to do him harm, but as a chance to end the cycle.
Some demons might never leave the body. The brain is plastic, but synaptic connections reflect its entire history. Even after years clean, addiction lingers.
In Buddhist mythology, even demons that cannot be exorcised can be distracted. Apparently demons love to guard treasure. It’s a beautiful image – the demon is still inside, but rather than push its host toward calamity, it hides in a corner, sniggering like Gollum, fondling a jewel-encrusted box.
Addicts are shuttered in jail. The walls are concrete. Fluorescent lights shine nineteen hours a day. People weathering opiate withdrawal can’t sleep even during those few hours of dark. The block is noisy, and feels dangerous. The brain is kept in a constant high-stress state of vigilance. Often, the only thoughts that a person has enough concentration to formulate are the easy ones.
Thoughts of drugs.
But poems can be treasures. If given solace long enough to read a poem, our afflicted might find beauty there. Something for the demon to guard.
We are not helping people if we insist their penitence be bleak.
Many thanks to John-Michael, a wonderful poet & teacher. This essay was inspired by a beautiful book he’s working on.
From the beginning, artists understood that time travel either denies humans free will or else creates absurd paradoxes.
This conundrum arises whenever an object or information is allowed to travel backward through time. Traveling forward is perfectly logical – after all, it’s little different from a big sleep, or being shunted into an isolation cell. The world moves on but you do not… except for the steady depredations of age and the neurological damage that solitary confinement inevitably causes.
A lurch forward is no big deal.
Consider one of the earliest time travel stories, the myth of Oedipus. King Laius receives a prophecy foretelling doom. He strives to create a paradox – using information from the future to prevent that future, in this case by offing his son – but fails. This story falls into the “time travel denies humans free will” category. Try as they might, the characters cannot help but create their tragic future.
James Gleick puts this succinctly in his recent New York Review essay discussing Denis Villeneuve’s Arrival and Ted Chiang’s “Story of Your Life.” Gleick posits the existence of a “Book of Ages,” a tome describing every moment of the past, present, and future. Could a reader flip to a page describing the current moment and choose to evade the dictates of the book? In Gleick’s words,
Can you do that? Logically, no. If you accept the premise, the story is unchanging. Knowledge of the future trumps free will.
(I’m typing this essay on January 18th, and can’t help but note how crappy it is that the final verb in that sentence looks wrong with a lowercase “t.” Sorry, ‘merica. I hope you get better soon.)
Gleick is the author of Time Travel: A History, in which he presents a broad survey of the various tales (primarily literature and film) that feature time travel. In each tale Gleick discusses, time travel either saps free will (à la Oedipus) or else introduces inexplicable paradox (Marty slowly fading in Back to the Future as his parents’ relationship becomes less likely; scraps of the Terminator being used to invent the Terminator; a time-traveling escapee melting into a haggard cripple as his younger self is tortured in Looper).
It’s not just artists who have fun worrying over these puzzles; over the years, more and more physicists and philosophers have gotten into the act. Sadly, their ideas are often less well-reasoned than the filmmakers’. Time Travel includes a long quotation from philosopher John Hospers (“We’re still in a textbook about analytical philosophy, but you can almost hear the author shouting,” Gleick interjects), in which Hospers argues that you can’t travel back in time to build the pyramids because you already know that they were built by someone else, followed by this brief summary:
Admit it: you didn’t help build the pyramids. That’s a fact, but is it a logical fact? Not every logician finds these syllogisms self-evident. Some things cannot be proved or disproved by logic.
Gleick uses this moment to introduce Gödel’s incompleteness theorem (the idea that any sufficiently powerful formal system contains true statements that it cannot prove), whose author, Kurt Gödel, also speculated about time travel. From Gleick: “If the attention paid to CTCs [closed timelike curves] is disproportionate to their importance or plausibility, Stephen Hawking knows why: ‘Scientists working in this field have to disguise their real interest by using technical terms like “closed timelike curves” that are code for time travel.’ And time travel is sexy. Even for a pathologically shy, borderline paranoid Austrian logician.”
Alternatively, Hospers’ strange pyramid argument could’ve been followed by a discussion of Timecrimes [http://www.imdb.com/title/tt0480669/], the one paradox-less film in which a character travels backward through time but still has free will (at least, as much free will as you or I have).
But James Gleick’s Time Travel: A History doesn’t mention Timecrimes. Obviously there are so many stories incorporating time travel that it’d be impossible to discuss them all, but leaving out Timecrimes is a tragedy! This is the best time travel movie (of the past and present. I can’t figure out how to make any torrent clients download the time travel movies of the future).
Timecrimes is great. It provides the best analysis of free will inside a sci-fi world of time travel. But it’s not just for sci-fi nerds – the same ideas help us understand strange-seeming human activities like temporally-incongruous prayer (e.g., praying for the safety of a friend after you’ve already seen on TV that several unidentified people died when her apartment building caught fire. By the time you kneel, she should either be dead or not. And yet, we pray).
Timecrimes progresses through three distinct movements. In the first, the protagonist believes himself to be in a world of time travel as paradox: a physicist has convinced him that with any deviation from the known timeline he might cause himself to cease to exist. And so he mimics as best he can events that he remembers. A masked man chased him with a knife, and so he chases his past self.
In the second movement, the protagonist realizes that the physicist was wrong. There are no paradoxes, but he seems powerless to change anything. He watched his wife fall to her death at the end of his first jaunt through time, so he strives to alter the future… but his every effort fails. Perhaps he has no free will, no real agency. After all, he already remembers her death. His memory exists in the form of a specific pattern of neural connections in his brain, and those neurons will not spontaneously rearrange. His memory is real. The future seems set.
But then there is a third movement: this is the reason Timecrimes surpasses all other time travel tales. The protagonist regains a sense of free will within the constraints imposed by physics.
Yes, he saw his wife die. How can he make his memory wrong?
Similarly, you’ve already learned that the Egyptians built the pyramids. I’m pretty confident that none of the history books you’ve perused included a smiling picture of you with the caption “… but they couldn’t have done it without her.” And yet, if you were to travel back to Egypt, would it really be impossible to help in such a way that no history books (which will be written in the future, but which your past self has already seen) ever report your contributions?
Indeed, an analogous puzzle is set before us every time we act. Our brains are nothing more than gooey messes of molecules, constrained by the same laws of physics as everything else, so we shouldn’t have free will. And yet: can we still act as though we do?
We must. It’s either that or sit around waiting to die.
Because the universe sprang senselessly into existence, birthed by chance fluctuations during the long march of eternity… and then we appeared, billions of years later, through the valueless vagaries of evolution… our actions shouldn’t matter. But: can we pretend they do?
For several months, a friend and I have volleyed emails about a sprawling essay on consciousness, free will, and literature.
The essay will explore the idea that humans feel we have free will because our conscious mind grafts narrative explanations (“I did this because…”) onto our actions. It seems quite clear that our conscious minds do not originate all the choices that we then take credit for. With an electroencephalogram, you could predict when someone is about to raise an arm, for instance, before the person has even consciously decided to do so.
Which is still free will, of course. If we are choosing an action, it hardly matters whether our conscious or subconscious mind makes the choice. But then again, we might not be “free.” If an outside observer were able to scan a person’s brain to sufficient detail, all of that person’s future choices could probably be predicted (as long as our poor study subject is imprisoned in an isolation chamber). Our brains dictate our thoughts and choices, but these brains are composed of salts and such that follow the same laws of physics as all other matter.
That’s okay. It is almost certainly impossible that any outside observer could (non-destructively) scan a brain to sufficient detail. If quantum mechanical detail is implicated in the workings of our brains, it is definitely impossible: quantum mechanical information can’t be duplicated. Wikipedia has a proof of this “no cloning theorem” involving lots of bras and kets, but this is probably unreadable for anyone who hasn’t done much matrix math. An easier way to reason through it might be this: if you agree with the Heisenberg uncertainty principle, the idea that certain pairs of variables cannot be simultaneously measured to arbitrary precision, the no cloning theorem has to be true. Otherwise you could simply make many copies of a system and measure one variable precisely for each copy.
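For the mathematically curious, the linearity argument behind the no-cloning theorem can be sketched in a few lines. This is the standard textbook derivation stated from memory, not the Wikipedia proof verbatim:

```latex
% Suppose some unitary U could clone an arbitrary qubit onto a blank register:
%   U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle  for every state |\psi\rangle.
% Cloning the two basis states individually is fine:
U(|0\rangle|0\rangle) = |0\rangle|0\rangle, \qquad
U(|1\rangle|0\rangle) = |1\rangle|1\rangle.
% But quantum mechanics is linear, so for a superposition
% |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, the same U must give
U(|\psi\rangle|0\rangle) = \alpha\,|0\rangle|0\rangle + \beta\,|1\rangle|1\rangle,
% whereas a true clone would be
|\psi\rangle|\psi\rangle
  = \alpha^{2}|00\rangle + \alpha\beta|01\rangle + \alpha\beta|10\rangle + \beta^{2}|11\rangle.
% These two expressions agree only when \alpha\beta = 0 --
% that is, only for the basis states themselves. No universal cloner exists.
```

Which dovetails with the Heisenberg reasoning above: if cloning were possible, you could measure position precisely on one copy and momentum precisely on another, evading the uncertainty bound.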
So, no one will ever be able to prove to me that I am not free. But let’s just postulate, for a moment, that the laws of physics that, so far, have correctly described the behavior of all matter outside my brain also correctly describe the movement of matter inside my brain. In which case, those inviolable laws of physics are dictating my actions as I type this essay. And yet, I feel free. Each word I type feels like a choice. My brain is constantly concocting a story that explains why I am choosing each word.
Does the same neural circuitry that deludes me into feeling free – that has evolved, it seems, to constantly sculpt narratives that make sense of our actions, the same way our dreams often burgeon to include details like a too hot room or a ringing telephone – also give me the ability to write fiction?
In other words, did free will spawn The Iliad?
The essay is obviously rather speculative. I’m incorporating relevant findings from neuroscience, but, as I’ve mentioned, it’s quite likely that no feasible experiments could ever test some of these ideas.
The essay is also unfinished. No laws of physics forbid me from finishing it. I’m just slow because K & I have two young kids. At the end of each day, once our 2.5-year-old and our 3-month-old are finally asleep, we exhaustedly glance at each other and murmur, “Where did the time go?”
But I am very fortunate to have a collaborator always ready to nudge me back into action. My friend recently sent me an article by Tim Christiaens on the philosophy of financial markets. He sent it because the author argues – correctly, in my opinion – that for many stock market actions it’s sensible to consider the Homo sapiens trader + the nearby multi-monitor computer as a single decision-making entity. Tool-wielding is known to change our brains – even something as simple as a pointing stick alters our self-perception of our reach. And the algorithms churned through by stock traders’ computers are incredibly complex. There’s not a good way for the human to check a computer’s results; the numbers it spits out have to be trusted. So it seems reasonable to consider the two together as a single super-entity that collaborates in choosing when to buy or sell. If something in the room has free will, it would be the tools & trader together.
Which isn’t as weird as it might initially sound. After all, each Homo sapiens shell is already a multi-species super-entity. As I type this essay, the choice of which word to write next is made inside my brain, then signals are sent through my nervous system to my hands and fingers commanding them to tap the appropriate keys. The choice is influenced by all the hormones and signaling molecules inside my brain. It so happens that bacteria and other organisms living in my body excrete signaling molecules that can cross the blood-brain barrier and influence my choice.
The community of intestinal bacteria living inside each of us gets to vote on our moods and actions. People with depression seem to harbor noticeably different sets of bacteria than people without. And it seems quite possible that parasites like Toxoplasma gondii can have major influences on our personalities.
Indeed, in his article on stock markets, Christiaens mentions the influence of small molecules on financial behavior, reporting that “some researchers study the trader’s body through the prism of testosterone levels as an indicator of performance. It turns out that traders who regularly visit prostitutes consequently have higher testosterone levels and outperform other traders.”
Now, I could harp on the fact that we designed these markets. That they could have been designed in many different ways. And that it seems pretty rotten to have designed a system in which higher testosterone (and the attendant impulsiveness and risky decision-making) would correlate with success. Indeed, a better, more equitable market design would probably quell the performance boost of testosterone.
I could rant about all that. But I won’t. Instead I’ll simply mention that Toxoplasma seems to boost testosterone. Instead of popping into brothels after work, traders could snack on cat shit.
On the topic of market design, Christiaens also includes a lovely description of the interplay between the structure of our economy and the ways that people are compelled to live:
The reason why financial markets are able to determine the viability of lifestyles is because most individuals and governments are indebted and therefore need a ‘creditworthy’ reputation. As the [U.S.] welfare state declined during the 1980s, access to credit was facilitated in order to sustain high consumption, avoid overproduction and stimulate economic growth. For Lazzarato [a referenced writer], debt is not an obligation emerging from a contract between free and equal individuals, but is from the start an unequal power relation where the creditor can assert his force over the debtor. As long as he is indebted, the latter’s rights are virtually suspended. For instance, a debtor’s property rights can be superseded when he fails to reimburse the creditor by evicting him from his home or selling his property at a public auction. State violence is called upon to force non-creditworthy individuals to comply. We [need] not even jump to these extreme cases of state enforcement to see that debt entails a disequilibrium of power. Even the peaceful house loan harbors a concentration of risk on the side of the debtor. When I take a $100,000 loan for a house that, during an economic crisis, loses its value, I still have to pay $100,000 plus interests to the bank. The risk of a housing crash is shifted to the debtor’s side of the bargain. During a financial crisis this risk concentration makes it possible for the creditors to demand a change of lifestyle from the debtor, without the former having to reform themselves.
Several of my prior essays have touched upon the benefits of a guaranteed basic income for all people, but I think this paragraph is a good lead-in for a reprise. As Christiaens implies, there is violence behind all loans – both the violence that led to initial ownership claims and the threat of state violence that compels repayment. Not that I’m against the threat of state violence to compel people to follow rules in general – without this threat we would have anarchy, in which case actual violence tends to predominate over the threat of incipient enforcement.
We all need wealth to live. After all, land holdings are wealth, and at the very least each human needs access to a place to collect fresh water, a place to grow food, a place to stand and sleep. But no one is born wealthy. A fortunate few people receive gifts of wealth soon after birth, but many people foolishly choose to be born to less well-off parents.
The need for wealth curtails the choices people can make. They need to maintain their “creditworthiness,” as in Christiaens’s passage, or their hire-ability. Wealth has to come from somewhere, and, starting from zero, we rely on others choosing to give it to us. Yes, often in recompense for labor, but just because you are willing and able to do a form of work does not mean that anyone will pay you for it.
Unless people are already wealthy enough to survive, they are at the mercy of others choosing to give them things. Employers are not forced to trade money for salaried working hours. And there isn’t wealth simply waiting around to be claimed. It all starts from something – I’d argue that all wealth stems originally from land holdings – but the world’s finite allotment of land was claimed long ago through violence.
A guaranteed basic income would serve to acknowledge the brutal baselessness of those initial land grabs. It is an imperfect solution, I know. It doesn’t make sense to me that everyone’s expenses should rise whenever a new child is born. But a world where people received a guaranteed basic income would be better than one without. The unluckily-born populace would be less compelled to enter into subjugating financial arrangements. We’d have less misery – feeling poor causes a lot of stress. We’d presumably have less crime and drug abuse, too, for similar reasons.
And, of course, less hypocrisy. It’s worth acknowledging that our good fortune comes from somewhere. No one among us created the world.
During high school, I read dozens of Agatha Christie novels. But these days I rarely read mysteries. Like everybody else, I plowed through The Da Vinci Code and the Girl with the Dragon Tattoo books, but I’ve picked up few others in the past decade.
So it was a rare treat to set aside a few hours over the weekend for Rudolph Fisher’s The Conjure-Man Dies (1932). It’s a lovely book, wonderful even though Fisher was writing with one hand metaphorically tied behind his back. His was the first mystery novel published by an African-American writer, so the writing style is reserved, even staid. If the whole narrative were written with the linguistic inventiveness that Fisher was capable of, he might not have found a publisher.
Within dialogue, though, Fisher lets his writing crackle. The following passage shows off this dichotomy:
On he strolled past churches, drugstores, ice-cream parlors, cigar stores, restaurants, and speakeasies. Acquaintances standing in entrances or passing him by offered the genial insults which were characteristic Harlem greetings:
“What you say, blacker’n me?”
“How you doin’, short-order?”
“Ole Eight-Ball! Where you rollin’, boy?”
In each instance, Bubber returned some equivalent reply, grinned, waved, and passed on. He breathed deeply of the keen sweet air, appraised casually the trim, dark-eyed girls, admired the swift humming motors that flashed down the Avenue.
Though the novel is nearly a hundred years old, its concerns are strikingly modern. For instance, the narrative digresses into an investigation of free will, the relationship between quantum-mechanical uncertainty and human thought, the limitations of medical diagnosis — all topics that still confound contemporary philosophers. Fisher was remarkably up-to-date: the Heisenberg uncertainty principle was first proposed a mere five years before The Conjure-Man Dies was published, and yet the novel incorporates the central idea more accurately than many contemporary writers. Some of this can be seen in a short dialogue between the characters Dr. Archer — Fisher’s simulacrum within the novel — and Frimbo, a brilliant, highly-educated man who makes his living as a fortune teller.
Easily and quickly they began to talk with that quick intellectual recognition which characterizes similarly reflective minds. Dr. Archer’s apprehensions faded away and shortly he and his host were eagerly embarked on discussions that at once made them old friends: the hopelessness of applying physico-chemical methods to psychological problems; the nature of matter and mind and the possible relations between them; the current researches of physics, in which matter apparently vanished into energy, and Frimbo’s own hypothesis that probably the mind did likewise. Time sped. At the end of an hour Frimbo was saying:
“But as long as this mental energy remains mental, it cannot be demonstrated. It is like potential energy — to be appreciated it must be transformed into heat, light, motion — some form that can be grasped and measured. Still, by assuming its existence, just as we do that of potential energy, we harmonize psychology with mechanistic science.”
“You astonish me,” said the doctor. “I thought you were a mystic, not a mechanist.”
“This,” returned Frimbo, “is mysticism — an undemonstrable belief. Pure faith in anything is mysticism. Our very faith in reason is a kind of mysticism.”
And so, when I reached the end of the book, I expected to find a few pages with a catalog of other mystery novels. Instead, there was a list that began, “BLACK HISTORY: Other Books of Interest. Individual titles in Series I, II, and III of the Arno Press collection THE AMERICAN NEGRO: HIS HISTORY AND LITERATURE are listed in the following pages.” The selections were almost all academic history books, things like Modern Negro Art and Religion in Higher Education Among Negroes (to choose only those two titles that bracket the page on which The Conjure-Man Dies is listed).
Methinks this listing is not the way for The Conjure-Man Dies to find its audience. Which I could elaborate upon, but, as it happens, I don’t need to. Percival Everett, in his novel Erasure, explained this better than I could:
While Lisa wandered off to the garden book section, I stood in the middle of Border’s thinking how much I hated the chain and chains like it. I’d talked to too many owners of little, real bookstores who were being driven to the poorhouse by what they called the WalMart of books. I decided to see if the store had any of my books, firm in my belief that even if they did, my opinion about them would be unchanged. I went to Literature and did not see me. I went to Contemporary Fiction and did not find me, but when I fell back a couple of steps I found a section called African American Studies and there, arranged alphabetically and neatly, read undisturbed, were four of my books including my Persians of which the only thing ostensibly African American was my jacket photograph. I became quickly irate, my pulse speeding up, my brow furrowing. Someone interested in African American Studies would have little interest in my books and would be confused by their presence in the section. Someone looking for an obscure reworking of a Greek tragedy would not consider looking in that section any more than the gardening section. The result in either case, no sale. That fucking store was taking food from my table.
Saying something to the poor clone of a manager was not going to fix anything, so I resigned to keep quiet.
I learned about Erasure from Parul Sehgal’s lovely essay in the New York Times Magazine. Erasure is a satirical novel about an ambitious black writer who struggles to have his work taken seriously — he’s losing his struggle, though, because, although his work is good, his writing does not match what people expect from someone with his skin tone. From the opening pages:
While in college I was a member of the Black Panther Party, defunct as it was, mainly because I felt I had to prove I was black enough. Some people in the society in which I live, described as being black, tell me I am not black enough. Some people whom the society calls white tell me the same thing. I have heard this mainly about my novels, from editors who have rejected me and reviewers whom I have apparently confused and, on a couple of occasions, on a basketball court when upon missing a shot I muttered Egads. From a reviewer:
The novel is finely crafted, with fully developed characters, rich language and subtle play with the plot, but one is lost to understand what this reworking of Aeschylus’ The Persians has to do with the African American experience.
One night at a party in New York, one of the tedious affairs where people who write mingle with people who want to write and with people who could help either group begin or continue to write, a tall, thin, rather ugly book agent told me I could sell many books if I’d forget about writing retellings of Euripides and parodies of French poststructuralists and settle down to write the true, gritty stories of black life. I told him that I was living a black life, far blacker than he could ever know, that I had lived one, that I would be living one. He left me to chat with an on-the-rise performance artist / novelist who had recently posed for seventeen straight hours in front of the governor’s mansion as a lawn jockey. He familiarly flipped one of her braided extensions and tossed a thumb back in my direction.
The hard, gritty truth of the matter is that I hardly ever think about race. Those times when I did think about it a lot I did so because of my guilt for not thinking about it. I don’t believe in race. I believe there are people who will shoot me or hang me or cheat me and try to stop me because they do believe in race, because of my brown skin, curly hair, wide nose and slave ancestors. But that’s just the way it is.
Sehgal has written several excellent essays about the phenomenon of erasure, or silenced voices, recently. Two paragraphs from her essay on the student protests at elite universities cut deep.
In Tablet, James Kirchick wrote, “When I hear, in 2015, students complain about feeling ‘marginalized’ at Yale due to their racial, ethnic, religious, sexual, or any other identity … I can’t help but think of James Meredith.” In 1962, flanked by federal marshals, Meredith became the first black student to enroll at the University of Mississippi.
“When I see photographs of Meredith and other black students of the civil rights era,” Kirchick wrote, “I don’t see people pleading for dean’s excuses so they can huddle in a ‘safe space’ to recover from ‘traumatic racial events.’ I see unbelievably courageous young men and women.”
Of course, it’s one thing to look at a photograph of James Meredith and concoct a fantasy of his bravery and resilience — a photograph is silent; it cannot clarify or correct. To listen to James Meredith is a different thing entirely. “Ole Miss kicked my butt, and they’re still celebrating,” he said in an interview with Esquire in 2012. “Because every black that’s gone there since me has been insulted, humiliated, and they can’t even tell their story. Everybody has to tell James Meredith’s story — which is a lie. The powers that be in Mississippi understand this very clearly.” He continued, “They’re gonna keep on doin’ it because it makes it impossible for blacks there now to say anything about what’s happened to them.”
What a masterful reversal of logic.
Passages like this hurt so much for me to read because I, too, tacitly assented to our systematic silencing of minority voices for many years. During my twenty-some years of formal education, I hardly ever read the work of black authors, and learned almost nothing about African-American history beyond the usual narrative about how Martin Luther King, Jr. strove mightily and was sacrificed but everything is all better now. Which is, it seems, not exactly correct.
Indeed, even when I began to learn more history and investigate silenced voices for my own work, I came at the problem through mythology. Canonical texts typically relate only one side of stories, and even then include only the voices of a privileged few; the lives of others are submerged by time. Even in epic poetry like The Iliad, the cares and concerns of women disappear: Helen, for instance, is used as a mouthpiece for male sentiment. After leaving her rampantly-unfaithful husband for a more charming lover, she says (in the Stephen Mitchell translation):
“But come in, dear brother-in-law,
sit down on this chair and rest yourself for a while,
since the burden falls upon you more than the others,
through my fault, bitch that I am, and through Paris’s folly.
Zeus has brought us an evil fate, so that poets
can make songs about us for all future generations.”
Really, Homer? “Bitch that I am?” I’m well aware that many women who leave violent, abusive husbands suffer self-recriminations for years, but this strikes me as a decidedly male sentiment, as though the “face that launch’d a thousand ships” were really the inanimate wood of a ventriloquist’s dummy.
This phenomenon is part of what drew me to the Ramayana. This myth burbles with unheard stories at the periphery of the main narrative. Through the years, numerous writers have attempted to bring these admurmurations to the fore, but their work has been similarly neglected. From an essay by Nabaneeta Dev Sen,
Similarly, Candravati Ramayana [composed circa 1600] has been neglected and rejected for years by our male custodians of Bengali literature as an incomplete work. This is what we call a silenced text. The editors decided it was a poor literary work because it was a Ramayana that did not sing of Rama. Its eccentricity confused not only the editors but also historians of Bengali literature to such an extent that they could not even see the complete epic narrative pattern clearly visible in it. It got stamped as an incomplete text. Today, a rereading of the narrative exposes an obvious failure of the male critics and historians: to recognize Candravati Ramayana as a personal interpretation of the Rama-tale, seen specifically from the wronged woman’s point of view.
And, linking the Ramayana with the issues described at the beginning of this post, the villainized dark-skinned king’s side of the story is never told. I’ve been enamored with the peripheral stories in the Ramayana ever since learning of the Dravida Kazhagam interpretation, which recasts the dark-skinned villain as a hero and the entire narrative as a tragedy.
To put this into perspective for someone from the United States, this is akin to a retelling of the Bible in which God is a tyrannical oppressor and Satan the tragic hero (and, to differentiate this hypothetical work from Paradise Lost, Satan would have to think of himself & his efforts to enlighten humanity as fundamentally good). To wit: a radical, and oft-denounced, retelling.
What with recasting the erudite, beleaguered dark-skinned man as a hero, you could reasonably draw parallels between the DK Ramayana and, say, the upcoming Nat Turner film. The struggles of a man rebelling against the invention of “race” in the United States.
Why, after all, should the presence of more melanin in someone’s skin curtail opportunities? Which is yet another idea presented beautifully in The Conjure-Man Dies. Here, I’ll end this post with one last quotation, again drawn from the conversation between the sleuthing doctor and the fortune teller (who was presumed to have died, but somehow returned to life to investigate his own murder):
“I had really intended to discuss the mystery of this assault,” the doctor declared. “Perhaps we can do that tomorrow?”
“Mystery? That is no mystery. It is a problem in logic, and perfectly calculable. I have one or two short-cuts which I shall apply tomorrow night, of course, merely to save time. But genuine mystery is incalculable. It is all around us — we look upon it every day and do not wonder at it at all. We are fools, my friend. We grow excited over a ripple, but exhibit no curiosity over the depth of the stream. The profoundest mysteries are those things which we blandly accept without question. See. You are almost white. I am almost black. Find out why, and you will have solved a mystery.”
“You don’t think the causes of a mere death a worthy problem?”
“The causes of a death? No. The causes of death, yes. The causes of life and death and variation, yes. But what on earth does it matter who killed Frimbo — except to Frimbo?”
They stood a moment in silence. Presently Frimbo added in an almost bitter murmur:
“The rest of the world would do better to concern itself with why Frimbo was black.”
I might spend too much time thinking about how brains work. Less than some people, sure — everybody working on digital replication of human thought must devote more energy than I do to the topic, and they’re doing it in a more rigorous way — but for a dude with no professional connection to cognitive science or neurobiology or what-have-you, I spend an unreasonable amount of time obsessing over ’em.
Most of my “obsessing over brains” time is devoted to thinking about how humans work, but studies on animal cognition always floor me as well. A major focus of these studies, though, is often how similar human minds are to those of other animals… for instance, my recent hamsters & poverty essay was about the common response of most mammalian species to unfair, unrectifiable circumstance, and I’m planning a piece on the (mild) similarities between prairie dog language and our own.
The only post I’ve slapped up lately on differences between human and animal cognition was about potential rattlesnake misconceptions, but even that piece hinged upon a difference in the way they see, not the way they think.
Honestly, if we asked Superman to spin our planet backward some twenty billion times in order to re-run evolution, I think cephalopods could give apes a run for their money on potential planetary dominance. Cephalopods are quite intelligent, adept problem solvers, have tentacles sufficiently agile for tool use, and can communicate by changing colors (although with much less finesse than the octospiders in Arthur C. Clarke’s Rama series, who used a language based on shifting striations of color displayed on their skin).
The biggest obstacle holding octopi back from world domination is the difficulty a water-dwelling species faces in harnessing fire or electricity. But octopi can make brief sojourns onto dry land… and even land-dwelling apes took something like 20 million years to discover fire and some 22 million for electricity.
Sure, that’s faster than octopi — they’ve had a hundred million years already and still no fire — but once Superman spins the planet (first he fought crime! Now he’ll muck up our timeline to investigate evolution!), there’ll be a chance for him to stop that asteroid and save the dinosaurs. I imagine that living in constant terror of T-Rex & friends would slow the apes down a little.
I’ve never had to work under that kind of pressure, but it’s probably much more difficult to discover fire if you’re worried that a dinosaur will stomp by, demolish your laboratory, and eat you.
Octopi ingenuity might be similarly stymied by pervasive fear of giant monsters: sharks, dolphins, sea lions, seals, eels, and, yes, those ostensibly land-bound hairless apes. Voracious, vicious predators all… especially those apes.
And yet. Despite the fear, octopi are extremely clever. They have a massive genome, too. In itself, genome size is not a measure of complexity, in part because faulty cell division machinery sometimes results in the duplication of entire genomes — no matter how many copies of Fuzzy Bee & Friends you staple together, even if you create a 1,000+ page monstrosity, you won’t create a narrative with the complexity of The Odyssey.
That’s what researchers thought had happened with the octopus genome. Sure, they have more genes than us, but they’re probably all duplicates! Albertin et al. were the first to actually test that hypothesis, though… and it turns out to be wrong. The octopus genome underwent massive expansion specifically for neural proteins & regulatory regions. Which suggests that their huge genome is not dreck, that it is actually the product of intense selection for cognitive performance. It isn’t proof, but it’s definitely consistent with selection for greater mental capacities.
There isn’t any octopus literature yet, but evolution isn’t done. As long as octopus survival & mating success is bolstered by intelligence, there’s a chance the species will continue to slowly “improve.”
But even if a species derived from contemporary octopi eventually gains cognitive capacities equivalent to our own, we may never grasp the way they perceive the world. Their brains are organized very differently from our own. Our minds are highly centralized — our actions result from decisions passed down from on high.
For most human actions, it seems that the mind subconsciously initiates movement, firing off instructions to the appropriate muscles, and then the conscious mind notices what’s going on and concocts a story to rationalize that action. For instance, if you touch something hot, nociceptors (pain receptors) in your hand send an “Ouch!” signal, your spinal cord fires back “Pull yer damn hand away!” before your brain is even fully in the loop, then the conscious mind types up a report, “I decided to pull my hand away because that was too hot.”
(Some people have argued that this sequence of timing indicates that we lack free will, by the way. Which seems silly. Our freedom doesn’t need to be at the level of conscious decision-making to be worthwhile. Indeed, your subconscious is as much you as your consciousness. Your subconscious reflexes reflect who you are, and with concerted effort you can modify most if not all of them.)
Octopi minds are different. They seem to be much more decentralized. Each tentacle has a significant neural network and can act independently. Octopus tentacles can still move and make minor decisions even if cleaved away… like the zombie movie trope where a severed arm continues to strangle someone.
Since we have no good way to communicate with octopi, we don’t know whether their minds are wired for storytelling the way ours are. Whether they also construct elaborate internal rationalizations for every action (does this help explain why I’m so fascinated by free will? Even if our freedom is illusory, the ability to maintain that illusion underpins our ability to tell stories).
But if octopi do explain their world with stories, the types of stories they tell would presumably seem highly chaotic to us humans. Our brains are building explanations for decisions made internally, whereas an octopus would be constructing a narrative from the actions of eight independently-acting entities.
Who knows? Someday, many many years from now, if octopi undergo further selection for brain power & communication, we might find octopus literature to be exceptionally rambunctious. Brimming with arbitrary twists & turns. If their minds also tend toward narrative storytelling (and it’s worth mentioning that octopi also process time in a cascade of short-term and long-term memory the way mammals do), their stories would likely veer inexorably toward the inexplicable.
Toward, that is, actions & consequences that a human reader would perceive to be inexplicable.
Octopi might likewise condemn our own classics as overly regimented. Lifeless, stilted, formulaic. And it’d be devilishly hard to explain to an octopus why I think In Search of Lost Time is so good.
p.s. I should offer a brief mea culpa for having listed different lengths of time that apes & octopi have had in which to discover fire. All known life uses the same genetic code, so it’s extremely likely that we all share a common ancestor. Everything alive today — bacteria, birds, octopi, humans — has had the same length of time to evolve.
This is part of why it sounds so silly when people refer to contemporary bacteria as being “lower” life forms or somehow less evolved. Current bacteria have had just as long to perfect themselves for their environments as we have, and they simply pursued a different strategy for survival than humans did. (For more on this topic, feel free to read this previous post.)
I listed different numbers, though… mostly because it seemed funny to imagine a lineage of octopi racing the apes in that “descent of man” cartoon. Who will conquer the planet first?!
I chose my times based on the divergence of great apes from their nearest common ancestor (gibbons, whom we’ve rudely declared to be “lesser apes”) and the divergence of octopi from theirs (squids, ca. 135 million years ago). The numbers themselves are pretty accurate, but the choice of those particular numbers was arbitrary. You could easily rationalize instead starting the clock for apes in their quest for fire as soon as the first primates appeared, ca. 65 million years ago… then octopi don’t look so bad. Perhaps only two-fold slower than us. Or you could start the apes’ clock at the appearance of the very first mammals… in which case octopi might beat us yet.
The other interesting finding from Bem’s work was related to academic publishing: even if a result is blatantly untrue, it’s difficult to correct the scientific literature. Several researchers spent their time attempting to reproduce Bem’s result, and, as expected, found that the effect did not replicate … but then they could not publish their findings. Their rejection from the Journal of Personality and Social Psychology read, “This journal does not publish replication studies, whether successful or unsuccessful.”
Anyway, that’s the kind of “science” I was expecting when K asked if I’d seen the new study on future events dictating the past.
The basic gist of why these are described as “mind-blowing”: there are numerous results in quantum mechanics that can seem silly if you think of objects as being either particle or wave and somehow “choosing” which to be at any given time. Matter has a wave nature, and the behavior we think of as particle-like arises from the state of an object being linked to the state of other objects. The common phrasing for this is to say that observation causes a shift from wave-like to particle-like behavior, but the underlying explanation is that our observational techniques result in a state-restricting coupling.
Quantum mechanics is difficult to write about using English-language metaphors — translating from the language of mathematics into English seems to have all the problems of translating between two spoken languages, and then some — but here’s a crude way to think about this type of result:
If you’re standing with your back to two narrow hallways (sufficient for only one person to walk through at a time) and a friend walks through and taps you on the shoulder, you won’t know which hallway your friend came through. Unless your friend tells you. Let’s just imagine that your friend is as cagey with his or her secrets as the average helium atom tends to be.
If your friend then leaves, however, and at the same time a second buddy of yours walks through to tap you on the shoulder and say hello, then your friend’s history becomes coupled to this second buddy’s. If your friend walked through the northern hallway, your buddy had to be in the southern, and vice versa. Their positions are coupled because they can’t occupy the same space at the same time. If you never ask who walked where, though, there’s a residual probability that each walked through each hallway — and if you ever query one, because their histories are coupled, the other’s history suddenly snaps into focus. No matter how far away that second person might be. Learning which route either took tells you immediately about the other.
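The hallway analogy can be made concrete with a toy simulation. To be clear about its limits: this is a purely classical stand-in that I’m sketching here, so it captures the perfect anti-correlation but none of the genuinely quantum predictions (which is exactly the gap that Bell-type experiments probe). The hallway names and function are my own invention for illustration:

```python
import random

HALLWAYS = ("north", "south")

def coupled_walkthrough():
    """Two people pass at the same moment through two one-person
    hallways. Because they can't occupy the same hallway, their
    routes are perfectly anti-correlated."""
    friend = random.choice(HALLWAYS)  # until you ask, either is possible
    buddy = "south" if friend == "north" else "north"
    return friend, buddy

# Querying either person's route instantly fixes the other's history:
for _ in range(10_000):
    friend, buddy = coupled_walkthrough()
    assert friend != buddy  # the histories never disagree with the coupling
```

Every run respects the coupling: learn one route and you immediately know the other, no matter how far away the second walker has wandered.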
In some ways this reminds me of the scene from Bottle Rocket, wherein a character is told “You’re like paper. You know, you’re trash,” and then, “You know, you’re like paper falling by, you know… It doesn’t sound that bad in Spanish.”
A lot of results from quantum mechanics sound weird, but they don’t sound that weird in mathematics.
But I’ll admit that the way some of these results are written up in the popular press is bizarre. Here’s a quote from Jay Kuo’s article (which K alerted me to after it was featured on George Takei’s webpage) about the recent helium atom experiment:
“What they found is weirder than anything seen to date: Every time the two grates were in place, the helium atom passed through, on many paths in many forms, just like a wave. But whenever the second grate was not present, the atom invariably passed through the first grate like a particle. The fascinating part was, the second grate’s very existence in the path was random. And what’s more, it hadn’t happened yet.”
From a passage like that, it’d be hard to tell that this is an experiment that was first conducted nearly a decade ago, and a result that was exactly what you’d expect. Honestly, I had trouble even parsing the above paragraph, and could barely understand the experiment from the description given in the article. And I studied quantum mechanics! I spent my junior and senior years of college doing research in the field! (My research was on the electronic structure of DNA bases, not entanglement specifically, but still.) I don’t know how people without that background were supposed to follow the science here. Or get through it without their eyes glazing over.
So, as to people’s excitement about this result: it’s a little bit weirder to think about the wavelength of big things (“big” here meaning the helium atoms; they’re big compared to photons), but it’s mostly weird in English. Or any other metaphor-based language. Our day-to-day perceptions don’t yield the metaphorical fodder we’d need to properly describe these phenomena in words.
Because, yeah, I like to think that I’m sitting still in a chair, typing this. But I have a wavelength too. So do you. You might be anywhere within the boundaries roughly circumscribed by your wavelength! And of course, there aren’t really any boundaries, because the probability of finding you in a place never quite drops to zero. Even if we consider locations far away from your moments-prior center of mass. But your probability peak on a likelihood vs. location graph is very, very steep. You, my friend, are rather large: your wavelength is very small.
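That tiny-wavelength claim is easy to check with the de Broglie relation, wavelength = h / (mass × speed). A quick sketch (the masses and speeds below are my own round numbers, chosen just to show the scale):

```python
# Planck's constant, in joule-seconds
H = 6.626e-34

def de_broglie_wavelength(mass_kg: float, speed_m_per_s: float) -> float:
    """Wavelength of a moving object: lambda = h / (m * v)."""
    return H / (mass_kg * speed_m_per_s)

# A helium atom (~6.64e-27 kg) moving at 1000 m/s:
helium = de_broglie_wavelength(6.64e-27, 1000.0)  # roughly 1e-10 m, atomic scale

# A 70 kg person ambling along at 1 m/s:
person = de_broglie_wavelength(70.0, 1.0)  # roughly 9e-36 m, absurdly small
```

The helium atom’s wavelength comes out comparable to the size of an atom, which is why its wave nature shows up in a double-slit experiment; a person’s wavelength is some twenty-five orders of magnitude smaller still, which is why chairs feel so trustworthy.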
p.s. If you happened across Jay Kuo’s article and were baffled, and would like an explanation that describes the experimental set-up used (I purposefully left out all the experimental details because I thought they’d distract from my two main points, that translating from mathematics to English is hard and inevitably introduces inaccuracies, and that for coupled pairs of objects [the real word for this is “entangled”] learning about one instantly reveals information about the other), you could check out Tim Wogan’s summary on Physics World. Wogan alludes to the idea that identifying the state of one object out of an entangled pair causes something reminiscent of faster-than-light travel:
“Indeed, the results of both Truscott and Aspect’s experiments [show] that [an object]’s wave or particle nature is most likely undefined until a measurement is made. The other less likely option would be that of backward causation — that the particle somehow has information from the future — but this involves sending a message faster than light, which is forbidden by the rules of relativity.”
I don’t really like the use of the word “measurement” above (sure, I changed a few other words in that quotation, but only to improve readability — I didn’t want to change anything that might alter Wogan’s ideas), because to me this sounds excessively human-centric, as though quantum collapse couldn’t happen without us.
Over time, the state of an object can become coupled to the states of others (if two blue billiard balls collide, for instance, then you know that at some point in time they were in the same place) or uncoupled from the states of prior interaction partners (if one of those blue billiard balls then collides with a third red ball, the trajectories of the two blue balls will no longer be coupled).
In this double-slit experiment, it is the coupling between helium atom and detector (when the detector either chirps or doesn’t, that making-sound-or-not state is coupled to the position of the helium atom) that unveils information about objects entangled with the helium.
Maybe this seems less confusing if you think about it in terms of progressively revealing clues instead of causing behavior? But, again, the English descriptions are never going to exactly match the math.
After no more than three pages of Philip Kitcher’s Life After Faith, a sentence gave me pause. “Secular humanism begins, after all, with doubt.”
I had never heard the phrase “secular humanism” before arriving at college. The first time was two months into fall quarter my freshman year, sitting in the dining hall near the science and engineering building. I was eating dinner with Ravi, my brilliant friend…
(To be more precise, my only friend. I was perhaps over-shy. My first three weeks of college, I did not speak. I was enrolled in mostly lecture courses, so no need to fear being called on and having to answer. And I didn’t realize this was a problem until one afternoon the cafeteria card swipe woman asked how I was doing, and I opened my mouth but could not answer. My voice would not work. The next day, resolving to make a change, I waited after completing a quiz to turn it in at the same time as someone who looked nice, and walked out of the classroom with her, attempting to have a conversation. That was when Ravi barreled into me, announced we were in the same organic chemistry class, asked me for advice as to where to take someone on a date in town, told me he liked my shoelaces [pink], and said I was meeting him at the gym later that afternoon. The woman I’d waylaid excused herself in terror, and she and I never did become friends. But I had Ravi instead.)
…and a woman he liked. We were talking about whether or not one ought to eat meat. The wooed woman (who would later become my roommate, for the final two years of college) said something, and my reply, which I no longer remember, made it obvious that I’d assumed she was Christian.
I was from Indiana! In Indiana, that was always a safe assumption.
Ravi gently informed me that she was a secular humanist. And so was he, he said. And wasn’t I, he asked. I was nonplussed. “No,” I told him, “I’m an atheist.” At which point he laughed at me. Which I suppose was fair. Shouldn’t students at that sort of fancy college be expected to know the word secular?
But I didn’t. I didn’t grow up with doubt. I didn’t grow up with a contrast between secular and sacred. My mother even took my sister and me to church every year on Christmas Eve, but that was just a short bearded man reading stories in a big room. Teachers read stories to my classmates and me every day at school. And my mother never told me, this is different, so I never realized that, for other people, it was.
One of my elementary school teachers said that we could bring in our favorite books and she would read them to the class. I brought The Burro Benedicto. I no longer remember the story, but I still remember loving it. My teacher told me quietly that she couldn’t read it. God was a character. I thought that if she didn’t like that, she could use a different name. I believe that’s what I suggested, but she told me no and I took the book back home, still not understanding; plenty of the other books read to us had magical characters, and to me it was just a story.
As far as I can recall, most of my education progressed similarly. I was generally oblivious, and surrounded by Christians, and was eventually informed by one of them that I was an atheist. But all the while, it never felt like doubt. Why would it? I always loved reading, but since my parents had started me on diary writing well before I could actually write (glancing back at the diaries, it seems that I would scribble and then my parents would translate, asking me what I’d written and jotting the words more legibly beneath. Unfortunately, numerous entries consist of full-page, looping scribbles that were translated as “I had a bad day”), I always had a sense of words as created things. Math allows you to find something that pre-exists you in the world, I believed. In math, one discovers truths. Whereas language allows you to create sentences.
I assume that dichotomy of belief on my part is why Borges’s “Library of Babel” (see illustration below!) so thoroughly wrecked me when I first read it. And it still has me pinned! Yes, obviously I know the numbers involved are so big as to dwarf our to-date consumption of any language, dwarf it into a rounding error away from zero, but it hurt to realize that words possess less magic than I thought they did. It is implausible but not impossible that a computer program could one day achieve linguistic beauty by eliminating one by one inharmonious strings. “Writing” by deletion, chiseling away from the ridiculously large yet finite set of options rather than building up from undifferentiated clay.
So I suppose, despite my atheism, that I was not particularly secular. I slept with what I felt to be a magic rock. I still do, in fact. The same rock. About the size of my palm. Light green, found on a beach in Michigan when I was four, carried by me, despite my father’s prediction of failure, up a precarious sand dune. In slumber, it nestles against my belly, and my hope is that it brings me luck.
Some nights before she falls asleep my daughter will grab the rock and giggle. Each evening, it is cool to the touch. And by now very very smooth.
My wife, also a secular humanist, has since informed me that I am not an atheist. She refers to, among other things, the rock. An atheist, she feels, would not believe in luck. Sometimes she clarifies: well, an atheist could, but not a secularist. Because the point of secularism is that you don’t believe in the supernatural.
I fail to see how luck is supernatural. Or rather, I can imagine one worldview in which luck is supernatural, but within that framework, free will is supernatural too. Because I believe in free will, I have no qualms extending the same latitude to luck. Both involve a belief that the immaterial (consciousness, karma, what have you) might somehow influence — even slightly — the material world. And neither is strictly precluded by any findings from science; finite samplings from a truly random process could always be postulated to reflect a hidden driver. Likewise, finite samplings from a wide variety of near-random drivers could be presumed causeless.
If all existing evidence gives you a choice to either believe in free will or not, I don’t see why anyone would choose not to. Yes, it’s quasi-mystical. But so what? It contradicts no scientific principles, and, if you’re anything like me, it’ll make you feel better about the world.
Believing there is a me helps me be good.
And, for me, that is where secular humanism begins. Not with doubt, but with belief. I believe that I have a choice. And I know that my presence comes at a cost to the world. To the universe. This physical meat shell that encases the consciousness I call me is a very orderly structure. But the second law of thermodynamics states…
(I like thermodynamics, but I’ve always been thrown off by the word “law” as used to describe its principles. You can test the second one with a simple thought experiment, for instance, that’s led me to both not believe in it and wish it were phrased differently. My preferred phrasing would be this: “One over infinity is effectively zero.” And the thought experiment is, imagine there are several gas molecules bouncing around inside a box. Seemingly at random. But if they ever bounce off the walls and all align in the same direction, they will do work. The second law is invalidated. For two molecules, this has a small chance. If we assign a vector for the velocity of the first, then the chance the vector for the second will align is one over the number of possible options. How many directions could the second molecule fly? Well, if reality is coarse-grained, many. If it is not, infinitely many. The probability of alignment is clearly very small. But it isn’t quite zero. With more molecules, the chance of alignment becomes exponentially smaller, but still: not quite zero.)
…that the entropy of the universe is always increasing. Simply by existing, I limit the order possible for the rest of our universe. Once I die and decompose, some of my order will be freed for the rest of the universe to use. And, respiring, I use up gaseous oxygen. As a heterotroph, I have to kill to eat (well, maim, at least). I take up space. Worse, I like to post essays on the internet, which uses a lot of electricity (look up the energy consumption numbers for a few servers sometime, if you want), and we’re still acquiring most of that non-renewably.
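The parenthetical thought experiment above is easy to put into numbers. Assuming, as the thought experiment does, a coarse-grained world with some finite number of possible directions and independent, uniformly random velocities (both assumptions mine, to keep the arithmetic simple), the chance that n molecules all align is (1/k) raised to the (n − 1) power:

```python
def alignment_probability(n_molecules: int, n_directions: int) -> float:
    """Chance that every molecule's velocity vector points the same
    way, assuming independent, uniformly random velocities drawn from
    a finite (coarse-grained) set of directions. The first molecule
    sets the direction; each additional molecule must match it."""
    return (1.0 / n_directions) ** (n_molecules - 1)

# Even a crudely coarse-grained world (only 100 directions) and tiny "gases":
print(alignment_probability(2, 100))    # 0.01: small, but decidedly nonzero
print(alignment_probability(10, 100))   # about 1e-18
print(alignment_probability(100, 100))  # about 1e-198: effectively zero
```

With a realistic mole of gas, roughly 6 × 10²³ molecules, the exponent becomes so enormous that the probability is “one over infinity” for all practical purposes, which is exactly why the second “law” holds statistically rather than absolutely.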
Given that I believe I have free will, and that I cause harm by existing, but would like to continue existing, how can I do enough good to counterbalance that harm? To me, that is the root of secular humanism. Not doubt.
Still, I’m looking forward to reading the rest of Kitcher’s book (as in, beyond page three).