On free will.

I like thinking about free will.  Talking about it.  Writing about it.

Even though it’s a waste of time.  And really kills parties.  Try it sometime, if you don’t believe me… wait until you’re hanging out with some people, having a great time, talking, laughing, and then try to mire everybody in a pedantic discussion about whether they have free will.  That good time will dry up fast.

But in some ways, I have to think about it.  Because I’m working on a retelling of the Ramayana, and that story is in many ways a story about fate.  About destiny and how the world turns out the way it does.  You can see hints of this throughout – Vishnu is born into the world specifically to stop Ravana, after which a long series of coincidences leads him to marry Sita, be exiled, have Sita be abducted, find Ravana’s kingdom… and Rama’s helpers along the way string him along, give him only meager hints about where Ravana could be found, etc., even though they clearly should know, due to things like the one-time alliance between Ravana and the king of monkeys…

How convenient, that the myth I’m writing a retelling of is also a venue for discussing this pedantic topic that I already liked talking about.  Almost as though it were destiny… except, wait, I chose to work with the Ramayana.

And, don’t get me wrong… I *realize* there’s no reason to debate free will.  Because I feel like I have free will – look, I’m typing whatever comes into my head, or if I wanted to I could purposefully type some gobbledygook – and you, reading this, probably feel like you have free will also.  And even if we don’t have free will – and, despite everything, I still believe we do – there’s only one experiment I can imagine that’d convince me that we didn’t, and that experiment is probably impossible.

I’ll describe that one in a moment, although honestly I’m not sure the word “experiment” is appropriate for a hypothetical description of something that can’t be done, but first I’ll mention some of the experiments (feasible, conducted, & published) that people think address whether we have free will.  Libet et al. recorded electroencephalograms while letting people choose when to move their fingers – the subjects were supposed to watch a clock, too, and report when they decided to move, and the time of that decision was compared to the time at which the electroencephalograms could be used to predict incipient movement.  The predictions could be made prior to the reported times.  And a similar conclusion was drawn from an experiment that took away the subject’s need to watch a clock.  Matsuhashi & Hallett had subjects make movements whenever they wanted, unless they heard an external – randomly timed – stop command at a time when they were thinking about moving.  And there is, of course, a gap time – a small window in which the stop command is issued but the subject still performs a motion, because they don’t have time to consciously inhibit the motion.

But I don’t really care, honestly.  Yes, the brain has many parts, and consciousness is fairly late in a pathway of thought – we layer stories over our world to explain it to ourselves.  But even if decisions are made first, and then our conscious mind explains why we made those decisions, I don’t think that’s any strike against our freedom in choosing those decisions.  I take responsibility for the thoughts and actions compelled by activity inside my own cranium, whether I’ve already explained to myself why I’m doing something or not.

Incidentally, and as a bonus curiosity, apparently people who believe less strongly in free will seem to be less inclined toward retributive punishment of wrongdoers (see here and here).  Which I suppose is reasonable if you think “people don’t have free will because everything is random” but does not seem reasonable if you think “people don’t have free will because we’re automata interacting with our environment.”  Although I can see why people might be lenient even with the latter belief – if someone wasn’t in control, why punish that person?

This does, of course, presume that you, an actor determining whether and how to punish, *do* have a choice.  Because, again, if *you* also are entirely the product of mindless physics, then anything I type is irrelevant.  Or, rather, I’m bound to type it, and you’re bound to read it or not, and the universe will evolve through time as it was meant to do.

But, setting that aside, in the automaton conception of no-free-will, we would believe the criminal’s actions were the product of its environment.  And part of that environment is whether or not you are going to punish – so by being likely to punish, you reduce crime whether or not the criminal had a choice, simply by tilting the payoff landscape that the automaton is operating within.

This just happens to break down if you believe everything is utterly random.  But I’d like to think that coherent use of language contradicts total randomness, and, well, partial randomness is something we might have to live with.  Maybe we should all play some electric football and have a laugh.

But, right, the experiment I do think would convince someone that they don’t have free will is this: they go into a laboratory and have their brain scanned to whatever level of detail is necessary to determine its current state.  So, right, this is why I don’t think the word “experiment” is entirely appropriate, because I think the level of detail you’d need is fairly precise positioning of all the atoms, and hopefully you’re doing this experiment in a way that doesn’t destroy brain tissue, and you’re doing the scan quickly so that the state doesn’t change mid-scan… not feasible now, and probably not feasible ever, since atoms don’t even *have* precise positions, and even if they did, ascertaining what those positions were would involve imparting quite a bit of energy.

No matter!

We determine the state and leave the subject in a room where a pre-recorded voice is emitted from a speaker (it needs to be pre-recorded because, if you’re assessing whether one subject has free will, you want them to interact only with inanimate objects that don’t have free will during the “experiment”).  The voice says “sit at the table and write an essay about XXX.  Use the back side of this sheet of paper,” and a sheet is extruded from a machine.  There’s a pencil sitting next to it.  After a set time elapses, the speaker says “please turn over your sheet of paper.”

If the essay printed on the back side of the sheet – produced by the computer after your brain scan was completed – is identical to what you wrote, you don’t have free will.

But there’s another way you might not have free will, other than a deterministic way.  Because maybe one neuron in your brain will either fire or not based on binding of neurotransmitters, and the concentration of that transmitter is exactly equal to its Kd.  You still don’t have a choice, but there’s a 50% chance you’ll do one thing, 50% chance you’ll do something else.  Then the machine would have to print two essays and say there was a 50% chance of each one matching your essay.  Or if the neurotransmitter concentration were slightly above the Kd, maybe there’s an X% chance of one of the essays, with X slightly higher than 50, and a (100-X)% chance of the other.
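(If you want the arithmetic behind that single-neuron coin flip: with a simple one-site binding model, occupancy is [L] / ([L] + Kd), which is exactly 1/2 when [L] = Kd and creeps above 1/2 as the concentration rises.  Here’s a little sketch of that, with the big caveat that “firing probability equals occupancy” is my own cartoon shortcut, not a claim about real neurons.)

```python
# Toy numbers for the single-neuron case: one-site binding (Langmuir isotherm),
# occupancy = [L] / ([L] + Kd).  Treating "P(fire)" as equal to occupancy is a
# cartoon assumption, just to turn concentrations into the 50/50-ish odds above.

def occupancy(concentration, kd):
    """Fraction of time the binding site is occupied at this ligand concentration."""
    return concentration / (concentration + kd)

kd = 1.0  # arbitrary units; only the ratio [L]/Kd matters here
for ratio in (1.0, 1.1, 2.0):
    p_fire = occupancy(ratio * kd, kd)
    print(f"[L] = {ratio:.1f} x Kd  ->  P(fire) = {p_fire:.3f}, P(silent) = {1 - p_fire:.3f}")

# [L] = 1.0 x Kd  ->  P(fire) = 0.500, P(silent) = 0.500
# [L] = 1.1 x Kd  ->  P(fire) = 0.524, P(silent) = 0.476
# [L] = 2.0 x Kd  ->  P(fire) = 0.667, P(silent) = 0.333
```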

Or if there are many neurons that might fire or not, then there would be many essays, and a % chance for each to have been produced.  But if there are enough, then even if the computer is entirely accurate, and predicted correctly that there was a 0.003% chance of *this* essay being written by me (and, incidentally, lesser or equal probabilities of any other essay), I don’t think I’d be convinced.  Because I *did* write this essay, so to me, after having the rationalization layered over by my consciousness, I might well think it was inevitable, and that teensy probability assigned by the computer would make me think it was just guessing.  Even though it wasn’t.  It was accurately assessing every possible interaction that could happen inside my brain between the scan and the time the voice told me to flip the sheet.
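(And for a sense of where a number like 0.003% could come from: if the essay hinged on, say, fifteen independent fifty-fifty neurons, each particular firing pattern – and so each particular essay – would get a probability of about 2^-15, which is roughly 0.003%.  The sketch below is just that arithmetic; the fifteen-coin-flip brain is obviously a cartoon of my own, not anyone’s model.)

```python
# Cartoon arithmetic: n independent 50/50 "neurons" give 2**n equally likely
# firing patterns, so any single predicted essay gets probability 2**-n.

for n in (1, 5, 10, 15, 20):
    outcomes = 2 ** n
    p = 0.5 ** n
    print(f"{n:2d} coin-flip neurons -> {outcomes:>9,} possible essays, each at {p:.6%}")

# At n = 15 each individual essay is already down around 0.003%, even though
# the full list of essays (with their probabilities) is, in this cartoon, exact.
```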

Although it’s strange to me that Scott Aaronson does not agree that this experiment would convince people that they don’t have free will.  As in, even the first deterministic outcome.  He wrote: “However, I’m aware that many people sharply reject the idea that unpredictability is a necessary condition for free will.  Even if a computer in another room perfectly predicted all of their actions, days in advance, these people would still call their actions “free,” so long as “they themselves chose” the actions that the computer also predicted for them.”

I don’t think a reasonable argument could be made for having free will if your actions were entirely predictable.  Because, yes, you can always make up a story for why something happened, but if your actions are predictable, the real reason is “the flow of salts inside your skull dictated it, and any mind starting from the same initial state as yours would’ve done the same.”  No different from making up whatever mythological stories you want to explain why lightning happens – Zeus is angry! – even though the underlying physics behind the phenomenon is unaltered by your storytelling.

And, right, I should point out that it’s not that Aaronson doesn’t agree with this claim… he just seems, to me, overly pessimistic about whether *other people* would abandon their belief in free will if they were shown to be entirely predictable.  To quote Aaronson again: “Just as displaying intelligent behavior (by passing the Turing Test or some other means) might be thought a necessary condition for consciousness if not a sufficient one, so I tend to see Knightian unpredictability as a necessary condition for free will.”

(The phrases “Knightian unpredictability” / “Knightian uncertainty” will show up a few more times when I quote Aaronson.  I’m writing a crappy essay for a personal website, not a scientific paper, so I get to be sloppier and just say “random.”  But what he means is that something’s random *and* no one could reasonably assign a probability to the outcomes.)

Personally, I think that a belief in free will has to include believing that consciousness somehow influences quantum collapse.  Which sounds somewhat unreasonable, almost mystical… I willed the sodium into being here, where it could traverse the channel, instead of over there!… but otherwise you’re stuck with either deterministic classical mechanics or entirely random, uncontrollable quantum mechanics.  Neither of which I think is compatible with free will.

Of course, in Max Tegmark’s book “Our Mathematical Universe” he argues that quantum mechanics is not involved with free will.  But I think he addresses the wrong concept in his argument.  Because he is writing about whether or not it makes sense to consider the brain as a quantum computer – a neuron or set of neurons as a whole being simultaneously in a state of having fired and having not fired – and then, because decoherence would happen too quickly, claims that the system does not incorporate quantum effects.  But this claim doesn’t seem reasonable – for instance: in a two-slit experiment with electrons, the system obviously decoheres when the electrons hit the detector, but that doesn’t retroactively eliminate the prior contribution of quantum mechanical uncertainty.

Of course, despite my qualms with the claims made in Tegmark’s book – which in his defense was written for a general audience, so probably required occasional glossing over of details – and in the abstract of his decoherence paper, in the text of that paper he does describe matters in a way I’m fairly happy with.  For instance he specifically addresses the fact that there is still room for quantum mechanics in neural processes even if all his assumptions are correct:

“The only remnant from quantum mechanics is the apparent randomness that we subjectively perceive every time the subject system evolves into a superposition, but this can be simply modeled by including a random number generator in the simulation.  In other words, the recipe used to prescribe when a given neuron should fire and how synaptic coupling strengths should be updated may have to involve some classical randomness to correctly mimic the behavior of the brain.”

And, to get back to Aaronson’s paper – I think he might scoff at the idea that I think free will necessitates influencing quantum collapse.  To quote him again, “I don’t think quantum mechanics, or anything else, lets us “bend the universe to our will,” except through interacting with our external environments in the ordinary causal ways.”  Which seems reasonable… commendable, almost, to believe, because I mentioned that the idea that a mind could influence a particle’s position seems mystical, and as a scientist, I feel squeamish believing in something mystical.

But in my opinion, Aaronson doesn’t manage to avoid believing in concepts that sound mystical either.  First, I should quote his passage that dispenses with what I believe to be the most reasonable source of free will:

“The second proposition is that, in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices.  That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe).  In classical physics, there’s no known fundamental principle that prevents a predictor from learning the relevant initial conditions to whatever precision it likes, without disturbing the system to be predicted.  But in quantum mechanics there is such a principle, namely the uncertainty principle (or from a more “modern” standpoint, the No-Cloning Theorem).  It’s crucial to understand that this source of uncertainty is separate from the randomness of quantum measurement outcomes: the latter is much more often invoked in free-will speculations, but in my opinion it shouldn’t be.  If we know a system’s quantum state ρ, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system.  But if we don’t know the state ρ, then ρ itself can be thought of as subject to Knightian uncertainty.”

He thinks that wavefunction collapse itself would not be an appropriate medium for free will, since it is probabilistic and therefore predictable.  And, look, that’s totally reasonable.  I really like his essay, which is part of why I’ve put off writing my own for a while – I read his shortly after it was released, and figured I should look it over again in order to do it justice for this.  But I think that probability is itself a strange concept as regards free will, because any argument that could be made as to whether or not someone has free will would be made after the fact – you did that because XXXX – rather than a prior Oedipus-style prognostication – you *will* do YYYY – since the latter can be evaded by an informed actor.  And after the fact, it’s weird assigning probabilities to actions.  One thing *did* happen, other things did not.  So, yes, maybe quantum mechanics would assign a 50% probability to either outcome of the wavefunction collapse.  But, honestly, I think that a strict belief in quantum mechanics as currently understood would mean that humans *do not* have free will.  Including me.  And since I’d rather believe that I have free will, since I foolishly believe that interpreting my actions as my own choices makes me behave better, I slap that mystical interpretation onto wavefunction collapse with the caveat that I’m only influencing matter inside the confines of my skull.
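(A toy calculation makes Aaronson’s distinction concrete: once you know ρ, the outcome statistics are fixed by the Born rule, so the “randomness” is entirely tame; the Knightian part only enters if ρ itself can’t be learned.  Below is a minimal sketch for a single qubit; the particular state and measurement are placeholders I picked, not anything from his paper.)

```python
import numpy as np

# A single-qubit density matrix rho -- here the |+> state, chosen arbitrarily.
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho = plus @ plus.conj().T

# Projectors for a measurement in the computational basis {|0>, |1>}.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])

# Born rule: with rho in hand, every outcome probability is just tr(rho P).
p0 = np.trace(rho @ P0).real
p1 = np.trace(rho @ P1).real
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50 and 0.50 for |+>

# The point of the quoted passage: this calculation needs rho as an input.
# If rho itself can't be learned (no-cloning, uncertainty principle), there is
# nothing to plug in -- that's the Knightian part, not the 50/50 split above.
```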

Which, right, that’s pretty arbitrary, right?  Thinking that I can influence something, and then drawing a bubble around the sphere of matter I can influence.  But I already feel bad about the little wedge of mysticism that I allow myself, so I have no great desire to believe in a Star Wars style force that I might use to jump a lightsaber to my hand (or, in situations more relevant to my current life, jump a book into my hand while holding a sleeping baby and trying not to wake her).

But to me, the workaround that Aaronson suggests – his idea of “freebits,” uncertainties about the state of the universe at conception – seems equally mystical.  Because, sure, I think there are features about our universe that we won’t ever know.  But the idea of consciously *using* those features – I fail to see how that’s any less mystical than consciously influencing which of two outcomes a wavefunction collapses into.

Unless, of course, Aaronson doesn’t believe in free will.  Only that our choices are, and will always be, immune to outside prediction.  And that’s fine… that’s probably what I *should* believe also, given the rest of my beliefs.  But I don’t.  I think I can lift my arm whenever I want, I lift it, I convince myself I have free will… and then slap some mystical gibberish onto my understanding of science in order to accommodate that belief.  Is that so wrong?

And there are, of course, problems with my belief.  Like, I began as a zygote.  When, exactly, do I think an *I* arose who started exerting control?  And how many other objects (people, animals, plants, computers, rocks) should I believe have similar control?

Which, again, would lead to a lot of ridiculous-sounding mystical claims.  And I suppose I have to be fine with that.  Because, well, gosh darn it, I *like* believing in free will.  So I’m going to do it.  Scientifically ridiculous or not.

And, right, if you like thinking about these issues, you should read Aaronson’s excellent free-will paper.  Almost all of it is written at an appropriate level for a general audience, and it’s the best treatment of these ideas that I’ve read.  Yes, I disagree with his final conclusions, but only because I don’t really think his “freebit” idea qualifies as science (although it’s certainly more scientific than my belief, and he does address falsifiability) and because I have a preconceived notion of free will that, despite everything, I’m unwilling to give up.