Mystery Meat
…being a term generally applied to the entree at South Residence back at the University of Guelph. This time, though, I’m talking about the mysteries of thinking meat, and a couple of thoughts thereupon which have sprung up elsewhere since I got back.
The first is over at ChiZine‘s blog, which is currently running a series of guest posts as a lead-in to their upcoming SpecFic Colloquium on October 15th. The overall theme for the day is “Modern Mythologies”, but recoiling from the faint stench of anything Classical I have opted to twist it around to suit my own interests. You can see what I’m on about over here; it won’t come as much of a surprise to anyone who knows my stuff. (I should probably get started on that talk, now that I think of it…)
The second is an installment of “Too Hard for Science”, another continuing series of blog postings over at Scientific American. Blog-wrangler Charles Choi grills a variety of luminaries (and me) for their wish-list of cool scientific questions they’d love to see explored, but which are currently out of reach for reasons of technology, ethics, or the lack of a particle accelerator the size of the Andromeda galaxy. Given my interest in consciousness issues, it was for me an easy call: I advocated a breeding program for conjoined twins fused at the head. And to give cqchoi his due, he actually included the bit where I insisted there couldn’t be any kind of down side…
Sounds like it will be good if that ChiZine bit was a preview. As usual, your words are very hard hitting.
It’s funny, because I became an atheist about two years ago after the deaths of two close family members. I was never really religious, but it did take me the longest time to really deal with things. I’ve always been an analytical type, wanting to know how things work. If things don’t add up, then I can’t deal with giving in to blind faith. It’s difficult missing them, and occasionally still hoping the impossible isn’t so impossible.
I know this is probably odd to bring up, but the subject matter of the article I read reminded me of it. Sorry if it’s a bit on the sad/personal side.
Makes me think too about how our brains perceive time. We can think intellectually of eternity or an end to time, but we can’t really comprehend it fully. Maybe that keeps us from going insane, who knows.
“Watts is joking about experimenting on unborn children. — CQC.” Good to know, that 😉
I’ve read an interesting book that sheds some light on the development of the self-conscious thinker. Namely “The Ego Tunnel” by Thomas Metzinger. While he grounds his work in experimental data, in my opinion he manages to present a fresh approach.
Would love to talk about it, maybe we will have an opportunity in Lublin when you visit Poland for Falkon.
I tried to comment over at the ChiZine site but it didn’t seem to work. With regard to “I have no more influence over your actions than you do”: I would say instead that we are nothing but constellations of influences believing that we are influence-free intentional selves.
While I’m in no shape mentally to comment on this post’s content, I love that its headline sits right below this Onion-Like Headlines post in my Google Reader: http://onionlike.tumblr.com/post/10487506959/peta-plans-porn-website-to-promote-message
Made my day (-:
South Residence? That’s what we called it at Creelman.
Is there really a difference between having an intentional self and believing you have one? Certainly the one whose self it would be, if it were there, can’t tell the difference, and I’m not convinced anyone else can, either.
I think there is a difference. You can apply the right trickle of juice to the skull and not only will your subject move on command, she will swear up and down that she intended to move that way, even though it’s obvious that she’s been puppeted. When an indisputably involuntary act is accompanied by a strong subjective sense of agency, I gotta say the belief in an intentional self is worlds apart from the actual item.
Showing that the intentional self can be fooled is not the same as showing it doesn’t exist. To touch on another of your myths, having a hole in your visual field is not the same as being blind.
Aren’t scientists supposed to be cautious about generalizing from experimental results?
I think you confuse latency (delay) and bandwidth (data rate) a little bit in the SciAm article, but that points to some interesting questions / experiments.
Let’s say you replace the corpus callosum with a controllable proxy, with two dials on it, one to change rate, one to change bandwidth. Is integrated consciousness a function of latency or bandwidth or some combination? Could you separate someone’s hemispheres and move them miles apart using electronic links? Can decreasing the latency somehow change consciousness?
argh, should have been ” two dials, one to change **latency**, one to change bandwidth”
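To make the dials concrete, here’s a toy sketch (names and numbers invented; this is traffic-engineering caricature, not neuroscience). Latency adds a fixed delay to every message, while bandwidth limits how fast a message drains onto the link, so the same burst of callosal traffic arrives on very different schedules depending on which dial you turn:

import heapq

class CallosalProxy:
    """Hypothetical corpus-callosum replacement with two dials."""
    def __init__(self, latency_s, bandwidth_bps):
        self.latency_s = latency_s          # dial 1: one-way delay
        self.bandwidth_bps = bandwidth_bps  # dial 2: data rate
        self._queue = []                    # (arrival_time, payload_bits)
        self._line_free_at = 0.0            # when the link finishes the previous message

    def send(self, now_s, payload_bits):
        # Bandwidth throttles how fast the bits get onto the link; latency is added on top.
        start = max(now_s, self._line_free_at)
        self._line_free_at = start + payload_bits / self.bandwidth_bps
        arrival = self._line_free_at + self.latency_s
        heapq.heappush(self._queue, (arrival, payload_bits))
        return arrival

    def deliver_up_to(self, now_s):
        # Hand the far hemisphere everything that has arrived by now_s.
        delivered = []
        while self._queue and self._queue[0][0] <= now_s:
            delivered.append(heapq.heappop(self._queue))
        return delivered

# Same 8000-bit message, two settings of the dials:
miles_apart = CallosalProxy(latency_s=0.5, bandwidth_bps=1e9)    # fat pipe, long haul
throttled   = CallosalProxy(latency_s=0.001, bandwidth_bps=1e3)  # adjacent, thin pipe
for proxy in (miles_apart, throttled):
    t = proxy.send(now_s=0.0, payload_bits=8000)
    print(f"latency={proxy.latency_s}s bandwidth={proxy.bandwidth_bps:.0e}bps -> arrives at t={t:.3f}s")

Whether integrated consciousness cares more about the half-second haul or the throttled pipe is exactly the experiment.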
@Doubter:
“To touch on another of your myths, having a hole in your visual field is not the same as being blind.”
Uh, yeah it is. That’s what a hole in your visual field *is*, by definition: that’s why they call it a “blind spot”. You don’t perceive it, but you don’t perceive the lack of sight out of the back of your head, either; it just doesn’t register cognitively.
“Showing that the intentional self can be fooled is not the same as showing it doesn’t exist.”
We haven’t shown that the intentional self can be fooled. We have shown that the self can be fooled into thinking that it is intentional. Whole different thing.
Look, what do we have to support the idea of free will? A subjective sense of agency. Once it’s been demonstrated that this sense of agency does *not* necessarily correlate with free will, you’ve lost your supporting evidence; the onus is now on the free-will lobby to come up with more, not on the rest of us to prove the negative. I can’t prove that a crew of hamsters partying it up in a teacup-shaped spaceship out around the Pleiades “doesn’t exist”, any more than I can prove that God “doesn’t exist”; but I’d still be a credulous idiot to accept either premise without some kind of evidence.
Really, you’d think that someone who called himself “Doubter” would know this stuff.
OK, the scientist crack was a bit snarky.
Hey, I’ve been known to be a bit of a grump myself on occasion.
“We haven’t shown that the intentional self can be fooled. We have shown that the self can be fooled into thinking that it is intentional. Whole different thing.”
I am not sure I understand why you are disagreeing.
What if the self is intentional and can also be fooled into a different feeling of intentionality?
That is why I agreed with the comment you are disagreeing with: because by analogy it is not the case that giving someone the equivalent of a phantom limb means that they do not have or have not had limbs.
I just don’t know how to falsify this. I don’t see how it is falsified by the “feeling”.
The original wording of the claim implies a priori that the self is intentional, and can be fooled. But the whole issue is whether true intentionality exists, so you can’t assume it up front.
That’s possible: now go out and find me some evidence to that effect.
This is a common (and, I think, erroneous) response to these kinds of studies. A is held up in evidence to support conclusion Z; A gets shot down; someone pipes up with “Well, B or C would also be consistent with Z”. But the fact that something is consistent with an idea does not mean that it is evidence in support of that idea. The existence of Portobello mushrooms is consistent with the existence of the Democratic Party, but it doesn’t prove the existence of said party.
In this case, the feeling of agency is pretty much all we’ve got in the “Evidence for” category; and we’ve got shitloads of objective data from Libet on up, not to mention philosophical arguments deriving from the assumption of a material universe, in the “Evidence against” box. Once that one feeble leg has been kicked out from under the Pro-will side, you have to come up with another leg to stand on. Simply suggesting another mechanism that might yet lead to the vindication of the disproved model is putting the cart before the horse; we should be basing our conclusions on the evidence, not starting out with the conclusions we want and looking for ways to prop them up.
Nihilism! Fuck me. I mean, say what you like about the tenets of the Tea Party, Watts, but at least it’s an ethos.
“That’s possible: now go out and find me some evidence to that effect.”
I have no clue how to do that, and the phantom limb analogy I made is very weak in that regard given that we can easily have evidence for limbs, so we can corroborate the subjective experience of having a limb with the evidence. (proprioception, visual perception, touch, etc., all of which can be fooled, but all of which we can measure when they are not)
Thank you for the excellent response which I will now mull over.
I have some related problems with respect to my atheism. I do not feel I can rightfully claim to be one, given that I have no way to falsify a claim depending on the particular claim, so wouldn’t I then be an epistemological agnostic?
to go off on a tangent: I do not like to tell people I am agnostic, since what I think it means is not the common parlance (people seem to think it is a “wait until you have the evidence to convince you” rather than “it is not possible to know”). And I think Ockham requires a leap of faith, but maybe I am flawed in that thinking. I haven’t figured it out.
(it becomes a rather silly and funny reification mistake to go off on a flight of fancy that the number of gods I do not believe in is an uncountable set given that I do not believe in each deity responsible for each number that belongs to the Reals.)
Here’s what Libet himself thought on this issue (from his paper “Do We Have Free Will?”):
“However, we must recognize that the almost universal experience that we can act with a free, independent choice provides a kind of prima facie evidence that conscious mental processes can causatively control some brain processes (Libet, 1994). As an experimental scientist, this creates more difficulty for a determinist than for a non-determinist option. The phenomenal fact is that most of us feel that we do have free will, at least for some of our actions and within certain limits that may be imposed by our brain’s status and by our environment. The intuitive feelings about the phenomenon of free will form a fundamental basis for views of our human nature, and great care should be taken not to believe allegedly scientific conclusions about them which actually depend upon hidden ad hoc assumptions. A theory that simply interprets the phenomenon of free will as illusory and denies the validity of this phenomenal fact is less attractive than a theory that accepts or accommodates the phenomenal fact.”
@rm3154: I would argue, but first I’d have to call up dictionary.com and find out what an ethos is. Sounds like a Star Trek alien.
@Sheila: there’s a famous line to the effect that “being an atheist is a belief system in the same way that not playing chess is a hobby”. Atheism is frequently defined as the belief that there are no gods: in fact, it is simply a lack of belief in gods. It’s a subtle but important distinction. “I don’t believe in God” is not the same as “I believe there is no God”, nor is it the same as “I don’t know if there is a God or not” (although granted, technically we have to admit that we don’t know anything beyond the fact of our own existence). Refusal to accept is not the same as active opposition.
This is also, IMO, why those who decry both atheists and believers as “equally fundamentalist because you can never really know” have their heads up their asses.
anyway, I wonder if one could temporarily cause a split-brain experience? because I would want to know whether half of my brain would be a theist, but I am not willing to have an accident where my brain is split in half.
@Sheila – If you look upthread, I think gawp is selling a kit on Etsy that would help.
@Doubter. heh.
I was thinking maybe an injection to paralyse part of the brain, but that would probably hit the other half too.
Peter mentioned an article the other day where social conformity was decreased when activity in (I forget which) part of the brain was inhibited with TMS. If they could take that method and inhibit an entire hemisphere of the brain, then you could go through a number of sessions where the non-verbal side is trained to communicate.
I’d be worried that temporarily shutting down half of my brain might destroy my identity. What if when the two halves are reintegrated the identity is different? and all those other questions that follow from the thought experiment.
recreational simulated brain surgery might be fun
@Peter: “and we’ve got shitloads of objective data from Libet on up, not to mention philosophical arguments deriving from the assumption of a material universe, in the “Evidence against” box.”
I don’t see that the evidence is against intentionality, just against a dualist explanation. So we know there’s no “special sauce” doing nonphysical things to give us free will, but that’s not evidence against emergent behavior which is, short of being able to look at the universe from outside and compute everything like Laplace’s Observer, effectively intentional.
I think there’s an analogy with muscle movements: muscles can be forced to move involuntarily by applying electric currents to them, that doesn’t prevent them from being controlled by the central nervous system, nor does it prove that the CNS control isn’t intentional at some level of organization.
Wait a darn minute.
If you tickle my brain, and it makes me move my hand, and I think I intended to move my hand, so what’s the diff? If free will corresponds to the electric and chemical state of the brain, then that is free will.
It makes no sense to claim that your perception that you have the free will to tickle my brain is true, but when my brain is tickled, that isn’t free will. At that point where you intended to tickle, and that tickle caused my hand to move, and I perceived it as my free will, our wills were one. Do I only have free will when my will is contrary to yours? How does that work?
In other words –
How in the world can we prove free will without some outside agent that has free will. It’s all a bit circular.
There are anesthetics which knock one half of the brain out a bit before the other one, due to blood circ. Also, I think you’d have a pretty hard time trying to rTMS an entire hemisphere without spillover. But would be fun! Also, the dolphins seem to manage ok with the whole defrag…
Re. brains and the tickling thereof, isn’t the obvious conclusion that the induced sense of intention is the mechanism by which the hijacking is accomplished?
I mean, strongarm enough neurons into accepting an instruction and you have – pretty much by definition – gained the assent of the conscious mind, by dint of it being a consequence of neuronal behavior. Distort one, and you will necessarily alter the other, right?
@Doubter:
I get the sense from that paper that Libet got backed into a corner by his own findings, and was flailing around in search of a way out. He accepts his own results — even recognizes that introducing randomness into the scenario wouldn’t “free up” the will in any real sense — but then resorts to strange little claims like:
Thus Libet tries to equate two perspectives here as mere “beliefs”, neither really better than the other. The problem being that one “belief” — that of a mechanistic universe — is borne out by the sum totality of all scientific study, while the other — that shit can just sometimes happen without definable cause, that the phenomenon of consciousness is somehow outside the universe as we understand it — has no evidence at all to support it. This strikes me as the same kind of logic that allows Christians to claim that evolution is all about faith because you can’t prove that God doesn’t exist. Libet also seems to use the term “phenomenal fact” to mean actual established objective fact, rather than merely “subjective experience”. He then cites this “fact” as evidence. I’m not convinced.
Seems to me the crux of the issue is pretty simple. Everything we know about neurons tells us that they don’t just pop off on their own: they only fire when stimulated by external inputs that push them past their firing threshold. Because neurons are reactive, the brains they comprise are also reactive: nothing pops unless poked. So all neural activity must ultimately get kick-started by signals received from outside the body. Since we cannot control those signals, we cannot control the chain of endogenous sparks that they initiate; I simply can’t see where free will could fit into such a system, physically.
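If you want that in cartoon form, here’s a leaky integrate-and-fire toy (the parameters are pulled out of the air, nothing like real physiology). The “neuron” is a pure function of its inputs: leave it unpoked and it sits silent forever.

def run_neuron(inputs_mV, threshold_mV=15.0, leak=0.9):
    # inputs_mV: synaptic drive arriving at each time step
    v = 0.0
    spikes = []
    for t, drive in enumerate(inputs_mV):
        v = v * leak + drive        # membrane potential decays unless poked
        if v >= threshold_mV:       # only crossing threshold produces a spike
            spikes.append(t)
            v = 0.0                 # reset after firing
    return spikes

print(run_neuron([0.0] * 50))                  # no input: no spikes, ever
print(run_neuron([1.0] * 50))                  # weak drive leaks away: still silent
print(run_neuron([0.0] * 10 + [8.0] * 40))     # strong sustained drive: spikes, but only after being poked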
You want to make me believe in free will? Show me neurons firing spontaneously, without being stimulated in any way. In that case, the dualists were right all along.
Man, what a great story premise that would be…
@Sheila:
You’re looking for Amobarbital; it’s used to anesthetize one hemisphere at a time, in the course of mapping out brain functions prior to surgery. And in some cases, the hemisphere that remains awake turns out to have an entirely different personality than the usually-dominant one even when the brain hasn’t been surgically split.
@ Speaker-to-Managers:
See my previous answer to Doubter. I’ve got no objection to emergent behaviors, and the unpredictability thereof. And of course we all feel a sense of intentionality: but given what we understand about how neurons work, I still can’t see how any physically-based behaviour — deterministic, emergent, random, or unpredictable though it may be — can be “free”.
Or am I missing your point?
I didn’t realize one could actually anesthetize one hemisphere at a time. I thought the blood would circulate through the brain.
Search pulls up “Wada test” in case people want to find out more.
I’m going to look for articles about personality change. Wikipedia says that it is rare. There is no citation.
I’m not going to look too hard, since I need to be working on other stuff right now, but this might be an interesting case.
The association of multiple personality and temporolimbic epilepsy. Intracarotid amobarbital test observations.
The abstract claims that they were able to replicate the seizure related personality changes.
*sigh*…all I have to say, dude…you rock. You continue to rock.
@Łukasz …thanks for the mention of “The Ego Tunnel”! I have Being No One (Metzinger’s earlier academic work on the same subject), and it’s quite a slog. Just Kindled the heck out of it!
And thanks to our esteemed host for recommending both Metzinger and Wegner (among others) in the excellent afternotes in Blindsight (digital version only, IIRC)
Actually, the latest Scenes from a Multiverse sums up the whole free-will-could-work-this-way perspective way better than I ever could…
@ Peter:
I think we’re not really in agreement on what could constitute “free will”. I think you want something to happen without a deterministic cause at the neurological level before you’ll call it “free”. I agree that’s not going to happen, but I don’t think that precludes “free will” because that’s not where I’m looking for it. If you deny the possibility of dualism (and I do), then the kind of “free will” you’re talking about can’t exist in a deterministic universe because everything is based on physics; even in a quantum non-deterministic universe any freedom is simply a lack of predictability at the micro-event level, which says nothing about the nature of human will, since you won’t find anything like a human without looking upwards many levels of organisation.
On the other hand, if we’re looking at a level where biological organisms function, especially humans, it’s not so simple. Yes, nervous systems are inherently reactive; for that matter, reactivity is one major characteristic of the definition of life. But I think it’s the interaction of systems, some programmed primarily by genetics, some primarily by individual history, and some by the immediate environment, which determines behavior in complex organisms. In humans the effects of all of those factors are so complex that I claim it’s always going to be computationally intractable to predict them in detail, no matter how good your models are. And at some point, when dealing with systems so complex that you don’t know for sure how they’ll react to any given situation, it becomes useful to abandon attempts to describe the behavior deterministically and instead adopt the intentional stance. At this point using an intentional model is more effective than a non-intentional one; whether a Supreme Being with infinite observational and computational resources could tell the difference is irrelevant, since no such being exists (or can exist, in my view). And I don’t think any lesser being can be expected to tell the difference in all cases*.
* Though that raises an interesting question: would a much more intelligent being than ourselves be able to deduce enough of our inner workings simply by observation that we would seem like automatons to that being? And given that these sorts of computations probably get harder exponentially with increasing complexity of the being under examination, is there some practical limit beyond which any observer, no matter how intelligent, would prefer the intentional stance? I think that we are already beyond that point in some respects, but not in others.
(if someone comes along after a new comic gets posted, the one he’s talking about is this one
) I couldn’t stand it. I had to close the paren.
the kind of “free will” you’re talking about can’t exist in a deterministic universe because everything is based on physics; even in a quantum non-deterministic universe any freedom is simply a lack of predictability at the micro-event level, which says nothing about the nature of human will, since you won’t find anything like a human without looking upwards many levels of organisation.
That.
Also, if you think you have free will, you do, because it’s a sensation/experience, and because until Philip K. Dick’s empathy machine gets built, you can never really know the experiences of another. Ever.
Also, again, using one organism with free will to puppetmaster another to show the second has no free will is petitio principii. No offense to Dr. Penfield and his electrical probes.
Also, is there fertile new ground that needs plowing on this free will discussion?
@Speaker:
If I’m reading this right, you’re saying that deterministic-and-thus-constrained behaviors in living physical systems can be unpredictably complex, only you’re saying it twice, and to slightly different ends. In your first paragraph you admit “the kind of “free will” you’re talking about can’t exist in a deterministic universe because everything is based on physics”. But then in your second paragraph you basically restate that with the chaser that we might as well regard such systems as intentional anyway, since their unpredictability means we wouldn’t be able to tell the difference (correct me if I’m reading this wrong).
Unpredictability here isn’t an issue in terms of free will: dice rolls aren’t especially complex but they are largely unpredictable, and nobody attributes free will to them. As I see it the question of unpredictability is irrelevant; the real question is one of culpability. Can we be held accountable for our own actions? Do we have any choice as to what we do, whether those choices can be modelled by god machines or not?
Or to put it another way: regarding ourselves as physical systems, if it were possible to replicate exactly the combination of internal and external states we experienced at a given decision point, would we be able to make a different decision? If, as we both seem to agree, we are deterministic systems, then the only way to answer yes to that question would be via the introduction of some random element into the mix; and since randomness by definition is also beyond volitional control, in neither case could you say that we were actually in control of the decision. And that’s a pretty important conclusion whether a third party could predict our behavior or not.
@ Hljóðlegur:
“using one organism with free will to puppetmaster another to show the second has no free will is petitio principii.”
Once again you presuppose that the first organism has free will to begin with. You cannot prove a point by assuming it as a starting condition. Go and look at that holographic-raisin installment of “Scenes From A Multiverse” again. Do not pass Go.
@Peter: You cannot prove a point by assuming it as a starting condition.
Yes, we agree, petitio principii is bad form! And I can agree to “raisins/pluralitas non est ponenda” so long as we agree to disagree about what constitutes “sine necessitate.” Given we mostly agree, I may have expressed myself badly; let me try again.
What we are not in accord on is the efficacy of an experiment to test free will, because I can’t see a way to test without the assumption of free will on someone’s part.
Free will is either A) the exercise of at least two choices between behaviors or B) the experiential sensation that A is occurring or has occurred.
and
C) We have to allow for the possibility that B can occur without A.
So can someone walk me through Dr. Penfield electrically stimulating someone’s motor cortex so their arm moves, the patient thinking they willed their arm to move, and this proving that free will does not exist, but without petitio principii.
In this scene, Penfield and the patient can both feel they have free will, but be incorrect (C). But how do we distinguish that from the scene where the exact same thing appears to happen, but they both really are exercising free will – they both meet (B), and might meet (A), but how to devise a discriminator?
I’m stuck even at what the null hypothesis is here, because if I select “no free will” I must make a choice, using my free will. We have granted me free will a priori to select one or the other as the null. Do you all see the tar baby this is? The experimenter, even for gedanken experiments, is, I don’t know, entangled in the experiment. Is that too physics-y?
A major component of experimentation is to consciously change the conditions, ie, to select between actions. If I am the experimenter with no free will, there is only one way to go at every juncture and I haven’t performed an experiment, not really.
This is directed at the group, too, btw. I want to untangle this – if someone can walk me through the Penfield idea so it makes sense, I’d be delighted.
@Hljóðlegur: there is nothing to untangle. We are playing an unproductive word game. There is a similar circle if we debate objective vs subjective nature of reality.
I am an objectivist. Reality is real. However, all of my experience comes to me through my senses. I am not aware of external reality, just what my sensory and interpretive apparatus tells me of external reality. I am now a narcissistic subjectivist. Y’all may not even be there for all I know. However, my very framework for deciding that nothing is real presupposes the existence of some things, like my senses, external stimuli, the whole perceptual apparatus actually. So if I grant that these things exist, I must be an objectivist after all. However, how do I know that my sensory apparatus exists? All of my knowledge of my senses comes to me from my senses….
and round and round we go.
Free will vs determinism is another charged couple. Round and round we go.
@Hljóðlegur: W.T.F.
A) BASIC’s IF/THEN/ELSE command can choose between more than two options. You are saying that my eighties-era Commodore-64 had free will.
B) So “feeling” that something has occurred is tantamount to the reality of the occurrence. Which means that my dreams are real, that people who subjectively experience astral projection under anesthesia really do leave their bodies and bounce around the OR ceiling like helium balloons, and that the policeman’s hat really did float off his head and spin around during Diane Boyer’s acid trip back in 1974.
You may be the only person on the planet who defines free will in this way. Any other well-established terms you’d like to redefine wholesale before we proceed?
Could everyone define what they mean by “free will” before we go any further?
For me, if you have a consciousness and you make choices, then you have free will. Notice that I’m not saying that your consciousness has to make the choices; the system comprising your consciousness, subconsciousness, unconscious mental processes and your body’s glands makes the choices. The fact that your choice could be predicted, or you could be coerced or manipulated into picking a particular choice doesn’t have much bearing, I think.
Definitions of free will that mean that either no-one has it or a dice has it aren’t very useful, so I don’t have much truck with those.
Is it too late to contribute? Probably, but here goes anyway.
@ Hljóðlegur: You ask “But how do we distinguish that from the scene where the exact same thing appears to happen, but they both really are exercising free will – they both meet (B), and might meet (A), but how to devise a discriminator?”
Assuming I’m reading you right here, it seems to me that it is precisely this discriminator that is provided by Libet’s experiments. Think about it: the central problem in the free will debate is ascertaining whether our subjective feeling of free choice does or does not coincide with deterministic processes over which we have no control.
In the normal run of events, this is a problem because our mental states tend to be reflective of our external circumstances. This may be because we freely choose in response to our circumstances–or it may be because entirely autonomous neurological processes generate an epiphenomenal awareness of external circumstances that has no causal efficacy. It is simply impossible to say which is true from a first-person perspective.
And this is where the genius of Libet’s experiments comes in. By staging a neurological intervention that exhibits no perceptible continuity with the external environment (but which nevertheless belongs to the external environment), he allows us to discriminate between our subjective sense of free will and an external process over which we have no control. The result? Our subjective sense is every time subordinated to the external process. It is difficult to imagine a better discriminator than this.
For all that, I’m not sure what we’re supposed to lose if the idea of free will is lost. We’re the whole package: conscious awareness, unconscious processing, somatic inclinations. Our desires are to be calibrated to our best interests. At the end of the day, we always choose what we want.
I just ate a cookie.
I went to the kitchen to check on some baking, and for no particular reason decided to eat a cookie. The only problem is that by the time I “decided” my hand had already opened the pantry door. I guess that my hand decided instead of me. I hate when it does that. The fucker is supposed to be an appendage, not a processing unit. At times of even greater distraction the cookie (or its equivalent) had made it all the way to my mouth before I realized that I was munching. What am I supposed to conclude from all that? The devil made me do it? Am I a slave to my appetite? Slaves don’t have free will. But if I am a slave, then who is my master?
I haven’t given up all hope. In the words of Andrew Ryan: a man chooses, a slave obeys. I am a slave. But somewhere out there the one truly free man is waiting. One day he will come and change things. The only concern is that he will probably insist that we give up our cookies. But if that is the price of freedom… Oh well, that cookie was good. Thankfully, tonight I am still a slave.
@ NelC:
I see what you did there: you decided up front what you wanted the definition of free will to be, and then asked for a definition while preemptively denying any divergent ones.
But the fact is, definitions that mean “no-one has it or a dice has it” are very useful, insofar as they are most likely to reflect reality. I think the word you had in mind was “palatable”. But whether something’s palatable or not has no bearing on whether it’s true; and the current groundswell of scientific evidence and informed opinion to the effect that free will (conscious or otherwise) is illusory is useful in very practical ways. For example, it can inspire reformation of our justice system away from a vengeance-based “personal culpability” model and towards a more pragmatic “damage-limitation” model.
Quick-albeit-loaded question for you, Nel (and there will be a followup): you may have heard about that hypersexual pedophile down in Florida a few years back, the guy who turned out to have a tumor in his brain; once the tumor was removed, his antisocial behavior disappeared. In light of that, do you believe that he was responsible for the actions that got him incarcerated?
@rm3154:
Yeah, but look what happened to him…
@ Peter Watts
“…the current groundswell of scientific evidence and informed opinion to the effect that free will (conscious or otherwise) is illusory is useful in very practical ways. For example, it can inspire reformation of our justice system away from a vengeance-based “personal culpability” model and towards a more pragmatic “damage-limitation” model.”
So, to summarize:
1. Free will is an illusion.
2. Here’s what we should do about that.
I think that once you accept 1 (which I don’t), arguing 2 is a bit silly. You can’t change anything if you can’t change anything.
No no no. The fact that our will isn’t “free” doesn’t mean that it isn’t influenced by various historical and environmental factors — that’s the very reason it isn’t free, in fact. We make decisions based on inputs; here is some input; my deterministic conclusion is that we should reform the justice system. Maybe we will; maybe we won’t. It all depends on how all those other systems react to their inputs.
But the making of decisions doesn’t by any stretch invalidate the no-free-will position. We all make decisions (just like that C-64 that Hljóðlegur seems to think has free will). The question is how much choice we have in the matter.
I should add that I actually agree with you about changing the justice system! It fits in with my general belief in harm reduction.
@Peter Watts: If we have no choice in the matter, then our lives are essentially the playing of a cassette tape (or sequential data file of your choice) that was created by the Big Bang. Any talk of making decisions or changing anything in the face of that is ludicrous.
I can see where you’re coming from on this, but maybe we’re arguing definitions. Computer code is chock full of “decisions” — IF this is true THEN do that other thing — which alter the behavior of the program in response to changing conditions. That, to me, is a “decision”: you buy the cheaper pair of gloves, or you choose the more expensive set of brass knucks, or whatever. It doesn’t even have to derive from the cassette tape playing out after the Big Bang (although I’m certainly not averse to that model); it can be any set of decision-making algorithms that bases output on a variety of inputs, including a random-number generator or two if you like.
Any decision is a response to input. Logic gates make decisions by definition; but they do not have free will. Algorithms make choices, but these choices are governed by rules. I’d gladly admit that human behavior is frequently so complex that the rules aren’t readily apparent, but that doesn’t make it any less algorithmic — and I can’t think of a workable alternative model that doesn’t descend into dualism. Maybe you can.
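For the record, here’s the sort of “decision” I mean, as a toy sketch (the names and thresholds are invented). Every branch is governed entirely by its inputs and its rules, which is all the IF/THEN of any program ever amounts to:

def choose_purchase(budget, threat_level):
    if threat_level > 7:       # IF this is true THEN do that other thing
        return "brass knucks"
    if budget < 20:
        return "cheap gloves"
    return "expensive gloves"

print(choose_purchase(budget=50, threat_level=3))   # expensive gloves
print(choose_purchase(budget=10, threat_level=3))   # cheap gloves

Nobody would call that free will, and making the rules more numerous and harder to read doesn’t change their nature.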
Look, I don’t have a bulletproof argument or set of experimental results that will convince anyone of the existence of free will. If I ever come up with one, I hope you’ll be there to clap when I pick up my Nobel.
I place myself in the philosophical camp that asserts that consciousness (including qualia and feelings of agency) is not an illusion. I guess I align most closely with David Chalmers. I think his view is called ‘naturalistic dualism’. So, have I come up with a workable alternative to dualism? Apparently not.
But the making of decisions doesn’t by any stretch invalidate the no-free-will position. We all make decisions (just like that C-64 that Hljóðlegur seems to think has free will). The question is how much choice we have in the matter
I said no such thing. You did.
If you saw your C-64 acting outside its programming, actually choosing between options in a way that was not determined by its hardware and software, that’s fine. But I never met the particular machine in question, and I never claimed it had free will; that was your reductio ad absurdum.
@Lodore – Thank you very much for the walk through. I was so delighted I wrote out a reply longhand while on the metro this morning – I wish I had time to type it in. If you’re still around later, I will lay it on you – thanks, again!
This is one that I don’t agree with, and I have trouble seeing why you would. I’ve had this conversation with a friend when discussing the Libet experiments, and he said that if there were no free will, he thinks the history of our culture would have been completely different: nihilistic. But I do not see this. If it is the case that we do not have free will, that wouldn’t mean our subjective experience of the world changes; it means things have always been that way, and our current subjective experience, and our culture’s, are what they are given the physics we exist in.
Someone maybe can come along and explain this better than I just did. I’ll ping my friend to see if he can state his position better than the gist I gave.
Great discussion, you all, btw. I wish I weren’t under the gun right now at work – have a blast!
@Sheila – I don’t think a lack of free will would lead to a nihilistic culture, but widespread belief in a lack of free will probably would.
I could be wrong on that. Susan Blackmore’s answer was to undertake a program of meditation that (she claims) gradually led to the extinguishment of any internal feeling of free will. She’s also trying to extinguish any internal feeling of self, but admits that’s taking a bit longer (!).
(The comments in this post are better than the comments I’ve encountered elsewhere in blogs whenever Libet and such come up. People tend to come in with woo or hot air (even science blog readers!). thanks y’all)
@Hljóðlegur:
Which is why I said seems. It was an inevitable implication of your claim.
@Doubter:
A study came out recently suggesting that people do adopt a more nihilistic outlook in the wake of exposure to no-free-will arguments, but that they revert to standard free-willian attitudes in pretty short order (hours, I believe). Our sense of agency is so strong that we just keep defaulting back to it. Can’t remember the exact citation, though.
Oddly enough, this Buddhoid riff is something that’s pretty much at the heart of Dumbspeech. Assuming I ever get the damn thing finished.
This reminds me of the hedonic treadmill, and how people tend to default back to their baseline state of satisfaction. Which sucks for me as I am so depressed. wah.
@Peter, finish your damn book already. slacker. (<– that was reverse reverse willfully misapplied psychology. I think.)
prosocial is a good search term on it… cannot remember the exact article but remembered the keywords, so following a trail might turn it up
http://psp.sagepub.com/content/35/2/260.abstract for the antisocial aspect
don’t remember the back-to-baseline-norm behavior one.
(I should get back to my damn work now and stop being a slacker. will also stop projecting slackerhood on you guys)
@Sheila – Me too. At this moment, I freely and consciously choose to stop participating in this discussion and go back to shuffling papers for the Man…
Hello again, hello drowsy afternoon.
I pinged my friend and he said he didn’t want to talk to my imaginary internet friends here, but he likes chatting with me about this stuff, so I got a new gist. He claims that Ockham comes down against the no-free-will position, it being the more complicated way to look at things, so he discounts it.
I’m not sure I have anywhere to go with the discussion with him after that, as it would just boil down to disagreeing on which explanation is more complex and should be pared.
@ Peter et al: The nihilism (potentially!) entrained by the realisation of the lack of free will is explored fantastically by Ted Chiang in his ‘futures’ piece for Nature here. (Apologies if I’m playing the noob, flagging what everyone’s already read.)
@ Hljóðlegur: Cheers, Yes, I’ll be around, and would be delighted to hear any comments you have. In fact, I’m always around. This, I think, is a problem. One day, I may actually do something about it …
@Lodore – thanks for the walk-thru. I see the general idea of demonstrating that the subjective sense of free will (the sense of conscious choice between two or more options) is not reliable in that it doesn’t map 1 to 1 to what an outside observer sees.
Where we get into the weeds is that it doesn’t disprove free will in general, just that the patient’s sense of having chosen in this case doesn’t match to the outside observer’s perception.
And we are further lost in trying to look at the experiment and say whose perception is correct? If we claim that the patient’s sense of having chosen freely and consciously among options is false, and we assume the intervention was that the doctor freely chose where to place the probe, we have a priori assumed that the doctor really does have free will. If we assume the doctor was programmed by some event (flipping a coin, random number generator) then he had to decide to abide by the decision of the random number generator, and if we decide we as the observers somehow controlled the doctor, then we have the free will.
I still do not see how the scene looks different if we assume everyone has free will versus if we assume no one does?
Also, a computer using logic gates involves no decision-making. Decision implies conscious selection between options, and computers don’t do that. That’s the beauty of them – unless you are feeding fluctuating sets of parameters into them, they predictably do the same thing over and over, make the same “decisions” over and over. The very idea of free will, spurious or not, is that the decider can consciously do this or that, freely choose between two or more options. I would never imply or think a computer can do this. Not yet, anyway.
Hljóðlegur: I still do not see how the scene looks different if we assume everyone has free will versus if we assume no one does?
That’s kind of the thing though, isn’t it: it wouldn’t look any different (I think Sheila mentioned this earlier). Whether or not free will is actually possible anywhere in this universe of ours, so long as we have that gut level perception of agency, we’ll act accordingly. Just because something is an illusion doesn’t mean the feelings you associate with it aren’t real.
Now there’s an interesting question that arises from all that: what’s this feeling of free will actually keying off of if there’s no free will?
Well, that’s an interesting question, isn’t it? Inability to see beyond three dimensions, lack of information? Do you have a theory, then?
“Do you have a theory, then?”
I might hypothesize that it’s a mixup in our conscious hardware between the correlation of thought vs action and an actual causal link (the ol’ “correlation is not causation” thing). Couldn’t say for sure though.
“Inability to see beyond three dimensions, lack of information?”
Might be that. I mean if we knew all the different variables that influence our decision making process it would be pretty simple to predict what person A would do when exposed to stimulus B in context C (actually, scratch that: it would be complicated as all fuck given the sheer ball numbing quantity of possible variables involved).
We do not need to disprove free will: there is no evidence for it beyond gut feeling, and the gut is an idiot. The onus is on the other side to prove that free will does exist — more specifically, to show how a system comprised entirely of reactive elements could ever behave proactively.
Again, you are assuming your conclusion as a starting condition. We do not assume that the doctor “freely chose” anything.
What? Why would we decide that we as the observer controlled the doctor? And if we did, why would this lead anyone to conclude that we were exercising free will, any more than (for example) a thermostat controlling the temperature of the local airspace implies an autonomy of thermostats?
Dictionary.com defines the verb “to decide” as “to make a judgment or determine a preference; come to a conclusion.” Consciousness is not a criterion, implied or otherwise — in fact, the evidence that most human decisions are made unconsciously is overwhelming regardless of whether or not you believe in free will.
So do people, frequently. Or would you withhold the attribute of “free will” from everyone who always obeys traffic signals?
So does the idea of no free will: with the added detail that under identical conditions, the decider will always “freely” make exactly the same decision. Or at least, if she doesn’t, it’s because someone introduced an element of randomness into the process. Even in that case, though, a decision based on a dice roll is no more “free” than one based on deterministic algorithm.
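To make the replay point concrete, a toy sketch (all the inputs are invented): seed the dice and the whole “decision” reruns identically; leave them unseeded and the outcome varies, but no more freely than a coin flip does.

import random

def decide(mood, temptation, dice):
    score = mood + temptation + dice.randint(1, 6)   # the "dice roll" is just one more input
    return "eat the cookie" if score > 10 else "walk away"

for trial in range(3):
    dice = random.Random(1234)          # identical internal and external state each time
    print(trial, decide(mood=4, temptation=3, dice=dice))
# Same state in, same "choice" out, on every replay.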
@Bastien Eh, could be a perceptual problem surrounding causation. I wouldn’t rule it out. We might be fooling ourselves that our intentions are more than observations. Of course, then we’re back to how to demonstrate that in a way that doesn’t look like we have agency. I don’t know, but fun to think about.
it would be pretty simple to predict what person A would do when exposed to stimulus B in context C (actually, scratch that: it would be complicated as all fuck given the sheer ball numbing quantity of possible variables involved).
See, that’s a really really interesting question. To me, anyway. What if predicting human behavior is a measurement problem where the number of variables really is so horrendous that you *must* use average behavior, behavior in the aggregate. So no specific behavior of a specific individual is ever truly predictable.
What would that mean? I suspect it means that we think we should have perfect prediction on certain events because 1) we can imagine such a model of measurement and get confused about the model vs. the reality and 2) we want this to be true. Don’t people get a little nervous about an unpredictable environment? Don’t we get a sense of mastery and comfort from the idea that we could predict perfectly if conditions were just right, with enough time, with enough effort and will?
@ Hljóðlegur: If I’m reading you right, you’re proposing a very unusual (and I have to say, quite innovative) strategy on the free will question. This involves extending the causal chain backwards until eventually an agentive interaction is encountered that falls outside the closed system that, in the originating situation, seems to preclude the existence of free will. In this way, you argue that you can never rule out free will.
Above, Peter suggests the onus of proof lies on proving that free will does exist, not that it can’t be ruled out. Personally, I think this is too strong a claim; my suspicion is that showing that free will can’t be disproved is sufficient to fatally upset the determinist paradigm.
For all that, however, I don’t think your strategy works. My reason for this is that your approach falls foul of a logical method called proof by induction. Most people will have encountered this in secondary school algebra, where if you can demonstrate two conditions, a statement is held to be true. These are, (i) you prove a claim about the first element of a series; and (ii) you show that if this claim is true of a given element in a series, it is also true of its successor.
Applying this to your argument, if you can plausibly claim that a given instance can be explained without reference to free will, and that the next instance (moving from the patient to the scientist) can also be explained without reference to free will, it follows that any subsequent move that employs the same reasoning equally rules out any need to refer to free will. Essentially, you’re postulating a destination (free will) that can never be reached, and as such, it is irrelevant so far as it comes to any finite process of argument. This is as good as making the concept logically redundant. (Note: It should be said that intuitionist philosophies of mathematics reject proof by induction, but few people subscribe to this point of view.)
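Stated as a schema (my paraphrase, with P(n) reading “the n-th agent back along the causal chain can be explained without reference to free will”):

\[
\big( P(1) \wedge \forall n\, ( P(n) \rightarrow P(n+1) ) \big) \rightarrow \forall n\, P(n)
\]

The base case is the puppeted patient, the inductive step is the regress move itself, and the conclusion is that no finite extension of the chain ever arrives at an agent who must be credited with free will.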
For what it’s worth, I actually agree that an algorithmic appreciation of consciousness is highly problematic, though I don’t think that a non-algorithmic account of consciousness logically entails free will.
@ Lodore You take my meaning! Elegantly formulated, too.
I was trying to get my head around the idea that this experiment proved that there is no free will, and, because I was confused, defaulted back to a “walk thru” where you look at a system and try to see how each item interacts. It does have an inductive n+1 aspect, and the problem of a closed system and how big it is. I was thinking that at some point as we spread back from the arm lifting, claiming that each person erroneously feels they acted of their own free will, we get to ourselves and our feeling of having free will, only we have exempted ourselves from the system. Is that cheating?
Essentially, you’re postulating a destination (free will) that can never be reached, and as such, it is irrelevant so far as it comes to any finite process of argument. This is good as making the concept logically redundant.
Well put. Me, personally, I’m not decided, but I suspect this might be true – that the question, “Do I have free will?” might be irrelevant. Unanswerable.
Do you think the idea of n+1-ing away from the lifting arm is wrong because it is inductive, or because coming to the idea that the concept of free will is moot is objectionable, or do you think the system really is closed and we are outside?
Here’s the thing – if we are allowed to assume that each person is erroneous about having free will, then we are allowed to assume in a parallel scene that each person is correct, and his action was a choice between this vs that and he directed the choice. That at least eliminates the problem of us hogging all the free will and keeping it out of their system.