Guest Post by Rose Woodhouse
Nothing seems to piss off most scientists more than when an uninformed non-scientist makes wrong-headed scientific claims. For example, Jenny McCarthy’s anti-vaccine rants have been known to give scientists the vapors. Allow me to say to scientists, then: some of us don’t like it when you do it to us, either. That is, it is a bit irritating when you make grand and less-than-informed philosophical pronouncements based on your scientific discoveries (e.g., that neuroscience has established that there is no free will).
So what was probably a tossed-off line in a Vaughan Bell article about what anesthesia can tell us about consciousness got a bit under my skin. After describing recent advances in anesthesia research, Bell expresses optimism that such research might point us in the direction of discovering the neural correlates of consciousness. So far, so good. He suggests that such knowledge will greatly aid the treatment of patients. Huzzah!, say I. But then he closes with: “So in addition to being an essential medical tool, this technique may also help us dissect one of the greatest hard problems of cognitive science.” Um, not so much.
Suppose we discover the exact neural correlates of consciousness. Suppose we know exactly which neurons firing at what time result in what degree of consciousness. We would surely make leaps and bounds in improving anesthesia. We would, perhaps, come up with treatments for comatose patients. We would have a better understanding of who was in fact conscious despite being behaviorally inert. What would remain is the hard problem. Even if we had the neural correlate of consciousness, we would still lack an understanding of how a piece of electrified meat can yield our experiences. Even if we knew that neuron 8,742,637 fires whenever we close our eyes and deeply smell a newborn baby’s head, experiencing a rush of pleasure, how does that explain what it feels like to smell a newborn? That is the hard problem, which philosophers tend to call the explanatory gap. And all the neuronal maps in the world won’t solve it.
Imagine you held in your hand a complete description of the physics of a chess game. All the locations and motions of all the objects, and all the forces acting on them, were accurately described. You would still be lacking something in your understanding of what a chess game really is. The physical facts alone don’t explain what a chess game is. A different level of explanation is needed (e.g., the rules of chess, plus some psychological facts about people). Likewise, a different level of explanation will be required to explain the what-it-feels-like of consciousness.
There are philosophers who claim that we cannot solve the hard problem of consciousness, even in principle. For example, Thomas Nagel claims that the aim of reduction in science is to move away from our subjective experience. We have succeeded in science when we get away from the experienced light and noise of thunder and lightning and arrive at an understanding of them as electrical discharge. We move away from water as a colorless liquid that slakes our thirst and get to H2O. But there’s no way to move from a subjective to an objective perspective when what you’re talking about is subjective experience! David Chalmers claims that since we can imagine a philosophical zombie (that is, an exact neurophysical and behavioral duplicate of ourselves who nevertheless has no conscious experiences), the hard problem can never be solved. (Chalmers’ fun page on zombies here.)
That seems a little precipitous to me. I’m not sure it’s something we can’t understand even in principle. Perhaps it will take more integration of a cognitive description of consciousness with neuroscience. But while neuroscience may be close to discovering a neural correlate of consciousness, a solution to the hard problem is still well out of reach.