What Neuroscience Can’t Tell Us

Guest Post by Rose Woodhouse

Nothing seems to piss off most scientists more than when an uninformed non-scientist makes wrong-headed scientific claims. For example, Jenny McCarthy’s anti-vaccine rants have been known to give scientists the vapors. Allow me to say to scientists, then: some of us don’t like it when you do it to us, either. That is, it is a bit irritating when you make grand and less-than-informed philosophical pronouncements based on your scientific discoveries (e.g., that neuroscience has established that there is no free will).

So what was probably a tossed-off line in a Vaughan Bell article about what anesthesia can tell us about consciousness got under my skin a bit. After describing recent advances in anesthesia research, Bell expresses optimism that such research might point us in the direction of discovering the neural correlates of consciousness. So far, so good. He suggests that such knowledge will aid greatly in the treatment of patients. Huzzah, say I! But then he closes with: “So in addition to being an essential medical tool, this technique may also help us dissect one of the greatest hard problems of cognitive science.” Um, not so much.

Suppose we discover the exact neural correlates of consciousness. Suppose we know exactly which neurons firing at what time result in what degree of consciousness. We would surely make leaps and bounds in improving anesthesia. We would, perhaps, come up with treatments for comatose patients. We would have a better understanding of who was in fact conscious despite being behaviorally inert. What would remain is the hard problem. Even if we had the neural correlate of consciousness, we would still lack an understanding of how a piece of electrified meat can yield our experiences. Even if we knew that neuron 8,742,637 fires whenever we close our eyes, breathe in the smell of a newborn baby’s head, and experience a rush of pleasure, how would that explain what it feels like to smell a newborn? That is the hard problem, what philosophers tend to call the explanatory gap. And all the neuronal maps in the world won’t solve it.

Imagine you held in your hand a complete description of the physics of a chess game: all the locations and motions of all the objects, and all the forces acting on them, accurately described. You would still be lacking something in your understanding of what a chess game really is. The physical facts alone don’t explain what a chess game is. A different level of explanation is needed (e.g., the rules of chess, plus some psychological facts about people). Likewise, a different level of explanation will be required to explain the what-it-feels-like of consciousness.
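
To make the two levels concrete, here is a toy sketch in Python (purely illustrative: the coordinates, the mass, and the pared-down pawn rule are invented for the example):

    # Level 1: the "physics" of the event -- coordinates and mass,
    # with nothing about chess in it at all.
    physical_event = {
        "object": "carved wooden cylinder, 9.7 g",
        "start_cm": (10.2, 5.1),
        "end_cm": (10.2, 15.3),
    }

    # Level 2: the game -- the very same event described under the rules.
    def is_legal_pawn_push(start: str, end: str) -> bool:
        """A white pawn may advance one rank on the same file
        (ignoring double steps, captures, and promotion)."""
        same_file = start[0] == end[0]
        one_rank_forward = int(end[1]) == int(start[1]) + 1
        return same_file and one_rank_forward

    # "e2 to e3" is a legal move; nothing in physical_event's numbers
    # says so. Only the rules of chess do.
    print(is_legal_pawn_push("e2", "e3"))  # True

No list of facts like physical_event, however complete, entails facts like the output of is_legal_pawn_push. For that, you need the rules.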

There are philosophers who claim that we cannot solve the hard problem of consciousness, even in principle. For example, Thomas Nagel claims that the aim of reduction in science is to move away from our subjective experience. We have succeeded in science when we get away from the experience of the light and noise of thunder and lightning and arrive at an understanding of it as electrical discharge. We move away from water as a colorless liquid that slakes our thirst and get to H2O. But there’s no way to move from the subjective to the objective perspective when what you’re talking about is subjective experience! David Chalmers claims that since we can imagine a philosophical zombie (that is, an exact neurophysical and behavioral duplicate of ourselves who nevertheless doesn’t have conscious experiences), the hard problem can never be solved. (Chalmers’ fun page on zombies here.)

That seems a little precipitous to me. I’m not sure it’s something we can’t understand even in principle. Perhaps it will take more integration of a cognitive description of consciousness with neuroscience. But while neuroscience may be close to discovering a neural correlate of consciousness, a solution to the hard problem is still well out of reach.

Russell Saunders

Russell Saunders is the ridiculously flimsy pseudonym of a pediatrician in New England. He has a husband, three sons, a daughter, a cat, and a dog, though not in that order. He enjoys reading, running, and cooking. He can be contacted at blindeddoc using his Gmail account. Twitter types can follow him @russellsaunder1.

18 Comments

  1. Rose, the problem with the chess game example is that it merely shows that social facts are not reducible to physical facts, even though social facts supervene on physical ones. So let us suppose that property dualism is in fact true: does it follow that the hard problem remains unsolved? To say that there is a hard problem is to say that an explanation is needed. But to be a property dualist is basically to say that no possible explanation exists. After all, if you think that the explanatory gap can be filled in principle (even if we currently lack the knowledge), then you are not a property dualist. However, in what sense can we call a problem a problem if there is in principle no solution to it? Seriously, do we think there is a hard problem in terms of reducing the rules of soccer to merely physical facts about the world (quantum probability distributions of particles)? To be a dualist just is to say that there can be no solution to these things. Some things are just social facts, period. Similarly, some things are just mental facts/properties. What’s the problem again?

    The point I am trying to make is that an explanatory gap exists iff there is no explanation that relates the two facts in question AND some explanation is possible. If we are good property dualists, we won’t think that such explanations are possible.
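
    Schematically, using Gap, E, and the possibility operator as illustrative shorthand (a sketch of the condition, in LaTeX):

        % Gap(A,B): an explanatory gap holds between facts A and B
        % E(A,B):   an explanation relating A and B is actually available
        % \Diamond: "possibly" -- such an explanation exists in principle
        \mathrm{Gap}(A,B) \iff \neg E(A,B) \land \Diamond E(A,B)

    On this reading, the property dualist denies that the second conjunct ever holds for the mental and the physical, so no gap (and hence no hard problem) arises.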

  2. Yes, I mentioned Chalmers to say that the view is out there (that is, that the explanatory gap is insoluble in principle), and that to some people the hard problem is actually the impossible problem. I tried to be non-committal. I’m actually not a property dualist, for what it’s worth; I’m a physicalist. So I think there is a hard problem, not an impossible problem (hence the “yet” in the title). I could have been clearer, but I’m trying to make a point of writing for lay people, not philosophers!

    • I’m a physicalist. So I think there is a hard problem, not an impossible problem.

      Fair enough. But now, as a physicalist who denies that complete knowledge about the neural correlates of consciousness would actually explain the subjective feeling of it, what would count as an explanation?

      • Oh wait... I didn’t read your full article properly. Never mind.

        But could you explain what this means?

        “Perhaps it will take more integration of a cognitive description of consciousness with neuroscience”

    • Consciousness is as difficult and unfathomable to understand as a finger trying to touch itself.

  3. Are the problems impossible, or is the framework used to approach them limited?

    I have a tendency to come down on the second side.

    Unfortunately, there are uncountably many limited frameworks, and the set of unlimited frameworks is isomorphic to the null set.

    Which is a fancy-ass way of saying, “we focus too much on the solvability of the problem(s) and not enough on the limitations of the frameworks”.

    • I’m not prepared to say the problem is in principle impossible. It may well be, but I don’t think that’s clear yet. But yes, I think the framework is the issue here.

  4. From an engineer’s perspective, the map is just a part of the system (consciousness). It would be silly to expect that the reverse engineering of a highly sophisticated system would succeed by merely knowing the basic correlations of its nodes. It is akin to getting hold of a program’s technical documentation, which defines the classes, objects, and so on, and thinking you have a working version of the code in your hands (see the sketch at the end of this comment).

    In order to reverse engineer the actual process of consciousness, and I mean to the Omega point (being able to recreate biological animal consciousness to a T within a 100% accurate biological simulation), such a map may prove to be a vital tool, but it will take a great deal of new tests, other new tools, many disproven theories, and a whole lot of human ingenuity to reach such a point. Just as it would with an extremely complex piece of software built on an antiquated system created millions of years ago 😉

    Then of course we delve into the realm of philosophy and theology — do we have a soul? If so, where does it reside? What specific effect does it have on consciousness? Good stuff.
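
    To put the documentation analogy in code, here is a toy sketch in Python (purely illustrative; the classes are invented for the example). Every part is named and described, yet nothing actually runs:

        # A fully "documented" interface: every class and method is mapped,
        # but the mechanism itself is missing, so nothing here can run a mind.

        class SensoryInput:
            """Raw signals arriving from the environment."""

        class Experience:
            """What it is like to undergo an integrated state."""

        class Consciousness:
            """Integrates sensory input into subjective experience."""

            def integrate(self, signal: SensoryInput) -> Experience:
                """Bind a signal into unified awareness."""
                raise NotImplementedError("the mechanism is exactly what the map omits")

    Knowing the full interface, the map of the nodes, still leaves the NotImplementedError in place; the working system is something more.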

  5. I’ve worked with AI models for many years now. My mother, as it happens, was an anesthesiologist. We had many pleasant discussions on the subject of consciousness.

    While it is true we won’t be able to reverse engineer the brain, that’s not the goal. The brain is rather like a Swiss Army knife, a bunch of interesting tools on a fairly straightforward framing apparatus.

    Consciousness is merely an artifact of integrated perception. Yesterday, I had my elderly cat put to sleep. She was given a tranquilizer injection, lay down on my girlfriend’s wrist and began to purr. When it was clear she’d lost consciousness, the second injection stopped her heart. We should all die so well.

    For the rest of my life, that purring will haunt me, a truly happy memory. I’d done all my crying the night before, gave the little cat her final treats, let her sleep on my chair, this chair, the one in which I’m sitting now. It really was time, her breast cancer had invaded her thoracic cavity, she was retaining fluid. She couldn’t climb into her litter box without pain. Weighing all in the balances, I’m sure we did the right thing.

    Consciousness is more than mere perception; it’s the ability to seek out the faint signals of meaning in the endless torrent of perceptual static, to reconcile those signals with what little truth we have derived over the course of a lifetime. We do not remember infancy because it imprinted upon our minds tabula rasa. Memory, it seems, mainly records transits from the status quo. One of the great joys of extreme old age, I am told, is the ability to recall childhood with exquisite clarity. How accurate those memories might be remains unverified and unverifiable, but speaking as someone who’s constructed neural networks at the practical limits of the hardware for many decades, my little networks imprint in ways I would never have expected.

    We’ve already reached the Omega point. We don’t need to recreate animal consciousness; we’re learning to control it directly in wetware. We’re going for an Omicron point, artificial domestication. Why bother to reinvent the eye or Jacobson’s organ or the Ampullae of Lorenzini? A few million years of evolution have given us perfectly usable ones in birds and snakes and sharks.

    • I am sincerely sorry for your loss. An old friend of mine just lost a beloved cat this past week, too, and I know how hard it can be to say goodbye to a beloved pet.

      • I am such a jerk. I got caught up in integrated perception, and I didn’t say I’m sorry about your cat, which I meant to. I am really sorry. It sucks so much to lose a pet you love.

        • Oh, that’s okay. At this point, all I feel is relief as we pore over the animal shelter websites, making silly noises over cute kitties. We are defined by those we love. We make ourselves vulnerable to them, open our hearts to them, an act of trust and deepest need. When love is returned, that love validates our existences, gives meaning to the struggle.

          My heart will always have a cat-shaped recess in it, easily filled. I’ve had so many now. Their lives are in our hands: giving Purdy’s body back was not so hard.

          “Such,” [said the Venerable Bede], “O King, seems to me the present life of men on earth, in comparison with that time which to us is uncertain, as if when on a winter’s night you sit feasting with your earldormen and brumali — and a simple sparrow should fly into the hall, and coming in at one door, instantly fly out through another. In that time in which it is indoors it is indeed not touched by the fury of the winter; but yet, this smallest space of calmness being passed almost in a flash, from winter going into winter again, it is lost to our eyes.”

          “Somewhat like this appears the life of man — but of what follows or what went before, we are utterly ignorant.”

    • I don’t think we’re at the Omega point just yet 🙂 When mentally incapable people are restored to normal intelligence, when brain-damaged victims are brought back into “our world”, when a headless man can be brought back to life, that is when we will have reached the Omega point. The Omicron shall be super-consciousness (whatever the hell that may be), group consciousness (i.e., wired into the net), and of course some very nifty brain upgrades that will come down the pipe soon enough.

      I need to learn how to fly a Chinook by lunchtime. No excuses.

  6. And this is where I think we currently have more fruitful explanations of consciousness – in cognitive terms, rather than purely neuroscientific ones. I’m not sure “integrated perception” is the correct cognitive description, but that’s where we’re going to start.

    • You’re probably right, insofar as it is more Integration than Perception. My own theories on consciousness are based on our ability to filter out the extraneous.

  7. Hi there,

    I think you’re mistaking ‘the hard problem of consciousness’ (explaining how subjective experience / qualia arise from the function of the brain) for “one of the greatest hard problems of cognitive science” (which neural circuits are responsible for maintaining consciousness).

    At this point, I suspect I should say something defensive and academic like “as the article was entirely focused on the second point I hoped this was clear” but, actually, looking back at my crappy wording, I think I’ll have to take this one on the chin. Apologies for that.

    All the best,
    Vaughan

    • Thanks for taking the time to reply! I did take most of your article to refer to the second kind, but thought you made reference to the first in the last line. But reading it over, I find I misunderstood you. My apologies!
