Showing posts with label hard problem. Show all posts

Monday, July 28, 2014

Is Quantum Mechanics Relevant to the Philosophy of Mind (and the Other Way Around)?


Quentin Ruyant posted this article on the possible relevance of quantum mechanics to a philosophy of mind and consciousness. While he seems convinced (as are many neuroscientists and philosophers) that QM is not scientifically relevant to a philosophy of mind, he allows that there may be some metaphysical reasons to examine a possible connection. Posted at the Scientia Salon site.

Be sure to check out the comments on this post at the original site - some interesting stuff. For an excellent review of this article, check out this post at Conscious Entities.

Is quantum mechanics relevant to the philosophy of mind (and the other way around)?


There have been speculations on a possible link between quantum mechanics and the mind almost since the early elaboration of quantum theory (including by well-known physicists such as Wigner, Bohr and Pauli). Yet despite a few proposals (e.g., from Stapp, Penrose and Eccles [1]), what we could dub "quantum-mind hypotheses" are often readily dismissed as irrelevant and are seldom discussed in contemporary philosophy of mind. My aim in this article is to defend the relevance of this type of approach.

For the purpose of this discussion it is useful to distinguish two different theses regarding the putative links between quantum mechanics and the mind:
  1. The mind is relevant in interpreting quantum mechanics
  2. Quantum mechanics is relevant in the philosophy of mind
Of course, the two theses are not necessarily construed as independent by the proponents of quantum-mind hypotheses. One could argue that the mind is relevant in interpreting quantum mechanics precisely for the same reasons that quantum mechanics is relevant in the philosophy of mind. This is actually what I will argue here (or at least that it is a promising hypothesis that should be pursued). However, the two theses face different kinds of objections and need to be distinguished.

Is consciousness a biological problem?

Quite logically, I will first tackle the second one: the idea that quantum mechanics could help us explain consciousness. Such a claim is sometimes dismissed on the grounds that the problem of understanding consciousness is a biological problem, not a physical one. Let me clarify a bit: by "biological/physical problem" I mean a problem that is better informed by biology/physics, not necessarily a purely scientific (as opposed to philosophical) problem. Quantum mechanics, it is said, is only relevant at very small scales of reality, while conscious organisms are biological organisms, typically found at a macroscopic level, where quantum effects manifest themselves as mere noise. Besides, it is said, randomness is not a proper substitute for free will, so quantum mechanics wouldn't help anyway. Therefore quantum mechanics is irrelevant to the philosophy of mind.

First, let us observe that typical quantum effects are not necessarily foreign to biology, as illustrated by the burgeoning field of quantum biology. Nor are they in principle confined to the microscopic level — this is the heart of the measurement problem, as illustrated by the famous Schrödinger's cat thought experiment. Quantum effects such as entanglement also help explain macroscopically observable properties, such as heat capacities or magnetic susceptibilities [2]. It is generally assumed that decoherence precludes the observability of quantum effects on macroscopic objects, but as Zurek et al. note, decoherence is more a heuristic tool to be applied on a case-by-case basis than a generic consequence of the theory [3]. Finally, quantum entanglement is hard to measure in complex systems. The idea that no quantum effect exists at all on our scale is thus neither empirically nor theoretically grounded. At most we can say that no quantum effect is detectable in common physical objects whose behavior can be accurately described using Newtonian mechanics alone, such as tables and chairs, but of course these are not the sort of conscious objects we are interested in (unless, of course, you think that biological phenomena can be explained with Newtonian mechanics alone).

However, my main contention concerns the idea that the problem of consciousness is a biological problem. Let us follow Chalmers in distinguishing the "easy problems" of consciousness from the "hard problem." The easy problems concern everything that is scientifically tractable from a third-person perspective — how we discriminate and integrate information, and so on; that is, all the cognitive aspects of consciousness. These (not so easy) problems are undeniably biological or psychological. The "hard problem" concerns the phenomenal aspect of consciousness, the subjective, first-person "what it's like" to be conscious. And this question, Chalmers argues, is not scientifically tractable: it is a metaphysical problem.

Metaphysics addresses the most fundamental aspects of reality and arguably the phenomenal aspect of consciousness is one of them. Now, if there is a branch of science which more closely resembles metaphysics in its specific interest for the fundamental aspects of reality, it is physics — not biology. Physics and metaphysics overlap in many respects (just consider the wild speculations about a mathematical universe advanced by physicists such as Tegmark [4]) and there is probably a continuum between the two. On the contrary, a contribution of biology to fundamental metaphysical issues seems to me rather implausible. I could be wrong (and Chalmers could be wrong in thinking that phenomenal aspects of consciousness are metaphysical), but I contend that the hard problem of consciousness, if it exists, is not a biological problem, but a physical one: it is just too fundamental a problem to be addressed from a biological perspective. Note that I don’t mean to deny that there are relations between phenomenal and psychological aspects, in the sense that certain cognitive states are correlated with specific phenomenal aspects, but explaining such correlations is distinct from explaining why there are phenomenal aspects to begin with.

Of course, no metaphysician denies that physics is of interest in the philosophy of mind. Kim’s causal exclusion argument involves the principle of “physical closure.” The argument precisely addresses the problem of the relations between the physical and the mental [5]. What some metaphysicians apparently deny is that quantum physics or any actual physics is of particular interest for such issues: for these authors metaphysics can still produce interesting insights about the physical “in general,” that is, whatever actual physics says. They seem to assume that the physical “in general” poses no important problem of interpretation apart from the well entrenched problems of classical metaphysics.

It seems to me that there is no such thing as "the physical in general, whatever actual physics says": our conception of the physical changes with our physics. There is no point in reasoning about the physical without taking into account what our best current physics says about it. And our best current physics is quantum mechanics (quantum field theory, to be precise). For this reason I think, following Ladyman, Ross and Spurrett [6], that metaphysicians should be informed by our best physics rather than work from a dated conception of the physical, or, as they say provocatively, from "A-level chemistry." (Ladyman, Ross and Spurrett note that some of Kim's central arguments rely on conceptions of the physical that are no longer accepted by physicists. The same goes, I would say, for thought experiments involving clones and mind duplication: the no-cloning theorem in quantum mechanics precludes the possibility of such perfect physical duplication [7].)
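Since the no-cloning theorem is doing real argumentative work here, it may be worth sketching why it holds (a standard textbook argument, not part of Ruyant's text). Suppose a single unitary $U$ could copy any unknown state onto a blank register:

```latex
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle
```

Applying $U$ to two states $|\psi\rangle$ and $|\phi\rangle$ and using the fact that unitaries preserve inner products yields $\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2}$, so the overlap must be $0$ or $1$: only identical or mutually orthogonal states can be copied, never an arbitrary unknown one. This is the precise sense in which perfect physical duplication is ruled out.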

I am not saying that all metaphysicians should be trained in contemporary physics to produce valuable work (Kim’s Mind in a Physical World is very valuable and important, in my opinion), but contemporary physics is definitely a place we should look at to address fundamental issues in the philosophy of mind. My overall impression is that this is hardly the case today, although such inputs are considered in Chalmers’ The Conscious Mind [8].

Does embracing a quantum mechanical view of the physical really change the perspective for the metaphysics of mind? At the very least metaphysical interpretations of the physical inspired by contemporary physics could open new avenues to be explored, and, perhaps, help make progress on important conundrums in the field, such as the problem of mental causation. It seems to me that there are no good reasons not to follow this path.

Is the mind foreign to the measurement problem?

Which leads us directly to the second point, i.e., the first thesis sketched above: that the mind is relevant in interpreting quantum mechanics. The idea was initially proposed by some physicists as a solution to the measurement problem — the problem of reconciling the theoretical structure of quantum mechanics, which describes non-local "superpositions of states," with actual phenomena, where no superposition is ever observed. The theoretical structure does all the predictive work, so to speak (apart from the Born rule, which maps the structure onto outcome probabilities [9]), and ultimately the fact that no superposition exists for measured quantities is only ascertained by our conscious observation. Hence the idea that it is the mind which makes the wave function "collapse." Of course there are other, less anthropocentric theories, such as Bohm's, Ghirardi-Rimini-Weber [10] or the infamous many-worlds interpretation [11].
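Since the Born rule carries the theory's entire probabilistic content here, it may help to state it explicitly (a standard textbook formulation, not part of Ruyant's text): for a system prepared in state $|\psi\rangle$, the probability of observing outcome $a$, with associated eigenstate $|a\rangle$, is

```latex
P(a) = \lvert \langle a \mid \psi \rangle \rvert^{2}
```

Expanding $|\psi\rangle = \sum_a c_a |a\rangle$ gives $P(a) = |c_a|^{2}$: the superposition evolves deterministically under the Schrödinger equation, and probability enters only at measurement, which is exactly why the "collapse" looks like an extra, uninterpreted step.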

The main type of objection against interpretations involving an observer, I would say, is that they seem too reminiscent of either 19th century Idealism or early 20th century neo-Kantian and phenomenalist views (which did strongly influence said physicists). These doctrines have declined in favor of a renewal of scientific realism in the course of the 20th century.

From a realist perspective, such interpretations seem to attribute a privileged ontological status to the human brain, which is hardly acceptable. Was there really no definite reality before life appeared on Earth? Does the moon vanish when no one is looking? All this seems barely good enough for mystics and New Age gurus (there might be more sensible anti-realist interpretations, but let's not quibble…). However, having previously rejected the idea that the phenomenal aspects of consciousness are to be addressed by biology, all of this is easily defused: a privileged ontological status for human observers only makes sense for those who pretend that biology can inform deep metaphysical questions.

Let me be more specific and draw on an example. I suggested that phenomenal aspects of consciousness could eventually be explained under a proper interpretation of physics. A possible such explanation could take the form of panpsychism: the idea that, somehow, all matter is conscious. In fact, by distinguishing phenomenal aspects from cognitive aspects of consciousness and relegating the former to physics and the latter to biology or psychology, we would have something like panphenomenalism: the idea that all matter is “phenomenal.” Anyway, in the context of either panpsychism or panphenomenalism, granting a particular role to phenomenality in physics, say, in the collapse of the wave function, does not amount to granting a privileged ontological status to the brain.

Perhaps panpsychism is implausible, but panphenomenalism fares a bit better in my opinion. Obviously, tables and chairs are not conscious. Following panphenomenalism, what they lack is not phenomenality (which would be a feature of their fundamental constitution) but cognitive abilities. Phenomenality without memory, persistence, information integration and a capacity for world and self representation is simply not awareness, or not full awareness — it is at best being transiently aware of nothing identifiable, without the very possibility of knowing that one is or was aware: nothing close to consciousness. I would readily grant this feature to electrons if it could convincingly explain some relevant metaphysical issue.

Another frequent objection against panpsychism is the so-called combination problem: if phenomenal aspects are present in the microscopic constituents of reality, how is it that we have a unified phenomenal experience? I don’t have an answer to this question, but it is not specific to panpsychism (it is a version of the binding problem also found in computational theories of mind, for example). My guess is that it has something to do with a link between quantum entanglement and cognition, perhaps in line with Tononi’s integrated information theory [12], but this is pure speculation. In any case, quantum holism, if accepted, seems to provide a good basis to answer this [13], whatever quantum-mind theory we endorse.

At any rate, although I find it attractive, my goal is not to convince you that panphenomenalism is the one true theory of mind, but to illustrate the fact that one can make sense of an involvement of the mind in the interpretation of quantum mechanics without falling back into Idealism. And, of course, there are other alternatives too, such as Eccles’ dualism for example, or Stapp’s kind-of dual aspect theory, or perhaps some versions of neutral monism.

Another common objection to considering a role for the observer in the measurement problem is that it involves non-locality, which is at odds with Lorentz invariance in special relativity. This is actually a potential problem for most collapse interpretations of quantum mechanics (though apparently GRW theory does not face it). However, invoking phenomenal aspects in a solution to the measurement problem does not necessarily involve an objective wave-function collapse: it could involve, say, a relational or a modal interpretation of quantum mechanics [14]. Which interpretation of quantum mechanics best accounts for phenomenal aspects, depending on which theory of mind we endorse, is precisely the kind of question that should be addressed in the philosophy of mind.

In sum, my goal is not to defend one or the other interpretation of quantum mechanics, nor one or the other theory of mind, but rather to stress the relevance and potential fruitfulness of discussions relating these two domains of inquiry. The hard problem of consciousness and the measurement problem in quantum mechanics share a strong conceptual affinity: both concern the relations between physical structure and the phenomenal aspects of reality, broadly construed. Either the world viewed from the mind, or the mind viewed from the world, if you like. This conceptual affinity should not be neglected on the grounds of unfounded suspicions of Idealism, anti-realism or any other similar concern. The example of panphenomenalism above shows that a common treatment of both problems might be explored without presenting insurmountable obstacles, something worth pondering.

Yet, in spite of the conceptual affinity between these two central problems of philosophy, talk of quantum mechanics in the philosophy of mind is often brushed aside. At the same time, talk of consciousness and rational agents in, say, discussions of the many-worlds (or many-minds) interpretation of quantum mechanics is ubiquitous, and difficult to avoid. Both camps act as if important issues in the other camp were already settled. This is a strange situation. Aren't we perhaps missing something by being too compartmentalized? One of the main roles of philosophy — and metaphysics in particular — is after all to provide a unified picture of the world. Is it inconceivable that some considerations in the philosophy of mind (or other areas of philosophy) might inform our interpretations of physics as much as the converse?

Is quantum mechanics useful at all?

To conclude, let me address a final worry that I have so far left aside: that quantum mechanics is of no help in explaining the mind at all. I don't know about the debate concerning the relationship between free will and randomness — except that randomness in quantum mechanics is closely tied to the measurement problem, and that what we mean by "randomness" is also up to interpretation. (Shouldn't we say "unpredictability" instead? Or shall I suggest "physical privacy"?)

Besides, I do not claim that quantum mechanics can explain consciousness. My argument is more modest: the question of phenomenal aspects of consciousness should be addressed in relation to quantum mechanics, because only our best physics can inform such metaphysical questions, and because quantum effects are not necessarily confined to the microscopic realm. Moreover, it should be addressed in relation to the measurement problem, because they share conceptual affinities, and because the “threat” of Idealism is unfounded. All I claim is that a suitable metaphysical interpretation of quantum mechanics could eventually explain the metaphysical problem of consciousness.

Having said that, some features of quantum mechanics, such as non-locality/holism or the no-cloning and free-will theorems [15], could eventually help address some questions in the philosophy of mind, such as the binding problem or the problem of causal exclusion.

In light of this, quantum mechanics certainly deserves more consideration in the philosophy of mind. In my view, claiming that quantum effects reduce to “microscopic noise” simply disregards the epistemic depth of the measurement problem, just as claiming that the problem of consciousness is essentially biological disregards its ontological depth. These two “dogmas” of philosophy of mind are mutually reinforcing and we should reject them altogether if we want to make sense of consciousness as well as of quantum mechanics.
_____
Quentin Ruyant is a PhD student in philosophy of science in Rennes, France, and a former engineer. He maintains a blog dedicated to the popularization of philosophy of science (in French).
[1] Quantum approaches to consciousness, Stanford Encyclopedia of Philosophy.
[2] Macroscopic entanglement witnesses.
[3] Deconstructing decoherence.
[4] Our mathematical universe;  and Why physicists are saying consciousness is a state of matter, like a solid, a liquid or a gas.
[5] See “The completeness of the physical,” in Mental causation, Stanford Encyclopedia of Philosophy.
[6] Every Thing Must Go: Metaphysics Naturalized.
[7] No-cloning theorem.
[8] The Conscious Mind: In Search of a Fundamental Theory.
[9] The Born rule.
[10] On collapse theories and the Ghirardi-Rimini-Weber model.
[11] See this recent essay by Sean Carroll about why the many-worlds interpretation of QM is not that crazy after all.
[12] Integrated information theory.
[13] See: Holism and nonseparability in physics, Stanford Encyclopedia of Philosophy.
[14] Modal interpretations of quantum mechanics and Relational quantum mechanics, Stanford Encyclopedia of Philosophy.
[15] Free will theorem.

Thursday, July 17, 2014

David Chalmers: How Do You Explain Consciousness? (TED2014)


It's weird to see David Chalmers with short hair. For as long as I have been aware of him, he has had long hair that made him look like a member of Whitesnake's reunion tour, not a world-renowned philosopher.


Be that as it may, in this TED Talk from TED2014, Chalmers talks about the subject for which he is best known, consciousness.

David Chalmers: How Do You Explain Consciousness?

TED2014 · 18:37 | Filmed Mar 2014
Our consciousness is a fundamental aspect of our existence, says philosopher David Chalmers: "There's nothing we know about more directly… but at the same time it's the most mysterious phenomenon in the universe." He shares some ways to think about the movie playing in our heads.

Saturday, June 07, 2014

Physicist Michio Kaku Explains Consciousness for You


Hmmm . . . a physicist explaining consciousness. Interesting leap from one field to another on the part of one of the most successful science writers for a public audience. I have his new book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, but I have not made time to read it.

Here is his "space-time" definition (theory) of consciousness:
consciousness is the set of feedback loops necessary to create a model of our place in space with relationship to others and relationship to time
That is a nice, simple, utilitarian view of consciousness, an attempt to answer the "what does it do?" question. But it fails to confront (and nothing below suggests he even addresses) the "hard problem" of consciousness, i.e., HOW it arises: how does a 3.5-lb lump of tissue see and feel the color red? It does not address "why there is 'something it is like' for a subject in conscious experience, why conscious mental states 'light up' and directly appear to the subject."

His model also does not account for the existence of art or the desire to transcend waking states of consciousness. In the interview below, he addresses the "how" of humor and jokes, but not the "why" - why do we make humor and jokes, what purpose does it serve within a model of consciousness that is fundamentally about orienting us in space and time?

Perhaps I am being unfair, having not read the book, but what I see in the interview below does not inspire me to move the book up my list of things to read.

Michio Kaku Explains Consciousness for You

The gregarious physicist gets inside our brains.

By Luba Ostashevsky and Kevin Berger | June 5, 2014

The first thing we asked Michio Kaku when he stopped by Nautilus for an interview was what was a nice theoretical physicist like him doing studying the brain. Of course the outgoing Kaku, 67, a professor at City College of New York, and frequent cheerleader for science on TV and radio, had a colorful answer. He told us that one day as a child in Palo Alto, California, when the hometown of Stanford University was punctuated by apple orchards and alfalfa fields, he was struck by an obituary of Albert Einstein that mentioned the question that haunted the twilight of the great physicist’s life: how to unify the forces of nature into a “unified field theory.” Kaku, who in 2005 published a book on Einstein, and is a proponent of string theory in physics, has devoted his entire career to solving Einstein’s conundrum. But along the way, Kaku said, he has been fascinated by the other great mystery of nature: the origin of consciousness. In his new book, The Future of the Mind, Kaku has turned the physicist’s “rigorous” eye on the brain, charting its evolution, transformations, and mutations, arriving at futuristic scenarios of human brains melded with computers to amplify collective memory and intelligence. We found the book insightful and engaging and were struck by the confidence with which Kaku explains the nature of consciousness. He answered our questions with zest and insight—stirring, we might imagine, controversy among neuroscientists.


What’s a nice theoretical physicist like you doing studying the brain?

Well, first of all, in all of science, if you were to summarize two greatest mysteries, one would be the origin of the universe and one would be the origin of intelligence and consciousness. And as a physicist, I work in the first. I work in the theory of cosmology, of big bangs and multiverses. That’s my world, that’s my day job, that’s how I earn a living. However, I also realize that we physicists have been fascinated by consciousness. There are Nobel Prize winners who argue about the question of consciousness. Is there a cosmic consciousness? What does it mean to observe something? What does it mean to exist? So these are questions that we physicists have asked ourselves ever since Newton began to create laws of physics, and we began to understand that we too have to obey the laws of physics, and therefore we are part of the equation. And so there’s this huge gap that physicists have danced around for many, many decades and that is consciousness. So I decided—I said to myself, “Why not apply a physicist’s point of view to understand something as ephemeral as consciousness?” How do we physicists attack a problem? Well, first of all we create a model—a model of an electron, a proton, a planet in space. We begin to create the laws of motion for that planet and then understand how it interacts with the sun. How it goes around the sun, how it interacts with other planets. Then lastly we predict the future. We make a series of predictions for the future. So first we understand the position of the electron in space. Then we calculate the relationship of the electron to other electrons and protons. Third we run the videotape forward in time. That’s how we physicists work. 
So I said to myself, “Why not apply the same methodology to consciousness?” And then I began to realize that there are three levels of consciousness: the consciousness of space, that is, the consciousness of alligators and reptiles; the consciousness of relationship to others, that is, social animals, monkeys, animals which have a social hierarchy and emotions; and third, we run the videotape forward, we plan, strategize, scheme about the future. So I began to realize that consciousness itself falls into the same paradigm when we analyze physics and consciousness together.

What is your “space-time theory of consciousness?”

Well, I’m a physicist and we like to categorize things numerically. We like to rank things, to find the inter-relationship between things, and then to extrapolate into the future. That’s what we physicists, that’s how we approach a problem. But when it comes to consciousness, realize that there are over 20,000 papers written on consciousness. Never have so many people spent so much time to produce so little. So I wanted to create a definition of consciousness and then to rank consciousness. So I think that consciousness is the set of feedback loops necessary to create a model of our place in space with relationship to others and relationship to time. So take a look at animals for example. I would say that reptiles are conscious, but they have a limited consciousness in the sense they understand their position in space with regard to their prey, with regard to where they live, and that is basically the back of our brain. The back of our brain is the oldest part of the brain; it is the reptilian brain, the space brain. Then in the middle part of the brain is the emotional brain, the brain that understands our relationship to other members of our species. Etiquette, politeness, social hierarchy—all these things are encoded in the emotional brain, the monkey brain at the center of the brain. Then we have the highest level of consciousness, which separates us from the animal kingdom. You see animals really understand space, in fact better than us. Hawks, for example, have eyesight much better than our eyesight. We also have an emotional brain just like the monkeys and social animals, but we understand time in a way that animals don’t. We understand tomorrow. Now you can train your dog or a cat to perform many tricks, but try to explain the concept of tomorrow to your cat or a dog. Now what about hibernation? Animals hibernate, right? But that’s because it’s instinctual. It gets colder, instinct tells them to slow down and they eventually fall asleep and hibernate. 
We, on the other hand, we have to pack our bags, we have to winterize our home, we have to do all sorts of things to prepare for wintertime. And so we understand time in a way that animals don’t.

Why is a sense of time a key to understanding consciousness?

Well, we’re building robots now right? And the question is how conscious are robots? Well, as you can see, they are at a level one. They have the intelligence of a cockroach, the intelligence of an insect, the intelligence of a reptile. They don’t have emotions. They can’t laugh and they can’t understand who you are. They don’t understand who they are. There’s no understanding of a social pecking order. And, well, they understand time to a degree but only in one parameter. They can simulate the future only in one direction. We simulate the future in all dimensions—dimensions of emotions, dimensions of space and time. So we see that robots are basically at level one. And then one day, we may meet aliens from out of space and then the question is, well, if they’re smarter than us, what does that mean to be smarter than us? Well, to me, it means being able to daydream, strategize, plan much better than us. They will be several steps ahead of us if they are more intelligent than us. They could, quote, outwit us because they see the future. So that’s where we differ from the animals. We see the future. We plan, scheme, strategize. We can’t help it. And some people say, “Well bah humbug! I don’t believe this theory, there’s got to be exceptions, things that are outside the theory of consciousness like humor.” What could be more ephemeral than a joke? But think about a joke for a moment. Why is a joke funny? A joke is funny because you hear the joke, and then you mentally complete the punch line by yourself, and then when the punch line is different from what you anticipated, it is, quote, funny, okay? For example one of Roosevelt’s daughters was the gossip of the White House and she was famous for saying, “If you have nothing good to say about somebody, then please sit next to me.” Now why is that quote funny? It’s funny because you complete the sentence yourself: if you have nothing good to say about somebody, then don’t say anything at all. 
Your parents taught you that. But then the twist is “well come sit next to me.” And that’s why it’s, quote, funny. Or WC Fields was asked the question, “Are you in favor of social activities for youth? Like, are you in favor of clubs for youth?” And he said, “Well am I in favor of clubs for youth? Yes, but only if kindness fails.” That’s funny because we think clubs are social gatherings, but for WC Fields he twists the punch line and says, no a club is for hitting people. And that’s why that quote is funny—because we cannot help it. We mentally complete the future.

You say we have a “CEO” in our brain. What exactly is that?

Well, how do we differ from the animals? If you put, for example, a mouse between pain and pleasure, between a shock and food, or between two pieces of food, I’m sorry, it will actually, like the proverbial donkey, get confused. It’ll go back and forth, back and forth because it cannot evaluate. It cannot do the ultimate evaluation of something. It lacks a CEO to make the final decision. We have the CEO. It’s in the frontal part of the brain and we can actually locate where our sense of awareness is. You put the brain in an MRI scan, you ask the person to imagine yourself, and bingo! Right there, right behind your forehead it lights up. That is where you have your sense of self. And then when you have to make hard decisions between two things, animals have a hard time doing that because they’re being hit with all these different kinds of stimuli. It’s a hard decision for them. We, on the other hand, again that part lights up and that is, quote, the CEO that finally makes the final decision in evaluating all the other consequences. And how did we do this? By simulating the future. If you get candy and put a candy in front of a kid the kid says, “Well if I grab that candy will my mother be happy? Will my mother be sad? I mean, how will I pay for it?” That’s what goes on in your mind, you complete the future and that’s the part of the brain that lights up. So that’s how the CEO makes the decision between two things while animals do it by instinct, or they just get confused.

Your “CEO in the brain” seems to act with intent and purpose. But neurons just fire or don’t. You can’t say they have purpose, right?

There is a purpose behind our consciousness, and it is basically survival and reproduction. Think about your daydreaming: what do you daydream about? You daydream about survival first of all. Where's my next meal, or my job? How do I impress people to advance in my career? And so on. And then you think, "Hey, it's Friday night. I'm lonely. I want to go out and dance and have some fun." So if you think about it, there is a purpose, and that's why we have emotions. Emotions have a definite purpose: evolution gave us emotions because they're good for us. Take the concept of liking something. Most of the things you see around you are either neutral or actually dangerous; there's only a small sliver of things that are good for you, and emotions say, "I like this," because those things are good for you. Jealousy is very important as an emotion, for example, because it helps ensure your reproduction and the fact that your genes will carry on into the next generation. Anger, too. All these instinctual emotions are basically hardwired into us because we have to make split-second decisions that would take the prefrontal cortex many minutes to evaluate rationally. We don't have time for that. If you see a tiger, you feel fear, because it's dangerous and you have to run away. Then there's the other question that is sometimes asked: can a robot feel redness? How do we know that we are conscious? Because we can feel a sunset, or the enormous splendor of nature, and robots can't, right? Well, I don't believe that, because back in the old days people used to ask the question, "What is life?" I still remember, as a kid, all these essays and articles written about "What is life?" That question has pretty much disappeared.
Nobody asks that question anymore, because we now know, thanks to biotechnology, that the gradations make it a very complicated question. It's not just living and non-living: we have viruses and all sorts of things in between. So the question "What is life?" has pretty much faded away, and I think the question "What is consciousness, and can a machine understand redness?" will gradually disappear too. One day we will have a machine that understands redness much better than we do. It'll be able to analyze the electromagnetic spectrum, the poetry of redness, the lore of redness, the history of redness, much better than any human. One day, robots will have so much access to the Internet, and to sensors, that they will understand redness in a way most humans cannot, and the robots will conclude, "Can humans understand redness? I don't think so."

Granted, consciousness arises out of the brain. But what is consciousness itself? What, for instance, is the sense of redness?

Well, if you take a look at the circuitry of the brain, you realize that the brain's sensors are limited. Sometimes they can be mis-wired; that's called synesthesia. And you realize that certain parts of the brain register certain kinds of senses, including redness. Then the question is, can blind people understand redness? The answer is no, even though they have the receptors, the apparatus that could allow them to understand redness. So ultimately, I think you can create a robot with the same sensors and the same abilities, one that understands redness much better than a human and can recite poetry and make eloquent statements about the essence of redness better than any human poet. Then the question is, does the robot understand redness? At that point the question becomes irrelevant, because the robot can talk about, feel, and express the concept of redness many times better than any human. And what's going on in its mind? A bunch of circuits, a bunch of neurons firing, and so on. That will be redness.

What is self-awareness?

Well, again, there are thousands of papers written about self-awareness, and I have to give a definition in one sentence. My definition is very simple: self-awareness is when you put yourself in the model. In this model of space, your relationships to other humans, and your relationship to time, when you put yourself in that model, that is self-awareness. Then you ask, well, are robots self-aware? The answer is obviously no. When the computer Watson beat two humans on the game show Jeopardy! on national TV, many people thought, "Uh-oh, the robots are coming; they're going to put us in zoos. They're going to throw peanuts at us and make us dance behind bars, just as we do with bears." Wrong. Watson has no sense of self-awareness. You cannot go to Watson, slap it on the back, and say, "Good boy, you just beat two humans on Jeopardy!" Watson doesn't even know that it is a computer. Watson doesn't know what a human is. Watson doesn't even know that it won, because it does not have a model of itself as a machine, a model of humans as beings made of flesh and blood, or the categories of intelligence beyond understanding space and being able to navigate facts on the Internet. So again: self-awareness is when you put yourself in this model of space, time, and relationships to others.

Are we merely biological machines?

Well, we are definitely biological machines; there's no question about that. The question is, what does that mean? What does it mean for people's feelings about the universe and their sense of who they are? Are humans special, as opposed to animals? I see a continuum. Suppose you believe, as most psychologists and most people in the field do, that only humans are conscious: that humans really are different, that we are conscious and animals are not. Then take our evolutionary history. There's a continuum of ancestors going back millions, in fact billions, of years, and if you ask, "Well, at what point did we suddenly become conscious?", you begin to realize that this is a stupid question. Consciousness itself probably comes in a continuum. It has stages, as I mentioned, and so, in that sense, we are linked to the animal kingdom. Now, are we special? Again, it depends on how you define special, how you define soul. What I'm saying is: if you give me a criterion, are we x, y, z, then I say, "Okay, how do you measure it? Give me an experiment, one where I can put a human in a box and measure the criterion you've given me." So are we biological machines? The answer is yes, but what does that mean? Does a machine have a soul? Does a machine have something more? Well, define "more." Define "soul." Define "essence." Give me a definition, and I will give you an experiment by which we can differentiate yes or no. That's how we physicists think.

What’s the future of the human brain?

Well, first of all, I think the brain-machine interface is going to explode in terms of development, financing, and breakthroughs. The Pentagon is putting tens of millions of dollars into its revolutionary prosthetics program, because think of the thousands of veterans of Iraq and Afghanistan with injured spinal cords or missing arms and legs. We can now connect the brain directly to a mechanical arm or mechanical leg. At the World Cup in Brazil, the person who makes the opening kick will be partially paralyzed, wearing an exoskeleton. In fact, my colleague Stephen Hawking, the noted cosmologist, has now lost control of his fingers, so his brain has been connected to a computer. The next time you see him on television, look at the right frame of his glasses: there's an antenna there with a chip in it that connects him to a laptop computer. In this sense we now have telepathy: we are able to take human thoughts and carry out movements of objects in the material world. People who are totally paralyzed can now read email, write email, surf the web, do crossword puzzles, operate their wheelchairs, and operate household appliances. We've done this with animals. We've done this with humans. And in the future, since you ask about the future, we will have artificial memories as well. Last year, for the first time, a memory was recorded and implanted into a brain. At Wake Forest University and also in Los Angeles, researchers took a mouse, taught it how to sip water from a flask, and recorded the impulses ricocheting across the hippocampus, which is the gateway to memory. Later, when the mouse forgot how to do the task, they re-inserted the memory into the hippocampus and bingo! The mouse learned on the first try. Next will be primates.
For example, a primate eating a banana or learning how to manipulate a certain kind of toy: that memory can be recorded and then re-inserted into the brain. The short-term goal is to create a brain pacemaker, whereby people with Alzheimer's could simply push a button and know where they live, who they are, and who their kids are. Beyond that come even more complex memories. Maybe we'll be able to record the memory of a vacation you never had and upload it. Or you're a college student learning calculus by pushing a button. Or, if you're a worker who's been laid off because of technology, why not upgrade your skills? These are all real possibilities, because the politicians are now getting interested and putting big bucks, to the tune of a billion dollars, into the BRAIN Initiative.

How will artificial intelligence change our view of humanity?

Well, as Winston Churchill said, democracy is perhaps the worst form of government except for all the others that have been tried. People will vote; they will democratically decide how the human race evolves. Take designer children. We cannot create them today, but it's coming. The day will come when parents decide which genes they want propagated into their kids. Already, for example, if you're Jewish in Brooklyn and carry the potential for Tay-Sachs, a horrible genetic disease, you can be tested, the embryos can be tested, and you can abort them. So a form of genetic engineering is taking place right now, today: we can effectively engineer certain disease genes out of the gene pool. In the future we may be able to do this deliberately, and so we begin to realize that we may gain the power to control our genetic destiny. The same goes for intelligence: if we can upload memories, perhaps we'll have super memories, a whole library of them, so that we can learn calculus and all the subjects we flunked in college and have them inserted into our minds. As the decades go by, we may acquire these superhuman abilities. With exoskeletons we may be able to live on Mars and on other planets, with superpowers and the ability to breathe in different atmospheres. My point is that in a democracy, people will decide for themselves. We cannot decide for them. We cannot say, "that's immoral, that's moral." People in the future will democratically decide how they want their genetic and physical heritage to be propagated.

Sunday, September 08, 2013

Adrian D. Nelson - The Study of Fundamental Consciousness Entering the Mainstream

Christof Koch, working alongside Francis Crick, spent a couple of decades seeking the neural correlates of consciousness, and established himself as a leader in the study of consciousness and the ways the brain creates the mind. Yet, for all the discoveries and advances in our understanding of the brain, the how of converting sensory experience into subjective sensations remains a mystery.

Philosopher of mind David Chalmers has distinguished between the easy problem of consciousness and the hard problem of consciousness. The "easy problem" is essentially the area in which Koch and Crick were working - identifying the what, the neural correlates of consciousness.
Finding the neural correlates of consciousness is a problem of the same general type as finding the neural correlates of anything—language or memory, for instance. Neuroscience has made great progress in solving such problems in the past. Finding the brain regions and processes that correlate with consciousness is simply a matter of directing an existing research strategy from areas of previous success (language, memory) onto a different aspect of mental functioning (consciousness).

Solms, M., & Turnbull, O. (2010). The Brain and the Inner World: An Introduction to the Neuroscience of Subjective Experience (Kindle locations 799-802).
This approach seeks to understand which brain regions and/or processes correlate with conscious experience, and to identify where in the brain they reside. The "hard problem" is the why and the how.

In his own work, Koch is careful to distinguish between neural correlates of consciousness and a theory of consciousness.
It should be noted that discovering and characterizing the Neural Correlates of Consciousness in brains is not the same as a theory of consciousness. Only the latter can tell us why particular systems can experience anything, why they are conscious, and why other systems - such as the enteric nervous system or the immune system - are not. However, understanding the Neural Correlates of Consciousness is a necessary step toward such a theory. (Koch, C. Neural Correlates of Consciousness, Scholarpedia entry)
In his most recent book, Consciousness: Confessions of a Romantic Reductionist (2012), Koch admits his openness to non-materialist explanations of consciousness, including the possibility that consciousness is a fundamental feature of the universe. In this interview from The Atlantic, he goes a little further:
I was surprised to see your book invoke Pierre Teilhard de Chardin, the Jesuit priest and paleontologist who believed the universe is becoming more conscious as it gets more complex. Most scientists write off Teilhard as a religious apologist.

Koch: Most scientists don't even know about him. He had this idea about evolution where he argued that from very simple micro molecules to single cell organisms to multi-cell organisms to simple animals to complex animals to us is the emergence of complexity. He observed that the universe was getting more and more complex, and he postulated this would continue. Essentially, he postulated something like the Internet. He called it the "noosphere" -- the sphere of knowledge that covers the entire planet and is heavily interconnected. He died in 1955, long before any of this emerged, and he postulated that human society would evolve into a very complicated entity that would become self-conscious. He thought this would happen on other planets and throughout the entire universe, and the universe in some weird state would become self-conscious. It's all totally speculative, but I do like some of these ideas. I see a universe that's conducive to the formation of stable molecules and to life. And I do believe complexity is associated with consciousness. Therefore, we seem to live in a universe that's particularly conducive to the emergence of consciousness. That's why I call myself a "romantic reductionist."
The article below, from the Collective Evolution blog, offers a little overview of how the study of consciousness is changing in fundamental ways.

The Study of Fundamental Consciousness Entering the Mainstream

August 8, 2013 by Adrian D. Nelson



The world-renowned neuroscientist Christof Koch spent decades working alongside Francis Crick, co-discoverer of the structure of DNA. Together they searched for the neurobiological basis of consciousness. They discovered many insights into cognition and the functioning of perception, yet the central enigma, the nature of consciousness itself, remained mysteriously elusive.

In 2009, Koch shocked the scientific community by publishing his conviction that consciousness probably isn’t just in brains, but is a fundamental feature of reality. This is a view known to philosophers as ‘panpsychism.’ The theory that Koch is now dedicating his research to is called ‘Integrated Information Theory’ or ‘IIT.’ It is the brainchild of neuroscientist Giulio Tononi of the University of Wisconsin-Madison.

In explaining his theory, Tononi asks us to consider a simple light-sensitive photodiode like those found in a digital camera. A simple diode responds to just two states: light or dark. We could present our diode with any number of images, yet regardless of the picture, the diode settles into one of only two possible states. Is it light, or is it dark?

Now consider yourself looking at the same picture, let's say of the Eiffel Tower on a beautiful spring day in Paris. For us, looking at this image means ruling out a near infinity of alternative states: not an image of the Andromeda galaxy, not a childhood picture of your mother, not cells dividing in a Petri dish, and so on. Because of the vast number of images we are capable of recognizing, each one is highly informative. For Tononi, the vast amount of information capable of being integrated in the brain means that we have a comparatively huge capacity for consciousness.
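Tononi's diode comparison is, at bottom, elementary information theory: a system that reliably distinguishes N states gains log2(N) bits when it settles on one of them. A minimal sketch (the million-image figure for a human viewer is purely illustrative, not a measured quantity):

```python
import math

def bits(num_states: int) -> float:
    """Information, in bits, gained by picking out one of
    num_states equally likely alternatives (Shannon's measure)."""
    return math.log2(num_states)

# The photodiode distinguishes only two states: light or dark.
print(bits(2))          # 1.0 bit

# A viewer who can discriminate among, say, a million remembered
# images (an illustrative figure) gains far more information from
# the same picture.
print(bits(1_000_000))  # roughly 19.9 bits
```

The point of the toy calculation is only the asymmetry: the same picture carries one bit for the diode and vastly more for a system that could have recognized it as any of a huge repertoire of images.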

Tononi’s theory, that consciousness is born out of networks with high degrees of integrated information, can be tested in the laboratory in novel ways.

In studies of sleep, Tononi and his colleagues used transcranial magnetic stimulation to send a ripple of activity through the cortex of sleeping participants. They found that when participants were dreaming, this ripple reverberated through the cortex longer than during dreamless sleep, suggesting that when the brain is conscious, the cortex has a higher degree of integration.

In another experiment, the researchers built tiny robots known as ‘animats’ that they placed into mazes. The animats used simple integrated networks capable of evolving over sequential generations. To their surprise, the greater the degree of integration that the animats evolved, the quicker they were able to escape the mazes. For Tononi this finding suggested that consciousness may play a more central role in evolution than had previously been thought.

The mathematical measure of integrated information in a network is known as phi. But Tononi's theory, now the topic of serious mainstream discussion, has an extraordinary implication: phi doesn't occur only in brains; it is a property of any network whose total informational content is greater than that of its individual parts. Every living cell, every electronic circuit, even a proton consisting of just three elementary particles, has a value of phi greater than zero. According to Integrated Information Theory, all of these things possess at least a glimmer of 'what it is like' to be them. Tononi states:
“Consciousness is a fundamental property, like mass or charge. Wherever there is an entity with multiple states, there is some consciousness. You need a special structure to get a lot of it but consciousness is everywhere, it is a fundamental property.”
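Tononi's phi has a precise and computationally heavy definition involving a search over all possible partitions of a system. The sketch below uses a much cruder stand-in, the mutual information between two halves of a tiny two-unit system, purely to convey the intuition that an integrated whole carries more information than its parts taken separately. The `entropy` and `integration` helpers and the sample data are illustrative inventions, not part of IIT:

```python
from collections import Counter
import math

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution over samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def integration(pairs):
    """Crude integration proxy: H(A) + H(B) - H(A,B), i.e. the mutual
    information between the two halves of a two-unit system. Zero when
    the halves are statistically independent. NOT the formal IIT phi,
    which minimizes over every partition of the system."""
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    return entropy(a) + entropy(b) - entropy(pairs)

# Two independent coin-like units: no shared information.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly coupled units: each fully constrains the other.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]

print(integration(independent))  # 0.0 bits
print(integration(coupled))      # 1.0 bit
```

Real phi calculations minimize this kind of quantity over every way of cutting the network, which is why exact phi becomes intractable for systems of more than a handful of units.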
Integrated information theory is in its infancy, and there are still many questions it must face. Does the information of brains operate at the level of the neuron, or the protein, or something deeper still? The electromagnetic field of the brain, psi researcher Dean Radin has argued, may be continually re-establishing a quantum connection to the entire universe. Could a much richer informational interaction exist than has yet been imagined?

Physicists such as John Wheeler have laid the groundwork for a radical new understanding of reality, in which matter, the laws and constants of nature, and indeed the entire universe are best described not in terms of physical objects, but through the play and display of a fundamental dynamic information.

Quantum mechanics suggests that the entire physical universe is potentially interconnected at a deep level of nature. So is the total informational content of the universe integrated in some deep sense? Is it in a mysterious way conscious of itself?

As spiritual traditions throughout the ages have long asserted, rather than being isolated, separate experiencing beings, we may experience on behalf of the greater evolving system in which we find ourselves.

In Koch’s highly anticipated 2012 book, ‘Consciousness – Confessions of a Romantic Reductionist’, he states:
“I do believe that the laws of physics overwhelmingly favored the emergence of consciousness. The universe is a work in progress. Such a belief evokes jeremiads from many biologists and philosophers but the evidence from cosmology, biology and history is compelling.”
Regardless of the validity of Tononi's theory, increasing numbers of scientists and academics today are convinced that the existence of consciousness simply cannot be sensibly denied. The study of fundamental consciousness is now entering the mainstream. This movement consists of thinkers inside and outside the mind sciences. Yet despite their different academic backgrounds, they are united by two common convictions: that consciousness is an intrinsic rather than an incidental feature of the universe, and that any complete account of reality must include an explanation of it.

Sources:


Koch, C. (2009, August 18). A complex theory of consciousness: Is complexity the secret to sentience, to a panpsychic view of consciousness? Scientific American.

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216-242.

Edlund, J. A., Chaumont, N., Hintze, A., Koch, C., Tononi, G., & Adami, C. (2011). Integrated information increases with fitness in the evolution of animats. PLoS Computational Biology, 7(10).

Radin, D. I. (2006). Entangled Minds: Extrasensory Experiences in a Quantum Reality. New York: Simon & Schuster.

Koch, C. (2012). Consciousness: Confessions of a Romantic Reductionist. MIT Press Books.

Friday, August 09, 2013

Marcelo Gleiser - The Nature Of Consciousness: A Question Without An Answer?

It's a little strange to read Marcelo Gleiser (a professor of physics and astronomy) riffing about consciousness at NPR's 13.7 Cosmos and Culture blog, since this tends to be the domain of philosopher Alva Noë on that blog, but it is an interesting article.

Gleiser is thinking hard about David Chalmers' "Hard Problem of Consciousness," i.e., what it is like to be conscious and how that arises from the 3.5 lb lump of fatty tissue in our skulls. He rejects the notion that consciousness is a mere by-product of neuronal activity, as well as the computational model (the idea that we can simulate the brain and mind with a computer):
it becomes very hard to see how the subjective quality of the experiential mind will emerge from neuronal modeling in silicon chips: to capture thinking is not the same thing as capturing what the thinking is about.
I agree.

The Nature Of Consciousness: A Question Without An Answer?


by MARCELO GLEISER
August 07, 2013

How does our subjective reality emerge from the physical structures of the brain and body?
iStockphoto.com

Today I'd like to go back to a topic that leaves most people perplexed, me included: the nature of consciousness and how it "emerges" in our brains. I wrote about this a few months ago, promising to get back to it. At this point, no scientist or philosopher in the world knows how to answer it. If you think you know the answer, you probably don't understand the question:

Are you all matter?

Or, let's phrase it in a different way, a little less controversial and more amenable to a scientific discussion: how does the brain, a network of some 90 billion neurons, generate the subjective experience you have of being you?

Australian philosopher David Chalmers, now at New York University, dubbed this question "The Hard Problem of Consciousness." He did this to differentiate it from other problems, which he considers the "easy" ones, that is, those that can be solved through the diligent application of scientific research and methodology, as is already being done in cognitive and computational neuroscience. Even if some of these "easy" problems may take a century to solve, their difficulty doesn't come close to that of the "hard" problem, which, some speculate, may be insoluble.

Note that, even if the hard problem may be insoluble, the majority of scientists and philosophers still stick to the hypothesis that matter is all there is and that "you" exist as a neuronal construction within your brain (and body, as the two are linked in many ways, not all understood yet).

Here are some of the problems Chalmers calls easy:
  • The ability to discriminate and react to external stimuli
  • The integration of sensory information
  • The difference between a state of wakefulness and sleep
  • The intentional control of behavior
These questions are on the whole localized, amenable to a reductionist description of how specific parts of the brain operate as electrochemical circuitry through myriad neural connections.

Recently, Henry Markram from the Federal Polytechnic School in Lausanne (EPFL), Switzerland, received a billion-euro grant to lead the Human Brain Project, a consortium of more than a dozen European institutions that intends to create a full-blown simulation of the human brain. For this, they will need a supercomputer capable of more than a billion-billion operations per second (an exaflop, where "exa" stands for 10^18), about 50 times faster than today's high-end machines. Optimists believe that such computing power is within reach, possibly before the end of this decade.
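The prefix arithmetic behind the "about 50 times faster" claim is easy to check. Assuming a top 2013-era machine at roughly 17.6 petaflops (the approximate Linpack figure for ORNL's Titan, then near the top of the TOP500 list), an exaflop machine would indeed be some fifty-odd times faster:

```python
# SI prefixes: "exa" = 10**18, "peta" = 10**15
EXA = 10**18
PETA = 10**15

target_flops = 1 * EXA      # exascale goal for the brain simulation
todays_flops = 17.6 * PETA  # assumed 2013 high-end machine (roughly
                            # Titan's Linpack score)

print(target_flops / todays_flops)  # about 57x
```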

Of course, Markram's project, or the intent of modeling a human brain in full in a computer, clashes frontally with the notion of the hard problem.

Markram and the "computationalists" believe that if the simulation is sufficiently complete and detailed, including everything from the flow of neurotransmitters across each individual synapse to the amazingly complex network of trillions of inter-synaptic connections across the brain tissue, then it will function just as a human brain does, including a consciousness in every way as amazing as ours. To them, the hard problem doesn't exist: everything can be obtained by piling neuron upon neuron in computer-chip models, as bricks compose a house, along with all the other building details, the plumbing, the wiring, and so on.

Although we must agree that Markram's project is of enormous scientific importance, I can't quite see how a computer simulation can create something like a human consciousness. Perhaps some other kind of consciousness, but not ours.

Another philosopher from New York University (that ought to be an amazing department to work in), Thomas Nagel, argued that we are incapable of understanding what it is like to be another animal, with its own subjective experience. He took bats as an example, probably because they construct their sense of reality through echolocation and are so different from us. Drawing on ideas from MIT linguist Noam Chomsky, who has argued that every brain has cognitive limitations stemming from its design and evolutionary functionality (for example, a mouse will never talk), Nagel argued that we will never truly understand what it is like to be a bat.

This is another way of thinking about Chalmers' hard problem, what philosopher Colin McGinn calls "cognitive closure." (McGinn has just left the University of Miami after much controversy. Who knows, maybe he will also join NYU's philosophy department?)

Back to McGinn's ideas: he and other "mysterians" defend the idea that our brains can only do so much, and one of the things they cannot do is understand the nature of consciousness. Since this is a philosophical argument, there is of course no scientific proof of the limitation (what physicists fondly call a "no-go theorem"), but McGinn makes a compelling case, arguing that the difficulty comes from consciousness being nowhere and everywhere in the brain, and thus not amenable to the methodical reductionist analysis we tend to apply to scientific issues.

This being the case, it becomes very hard to see how the subjective quality of the experiential mind will emerge from neuronal modeling in silicon chips: to capture thinking is not the same thing as capturing what the thinking is about.

McGinn leaves the door open to more advanced intelligences, with brains designed in more capable ways than ours. Of course, unless you are Ray Kurzweil, convinced that it is just a matter of time before machines not only simulate the mind but leave us all behind, we can't reliably predict whether such technological marvels will come to be. But even if a more advanced (machine?) intelligence one day figures out what consciousness is about, for today we will have to keep living with the mystery of not knowing.