Sunday, March 10, 2013

Ray Kurzweil's "How to Create a Mind" Reviewed by Philosopher Colin McGinn


In the New York Review of Books, eminent philosopher Colin McGinn reviews Ray Kurzweil's new and somewhat controversial book, How to Create a Mind: The Secret of Human Thought Revealed.

McGinn begins the review by rightly pointing out that Kurzweil is not a professional neuroscientist, psychologist, or philosopher. Given this, he seems incredulous that Kurzweil's book promises to reveal “the secret of human thought” and that Kurzweil makes the bold assertion that he knows “how to create a mind.”

Although my reasons for finding Kurzweil's claims too far-reaching to be taken seriously differ in part from McGinn's, I am in agreement with much of what he writes here.

As a little background, here is a very brief sketch of McGinn from Wikipedia:
Colin McGinn (born 10 March 1950) is a British philosopher, currently Professor of Philosophy and Cooper Fellow at the University of Miami. He previously held teaching positions at the University of Oxford and Rutgers University. 
McGinn is best known for his work in the philosophy of mind, and is the author of over 20 books on this and other areas of philosophy, including The Character of Mind (1982), The Problem of Consciousness (1991), Consciousness and Its Objects (2004), and The Meaning of Disgust (2011).

Perhaps most relevant to this review, he is also the author of The Character of Mind: An Introduction to the Philosophy of Mind (OPUS, 1997) and The Mysterious Flame: Conscious Minds in a Material World (2000).

Homunculism

MARCH 21, 2013

Colin McGinn



Eric Edelman: Inspiration of a Dreamer, 2013

How to Create a Mind: The Secret of Human Thought Revealed
by Ray Kurzweil
Viking, 336 pp., $27.95

According to Wikipedia, Ray Kurzweil is an
American author, inventor, futurist, and director of engineering at Google. Aside from futurology, he is involved in such fields as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments.
So he is a computer engineer specializing in word recognition technology, with a side interest in bold predictions about future machines. He is not a professional neuroscientist or psychologist or philosopher. Yet here we have a book purporting to reveal—no less—“the secret of human thought.” Kurzweil is going to tell us, in no uncertain terms, “how to create a mind”: that is to say, he has a grand theory of the human mind, in which its secrets will be finally revealed.

These are strong claims indeed, and one looks forward eagerly to learning what this new theory will look like. Perhaps at first one feels a little skeptical that Kurzweil has succeeded where so many have failed, but one tries to keep an open mind—hoping the book will justify the hype so blatantly brandished in its title. After all, Kurzweil has honors from three US presidents (so says Wikipedia) and was the “principal inventor of the first CCD flatbed scanner” and other useful devices, as well as receiving many other entrepreneurial awards. He is clearly a man of many parts—but is ultimate theoretician of the mind one of them?

What is this grand theory? It is set out in chapter 3 of the book, “A Model of the Neocortex: The Pattern Recognition Theory of Mind.” One cannot help noting immediately that the theory echoes Kurzweil’s professional achievements as an inventor of word recognition machines: the “secret of human thought” is pattern recognition, as it is implemented in the hardware of the brain. To create a mind therefore we need to create a machine that recognizes patterns, such as letters and words. Calling this the PRTM (pattern recognition theory of mind), Kurzweil outlines what his theory amounts to by reference to the neural architecture of the neocortex, the wrinkled thin outer layer of the brain.

According to him, there are about 300 million neural pattern recognizers in the neocortex, with a distinctive arrangement of dendrites and axons (the tiny fibers that link one neuron to another). A stimulus is presented, say, the letter “A,” and these little brain machines respond by breaking it down into its geometric constituents, which are then processed: thus “A” is analyzed into a horizontal bar and two angled lines meeting at a point. By recognizing each constituent separately, the neural machine can combine them and finally recognize that the stimulus is an instance of the letter “A.” It can then use this information to combine with other letter recognizers to recognize, say, the word “APPLE.” This procedure is said to be “hierarchical,” meaning that it proceeds by part-whole analysis: from elementary shapes, to letters, to words, to sentences. To recognize the whole pattern you first have to recognize the parts.

The process of recognition, which involves the firing of neurons in response to stimuli from the world, will typically include weightings of various features, as well as a lowering of response thresholds for probable constituents of the pattern. Thus some features will be more important than others to the recognizer, while the probability of recognizing a presented shape as an “E” will be higher if it occurs after “APPL.”
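
To make this mechanism concrete, here is a minimal sketch in Python of a hierarchical recognizer with weighted constituents and a primeable threshold. It is an illustration only: the feature lists, weights, and threshold values are invented for the example and are not taken from Kurzweil's book.

```python
# Illustrative sketch of Kurzweil-style hierarchical pattern recognition.
# Each recognizer scores a stimulus by summing the weights of the constituent
# features it detects, and "fires" when the score clears its threshold.
# A higher-level recognizer can prime a lower-level one by lowering that
# threshold when its pattern is expected. All numbers here are made up.

class Recognizer:
    def __init__(self, name, weighted_parts, threshold=1.0):
        self.name = name
        self.weighted_parts = weighted_parts  # {constituent feature: weight}
        self.threshold = threshold

    def prime(self, factor):
        """Lower the firing threshold because this pattern is expected soon."""
        self.threshold *= factor

    def recognize(self, observed):
        """Fire if the weighted sum of observed constituents clears the threshold."""
        score = sum(w for part, w in self.weighted_parts.items() if part in observed)
        return score >= self.threshold

# Level 1: a letter recognizer built from geometric constituents.
letter_E = Recognizer("E", {"vertical_bar": 0.40, "top_bar": 0.25,
                            "middle_bar": 0.25, "bottom_bar": 0.25})

# Level 2: a word recognizer whose constituents are letter-level outputs
# (positional tags stand in for "the letter X in position n").
word_APPLE = Recognizer("APPLE", {"A@1": 0.25, "P@2": 0.25, "P@3": 0.25,
                                  "L@4": 0.25, "E@5": 0.25})

# A degraded "E": the middle bar is missing from the stimulus.
stimulus = {"vertical_bar", "top_bar", "bottom_bar"}
print(letter_E.recognize(stimulus))       # False: score 0.90 < threshold 1.0

# Having already seen "A", "P", "P", "L", the word recognizer expects an "E"
# and primes the letter recognizer by lowering its threshold.
letter_E.prime(0.8)                        # threshold drops from 1.0 to 0.8
print(letter_E.recognize(stimulus))       # True: the degraded "E" now passes

# The word recognizer then fires on the recognized letters.
print(word_APPLE.recognize({"A@1", "P@2", "P@3", "L@4", "E@5"}))  # True: 1.25 >= 1.0
```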

These recognizers will therefore be “intelligent,” able to anticipate and correct for poverty and distortion in the stimulus. This process mirrors our human ability to recognize a face, say, when in shadow or partially occluded or drawn in caricature. Kurzweil contends that such pattern recognizers are uniform across the brain, so that all regions of the neocortex work in basically the same manner. This is why, he thinks, the brain exhibits plasticity: one part can take over the job performed by another part because all parts work according to the same principles.

It is this uniformity of anatomy and function that emboldens him to claim that he has a quite general theory of the mind, since pattern recognition is held to be the essence of mind and all pattern recognition is implemented by the same basic neural mechanisms. And since we can duplicate these mechanisms in a machine, there is nothing to prevent us from creating an artificial mind—we just need to install the right pattern recognizers (which Kurzweil can manufacture for a price). The “secret of thought” is therefore mechanical pattern recognition, with hierarchical structure and suitable weightings for constituent features. All is revealed!

What are we to make of this theory? First, pattern recognition is a subject much studied by perceptual psychologists, so Kurzweil is hardly original in calling attention to it (I worked on it myself as a psychology student back in 1970). What is more original is his contention that it provides the key to mental phenomena in general.

However, that claim seems obviously false. Pattern recognition pertains to perception specifically, not to all mental activity: the perceptual systems process stimuli and categorize what is presented to the senses, but that is only part of the activity of the mind. In what way does thinking involve processing a stimulus and categorizing it? When I am thinking about London while in Miami I am not recognizing any presented stimulus as London—since I am not perceiving London with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So pattern recognition cannot be the essential nature of thought. This point seems totally obvious and quite devastating, yet Kurzweil has nothing to say about it, not even acknowledging the problem.

He does in one place speak of dreaming as a “sequence of patterns” and he might try to say the same about thinking. But this faces obvious objections. First, even if that is true, there is no pattern recognition involved when I dream, or when I think about London and my friends and relatives there. So his “model of the neocortex” does not apply. Second, it is quite unclear what this description is supposed to mean. Why is a dream a sequence of “patterns,” instead of just ideas or images or hallucinations? The notion of “pattern” has lost its moorings in the geometric models of letters and faces: Are we seriously to suppose that dreams and thoughts have geometrical shape? At best the word “pattern” is now being used loosely and metaphorically; there is no theory of dreaming or thinking here. Similarly for Kurzweil’s claim that memories are “sequences of patterns”: What notion of pattern is he working with here? Why is remembering that I have to feed the cat itself some kind of pattern?

What has happened is that he has switched from patterns as stimuli in the external environment to patterns as mental entities, without acknowledging the switch; and it is hardly plausible to suggest that dreams and thoughts are themselves geometric patterns that we introspectively recognize. So what is the point of calling dreams and thoughts “patterns”? The truth is that the PRTM does not generalize beyond its original home of sensory perception—the recognition of external patterns in the environment.

Indeed, it is notable that Kurzweil makes no serious effort to generalize beyond the perceptual case, blithely proceeding as if everything mental involves perception. In fact, it is not even clear that all perception involves pattern recognition in any significant sense. When I see an apple as red, do I recognize the color as a pattern? No, because the color is not a geometric arrangement of shapes or anything analogous to that—it is simply a homogeneous sensory quality. Is the sweetness of sugar or the smell of a rose a pattern? Not every perceived feature of objects resembles a letter of the alphabet or a word—the objects of Kurzweil’s professional interest and expertise.

Then there are such mental phenomena as emotion, imagination, reasoning, willing, intending, calculating, silently talking to oneself, feeling pain and pleasure, itches, and moods—the full panoply of the mind. In what useful sense do all these count as “pattern recognition”? Certainly they are nothing like the perceptual cases on which Kurzweil focuses. He makes no attempt to explain how these very various mental phenomena fit his supposedly general theory of mind—and they clearly do not. So he has not shown us how to “create a mind,” or come anywhere near to doing so. Thus the hype of the title explodes very early and with a feeble fizzle. Why write a book with such an ambitious title and then deliver so little?

There is another glaring problem with Kurzweil’s book: the relentless and unapologetic use of homunculus language. Kurzweil writes: “The firing of the axon is that pattern recognizer shouting the name of the pattern: ‘Hey guys, I just saw the written word “apple.”’” Again:
If, for example, we are reading from left to right and have already seen and recognized the letters “A,” “P,” “P,” and “L,” the “APPLE” recognizer will predict that it is likely to see an “E” in the next position. It will send a signal down to the “E” recognizer saying, in effect, “Please be aware that there is a high likelihood that you will see your “E” pattern very soon, so be on the lookout for it.” The “E” recognizer then adjusts its threshold such that it is more likely to recognize an “E.”
Presumably (I am not entirely sure) Kurzweil would agree that such descriptions cannot be taken literally: individual neurons don’t say things or predict things or see things—though it is perhaps as if they do. People say and predict and see, not little bunches of neurons, still less bits of machines. Such anthropomorphic descriptions of cortical activity must ultimately be replaced by literal descriptions of electric charge and chemical transmission (though they may be harmless for expository purposes). Still, they are not scientifically acceptable as they stand.

But the problem bites deeper than that, for two reasons. First, homunculus talk can give rise to the illusion that one is nearer to accounting for the mind, properly so-called, than one really is. If neural clumps can be characterized in psychological terms, then it looks as if we are in the right conceptual ballpark when trying to explain genuine mental phenomena—such as the recognition of words and faces by perceiving conscious subjects. But if we strip our theoretical language of psychological content, restricting ourselves to the physics and chemistry of cells, we are far from accounting for the mental phenomena we wish to explain. An army of homunculi all recognizing patterns, talking to each other, and having expectations might provide a foundation for whole-person pattern recognition; but electrochemical interactions across cell membranes are a far cry from actually consciously seeing something as the letter “A.” How do we get from pure chemistry to full-blown psychology?

And the second point is that even talk of “pattern recognition” by neurons is already far too homunculus-like for comfort: people (and animals) recognize patterns—neurons don’t. Neurons simply emit electrical impulses when caused to do so by impinging stimuli; they don’t recognize anything in the literal sense. Recognizing is a conscious mental act. Neither do neurons read or understand—though they may be said to simulate these mental acts.

Eric Edelman: An Unanswered Question, 2013

Here I must say something briefly about the standard language that neuroscience has come to assume in the last fifty or so years (the subject deserves extended treatment). Even in sober neuroscience textbooks we are routinely told that bits of the brain “process information,” “send signals,” and “receive messages”—as if this were as uncontroversial as electrical and chemical processes occurring in the brain. We need to scrutinize such talk with care. Why exactly is it thought that the brain can be described in these ways? It is a collection of biological cells like any bodily organ, much like the liver or the heart, which are not apt to be described in informational terms. It can hardly be claimed that we have observed information transmission in the brain, as we have observed certain chemicals; this is a purely theoretical description of what is going on. So what is the basis for the theory?

The answer must surely be that the brain is causally connected to the mind and the mind contains and processes information. That is, a conscious subject has knowledge, memory, perception, and the power of reason—I have various kinds of information at my disposal. No doubt I have this information because of activity in my brain, but it doesn’t follow that my brain also has such information, still less microscopic bits of it. Why do we say that telephone lines convey information? Not because they are intrinsically informational, but because conscious subjects are at either end of them, exchanging information in the ordinary sense. Without the conscious subjects and their informational states, wires and neurons would not warrant being described in informational terms.

The mistake is to suppose that wires and neurons are homunculi that somehow mimic human subjects in their information-processing powers; instead they are simply the causal background to genuinely informational transactions. The brain considered in itself, independently of the mind, does not process information or send signals or receive messages, any more than the heart does; people do, and the brain is the underlying mechanism that enables them to do so. It is simply false to say that one neuron literally “sends a signal” to another; what it does is engage in certain chemical and electrical activities that are causally connected to genuine informational activities.

Contemporary brain science is thus rife with unwarranted homunculus talk, presented as if it were sober established science. We have discovered that nerve fibers transmit electricity. We have not, in the same way, discovered that they transmit information. We have simply postulated this conclusion by falsely modeling neurons on persons. To put the point a little more formally: states of neurons do not have propositional content in the way states of mind have propositional content. The belief that London is rainy intrinsically and literally contains the propositional content that London is rainy, but no state of neurons contains that content in that way—as opposed to metaphorically or derivatively (this kind of point has been forcibly urged by John Searle for a long time).

And there is theoretical danger in such loose talk, because it fosters the illusion that we understand how the brain can give rise to the mind. One of the central attributes of mind is information (propositional content) and there is a difficult question about how informational states can come to exist in physical organisms. We are deluded if we think we can make progress on this question by attributing informational states to the brain. To be sure, if the brain were to process information, in the full-blooded sense, then it would be apt for producing states like belief; but it is simply not literally true that it processes information. We are accordingly left wondering how electrochemical activity can give rise to genuine informational states like knowledge, memory, and perception. As so often, surreptitious homunculus talk generates an illusion of theoretical understanding.*

Returning to Ray Kurzweil, I must applaud his chapter on consciousness and free will—for its existence, if not for its content. He is at least aware that these are difficult philosophical and scientific problems; he commendably refrains from offering facile “solutions” of the kind beloved by the brain-enamored. But the chapter sits ill with the earlier parts of the book, in which we are confidently assured that the author has a grand theory of the mind, in the form of the PRTM. For consciousness and free will are surely central aspects of the human mind and yet Kurzweil makes no claim (wisely) that they can be reductively explained by means of his 300 million “pattern recognizers” (which don’t, as I have noted, really recognize anything).

To create a mind one needs at a minimum to create consciousness, but Kurzweil doesn’t even attempt to describe a way for doing that. He is content simply to record his conviction (he calls it a “leap of faith”) that if a machine can pass the Turing test we can declare it to be conscious—that is, if it talks like a conscious being it must be a conscious being. But this is not to provide any theory of the mechanism of consciousness—of what it is in the brain that enables an organism to be conscious. Clearly, unconscious processes of so-called “pattern recognition” in the neocortex will not suffice for consciousness, being precisely unconscious. All we really get in this chapter is a ramble over very familiar terrain, with nothing added to what currently exists. Worse, there are some quite execrable remarks about the philosophy of Wittgenstein, which demonstrate zero understanding of his philosophy during the periods of the Tractatus Logico-Philosophicus and the Philosophical Investigations. Kurzweil asks:
What is it that the later Wittgenstein thought was worth thinking and talking about? It was issues such as beauty and love, which he recognized exist imperfectly in the minds of men.
So what are we to make of all the discussion of language and meaning in the Investigations? Kurzweil is way out of his depth here.

The computer engineer gets back to his main field of competence in the penultimate chapter, which restates his earlier published views about the future of information technology. His “futurist” thesis is that computing power doubles every year—information technology improves exponentially, not linearly (he calls this the Law of Accelerating Returns). He boasts that this prediction has been borne out every year since 1890 (the year of the first automated US census), and there does seem to be an empirical basis for it. But is it a law of nature and if so of what kind? What exactly is the reason for it? Technology does not in general improve exponentially, so what is it about information technology that makes this putative law hold? Is it somehow inherent in information itself? That seems hard to understand. Perhaps it is just the way things have contingently been so far, so that the rate of growth may slow down at any minute.
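
For a sense of what the exponential claim amounts to in numbers, here is a toy comparison of linear growth against yearly doubling. The starting value and the linear increment are arbitrary illustrative figures; only the doubling rule corresponds to Kurzweil's claim.

```python
# Toy comparison of linear growth vs. "doubling every year" exponential growth.
# The starting value and linear increment are arbitrary illustrative numbers;
# only the doubling rule reflects Kurzweil's Law of Accelerating Returns.

def linear(start, increment, years):
    return start + increment * years

def doubling(start, years, doubling_period=1.0):
    return start * 2 ** (years / doubling_period)

start = 1.0  # arbitrary units, e.g. computations per second per dollar
for years in (1, 10, 25, 50):
    print(f"after {years:>2} years: linear = {linear(start, 1.0, years):>4.0f}, "
          f"doubling = {doubling(start, years):,.0f}")

# after  1 years: linear =    2, doubling = 2
# after 10 years: linear =   11, doubling = 1,024
# after 25 years: linear =   26, doubling = 33,554,432
# after 50 years: linear =   51, doubling = 1,125,899,906,842,624
```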

Kurzweil acknowledges that there are physical limits on the “law,” imposed by the structure of the atom and its possible states; it is not that computing power will double every year for all eternity! So the “law” doesn’t seem much like other scientific laws, such as the law of gravity or even the law of supply and demand. What seems to me worth noting is that the growth of information technology does not depend on the nature of the material substrate in which information exists (such as silicon chips), because new substrates keep being invented. Once the information capacity of one medium has been exhausted, engineers come up with a new medium, with even more potential states and yet more tightly packed. But then the “law” depends on a prediction about human ingenuity—that we will keep inventing ever more powerful physical systems for computation.

It is therefore ultimately a psychological law: to the effect that human creativity in the field of information technology improves exponentially. And that doesn’t look like a natural law at all, but just a fortunate historical fact about the twentieth century. Thus Kurzweil’s “law” is more likely to be fortuitous than genuinely law-like: there is no necessity that information technology improves exponentially over (all?) time. It is just an accidental, though interesting, historical fact, not written into the basic workings of the cosmos. As philosophers say, the generalization lacks nomological necessity.

Here then is my overall assessment of this book: interesting in places, fairly readable, moderately informative, but wildly overstated.

* Not all neuroscience employs homuncular language. Many neuroscientists limit themselves to descriptions of electrical and chemical activity in the brain. The recent announcement by the Obama administration of an ambitious project to map the human brain seems commendably free of homunculus mythology. The same can be said for a recent article in the journal Neuron by six scientists recommending such a project. See A. Paul Alivisatos et al., “The Brain Activity Map Project and the Challenge of Functional Connectomics,” Neuron, Vol. 74 (June 21, 2012).
