
Thursday, December 11, 2014

Perspectives on Artificial Intelligence

Artificial intelligence.

Kevin Kelly  
 
Conversations at the Edge 2.3.14
 
* * * * * 

Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review

Will technology remain our slave? Caspar Henderson on two attempts to read the future for humanity

Caspar Henderson | The Guardian
Thursday 17 July 2014

* * * * *

What Your Computer Can’t Know


John R. Searle | New York Review of Books
October 9, 2014
The 4th Revolution: How the Infosphere Is Reshaping Human Reality
by Luciano Floridi
Oxford University Press, 248 pp., $27.95

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Oxford University Press, 328 pp., $29.95

* * * * *

Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong

By Lee Gomes | IEEE Spectrum
Posted 20 Oct 2014

* * * * *

The Myth Of AI

A Conversation with Jaron Lanier

Conversations at the Edge 11.14.14

* * * * *

Artificial Intelligence, Really, Is Pseudo-Intelligence


Alva Noë | NPR 13.7 Cosmos and Culture Blog
November 21, 2014

* * * * *

Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen wonders: If the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

By Kurt Andersen
November 26, 2014

* * * * *

Is AI a Myth?


By Rick Searle | IEET
Utopia or Dystopia
Nov 30, 2014

* * * * *

Stephen Hawking warns artificial intelligence could end mankind


By Rory Cellan-Jones
BBC News | 2 December 2014

Thursday, June 12, 2014

Kurzweil Does Not Accept Victory in the Turing Test Bet

 

The other day, Kevin Warwick and his team at the University of Reading reported that a chatbot named Eugene Goostman had become the first artificial intelligence to pass the Turing Test.

For those who follow such things, inventor, futurist, and Google director of engineering Ray Kurzweil has a standing $20,000 wager with Mitch Kapor that a computer will pass the Turing Test by 2029. Based on the report cited above, it would appear Kurzweil has won the bet.

The only problem is that Kurzweil does not think so, which is not good news for the researchers and their bot.

Here is Kurzweil's statement from his blog:

Response by Ray Kurzweil to the Announcement of Chatbot Eugene Goostman Passing the Turing test

June 10, 2014 by Ray Kurzweil
Eugene Goostman chatbot. (credit: Vladimir Veselov and Eugene Demchenko)

Two days ago, on June 8, 2014, the University of Reading announced that a computer program “has passed the Turing test for the first time.”

University of Reading Professor Kevin Warwick described it this way:
“Some will claim that the test has already been passed. The words ‘Turing test’ have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s test was passed for the first time on Saturday.” — Kevin Warwick, PhD
I have had a long-term wager with Mitch Kapor in which I predicted that a computer program would pass the Turing test by 2029 and he predicted that this would not happen (see links below).

This was the first long-term wager on the “Long Now” website. The bet called for $20,000 to be donated from us to the charity of the winner’s choice.

As a result, messages have been streaming in from around the world congratulating me for having won the bet.

However, I think this is premature. I am disappointed that Professor Warwick, with whom I agree on many things, would make this statement. There are several problems that I describe below, including a transcript of a conversation that I had with Eugene Goostman, the chatbot in question.

In my 2005 book The Singularity Is Near, I anticipated that there would be premature announcements of this kind:
The Singularity Is Near, page 295 | Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed. In 2002 I negotiated the rules for a Turing test wager with Mitch Kapor on the Long Now website.
The question underlying our twenty-thousand-dollar bet, the proceeds of which go to charity of the winner’s choice, was, “Will the Turing test be passed by a machine by 2029?” I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager.
Simply defining “machine” and “human,” for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?
Because the definition of the Turing test will vary from person to person, Turing test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.
Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction. Moreover, the interactions were reportedly limited to five minutes each. There is a high likelihood of fooling naïve judges in such a brief period of time.

I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs. Here is the transcript.


Ray Kurzweil conversation with chatbot Eugene Goostman

Question: How old are you?
Eugene: I’m a little boy. 13 years old.
Question: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.
Question: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.
Question: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I did tell you where I live. I told you several times. Do you remember?
Eugene: Oh damn! Poor confused cyborg you are! Those malicious robots erased your memory! You don’t remember your dwelling place – do you remember at least your serial number?
Question: I remember. You are the one who did not remember.
Eugene: Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? :-))) If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: If I have three boxes with two marbles in each box how many marbles do I have?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. Some another topic?

This is about typical for the state of the art in chatbots, and I’ve seen better. We have a live chatbot named Ramona on the KurzweilAI website. Ramona can be amusing but we don’t claim that she passes the Turing test.
Ramona live chatbot on KurzweilAI website | “Chat with Ramona 4.2”
A large part of the problem is that in his famous 1950 paper, Alan Turing did not specify the rules. Here are the rules that Mitch Kapor and I painstakingly devised together, along with our essays on why we think each of us will win the wager.
The rules I devised with Mitch Kapor | “A Wager on the Turing test: the rules”
An explanation of rules behind the Turing test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.
Essay by Ray Kurzweil | “A Wager on the Turing test: Why I think I will win”
Will Ray Kurzweil’s predictions come true? He’s putting his money on it. Here’s why he thinks he will win a bet on the future of artificial intelligence. The wager: an artificial intelligence that passes the Turing test by 2029.
Essay by Mitch Kapor | “Why I think I will win”
Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.
Essay by Ray Kurzweil | “Response to Mitchell Kapor’s essay titled ‘Why I think I will win’”
Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will succeed, in this final counterpoint on the bet: an artificial intelligence will pass a Turing Test by 2029.
Apparently, we have now entered the era of premature announcements of a computer having passed Turing’s eponymous test. I continue to believe that with the right rules, this test is the right assessment of human-level intelligence in a machine.

In my 1989 book The Age of Intelligent Machines, I predicted that the milestone of a computer passing the Turing test would occur in the first half of the 21st century. I specified the 2029 date in my 1999 book The Age of Spiritual Machines. After that book was published, we had a conference at Stanford University and the consensus of AI experts at that time was that it would happen in hundreds of years, if ever.

In 2006 we had a conference called “AI at 50” at Dartmouth College, celebrating the 50th anniversary of the 1956 Dartmouth conference that gave artificial intelligence its name. We had instant polling devices and the consensus at that time, among AI experts, was 25 to 50 years. Today, my prediction appears to be the median view. So, I am gratified that a growing group of people now think that I am being too conservative.


Monday, April 14, 2014

A.I. Has Grown Up and Left Home


As my regular readers well know, I don't think we will ever have human-like robots who can interact with us as though they are not machines. This article from Nautilus presents recent advances in what are known as subsymbolic approaches to AI: trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all.

A.I. Has Grown Up and Left Home

It matters only that we think, not how we think.

By David Auerbach | Illustration by Olimpia Zagnoli | December 19, 2013

"The history of Artificial Intelligence,” said my computer science professor on the first day of class, “is a history of failure.” This harsh judgment summed up 50 years of trying to get computers to think. Sure, they could crunch numbers a billion times faster in 2000 than they could in 1950, but computer science pioneer and genius Alan Turing had predicted in 1950 that machines would be thinking by 2000: Capable of human levels of creativity, problem solving, personality, and adaptive behavior. Maybe they wouldn’t be conscious (that question is for the philosophers), but they would have personalities and motivations, like Robbie the Robot or HAL 9000. Not only did we miss the deadline, but we don’t even seem to be close. And this is a double failure, because it also means that we don’t understand what thinking really is.

Our approach to thinking, from the early days of the computer era, focused on the question of how to represent the knowledge about which thoughts are thought, and the rules that operate on that knowledge. So when advances in technology made artificial intelligence a viable field in the 1940s and 1950s, researchers turned to formal symbolic processes. After all, it seemed easy to represent “There’s a cat on the mat” in terms of symbols and logic:
∃x ∃y (Cat(x) ∧ Mat(y) ∧ SittingOn(x, y))
Literally translated, this reads as “there exists variable x and variable y such that x is a cat, y is a mat, and x is sitting on y.” Which is no doubt part of the puzzle. But does this get us close to understanding what it is to think that there is a cat sitting on the mat? The answer has turned out to be “no,” in part because of those constants in the equation. “Cat,” “mat,” and “sitting” aren’t as simple as they seem. Stripping them of their relationship to real-world objects, and all of the complexity that entails, dooms the project of making anything resembling a human thought.

This lack of context was also the Achilles heel of the final attempted moonshot of symbolic artificial intelligence. The Cyc Project was a decades-long effort, begun in 1984, that attempted to create a general-purpose “expert system” that understood everything about the world. A team of researchers under the direction of Douglas Lenat set about manually coding a comprehensive store of general knowledge. What it boiled down to was the formal representation of millions of rules, such as “Cats have four legs” and “Richard Nixon was the 37th President of the United States.” Using formal logic, the Cyc (from “encyclopedia”) knowledge base could then draw inferences. For example, it could conclude that the author of Ulysses was less than 8 feet tall:

(implies
  (writtenBy Ulysses-Book ?SPEAKER)
  (equals ?SPEAKER JamesJoyce))
(isa JamesJoyce IrishCitizen)
(isa JamesJoyce Human)
(implies
  (isa ?SOMEONE Human)
  (maximumHeightInFeet ?SOMEONE 8))
Unfortunately, not all facts are so clear-cut. Take the statement “Cats have four legs.” Some cats have three legs, and perhaps there is some mutant cat with five legs out there. (And Cat Stevens only has two legs.) So Cyc needed a more complicated rule, like “Most cats have four legs, but some cats can have fewer due to injuries, and it’s not out of the realm of possibility that a cat could have more than four legs.” Specifying both rules and their exceptions led to a snowballing programming burden.
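To see why that burden snowballs, here is a toy sketch in Python (not CycL, and nothing like how Cyc actually stores its assertions) in which the general rule and every exception must be coded by hand; the individuals and numbers are invented for illustration:

# Toy default reasoning: one general rule plus hand-coded exceptions.
# Every new oddity (an injury, a mutation) needs its own assertion, and
# those assertions can acquire exceptions of their own.

DEFAULT_LEGS = {"cat": 4}
EXCEPTIONS = {
    "tripod": 3,        # a hypothetical cat that lost a leg
    "mutant_cat": 5,    # not out of the realm of possibility
}

def leg_count(individual, kind="cat"):
    # An exception overrides the default; the default covers everyone else
    return EXCEPTIONS.get(individual, DEFAULT_LEGS[kind])

print(leg_count("felix"))   # 4, from the general rule
print(leg_count("tripod"))  # 3, from an exception entered by hand

Multiply that bookkeeping across millions of everyday facts and the scale of the hand-coding problem becomes clear.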

After more than 25 years, Cyc now contains 5 million assertions. Lenat has said that 100 million would be required before Cyc would be able to reason like a human does. No significant applications of its knowledge base currently exist, but in a sign of the times, the project in recent years has begun developing a “Terrorist Knowledge Base.” Lenat announced in 2003 that Cyc had “predicted” the anthrax mail attacks six months before they had occurred. This feat is less impressive when you consider the other predictions Cyc had made, including the possibility that Al Qaeda might bomb the Hoover Dam using trained dolphins.

Cyc, and the formal symbolic logic on which it rested, implicitly make a crucial and troublesome assumption about thinking. By gathering together in a single virtual “space” all of the information and relationships relevant to a particular thought, the symbolic approach pursues what Daniel Dennett has called a “Cartesian theater”—a kind of home for consciousness and thinking. It is in this theater that the various strands necessary for a thought are gathered, combined, and transformed in the right kinds of ways, whatever those may be. In Dennett’s words, the theater is necessary to the “view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of ‘presentation’ in experience because what happens there is what you are conscious of.” The theater, he goes on to say, is a remnant of a mind-body dualism which most modern philosophers have sworn off, but which subtly persists in our thinking about consciousness.

The impetus to believe in something like the Cartesian theater is clear. We humans, more or less, behave like unified rational agents, with a linear style of thinking. And so, since we think of ourselves as unified, we tend to reduce ourselves not to a single body but to a single thinker, some “ghost in the machine” that animates and controls our biological body. It doesn’t have to be in the head—the Greeks put the spirit (thymos) in the chest and the breath—but it remains a single, indivisible entity, our soul living in the house of the senses and memory. Therefore, if we can be boiled down to an indivisible entity, surely that entity must be contained or located somewhere.
Philosophy of mind: René Descartes’ illustration of dualism. (Wikimedia Commons)
This has prompted much research looking for “the area” where thought happens. Descartes hypothesized that our immortal soul interacted with our animal brain through the pineal gland. Today, studies of brain-damaged patients (as Oliver Sacks has chronicled in his books) have shown how functioning is corrupted by damage to different parts of the brain. We know, for example, that language processing occurs in Broca’s area in the frontal lobe of the left hemisphere. But some patients with their Broca’s area destroyed can still understand language, due to the immense neuroplasticity of the brain. And language, in turn, is just a part of what we call “thinking.” If we can’t even pin down where the brain processes language, we are a long way from locating that mysterious entity, “consciousness.” That may be because it doesn’t exist in a spot you can point at.

Symbolic artificial intelligence, the Cartesian theater, and the shadows of mind-body dualism plagued the early decades of research into consciousness and thinking. But eventually researchers began to throw the yoke off. Around 1960, linguistics pioneer Noam Chomsky made a bold argument: Forget about meaning, forget about thinking, just focus on syntax. He claimed that linguistic syntax could be represented formally, was a computational problem, and was universal to all humans and hard-coded into every baby’s head. The process of exposure to language caused certain switches to be flipped on or off to determine what particular form the grammar would take (English, Chinese, Inuit, and so on). But the process was one of selection, not acquisition. The rules of grammar, however they were implemented, became the target of research programs around the world, supplanting a search for “the home of thought.”

Chomsky made progress by abandoning the attempt to directly explain meaning and thought. But he remained firmly inside the Cartesian camp. His theories were symbolic in nature, postulating relationships among a variety of potential vocabularies rooted in native rational faculties, and never making any predictions that proved true without exception. Modern artificial intelligence programs have gone one step further, by giving up on the idea of any form of knowledge representation. These so-called subsymbolic approaches, which also go under such names as connectionism, neural networks, and parallel distributed processing, take a unique approach. Rather than going from the inside out—injecting symbolic “thoughts” into computer code and praying that the program will exhibit sufficiently human-like thinking—subsymbolic approaches proceed from the outside in: Trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all.

Subsymbolic approaches were pioneered in the late 1950s and 1960s, but lay fallow for years because they initially seemed to generate worse results than symbolic approaches. In 1957, Frank Rosenblatt pioneered what he called the “perceptron,” which used a re-entrant feedback algorithm in order to “train” itself to compute various logical functions correctly, and thereby “learn” in the loosest sense of the term. This approach was also called “connectionism” and gave rise to the term “neural networks,” though a perceptron is vastly simpler than an actual neuron. Rosenblatt was drawing on oddball cybernetic pioneers like Norbert Wiener, Warren McCulloch, Ross Ashby, and Grey Walter, who theorized and even experimented with homeostatic machines that sought equilibrium with their environment, such as Grey Walter’s light-seeking robotic “turtles” and Claude Shannon’s maze-running “rats.”
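As a concrete illustration of the device Rosenblatt described, here is a minimal sketch in Python (a modern reconstruction for illustration, not Rosenblatt's hardware or code) of a single perceptron using the error-correction rule to learn the logical AND function:

# A single perceptron trained with the error-correction rule.
# It "learns" logical AND from its four labeled examples.

def train_perceptron(samples, epochs=20, learning_rate=0.1):
    # samples: list of ((x1, x2), target) pairs, with targets 0 or 1
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: "fire" only if the weighted sum crosses zero
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # Nudge each weight in the direction that reduces the error
            w1 += learning_rate * error * x1
            w2 += learning_rate * error * x2
            bias += learning_rate * error
    return w1, w2, bias

# Truth table for logical AND: only (1, 1) should output 1
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_samples))  # weights and bias that separate the cases

The learned weights simply draw a line separating the one true case from the three false ones; nothing in the code represents the concept of AND, which is the sense in which the perceptron "learns" only loosely.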

In 1969, Rosenblatt was met with a scathing attack by symbolic artificial intelligence advocate Marvin Minsky. The attack was so successful that subsymbolic approaches were more or less abandoned during the 1970s, a time which has been called the AI Winter. As symbolic approaches continued to flail in the 1970s and 1980s, people like Terrence Sejnowski and David Rumelhart returned to subsymbolic artificial intelligence, modeling it after learning in biological systems. They studied how simple organisms relate to their environment, and how the evolution of these organisms gradually built up increasingly complex behavior. Biology, genetics, and neuropsychology are what figured here, rather than logic and ontology.

This approach more or less abandons knowledge as a starting point. In contrast to Chomsky, a subsymbolic approach to grammar would say that grammar is determined and conditioned by environmental and organismic constraints (what psychologist Joshua Hartshorne calls “design constraints”), not by a set of hardcoded computational rules in the brain. These constraints aren’t expressed in strictly formal terms. Rather, they are looser contextual demands such as, “There must be a way for an organism to refer to itself” and “There must be a way to express a change in the environment.”

By abandoning the search for a Cartesian theater, containing a library of symbols and rules, researchers made the leap from instilling machines with data, to instilling them with knowledge. The essential truth behind subsymbolism is that language and behavior exist in relation to an environment, not in a vacuum, and they gain meaning from their usage in that environment. To use language is to use it for some purpose. To behave is to behave for some end. In this view, any attempt to generate a universal set of rules will always be riddled with exceptions, because contexts are constantly shifting. Without the drive toward concrete environmental goals, representation of knowledge in a computer is meaningless, and fruitless. It remains locked in the realm of data.


For certain classes of problems, modern subsymbolic approaches have proved far more generalizable and ubiquitous than any previous symbolic approach to the same problems. This success speaks to the advantage of not worrying about whether a computer “knows” or “understands” the problem it is working on. For example, genetic approaches represent algorithms with varying parameters as chromosomal “strings,” and “breed” successful algorithms with one another. These approaches do not improve through better understanding of the problem. All that matters is the fitness of the algorithm with respect to its environment—in other words, how the algorithm behaves. This black-box approach has yielded successful applications in everything from bioinformatics to economics, yet one can never give a concise explanation of just why the fittest algorithm is the most fit.
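As a sketch of the genetic approach just described (illustrative Python, with an arbitrary target string standing in for the environment; none of this comes from the article), candidate "chromosomes" are bred purely on fitness:

# Candidate solutions are bit strings; fitness is the only feedback;
# fitter strings are crossed over and mutated into the next generation.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # stands in for "the environment"

def fitness(chromosome):
    # Count matching bits; the algorithm never "understands" the target
    return sum(1 for gene, goal in zip(chromosome, TARGET) if gene == goal)

def breed(parent_a, parent_b, mutation_rate=0.05):
    cut = random.randrange(1, len(TARGET))  # single-point crossover
    child = parent_a[:cut] + parent_b[cut:]
    # Occasional bit flips keep variation in the population
    return [1 - g if random.random() < mutation_rate else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep only the fittest
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(30)]

best = max(population, key=fitness)
print(best, fitness(best))  # usually matches the target within a few dozen generations

Nothing in the loop explains why the winning string is fit; selection on behavior alone does the work, which is exactly the black-box character described above.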

Neural networks are another successful subsymbolic technology, and are used for image, facial, and voice recognition applications. No representation of concepts is hardcoded into them, and the factors that they use to identify a particular subclass of images emerge from the operation of the algorithm itself. They can also be surprising: Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images.

These networks are usually “primed” with test data, so that they can refine their recognition skills on carefully selected samples. Humans are often involved in assembling this test data, in which case the learning environment is called “supervised learning.” But even the requirement for training is being left behind. Influenced by theories arguing that parts of the brain are specifically devoted to identifying particular types of visual imagery, such as faces or hands, a 2012 paper by Stanford and Google computer scientists showed some progress in getting a neural network to identify faces without priming data, among images that both did and did not contain faces. Nowhere in the programming was any explicit designation made of what constituted a “face.” The network evolved this category on its own. It did the same for “cat faces” and “human bodies” with similar success rates (about 80 percent).

While the successes behind subsymbolic artificial intelligence are impressive, there is a catch that is very nearly Faustian: The terms of success may prohibit any insight into how thinking “works,” but instead will confirm that there is no secret to be had—at least not in the way that we’ve historically conceived of it. It is increasingly clear that the Cartesian model is nothing more than a convenient abstraction, a shorthand for irreducibly complex operations that somehow (we don’t know how) give the appearance, both to ourselves and to others, of thinking. New models for artificial intelligence ask us to, in the words of philosopher Thomas Metzinger, rid ourselves of an “Ego Tunnel,” and understand that, while our sense of self dominates our thoughts, it does not dominate our brains.

Instead of locating where in our brains we have the concept of “face,” we have made a computer whose code also seems to lack the concept of “face.” Surprisingly, this approach succeeds where others have failed, giving the computer an inkling of the very idea whose explicit definition we gave up on trying to communicate. In moving out of our preconceived notion of the home of thought, we have gained in proportion not just a new level of artificial intelligence, but perhaps also a kind of self-knowledge.

David Auerbach is a writer and software engineer who lives in New York. He writes the Bitwise column for Slate.

Friday, February 28, 2014

George Dvorsky - You Might Never Upload Your Brain Into a Computer

I think we need to drop the "might" from that headline and replace it with "will." Still, George Dvorsky gets a big AMEN from me on this piece from io9 (even if it is a year old).

For the record, however, I feel compelled to lodge my disagreement with point #5, that "mind-body dualism" is true. Nonsense. There is actually a logical fallacy at work here - if dualism were true, our minds would not be "located somewhere outside our bodies — like in a vat somewhere, or oddly enough, in a simulation (a la The Matrix)"; they would reside in the body but remain separate from it. This is exactly the premise necessary to believe our minds can be uploaded into a computer.

Even if we believe that the mind is simply a by-product of brain activity, there is no way to transfer a wet biological system built from fat, proteins, neurotransmitters, and electrical current into a dry computer mainframe. I don't see this EVER being an option.

You Might Never Upload Your Brain Into a Computer

George Dvorsky
Debunkery | 4/17/13


Many futurists predict that one day we'll upload our minds into computers, where we'll romp around in virtual reality environments. That's possible — but there are still a number of thorny issues to consider. Here are eight reasons why your brain may never be digitized.

Indeed, this isn’t just idle speculation. Many important thinkers have expressed their support of the possibility, including the renowned futurist Ray Kurzweil (author of How to Create a Mind), roboticist Hans Moravec, cognitive scientist Marvin Minsky, neuroscientist David Eagleman, and many others.

Skeptics, of course, relish the opportunity to debunk uploads. The claim that we’ll be able to transfer our conscious thoughts to a computer, after all, is a rather extraordinary one.

But many of the standard counter-arguments tend to fall short. Typical complaints cite insufficient processing power, inadequate storage space, or the fear that the supercomputers will be slow, unstable and prone to catastrophic failures — concerns that certainly don’t appear intractable given the onslaught of Moore’s Law and the potential for megascale computation. Another popular objection is that the mind cannot exist without a body. But an uploaded mind could be endowed with a simulated body and placed in a simulated world.

To be fair, however, there are a number of genuine scientific, philosophical, ethical, and even security concerns that could significantly limit or even prevent consciousness uploads from ever happening. Here are eight of the most serious.

1. Brain functions are not computable


Proponents of mind uploading tend to argue that the brain is a Turing Machine — the idea that organic minds are nothing more than classical information-processors. It’s an assumption derived from the strong physical Church-Turing thesis, and one that now drives much of cognitive science.


But not everyone believes the brain/computer analogy works. Speaking recently at the annual meeting of the American Association for the Advancement of Science in Boston, neuroscientist Miguel Nicolelis said that, “The brain is not computable and no engineering can reproduce it.” He referred to the idea of uploads as “bunk,” saying that it’ll never happen and that “[t]here are a lot of people selling the idea that you can mimic the brain with a computer.” Nicolelis argues that human consciousness can’t be replicated in silicon because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he said. “You could have all the computer chips ever in the world and you won’t create a consciousness.” Image credit: Jeff Cameron Collingwood/Shutterstock.

2. We’ll never solve the hard problem of consciousness


The computability of the brain aside, we may never be able to explain how and why we have qualia, or what’s called phenomenal experience.


According to David Chalmers — the philosopher of mind who came up with the term “hard problem” — we’ll likely solve the easy problems of human cognition, like how we focus our attention, recall a memory, discriminate, and process information. But explaining how incoming sensations get translated into subjective feelings — like the experience of color, taste, or the pleasurable sound of music — is proving to be much more difficult. Moreover, we’re still not entirely sure why we even have consciousness, and why we’re not just “philosophical zombies” — hypothetical beings who act and respond as if they’re conscious, but have no internal mental states.

In his paper, “Facing Up to the Problem of Consciousness,” Chalmers writes:
How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
If any problem qualifies as the problem of consciousness, argues Chalmers, it is this one. Image: blog.lib.umn.edu.

3. We’ll never solve the binding problem


And even if we do figure out how the brain generates subjective experience, classical digital computers may never be able to support unitary phenomenal minds. This is what’s referred to as the binding problem — our inability to understand how a mind is able to segregate and combine elements as seamlessly as it does. Needless to say, we don’t even know whether a Turing Machine can support these functions.


More specifically, we still need to figure out how our brains segregate elements in complex patterns, a process that allows us to distinguish them as discrete objects. The binding problem also describes the issue of how objects, like those in the background or in our peripheral experience — or even something as abstract as emotions — can still be combined into a unitary and coherent experience. As the cognitive neuroscientist Antti Revonsuo has said, “Binding is thus seen as a problem of finding the mechanisms which map the ‘objective’ physical entities in the external world into corresponding internal neural entities in the brain.”

He continues:
Once the idea of consciousness-related binding is formulated, it becomes immediately clear that it is closely associated with two central problems in consciousness research. The first concerns the unity of phenomenal consciousness. The contents of phenomenal consciousness are unified into one coherent whole, containing a unified ‘‘me’’ in the center of one unified perceptual world, full of coherent objects. How should we describe and explain such experiential unity? The second problem of relevance here concerns the neural correlates of consciousness. If we are looking for an explanation to the unity of consciousness by postulating underlying neural mechanisms, these neural mechanisms surely qualify for being direct neural correlates of unified phenomenal states.
No one knows how our organic brains perform this trick — at least not yet — or if digital computers will ever be capable of phenomenal binding. Image credit: agsandrew/Shutterstock.

4. Panpsychism is true


Though still controversial, there’s also the potential for panpsychism to be in effect. This is the notion that consciousness is a fundamental and irreducible feature of the cosmos. It might sound a bit New Agey, but it’s an idea that’s steadily gaining currency (especially in consideration of our inability to solve the Hard Problem).


Panpsychists speculate that all parts of matter involve mind. Neuroscientist Stuart Hameroff has suggested that consciousness is related to a fundamental component of physical reality — components that are akin to phenomena like mass, spin, or charge. According to this view, the basis of consciousness can be found in an additional fundamental force of nature not unlike gravity or electromagnetism. This would be something like an elementary sentience or awareness. As Hameroff notes, "these components just are." Likewise, David Chalmers has proposed a double-aspect theory in which information has both physical and experiential aspects. Panpsychism has also attracted the attention of quantum physicists (who speculate about potential quantum aspects of consciousness given our presence in an Everett Universe), and physicalists like Galen Strawson (who argues that the mental/experiential is physical).

Why this presents a problem for mind uploading is that consciousness may not be substrate neutral — a central tenet of the Church-Turing Hypothesis — but may in fact depend on specific physical/material configurations. It’s quite possible that there’s no digital or algorithmic equivalent to consciousness. Having consciousness arise in a classical von Neumann architecture, therefore, may be as impossible as splitting an atom in a virtual environment by using ones and zeros. Image credit: agsandrew/Shutterstock.

5. Mind-body dualism is true



Perhaps even more controversial is the suggestion that consciousness lies somewhere outside the brain, perhaps as some ethereal soul or spirit. It’s an idea that’s primarily associated with Rene Descartes, the 17th century philosopher who speculated that the mind is a nonphysical substance (as opposed to physicalist interpretations of mind and consciousness). Consequently, some proponents of dualism (or even vitalism) suggest that consciousness lies outside knowable science.

Needless to say, if our minds are located somewhere outside our bodies — like in a vat somewhere, or oddly enough, in a simulation (a la The Matrix) — our chances of uploading ourselves are slim to none.

6. It would be unethical to develop


Philosophical and scientific concerns aside, there may also be some moral reasons to forego the project. If we’re going to develop upload technologies, we’re going to have to conduct some rather invasive experiments, both on animals and humans. The potential for abuse is significant.


Uploading schemas typically describe the scanning and mapping of an individual’s brain, or serial sectioning. While a test subject, like a mouse or monkey, could be placed under a general anesthetic, it will eventually have to be re-animated in digital substrate. Once this happens, we’ll likely have no conception of its internal, subjective experience. Its brain could be completely mangled, resulting in terrible psychological or physical anguish. It’s reasonable to assume that our early uploading efforts will be far from perfect, and potentially cruel.

And when it comes time for the first human to be uploaded, there could be serious ethical and legal issues to consider — especially considering that we’re talking about the re-location of a living, rights-bearing human being. Image credit: K. Zhuang.

7. We can never be sure it works



Which leads to the next point, that of post-upload skepticism. A person can never really be sure they created a sentient copy of themselves. This is the continuity of consciousness problem — the uncertainty we’ll have that, instead of moving our minds, we simply copied ourselves.

Because we can’t measure for consciousness — either qualitatively or quantitatively — uploading will require a tremendous leap of faith — a leap that could lead to complete oblivion (e.g. a philosophical zombie), or something completely unexpected. And relying on the advice from uploaded beings won’t help either (“Come on in, the water’s fine...”).

In an email to me, philosopher David Pearce put it this way:
Think of it like a game of chess. If I tell you the moves, you can faithfully replicate the gameplay. But you know nothing whatsoever of the textures of the pieces, or indeed, whether they have any textures at all (perhaps I played online). Likewise, I think, the same can be said with the textures of consciousness. The possibility of substrate-independent minds needs to be distinguished from the possibility of substrate-independent qualia.
In other words, the quality of conscious experience in digital substrate could be far removed from that experienced by an analog consciousness. Image: Rikomatic.

8. Uploaded minds would be vulnerable to hacking and abuse



Once our minds are uploaded, they’ll be physically and inextricably connected to the larger computational superstructure. By consequence, uploaded brains will be perpetually vulnerable to malicious attacks and other unwanted intrusions.

To avoid this, each uploaded person will have to set up a personal firewall to prevent themselves from being re-programmed, spied upon, damaged, exploited, deleted, or copied against their will. These threats could come from other uploads, rogue AI, malicious scripts, or even the authorities in power (e.g. as a means to instill order and control).

Indeed, as we know all too well today, even the tightest security measures can't prevent the most sophisticated attacks; an uploaded mind can never be sure it’s safe.
  • Special thanks to David Pearce for helping with this article.
  • Top image: Jurgen Ziewe/Shutterstock.

Saturday, February 01, 2014

Google Has Acquired AI Start-Up DeepMind - What Are They Planning?

Via Live Science, who re-posted an article originally from The Conversation, this is an op-ed by Matthew Higgs about Google's acquisition of the artificial intelligence start-up DeepMind.

Hmmm . . . Ray Kurzweil + Boston Dynamics (robotics) + DeepMind (artificial intelligence) . . . . Google is planning to upload Ray into an intelligent robot that is immortal?

What Does Google Want with DeepMind? Here are Three Clues (Op-Ed)

By Matthew Higgs, University College London | January 30, 2014

 
The golden age of AI is upon us. Credit: kidpixo.

This article was originally published at The Conversation. The publication contributed the article to LiveScience's Expert Voices: Op-Ed & Insights.

All eyes turned to London this week, as Google announced its latest acquisition in the form of DeepMind, a company that specialises in artificial intelligence technologies. The £400m price tag paid by Google and the reported battle with Facebook to win the company over indicate that this is a firm well worth backing.

Although solid information is thin on the ground, you can get an idea of what the purchase might be leading to, if you know where to look.

Clue 1: what does Google already know?


Google has always been active in artificial intelligence and relies on the process for many of its projects. Just consider the “driver” behind its driverless cars, the speech recognition system in Google Glass, or the way its search engine predicts what we might search for after just a couple of keystrokes. Even the page-rank algorithm that started it all falls under the banner of AI.

Acquiring a company such as DeepMind therefore seems like a natural step. The big question is whether Google is motivated by a desire to help develop technologies we already know about or whether it is moving into the development of new technologies.

Given its track record, I’m betting on the latter. Google has the money and the drive to tackle the biggest questions in science, and developing computers that think like humans has, for a long time, been one of the biggest of them all.

Clue 2: what’s in the research?


The headlines this week have described DeepMind as a “secretive start-up”, but clues about what it gets up to at its London base can be gleaned from some of the research publications produced by the company’s co-founder, Demis Hassabis.

Hassabis' three most recent publications all focus on the brain activity of human participants as they undergo particular tasks. He has looked into how we take advantage of our habitat, how we identify and predict the behaviour of other people and how we remember the past and imagine the future.

As humans, we collect information through sensory input and process it many times over using abstraction. We extract features and categorise objects to focus our attention on the information that is relevant to us. When we enter a room we quickly build up a mental image of the room, interpret the objects in the room, and use this information to assess the situation in front of us.

The people at Google have, until now, generally focused on the lower-level stages of this information processing. They have developed systems to look for features and concepts in online photos and street scenes to provide users with relevant content, systems to translate one language to another to enable us to communicate, and speech recognition systems, making voice control on your phone or device a reality.

The processes Hassabis investigates require these types of information processing as prerequisites. Only once you have identified the relevant features in a scene and categorised objects in your habitat can you begin to take advantage of your habitat. Only once you have identified the features of someone’s face and recognised them as someone you know can you start to predict their behaviour. And only once you have built up vivid images of the past can you extrapolate a future.

Clue 3: what else is on the shopping list?


Other recent acquisitions by Google provide further pieces to the puzzle. It has recently appointed futurist Ray Kurzweil, who believes in search engines with human intelligence and being able to upload our minds onto computers, as its director of engineering. And the purchase of Boston Dynamics, a company developing groundbreaking robotics technology, gives a hint of its ambition.

Google is also getting into smart homes in the hope of more deeply interweaving its technologies into our everyday lives. DeepMind could provide the know-how to enable such systems to exhibit a level of intelligence never seen before in computers.

Combining the machinery Google already uses for processing sensory input with the ideas under investigation at DeepMind about how the brain uses this sensory input to complete high-level tasks is an exciting prospect. It has the potential to produce the closest thing yet to a computer with human qualities.

Building computers that think like humans has been the goal of AI ever since the time of Alan Turing. Progress has been slow, with science fiction often creating false hope in people’s minds. But these past two decades have seen unimaginable leaps in information processing and our understanding of the brain. Now that one of the most powerful companies in the world has identified where it wants to go next, we can expect big things. Just as physics had its heyday in the 20th century, this century is truly the golden age of AI.

Matthew Higgs receives funding from the EPSRC.

This article was originally published at The Conversation. Read the original article. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on LiveScience.



Monday, January 06, 2014

Gary Marcus - Hyping Artificial Intelligence, Yet Again

Over at The New Yorker, psychologist and cognitive scientist Gary Marcus (author of Kluge: The Haphazard Evolution of the Human Mind [2008] and The Birth of the Mind: How a Tiny Number of Genes Creates The Complexities of Human Thought [2004]) does a nice job of stripping away the hype from artificial intelligence promotion. I am grateful for Marcus.

Hyping Artificial Intelligence, Yet Again

Posted by Gary Marcus
January 1, 2014


According to the Times, true artificial intelligence is just around the corner. A year ago, the paper ran a front-page story about the wonders of new technologies, including deep learning, a neurally-inspired A.I. technique for statistical analysis. Then, among others, came an article about how I.B.M.’s Watson had been repurposed into a chef, followed by an upbeat post about quantum computation. On Sunday, the paper ran a front-page story about “biologically inspired processors,” “brainlike computers” that learn from experience.

This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.

But, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.

What’s the harm? As Yann LeCun, the N.Y.U. researcher who was just appointed to run Facebook’s new A.I. lab, put it a few months ago in a Google+ post, a kind of open letter to the media, “AI [has] ‘died’ about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver. Backlash ensued. It happened twice with neural nets already: once in the late 60’s and again in the mid-90’s.”

A.I. is, to be sure, in much better shape now than it was then. Google, Apple, I.B.M., Facebook, and Microsoft have all made large commercial investments. There have been real innovations, like driverless cars, that may soon become commercially available. Neuromorphic engineering and deep learning are genuinely exciting, but whether they will really produce human-level A.I. is unclear—especially, as I have written before, when it comes to challenging problems like understanding natural language.

The brainlike I.B.M. system that the Times mentioned on Sunday has never, to my knowledge, been applied to language, or any other complex form of learning. Deep learning has been applied to language understanding, but the results are feeble so far. Among publicly available systems, the best is probably a Stanford project, called Deeply Moving, that applies deep learning to the task of understanding movie reviews. The cool part is that you can try it for yourself, cutting and pasting text from a movie review and immediately seeing the program’s analysis; you can even teach it to improve. The less cool thing is that the deep-learning system doesn’t really understand anything.

It can’t, say, paraphrase a review or mention something the reviewer liked, things you’d expect of an intelligent sixth-grader. About the only thing the system can do is so-called sentiment analysis, reducing a review to a thumbs-up or thumbs-down judgment. And even there it falls short; after typing in “better than ‘Cats!’ ” (which the system correctly interpreted as positive), the first thing I tested was a Rotten Tomatoes excerpt of a review of the last movie I saw, “American Hustle”: “A sloppy, miscast, hammed up, overlong, overloud story that still sends you out of the theater on a cloud of rapture.” The deep-learning system couldn’t tell me that the review was ironic, or that the reviewer thought the whole was more than the sum of the parts. It told me only, inaccurately, that the review was very negative. When I sent the demo to my collaborator, Ernest Davis, his luck was no better than mine. Ernie tried “This is not a book to be ignored” and “No one interested in the subject can afford to ignore this book.” The first came out as negative, the second neutral. If Deeply Moving is the best A.I. has to offer, true A.I.—of the sort that can read a newspaper as well as a human can—is a long way away.
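For contrast, here is a deliberately crude word-counting sentiment scorer in Python. It is not the Stanford system (which uses a neural model over sentence structure), and the word lists are invented for illustration, but it makes the same kind of mistake on the ironic “American Hustle” line, because tallying charged words says nothing about what the sentence as a whole means:

# A naive bag-of-words sentiment scorer (not the Stanford system).
# The word lists below are invented for illustration only.

NEGATIVE_WORDS = {"sloppy", "miscast", "hammed", "overlong", "overloud"}
POSITIVE_WORDS = {"rapture", "cloud"}

def naive_sentiment(review):
    # Strip punctuation, lowercase, and just count charged words
    words = {word.strip(".,!?'\"").lower() for word in review.split()}
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

review = ("A sloppy, miscast, hammed up, overlong, overloud story that still "
          "sends you out of the theater on a cloud of rapture.")
print(naive_sentiment(review))  # "negative": five negative words outvote the ending

Whether the features are single words or learned from sentence structure, nothing in such a system registers that the reviewer meant the mess as a compliment.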

Overhyped stories about new technologies create short-term enthusiasm, but they also often lead to long-term disappointment. As LeCun put it in his Google+ post, “Whenever a startup claims ‘90% accuracy’ on some random task, do not consider this newsworthy. If the company also makes claims like ‘we are developing machine learning software based on the computational principles of the human brain’ be even more suspicious.”

As I noted in a recent essay, some of the biggest challenges in A.I. have to do with common-sense reasoning. Trendy new techniques like deep learning and neuromorphic engineering give A.I. programmers purchase on a particular kind of problem that involves categorizing familiar stimuli, but say little about how to cope with things we haven’t seen before. As machines get better at categorizing things they can recognize, some tasks, like speech recognition, improve markedly, but others, like comprehending what a speaker actually means, advance more slowly. Neuromorphic engineering will probably lead to interesting advances, but perhaps not right away. As a more balanced article on the same topic in Technology Review recently reported, some neuroscientists, including Henry Markram, the director of a European project to simulate the human brain, are quite skeptical of the currently implemented neuromorphic systems on the grounds that their representations of the brain are too simplistic and abstract.

As a cognitive scientist, I agree with Markram. Old-school behaviorist psychologists, and now many A.I. programmers, seem focused on finding a single powerful mechanism—deep learning, neuromorphic engineering, quantum computation, or whatever—to induce everything from statistical data. This is much like what the psychologist B. F. Skinner imagined in the early nineteen-fifties, when he concluded all human thought could be explained by mechanisms of association; the whole field of cognitive psychology grew out of the ashes of that oversimplified assumption.

At times like these, I find it useful to remember a basic truth: the human brain is the most complicated organ in the known universe, and we still have almost no idea how it works. Who said that copying its awesome power was going to be easy?

Gary Marcus is a professor of psychology at N.Y.U. and a visiting cognitive scientist at the new Allen Institute for Artificial Intelligence. This essay was written in memory of his late friend Michael Dorfman—friend of science, enemy of hype.

Photograph: Chris Ratcliffe/Bloomberg/Getty

Tuesday, December 10, 2013

Ray Kurzweil and the Brains Behind the Google Brain (Big Think)

Ah, Ray Kurzweil . . . he's so brilliant in some respects and so misguided in others. Kurzweil has predicted, and indeed made a $20,000 bet with Mitchell Kapor, that we will develop a conscious computer (one that can pass the Turing test) by 2029. Pardon me while I laugh hysterically for a few minutes. Ahem . . . you can read both men's arguments at the link above.

There are other reasons I find Kurzweil laughable, but they are not relevant to this post.

What is relevant is that he has teamed up with the brain trust at Google to try to create an intelligent machine, which gives him better odds than if he were on his own.


Ray Kurzweil and the Brains Behind the Google Brain

by Big Think Editors
December 8, 2013
Time was when Google engineers spent all their days counting links and ranking pages. The company's famous algorithm made it the leading search engine in the world. Admittedly, it was far from perfect. That is why current efforts are aimed at developing ways for computers to read and understand natural language.

Enter Ray Kurzweil, an inventor and expert in artificial intelligence. Kurzweil's goal is ostensibly to help the company improve the accuracy of its search results, but that is certainly not all. Kurzweil, after all, is one of the world's leading advocates of "hard AI," or the development of consciousness in an artificial being. Kurzweil believes this will come about in 2029, to be specific.

So in addition to Google's development of autonomous cars and its aggressive play in robotic delivery systems, the company is also looking to build an artificial brain, aka "The Google Brain." As Steven Levy notes on Wired, this is a fact that "some may consider thrilling and others deeply unsettling. Or both."

Kurzweil is collaborating with Jeff Dean to find the brain's algorithm, and Kurzweil says the reason he is at Google is to take full advantage of the company's deep learning resources.

In the video below, Kurzweil outlines three tangible benefits that he expects to come out of this project. Beyond building more intelligent machines, if we are able to reverse-engineer the brain, we will be able to do a better job at fixing it. We will also gain more insight into ourselves, he says. After all, "our identity, our consciousness, the concept of free will is closely associated with the brain."

* * * * *

Deep Learning


by Big Think Editors
The Big Idea for Sunday, December 08, 2013

A smart machine, if given enough data, can teach itself to recognize patterns and mimic the way that the human brain behaves.

In today's lesson, Ray Kurzweil provides insights into the work he is doing at Google. His ostensible goal is to help the company develop a better search engine that can process natural language. But the potential benefits of discovering the brain's algorithm go much further than that. The more we understand about the brain, Kurzweil says, the better we are able to fix it. Moreover, the brain is at the center of our understanding of human identity, and our notions of consciousness and free will.


Perspectives

1. Ray Kurzweil and the Brains Behind the Google Brain (Big Think Editors, Big Think TV)
2. Reverse-Engineering the Brain (Dr. Joy Hirsch)
3. The Ghost in the Machine: Unraveling the Mystery of Consciousness (Megan Erickson, Think Tank)
4. The Most Amazing Race: Reverse-Engineering the Brain (Daniel Honan, Think Tank)

Monday, November 11, 2013

The Man Who Would Teach Machines to Think (Douglas Hofstadter)


In the November issue of The Atlantic, James Somers offers an in-depth profile of Douglas Hofstadter, a man who is simultaneously brilliant and annoying. I do not share his belief in the possibility of a true artificial intelligence (AI), so this part of his body of work is vaguely annoying.

However, as he has worked on the AI problem, he has watched how his own mind works in the process, a kind of process mindfulness. The result, for me, was one of his best books, but one not even mentioned in this article: I Am a Strange Loop. As I read this book, it seemed like the cognitive science version of the neurobiology Antonio Damasio has been presenting in his work, most notably in The Feeling of What Happens: Body and Emotion in the Making of Consciousness (2000), Looking for Spinoza: Joy, Sorrow, and the Feeling Brain (2003), and Self Comes to Mind: Constructing the Conscious Brain (2012).

This article is part of an Atlantic special report on Imagination, optimism, and the nature of progress.


The Man Who Would Teach Machines to Think

Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.

James Somers | Oct 23 2013
November 2013 Issue 


“It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”

Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think.

Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.

The idea that changed Hofstadter’s existence, as he has explained over the years, came to him on the road, on a break from graduate school in particle physics. Discouraged by the way his doctoral thesis was going at the University of Oregon, feeling “profoundly lost,” he decided in the summer of 1972 to pack his things into a car he called Quicksilver and drive eastward across the continent. Each night he pitched his tent somewhere new (“sometimes in a forest, sometimes by a lake”) and read by flashlight. He was free to think about whatever he wanted; he chose to think about thinking itself. Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter. The father of psychology, William James, described this in 1890 as “the most mysterious thing in the world”: How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?

Roaming in his 1956 Mercury, Hofstadter thought he had found the answer—that it lived, of all places, in the kernel of a mathematical proof. In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” He sat down one afternoon to sketch his thinking in a letter to a friend. But after 30 handwritten pages, he decided not to send it; instead he’d let the ideas germinate a while. Seven years later, they had not so much germinated as metastasized into a 2.9‑pound, 777-page book called Gödel, Escher, Bach: An Eternal Golden Braid, which would earn for Hofstadter—only 35 years old, and a first-time author—the 1980 Pulitzer Prize for general nonfiction.

GEB, as the book became known, was a sensation. Its success was catalyzed by Martin Gardner, a popular columnist for Scientific American, who very unusually devoted his space in the July 1979 issue to discussing one book—and wrote a glowing review. “Every few decades,” Gardner began, “an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event.” The first American to earn a doctoral degree in computer science (then labeled “communication sciences”), John Holland, recalled that “the general response amongst people I know was that it was a wonderment.”

Hofstadter seemed poised to become an indelible part of the culture. GEB was not just an influential book, it was a book full of the future. People called it the bible of artificial intelligence, that nascent field at the intersection of computing, cognitive science, neuroscience, and psychology. Hofstadter’s account of computer programs that weren’t just capable but creative, his road map for uncovering the “secret software structures in our minds,” launched an entire generation of eager young students into AI.

But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.

GEB arrived on the scene at an inflection point in AI’s history. In the early 1980s, the field was retrenching: funding for long-term “basic science” was drying up, and the focus was shifting to practical systems. Ambitious AI research had acquired a bad reputation. Wide-eyed overpromises were the norm, going back to the birth of the field in 1956 at the Dartmouth Summer Research Project, where the organizers—including the man who coined the term artificial intelligence, John McCarthy—declared that “if a carefully selected group of scientists work on it together for a summer,” they would make significant progress toward creating machines with one or more of the following abilities: the ability to use language; to form concepts; to solve problems now solvable only by humans; to improve themselves. McCarthy later recalled that they failed because “AI is harder than we thought.”

With wartime pressures mounting, a chief underwriter of AI research—the Defense Department’s Advanced Research Projects Agency (ARPA)—tightened its leash. In 1969, Congress passed the Mansfield Amendment, requiring that Defense support only projects with “a direct and apparent relationship to a specific military function or operation.” In 1972, ARPA became DARPA, the D for “Defense,” to reflect its emphasis on projects with a military benefit. By the middle of the decade, the agency was asking itself: What concrete improvements in national defense did we just buy, exactly, with 10 years and $50 million worth of exploratory research?

By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications. Work was increasingly done over short time horizons, often with specific buyers in mind. For the military, favored projects included “command and control” systems, like a computerized in-flight assistant for combat pilots, and programs that would automatically spot roads, bridges, tanks, and silos in aerial photographs. In the private sector, the vogue was “expert systems,” niche products like a pile-selection system, which helped designers choose materials for building foundations, and the Automated Cable Expertise program, which ingested and summarized telephone-cable maintenance reports.

In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence—at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility.

Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent’s responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world’s best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
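
The recipe described here is, at bottom, a minimax game-tree search with an evaluation function at the leaves. The toy sketch below illustrates only that structure; it is not chess, let alone IBM's system. The "positions" are plain numbers, and the stand-in evaluation function simply returns them.

```python
# Toy illustration of the brute-force idea behind Deep Blue (not IBM's code):
# look several plies of moves and counter-moves ahead, score the leaf positions
# with an evaluation function, and pick the move with the best backed-up score.

def evaluate(position):
    # Stand-in evaluation function: positions here are just numbers.
    return position

def minimax(position, moves, depth, maximizing=True):
    """Best achievable score from `position`, searching `depth` plies ahead."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(c, moves, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

# Toy "game": a move adds or subtracts 1 or 2; play stops when the total reaches 6.
def toy_moves(n):
    return [n + d for d in (-2, -1, 1, 2)] if abs(n) < 6 else []

# The opponent is assumed to minimize our score on the next ply.
best = max(toy_moves(0), key=lambda m: minimax(m, toy_moves, 2, maximizing=False))
print("best first move:", best)
```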

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions—however impressive it might have been—was, in Hofstadter’s mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions by the end of the decade. (After Deep Blue won in 1997, the value of IBM’s stock increased by $18 billion.) The more staid an engineering discipline AI became, the more it accomplished. Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. It powers many of Google’s core functions, Netflix’s movie recommendations, Watson, Siri, autonomous drones, the self-driving car.

“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?

It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, “What’s the combined market cap of all of the search companies on the Web? It’s probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much.”

This, then, is the trillion-dollar question: Will the approach undergirding AI today—an approach that borrows little from the mind, that’s grounded instead in big data and big engineering—get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand? Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”

Consider that computers today still have trouble recognizing a handwritten A. In fact, the task is so difficult that it forms the basis for CAPTCHAs (“Completely Automated Public Turing tests to tell Computers and Humans Apart”), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.

In Hofstadter’s mind, there is nothing to be surprised about. To know what all A’s have in common would be, he argued in a 1982 essay, to “understand the fluid nature of mental categories.” And that, he says, is the core of human intelligence.

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.”

In the years after the release of GEB, Hofstadter and AI went their separate ways. Today, if you were to pull AI: A Modern Approach off the shelf, you wouldn’t find Hofstadter’s name—not in more than 1,000 pages. Colleagues talk about him in the past tense. New fans of GEB, seeing when it was published, are surprised to find out its author is still alive.

Of course in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,” Hofstadter says. “That’s what we’re interested in—what is thinking?—and we don’t lose track of that question.”

“I mean, who knows?” he says. “Who knows what’ll happen. Maybe someday people will say, ‘Hofstadter already did this stuff and said this stuff and we’re just now discovering it.’ ”

Which sounds exactly like the self-soothing of the guy who lost. But Hofstadter has the kind of mind that tempts you to ask: What if the best ideas in artificial intelligence—“genuine artificial intelligence,” as Hofstadter now calls it, with apologies for the oxymoron—are yellowing in a drawer in Bloomington?

Douglas R. Hofstadter was born into a life of the mind the way other kids are born into a life of crime. He grew up in 1950s Stanford, in a house on campus, just south of a neighborhood actually called Professorville. His father, Robert, was a nuclear physicist who would go on to share the 1961 Nobel Prize in Physics; his mother, Nancy, who had a passion for politics, became an advocate for developmentally disabled children and served on the ethics committee of the Agnews Developmental Center, where Molly lived for more than 20 years. In her free time Nancy was, the joke went, a “professional faculty wife”: she transformed the Hofstadters’ living room into a place where a tight-knit community of friends could gather for stimulating conversation and jazz, for “the interpenetration of the sciences and the arts,” Hofstadter told me—an intellectual feast.

Dougie ate it up. He was enamored of his parents’ friends, their strange talk about “the tiniest or gigantic-est things.” (At age 8, he once said, his dream was to become “a zero-mass, spin one-half neutrino.”) He’d hang around the physics department for 4 o’clock tea, “as if I were a little 12-year-old graduate student.” He was curious, insatiable, unboreable—“just a kid fascinated by ideas”—and intense. His intellectual style was, and is, to go on what he calls “binges”: he might practice piano for seven hours a day; he might decide to memorize 1,200 lines of Eugene Onegin. He once spent weeks with a tape recorder teaching himself to speak backwards, so that when he played his garbles in reverse they came out as regular English. For months at a time he’ll immerse himself in idiomatic French or write computer programs to generate nonsensical stories or study more than a dozen proofs of the Pythagorean theorem until he can “see the reason it’s true.” He spends “virtually every day exploring these things,” he says, “unable to not explore. Just totally possessed, totally obsessed, by this kind of stuff.”

Hofstadter is 68 years old. But there’s something Peter Pan–ish about a life lived so much on paper, in software, in a man’s own head. Can someone like that age in the usual way? Hofstadter has untidy gray hair that juts out over his ears, a fragile, droopy stature, and, between his nose and upper lip, a long groove, almost like the Grinch’s. But he has the self-seriousness, the urgent earnestness, of a still very young man. The stakes are high with him; he isn’t easygoing. He’s the kind of vegetarian who implores the whole dinner party to eat vegetarian too; the kind of sensitive speaker who corrects you for using “sexist language” around him. “He has these rules,” explains his friend Peter Jones, who’s known Hofstadter for 59 years. “Like how he hates you guys. That’s an imperative. If you’re talking to him, you better not say you guys.”

For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington. He lives in a house a few blocks from campus with Baofen Lin, whom he married last September; his two children by his previous marriage, Danny and Monica, are now grown. Although he has strong ties with the cognitive-science program and affiliations with several departments—including computer science, psychological and brain sciences, comparative literature, and philosophy—he has no official obligations. “I think I have about the cushiest job you could imagine,” he told me. “I do exactly what I want.”

He spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.”

In his back pocket, Hofstadter carries a four-color Bic ballpoint pen and a small notebook. It’s always been that way. In what used to be a bathroom adjoined to his study but is now just extra storage space, he has bookshelves full of these notebooks. He pulls one down—it’s from the late 1950s. It’s full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables (“hypodeemic nerdle”), malapropisms (“runs the gambit”), “malaphors” (“easy-go-lucky”), and so on, about half of them committed by Hofstadter himself. He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

In this he is the modern-day William James, whose blend of articulate introspection (he introduced the idea of the stream of consciousness) and crisp explanations made his 1890 text, Principles of Psychology, a classic. “The mass of our thinking vanishes for ever, beyond hope of recovery,” James wrote, “and psychology only gathers up a few of the crumbs that fall from the feast.” Like Hofstadter, James made his life playing under the table, gleefully inspecting those crumbs. The difference is that where James had only his eyes, Hofstadter has something like a microscope.

You can credit the development of manned aircraft not to the Wright brothers’ glider flights at Kitty Hawk but to the six-foot wind tunnel they built for themselves in their bicycle shop using scrap metal and recycled wheel spokes. While their competitors were testing wing ideas at full scale, the Wrights were doing focused aerodynamic experiments at a fraction of the cost. Their biographer Fred Howard says that these were “the most crucial and fruitful aeronautical experiments ever conducted in so short a time with so few materials and at so little expense.”

In an old house on North Fess Avenue in Bloomington, Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG. The yearly operating budget is $100,000. Inside, it’s homey—if you wandered through, you could easily miss the filing cabinets tucked beside the pantry, the photocopier humming in the living room, the librarian’s labels (Neuroscience, MATHEMATICS, Perception) on the bookshelves. But for 25 years, this place has been host to high enterprise, as the small group of scientists tries, Hofstadter has written, “first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.”

Turning a mental process caught and catalogued in Hofstadter’s house into a running computer program, just a mile up the road, takes a dedicated graduate student about five to nine years. The programs all share the same basic architecture—a set of components and an overall style that traces back to Jumbo, a program that Hofstadter wrote in 1982 that worked on the word jumbles you find in newspapers.

The first thought you ought to have when you hear about a program that’s tackling newspaper jumbles is: Wouldn’t those be trivial for a computer to solve? And indeed they are—I just wrote a program that can handle any word, and it took me four minutes. My program works like this: it takes the jumbled word and tries every rearrangement of its letters until it finds a word in the dictionary.
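
For what it's worth, the brute-force program Somers describes really does fit in a few lines. Here is a minimal sketch of that approach, not his actual code; the tiny word set stands in for a real dictionary file.

```python
# Minimal sketch of the brute-force jumble solver described above: try every
# rearrangement of the letters until one of them appears in a dictionary.
from itertools import permutations

DICTIONARY = {"jumble", "gloss", "listen", "silent"}  # stand-in for a real word list

def solve_jumble(scrambled):
    """Return the first dictionary word formed by rearranging the letters, or None."""
    for perm in permutations(scrambled.lower()):
        candidate = "".join(perm)
        if candidate in DICTIONARY:
            return candidate
    return None

print(solve_jumble("lmbjue"))  # -> "jumble"
```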

Hofstadter spent two years building Jumbo: he was less interested in solving jumbles than in finding out what was happening when he solved them. He had been watching his mind. “I could feel the letters shifting around in my head, by themselves,” he told me, “just kind of jumping around forming little groups, coming apart, forming new groups—flickering clusters. It wasn’t me manipulating anything. It was just them doing things. They would be trying things themselves.”

The architecture Hofstadter developed to model this automatic letter-play was based on the actions inside a biological cell. Letters are combined and broken apart by different types of “enzymes,” as he says, that jiggle around, glomming on to structures where they find them, kicking reactions into gear. Some enzymes are rearrangers (pang-loss becomes pan-gloss or lang-poss), others are builders (g and h become the cluster gh; jum and ble become jumble), and still others are breakers (ight is broken into it and gh). Each reaction in turn produces others, the population of enzymes at any given moment balancing itself to reflect the state of the jumble.

It’s an unusual kind of computation, distinct for its fluidity. Hofstadter of course offers an analogy: a swarm of ants rambling around the forest floor, as scouts make small random forays in all directions and report their finds to the group, their feedback driving an efficient search for food. Such a swarm is robust—step on a handful of ants and the others quickly recover—and, because of that robustness, adept.
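
The sketch below is only a toy rendering of that description, not Hofstadter's Jumbo, which, among other things, balances its enzyme population against the state of the jumble. Here, clusters of letters are simply merged, split, and reshuffled at random until a dictionary word happens to emerge.

```python
# Toy rendering of the "enzyme" idea (not Hofstadter's Jumbo): clusters of letters
# are randomly merged (builders), split (breakers), and reshuffled (rearrangers)
# until a dictionary word happens to emerge.
import random

DICTIONARY = {"jumble"}

def builder(clusters):
    """Merge two randomly chosen clusters into one."""
    if len(clusters) > 1:
        i, j = random.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters

def breaker(clusters):
    """Split one randomly chosen multi-letter cluster in two."""
    i = random.randrange(len(clusters))
    if len(clusters[i]) > 1:
        cut = random.randrange(1, len(clusters[i]))
        clusters = clusters[:i] + [clusters[i][:cut], clusters[i][cut:]] + clusters[i + 1:]
    return clusters

def rearranger(clusters):
    """Shuffle the letters inside one randomly chosen cluster."""
    i = random.randrange(len(clusters))
    letters = list(clusters[i])
    random.shuffle(letters)
    return clusters[:i] + ["".join(letters)] + clusters[i + 1:]

def jumbo_like(word, steps=100_000):
    clusters = list(word)  # start from single-letter clusters
    for _ in range(steps):
        clusters = random.choice([builder, breaker, rearranger])(clusters)
        if len(clusters) == 1 and clusters[0] in DICTIONARY:
            return clusters[0]
    return None

print(jumbo_like("lmbjue"))  # usually "jumble"; occasionally None, since it is stochastic
```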

When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book. As a writer for The New York Times once put it in a 1995 review, “The reader of ‘Fluid Concepts & Creative Analogies’ cannot help suspecting that the group at Indiana University is on to something momentous.”

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.” In the late ’80s, mainstream AI was losing research dollars, clout, conference attendance, journal submissions, and press—because it was getting beat in that fight.

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

And it did. You could say that it started in 1988, with a project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits. So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward, you can hardly believe it.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that.
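
In code, that knob-jiggling amounts to re-estimating a table of word-translation probabilities from the sentence pairs alone. The sketch below is a bare-bones version of the idea in the spirit of IBM's Model 1 estimation procedure, nothing like Candide's full system; the four sentence pairs stand in for the millions drawn from parliamentary proceedings.

```python
# Bare-bones word-translation estimation from parallel sentences (in the spirit of
# IBM Model 1, not Candide itself): start with uniform "knobs" t(f|e) and repeatedly
# adjust them so the known sentence pairs become more probable.
from collections import defaultdict

pairs = [  # tiny stand-in for millions of aligned sentences
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
    ("a book", "un livre"),
]
pairs = [(e.split(), f.split()) for e, f in pairs]

english = {e for E, _ in pairs for e in E}
french = {f for _, F in pairs for f in F}
t = {e: {f: 1.0 / len(french) for f in french} for e in english}  # uniform start

for _ in range(10):  # each pass "jiggles the knobs" toward the data
    count = defaultdict(float)
    total = defaultdict(float)
    for E, F in pairs:
        for f in F:
            norm = sum(t[e][f] for e in E)
            for e in E:
                c = t[e][f] / norm        # how much e "explains" f in this pair
                count[(e, f)] += c
                total[e] += c
    for e in english:
        for f in french:
            t[e][f] = count[(e, f)] / total[e]

print(max(t["house"], key=t["house"].get))  # -> "maison"
```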

Candide didn’t invent machine learning—in fact the concept had been tested plenty before, in a primitive form of machine translation in the 1960s. But up to that point, no test had been very successful. The breakthrough wasn’t that Candide cracked the problem. It was that so simple a program performed adequately. Machine translation was, as Adam Berger, a member of the Candide team, writes in a summary of the project, “widely considered among the most difficult tasks in natural language processing, and in artificial intelligence in general, because accurate translation seems to be impossible without a comprehension of the text to be translated.” That a program as straightforward as Candide could perform at par suggested that effective machine translation didn’t require comprehension—all it required was lots of bilingual text. And for that, it became a proof of concept for the approach that conquered AI.

What Candide’s approach does, and with spectacular efficiency, is convert the problem of unknotting a complex process into the problem of finding lots and lots of examples of that process in action. This problem, unlike mimicking the actual processes of the brain, only got easier with time—particularly as the late ’80s rolled into the early ’90s and a nerdy haven for physicists exploded into the World Wide Web.

It is no coincidence that AI saw a resurgence in the ’90s, and no coincidence either that Google, the world’s biggest Web company, is “the world’s biggest AI system,” in the words of Peter Norvig, a director of research there, who wrote AI: A Modern Approach with Stuart Russell. Modern AI, Norvig has said, is about “data, data, data,” and Google has more data than anyone else.

Josh Estelle, a software engineer on Google Translate, which is based on the same principles as Candide and is now the world’s leading machine-translation system, explains, “you can take one of those simple machine-learning algorithms that you learned about in the first few weeks of an AI class, an algorithm that academia has given up on, that’s not seen as useful—but when you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”

The technique is so effective that the Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers. Engineering is what counts in a world where translation is an exercise in data-mining at a massive scale.

That’s what makes the machine-learning approach such a spectacular boon: it vacuums out the first-order problem, and replaces the task of understanding with nuts-and-bolts engineering. “You saw this springing up throughout” Google, Norvig says. “If we can make this part 10 percent faster, that would save so many millions of dollars per year, so let’s go ahead and do it. How are we going to do it? Well, we’ll look at the data, and we’ll use a machine-learning or statistical approach, and we’ll come up with something better.”

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work.

“Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”

For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy. “It’s artificial intelligence, right? Which is almost to say not-human intelligence. Why would you expect the science of artificial intelligence to produce human intelligence?”

Ferrucci is not blind to the difference. He likes to tell crowds that whereas Watson played using a room’s worth of processors and 20 tons of air-conditioning equipment, its opponents relied on a machine that fits in a shoebox and can run for hours on a tuna sandwich. A machine, no less, that would allow them to get up when the match was over, have a conversation, enjoy a bagel, argue, dance, think—while Watson would be left humming, hot and dumb and un-alive, answering questions about presidents and potent potables.

“The features that [these systems] are ultimately looking at are just shadows—they’re not even shadows—of what it is that they represent,” Ferrucci says. “We constantly underestimate—we did in the ’50s about AI, and we’re still doing it—what is really going on in the human brain.”

The question that Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don’t you come study it?

“I have mixed feelings about this,” Ferrucci told me when I put the question to him last year. “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don’t think the short path to that is theories of cognition.”

Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”

In their responses, one can see the legacy of AI’s failures. Work on fundamental problems reeks of the early days. “Concern for ‘respectability,’ ” Nils Nilsson writes in his academic history, The Quest for Artificial Intelligence, “has had, I think, a stultifying effect on some AI researchers.”

Stuart Russell, Norvig’s co-author of AI: A Modern Approach, goes further. “A lot of the stuff going on is not very ambitious,” he told me. “In machine learning, one of the big steps that happened in the mid-’80s was to say, ‘Look, here’s some real data—can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?’ What you see now in machine learning is that people see that as the only task.”

It’s insidious, the way your own success can stifle you. As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them. Which is a bit like using a graphing calculator to do your high-school calculus homework—it works great until you need to actually understand calculus.

It seems unlikely that feeding Google Translate 1 trillion documents, instead of 10 billion, will suddenly enable it to work at the level of a human translator. The same goes for search, or image recognition, or question-answering, or planning or reading or writing or design, or any other problem for which you would rather have a human’s intelligence than a machine’s.

This is a fact of which Norvig, just like everybody else in commercial AI, seems to be aware, if not dimly afraid. “We could draw this curve: as we gain more data, how much better does our system get?” he says. “And the answer is, it’s still improving—but we are getting to the point where we get less benefit than we did in the past.”

For James Marshall, a former graduate student of Hofstadter’s, it’s simple: “In the end, the hard road is the only one that’s going to lead you all the way.”

Hofstadter was 35 when he had his first long-term romantic relationship. He was born, he says, with “a narrow resonance curve,” borrowing a concept from physics to describe his extreme pickiness. “There have been certain women who have had an enormous effect on me; their face has had an incredible effect on me. I can’t give you a recipe for the face … but it’s very rare.” In 1980, after what he has described as “15 hellish, love-bleak years,” he met Carol Brush. (“She was at the dead center of the resonance curve.”) Not long after they met, they were happily married with two kids, and not long after that, while they were on sabbatical together in Italy in 1993, Carol died suddenly of a brain tumor. Danny and Monica were 5 and 2 years old. “I felt that he was pretty much lost a long time after Carol’s death,” says Pentti Kanerva, a longtime friend.

Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they’re almost not colleagues—we can’t speak to each other.”

Hofstadter strikes me as difficult, in a quiet way. He is kind, but he doesn’t do the thing that easy conversationalists do, that well-liked teachers do, which is to take the best of what you’ve said—to work you into their thinking as an indispensable ally, as though their point ultimately depends on your contribution. I remember sitting in on a roundtable discussion that Hofstadter and his students were having and thinking of how little I saw his mind change. He seemed to be seeking consensus. The discussion had begun as an e-mail that he had sent out to a large list of correspondents; he seemed keenest on the replies that were keenest on him.

“So I don’t enjoy it,” he told me. “I don’t enjoy going to conferences and running into people who are stubborn and convinced of ideas I don’t think are correct, and who don’t have any understanding of my ideas. And I just like to talk to people who are a little more sympathetic.”

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.”

Of course, the folly of being above the fray is that you’re also not a part of it. “There are very few ideas in science that are so black-and-white that people say ‘Oh, good God, why didn’t we think of that?’ ” says Bob French, a former student of Hofstadter’s who has known him for 30 years. “Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas. And if you don’t participate in the fight, in the rough-and-tumble of academia, your ideas are going to end up being sidelined by ideas which are perhaps not as good, but were more ardently defended in the arena.”

Hofstadter never much wanted to fight, and the double-edged sword of his career, if there is one, is that he never really had to. He won the Pulitzer Prize when he was 35, and instantly became valuable property to his university. He was awarded tenure. He didn’t have to submit articles to journals; he didn’t have to have them reviewed, or reply to reviews. He had a publisher, Basic Books, that would underwrite anything he sent them.

Stuart Russell puts it bluntly. “Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

____

~ James Somers is a writer and computer programmer based in New York City.