Showing posts with label Turing Test. Show all posts

Thursday, June 12, 2014

Kurzweil Does Not Accept Victory in the Turing Test Bet

 

The other day, Kevin Warwick and his team at the University of Reading reported that a chatbot named Eugene Goostman had become the first artificial intelligence to pass the Turing Test.

For those who follow such things, inventor, futurist, and Google director of engineering Ray Kurzweil has a standing $20,000 wager with Mitch Kapor that a computer will pass the Turing Test by 2029. Based on the report cited above, it would appear Kurzweil has won the bet.

The only problem is that Kurzweil does not think so, which is not good news for the researchers and their bot.

Here is Kurzweil's statement from his blog:

Response by Ray Kurzweil to the Announcement of Chatbot Eugene Goostman Passing the Turing test

June 10, 2014 by Ray Kurzweil
Eugene Goostman chatbot. (credit: Vladimir Veselov and Eugene Demchenko)

Two days ago, on June 8, 2014, the University of Reading announced that a computer program “has passed the Turing test for the first time.”

University of Reading Professor Kevin Warwick described it this way:
“Some will claim that the test has already been passed. The words ‘Turing test’ have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s test was passed for the first time on Saturday.” — Kevin Warwick, PhD
I have had a long-term wager with Mitch Kapor in which I predicted that a computer program would pass the Turing test by 2029 and he predicted that this would not happen, see links below.

This was the first long-term wager on the “Long Now” website. The bet called for $20,000 to be donated from us to the charity of the winner’s choice.

As a result, messages have been streaming in from around the world congratulating me for having won the bet.

However, I think this is premature. I am disappointed that Professor Warwick, with whom I agree on many things, would make this statement. There are several problems that I describe below, including a transcript of a conversation that I had with Eugene Goostman, the chatbot in question.

In my 2004 book The Singularity Is Near, I anticipated that there would be premature announcements of this kind:
The Singularity Is Near, page 295 | Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed. In 2002 I negotiated the rules for a Turing test wager with Mitch Kapor on the Long Now website.
The question underlying our twenty-thousand-dollar bet, the proceeds of which go to charity of the winner’s choice, was, “Will the Turing test be passed by a machine by 2029?” I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager.
Simply defining “machine” and “human,” for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?
Because the definition of the Turing test will vary from person to person, Turing test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.
Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction. Moreover, the interactions were reportedly limited to five minutes each. There is a high likelihood of fooling naïve judges in such a brief period of time.

I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs. Here is the transcript.


Ray Kurzweil conversation with chatbot Eugene Goostman

Question: How old are you?
Eugene: I’m a little boy. 13 years old.
Question: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.
Question: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.
Question: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I did tell you where I live. I told you several times. Do you remember?
Eugene: Oh damn! Poor confused cyborg you are! Those malicious robots erased your memory! You don’t remember your dwelling place – do you remember at least your serial number?
Question: I remember. You are the one who did not remember.
Eugene: Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? :-))) If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: If I have three boxes with two marbles in each box how many marbles do I have?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. Some another topic?

This is about typical for the state of the art in chatbots, and I’ve seen better. We have a live chatbot named Ramona on the KurzweilAI website. Ramona can be amusing but we don’t claim that she passes the Turing test.
Ramona live chatbot on KurzweilAI website | “Chat with Ramona 4.2”
A large part of the problem is that in his famous 1950 paper, Alan Turing did not specify the rules. Here are the rules that Mitch Kapor and I painstakingly devised together, along with our essays on why we think each of us will win the wager.
The rules I devised with Mitch Kapor | “A Wager on the Turing test: the rules”
An explanation of rules behind the Turing test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.
Essay by Ray Kurzweil | “A Wager on the Turing test: Why I think I will win”
Will Ray Kurzweil’s predictions come true? He’s putting his money on it. Here’s why he thinks he will win a bet on the future of artificial intelligence. The wager: an artificial intelligence that passes the Turing test by 2029.
Essay by Mitch Kapor | “Why I think I will win”
Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.
Essay by Ray Kurzweil | “Response to Mitchell Kapor’s essay titled ‘Why I think I will win’”
Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will succeed, in this final counterpoint on the bet: an artificial intelligence will pass a Turing Test by 2029.
Apparently, we have now entered the era of premature announcements of a computer having passed Turing’s eponymous test. I continue to believe that with the right rules, this test is the right assessment of human-level intelligence in a machine.

In my 1989 book The Age of Intelligent Machines, I predicted that the milestone of a computer passing the Turing test would occur in the first half of the 21st century. I specified the 2029 date in my 1999 book The Age of Spiritual Machines. After that book was published, we had a conference at Stanford University and the consensus of AI experts at that time was that it would happen in hundreds of years, if ever.

In 2006 we had a conference called “AI at 50” at Dartmouth College, celebrating the 50th anniversary of the 1956 Dartmouth conference that gave artificial intelligence its name. We had instant polling devices and the consensus at that time, among AI experts, was 25 to 50 years. Today, my prediction appears to be the median view. So, I am gratified that a growing group of people now think that I am being too conservative.


Tuesday, June 10, 2014

Computer Passes 'Turing Test' for the First Time After Convincing Users it Is Human


This would appear to be a huge breakthrough for artificial intelligence, so let's look at some background to understand what it means, if anything, about machine intelligence.

Here is the basic definition of the Turing test (via Wikipedia):
The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2]

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[1][6]
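To make the setup concrete, here is a minimal sketch (in Python) of the three-party, text-only protocol described above. The interface names (ask, reply, identify_machine) are my own illustrative choices, not part of any real benchmark harness.

```python
# A toy sketch of the imitation-game protocol: a judge converses over a
# text-only channel with two hidden parties (one human, one machine) and
# must decide which is which. Names and interfaces are illustrative only.
import random

def run_imitation_game(judge, human, machine, num_turns=5):
    """Return True if the judge misidentifies the machine as the human."""
    parties = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide identities in random order
        parties = {"A": machine, "B": human}

    transcript = {"A": [], "B": []}
    for _ in range(num_turns):
        for label, party in parties.items():
            question = judge.ask(label, transcript[label])
            answer = party.reply(question)         # text only; no voice, no video
            transcript[label].append((question, answer))

    guess = judge.identify_machine(transcript)     # judge answers "A" or "B"
    truly_machine = "A" if parties["A"] is machine else "B"
    return guess != truly_machine                  # True means the machine fooled the judge
```

The judge's reliability, the length of the conversations, and the number of trials are exactly the parameters that the criticisms below turn on.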
And here is a little more, including some criticisms:

Weaknesses of the test


Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behavior with human behavior. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behavior and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.


Human intelligence vs intelligence in general 

The Turing test does not directly test whether the computer behaves intelligently - it tests only whether the computer behaves like a human being. Since human behavior and intelligent behavior are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behavior is unintelligent
The Turing test requires that the machine be able to execute all human behaviors, regardless of whether they are intelligent. It even tests for behaviors that we may not consider intelligent at all, such as the susceptibility to insults,[70] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviors in detail it fails the test. This objection was raised by The Economist, in an article entitled "Artificial Stupidity" published shortly after the first Loebner prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.[71]
Some intelligent behavior is inhuman
The Turing test does not test for highly intelligent behaviors, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test. Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used in order to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.[72]
Real intelligence vs simulated intelligence
The Turing test is concerned strictly with how the subject acts — the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

John Searle has argued that external behavior cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)

Turing anticipated this line of criticism in his original paper,[73] writing:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[74]
Naivete of interrogators and the anthropomorphic fallacy

In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivete of the questioner.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]

Shah & Warwick (2009b) show that experts are fooled, and that interrogator strategy, "power" vs "solidarity" affects correct identification, the latter being more successful.

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[75]

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history. This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.
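The ELIZA point above, that a machine can pass by "following a simple (but large) list of mechanical rules," is easy to make concrete. Below is a deliberately tiny keyword-matching loop in that style; the keyword list is invented for illustration, and real ELIZA-class programs use far larger scripts with reassembly rules.

```python
# A toy keyword-matching chatbot in the ELIZA mold: canned responses keyed
# on keywords, with no memory of earlier turns. The rules here are made up
# for illustration; real scripts are much larger but use the same principle.
RULES = [
    ("mother",   "Tell me more about your family."),
    ("computer", "Do computers worry you?"),
    ("where",    "Why do you ask about places?"),
]
FALLBACK = "Please go on."          # the non sequitur produced on every keyword miss

def reply(user_text):
    text = user_text.lower()
    for keyword, canned in RULES:
        if keyword in text:
            return canned
    return FALLBACK

# No conversation state means identical inputs always get identical outputs,
# the same word-for-word repetition Kurzweil noticed in his transcript above.
print(reply("Where do you live?"))  # -> "Why do you ask about places?"
print(reply("Where do you live?"))  # -> the same answer, verbatim
```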
With that background, you can make sense of this new claim in whatever way fits your worldview.

In my worldview, it does not mean much about computer intelligence, though it does advance the foundation for future research. (The arithmetic behind the claimed pass is sketched after the article below.)

Computer passes 'Turing Test' for the first time after convincing users it is human

A "super computer" has duped humans into thinking it is a 13-year-old boy, becoming the first machine to pass the "iconic" Turing Test, experts say

By Hannah Furness
08 Jun 2014


Alan Turing Photo: AFP

A ''super computer'' has duped humans into thinking it is a 13-year-old boy to become the first machine to pass the ''iconic'' Turing Test, experts have said.

Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was ''thinking''.

No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.

But ''Eugene Goostman'', a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.

Professor Kevin Warwick, from the University of Reading, said: ''In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test.

''It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.''

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.

Mr Veselov said: ''It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots.''

Prof Warwick said there had been previous claims that the test was passed in similar competitions around the world.

''A true Turing Test does not set the questions or topics prior to the conversations,'' he said.

''We are therefore proud to declare that Alan Turing's test was passed for the first time.''

Prof Warwick said having a computer with such artificial intelligence had ''implications for society'' and would serve as a ''wake-up call to cybercrime''.

The event on Saturday was poignant as it took place on the 60th anniversary of the death of Dr Turing, who laid the foundations of modern computing.

During the Second World War, his critical work at Britain's code-breaking centre at Bletchley Park helped shorten the conflict and save many thousands of lives.

Instead of being hailed a hero, Dr Turing was persecuted for his homosexuality. After his conviction in 1952 for gross indecency with a 19-year-old Manchester man, he was chemically castrated.

Two years later, he died from cyanide poisoning in an apparent suicide, though there have been suggestions that his death was an accident.

Last December, after a long campaign, Dr Turing was given a posthumous Royal Pardon.
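As an aside, the pass criterion quoted in the article (fooling more than 30 per cent of the judges in five-minute text conversations) comes down to a simple proportion check. Here is a minimal sketch; the judge counts are hypothetical, since the article reports only percentages.

```python
# The reported criterion reduces to a proportion check. The judge counts
# below are hypothetical; the article quotes only the 30% threshold and
# the 33% result.
def passes_reading_criterion(judges_fooled, judges_total, threshold=0.30):
    rate = judges_fooled / judges_total
    return rate, rate > threshold

rate, passed = passes_reading_criterion(judges_fooled=10, judges_total=30)
print(f"{rate:.0%} of judges fooled -> passed: {passed}")   # 33% of judges fooled -> passed: True
```

Whether clearing an arbitrary threshold by a few points, with a 13-year-old non-native-speaker persona and five-minute chats, really counts as "passing the Turing test" is exactly what Kurzweil disputes above.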

Tuesday, December 10, 2013

Ray Kurzweil and the Brains Behind the Google Brain (Big Think)

Ah, Ray Kurzweil . . . he's so brilliant in some respects and so misguided in others. Kurzweil has predicted, and indeed made a $20,000 bet with Mitchell Kapor, that we will develop a conscious computer (one that can pass the Turing test) by 2029. Pardon me while I laugh hysterically for a few minutes. Ahem . . . you can read both men's arguments at the link above.

There are other reasons I find Kurzweil laughable, but they are not relevant to this post.

What is relevant is that he has teamed up with the brain trust at Google to try to create an intelligent machine, which gives him better odds than if he were on his own.


Ray Kurzweil and the Brains Behind the Google Brain

by Big Think Editors
December 8, 2013
Time was when Google engineers spent all their days counting links and ranking pages. The company's famous algorithm made it the leading search engine in the world. Admittedly, it was far from perfect. That is why current efforts are aimed at developing ways for computers to read and understand natural language.

Enter Ray Kurzweil, an inventor and expert in artificial intelligence. Kurzweil's goal is ostensibly to help the company improve the accuracy of its search results, but that is certainly not all. Kurzweil, after all, is one of the world's leading advocates of "hard AI," or the development of consciousness in an artificial being. Kurzweil believes this will come about in 2029, to be specific.

So in addition to Google's development of autonomous cars and its aggressive play in robotic delivery systems, the company is also looking to build an artificial brain, aka "The Google Brain." As Steven Levy notes on Wired, this is a fact that "some may consider thrilling and others deeply unsettling. Or both."

Kurzweil is collaborating with Jeff Dean to find the brain's algorithm, and Kurzweil says the reason he is at Google is to take full advantage of the company's deep learning resources.

In the video below, Kurzweil outlines three tangible benefits that he expects to come out of this project. Beyond building more intelligent machines, if we are able to reverse-engineer the brain, we will be able to do a better job at fixing it. We will also gain more insight into ourselves, he says. After all, "our identity, our consciousness, the concept of free will is closely associated with the brain."

* * * * *

Deep Learning


by Big Think Editors
The Big Idea for Sunday, December 08, 2013

A smart machine, if given enough data, can teach itself to recognize patterns and mimic the way that the human brain behaves.

In today's lesson, Ray Kurzweil provides insights into the work he is doing at Google. His ostensible goal is to help the company develop a better search engine that can process natural language. But the potential benefits of discovering the brain's algorithm go much further than that. The more we understand about the brain, Kurzweil says, the better we are able to fix it. Moreover, the brain is at the center of our understanding of human identity, and our notions of consciousness and free will.
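The "teach itself to recognize patterns" idea can be shown in miniature. The sketch below trains a single artificial neuron (a perceptron) on the logical-AND pattern from examples; it is of course vastly simpler than the deep networks Google is building, and is only meant to illustrate the learn-from-data loop.

```python
# A minimal learn-from-examples loop: a single neuron (perceptron) taught
# the logical-AND pattern. Vastly simpler than a deep network, but the same
# idea in miniature: adjust weights until the outputs match the data.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # AND truth table

def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                        # a few passes over the data suffice here
    for x, target in DATA:
        error = target - predict(weights, bias, x)
        weights[0] += lr * error * x[0]    # nudge weights toward the correct answer
        weights[1] += lr * error * x[1]
        bias += lr * error

for x, target in DATA:
    print(x, "->", predict(weights, bias, x), "expected", target)
```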



Wednesday, January 30, 2013

Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You


Uh, yeah, sure, you betcha, Ray.

This comes from the MIT Technology Review: a brief explanation of Ray Kurzweil's new job at Google, which is to create a mind that can help predict what searchers will need before they know they need it. This is part of his ongoing quest to create an artificial intelligence that can pass the Turing test.

He outlines his model of building a mind in his most recent book, How to Create a Mind: The Secret of Human Thought Revealed. But as I have argued here for years, unless he also creates a body and full nervous system, his "mind" will be little more than a fancy super-computer.

Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You

The technologist speaks about an ambitious plan to build a powerful artificial intelligence.


HAL from 2001: A Space Odyssey.

Famed AI researcher and incorrigible singularity forecaster Ray Kurzweil recently shed some more light on what his new job at Google will entail. It seems that he does, indeed, plan to build a prodigious artificial intelligence, which he hopes will understand the world to a much more sophisticated degree than anything built before–or at least that will act as if it does.

Kurzweil’s AI will be designed to analyze the vast quantities of information Google collects and to then serve as a super-intelligent personal assistant. He suggests it could eavesdrop on your every phone conversation and email exchange and then provide interesting and important information before you ever knew you wanted it. It sounds like a scary-smart version of Google Now (see “Google’s Answer to Siri Thinks Ahead”).

Kurzweil says this of his project at Google, in a video posted by The Singularity Hub:
“There’s no more important project than understanding Intelligence and recreating it. I do envision a fundamental approach based on everything we understand about how the human brain [works]. And there are some things we don’t yet understand so I plan to go off and explore some of my own ideas about how certain things work.”
Kurzweil makes it sound like the effort will be based on the theory of the mind put forward in his new book, How to Create a Mind. In this work, based largely on observations about current trends in AI research and his own work on speech and character recognition, Kurzweil suggests a fairly simple mechanism by which information is captured and accessed hierarchically throughout the neocortex, and posits that this phenomenon can explain the miracle of human conscious experience.
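The book's central claim, echoed above, is that the neocortex works as a hierarchy of pattern recognizers, with low-level recognizers feeding higher-level ones. The toy sketch below illustrates only that general idea of hierarchy (character-level recognizers feeding a word-level recognizer); it is not Kurzweil's actual model, which involves learned, probabilistic patterns.

```python
# A toy two-level hierarchy of pattern recognizers: character-level
# recognizers feed a word-level recognizer, which fires only when all of
# its children fire in order. This illustrates the hierarchy idea only;
# it is not Kurzweil's model.
def char_recognizer(expected):
    """Lowest level: fires on a single expected character."""
    return lambda ch: ch == expected

def word_recognizer(word):
    """Higher level: fires when its child recognizers fire in sequence."""
    children = [char_recognizer(c) for c in word]
    def recognize(text):
        return len(text) == len(children) and all(
            child(ch) for child, ch in zip(children, text)
        )
    return recognize

apple = word_recognizer("apple")
print(apple("apple"))   # True: every child pattern matched in order
print(apple("apply"))   # False: the top-level pattern does not fire
```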

Kurzweil’s claims are certainly bold, and some have criticized them as hopelessly naïve. Indeed, it’s easy to dismiss any predictions he makes because of the outlandish ones he’s made in the past. But Kurzweil is nothing if not a brilliant inventor, and he indicates that at Google he’ll be rolling his sleeves up and doing real engineering. It’ll be fascinating to see how far this remarkable project takes both the inventor and the company.