This would appear to be a huge breakthrough for artificial intelligence, so let's get some deeper background to understand what it means, if anything, about machine intelligence.
Here is the basic definition of the Turing test (via Wikipedia):
The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2]
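As an aside, the protocol is concrete enough to sketch in code. Below is a minimal Python sketch of one session of the game as quoted above; the interfaces (a judge object with ask and guess methods, respondents as plain functions) are illustrative assumptions of mine, not anything specified by Turing:

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Run one text-only session of the game described above.

    `judge.ask(transcript)` returns the next question and
    `judge.guess(transcript)` returns "X" or "Y"; `human` and
    `machine` are functions mapping a question to a reply.
    """
    # Hide the two respondents behind neutral labels, in random
    # order, so the judge cannot rely on position.
    players = {"X": human, "Y": machine}
    if random.random() < 0.5:
        players = {"X": machine, "Y": human}
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        answers = {label: respond(question) for label, respond in players.items()}
        transcript.append((question, answers))
    # The machine "wins" the session exactly when the judge guesses wrong.
    return players[judge.guess(transcript)] is machine
```

The only point the sketch makes is structural: the judge sees nothing but text, and the machine passes a session precisely when the guess is wrong.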
And here is a little more, including some criticisms:
The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]
In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[1][6]
Weaknesses of the test
Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.
Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behavior with human behavior. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behavior and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.
Human intelligence vs intelligence in general
The Turing test does not directly test whether the computer behaves intelligently; it tests only whether the computer behaves like a human being. Since human behavior and intelligent behavior are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behavior is unintelligent
The Turing test requires that the machine be able to execute all human behaviors, regardless of whether they are intelligent. It even tests for behaviors that we may not consider intelligent at all, such as susceptibility to insults,[70] the temptation to lie, or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviors in detail, it fails the test. This objection was raised by The Economist in an article entitled "Artificial Stupidity," published shortly after the first Loebner prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.[71]
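Turing's error-injection suggestion is easy to picture. Here is a minimal sketch; the adjacent-key substitution map and the 3 per cent error rate are my own assumptions, not taken from Turing's paper or any Loebner entry:

```python
import random

# Tiny adjacent-key map; a real entry would use a full keyboard layout.
ADJACENT = {"a": "s", "e": "r", "i": "o", "n": "m", "s": "d", "t": "y"}

def humanize(text: str, error_rate: float = 0.03) -> str:
    """Replace a small fraction of letters with a neighbouring key,
    mimicking the typing errors a human would make."""
    out = []
    for ch in text:
        if ch.lower() in ADJACENT and random.random() < error_rate:
            out.append(ADJACENT[ch.lower()])
        else:
            out.append(ch)
    return "".join(out)

print(humanize("I think the answer is seventeen, give me a moment."))
```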
Some intelligent behavior is inhuman
The Turing test does not test for highly intelligent behaviors, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being, it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, the interrogator would know the program is not human, and the machine would fail the test. Because it cannot measure intelligence beyond the ability of humans, the test cannot be used to build or evaluate systems that are more intelligent than humans. For this reason, several alternative tests capable of evaluating super-intelligent systems have been proposed.[72]
Real intelligence vs simulated intelligence
The Turing test is concerned strictly with how the subject acts: the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.
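It is worth seeing how little machinery such rule-following requires. Below is a minimal ELIZA-style responder; the patterns are illustrative assumptions, not ELIZA's actual script:

```python
import re

# A handful of pattern -> template rules; ELIZA's real script was
# larger, but mechanically similar in spirit.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def reply(utterance: str) -> str:
    """Answer by surface pattern-matching, with no model of meaning."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I am worried about the Turing test."))
# -> Why do you say you are worried about the Turing test?
```

A larger list of such rules, plus tricks for echoing the user's own words back, is essentially all a program of this kind needs to sustain a superficially human conversation.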
John Searle has argued that external behavior cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)
Turing anticipated this line of criticism in his original paper,[73] writing:
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[74]
Naivete of interrogators and the anthropomorphic fallacy
In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivete of the questioner.
Turing did not specify the precise skills and knowledge required of the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]
Shah & Warwick (2009b) show that experts are fooled, and that interrogator strategy, "power" vs "solidarity" affects correct identification, the latter being more successful.
Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To appear human successfully, the machine does not need any intelligence whatsoever; only a superficial resemblance to human behaviour is required.
Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[75]
Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: they talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history. This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.
With that background, you can make sense of this new study as it fits your worldview. In my worldview, it does not mean much about computer intelligence. It does, however, advance the foundation for future research.
Computer passes 'Turing Test' for the first time after convincing users it is human
A "super computer" has duped humans into thinking it is a 13-year-old boy, becoming the first machine to pass the "iconic" Turing Test, experts say
By Hannah Furness
08 Jun 2014
Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.
The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was "thinking".
No computer had previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.
But "Eugene Goostman", a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.
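The pass criterion in these reports is simple arithmetic over the judges' verdicts, sketched below with hypothetical numbers, since the article gives only the aggregate 33 per cent figure:

```python
def fooled_rate(verdicts):
    """verdicts: one boolean per judge, True when that judge
    labelled the machine as the human."""
    return sum(verdicts) / len(verdicts)

# Hypothetical panel: 10 of 30 judges fooled -> 33%, above the 30% bar.
verdicts = [True] * 10 + [False] * 20
rate = fooled_rate(verdicts)
print(f"{rate:.0%} fooled; passed: {rate > 0.30}")
```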
Professor Kevin Warwick, from the University of Reading, said: "In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test.
"It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting."
The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko, who lives in Russia.
Mr Veselov said: "It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots."
Prof Warwick said there had been previous claims that the test was passed in similar competitions around the world.
"A true Turing Test does not set the questions or topics prior to the conversations," he said.
"We are therefore proud to declare that Alan Turing's test was passed for the first time."
Prof Warwick said having a computer with such artificial intelligence had "implications for society" and would serve as a "wake-up call to cybercrime".
The event on Saturday was poignant as it took place on the 60th anniversary of the death of Dr Turing, who laid the foundations of modern computing.
During the Second World War, his critical work at Britain's code-breaking centre at Bletchley Park helped shorten the conflict and save many thousands of lives.
Instead of being hailed a hero, Dr Turing was persecuted for his homosexuality. After his conviction in 1952 for gross indecency with a 19-year-old Manchester man, he was chemically castrated.
Two years later, he died from cyanide poisoning in an apparent suicide, though there have been suggestions that his death was an accident.
Last December, after a long campaign, Dr Turing was given a posthumous Royal Pardon.