
Sunday, January 18, 2015

EDGE Question 2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

It's time again for the annual EDGE question. For 2015 it is: What do you think about machines that think? Among this year's respondents are many authors and thinkers who work in psychology, neuroscience, philosophy, and consciousness research. Here are a few:

Stanislas Dehaene, Alison Gopnik, Thomas Metzinger, Bruce Sterling, Kevin Kelly, Sam Harris, Daniel Dennett, Andy Clark, Michael Shermer, Nicholas Humphrey, Gary Marcus, George Dyson, Paul Davies, Douglas Rushkoff, Helen Fisher, Stuart A. Kauffman, Robert Sapolsky, Maria Popova, Steven Pinker, and many others - 186 in all.

2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?



"Dahlia" by Katinka Matson |  Click to Expand www.katinkamatson.com
_________________________________________________________________

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."   
THE EDGE QUESTION—2015 
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
_________________________________________________________________


[182 Responses—126,000 words:] Pamela McCorduck, George Church, James J. O'Donnell, Carlo Rovelli, Nick Bostrom, Daniel C. Dennett, Donald Hoffman, Roger Schank, Mark Pagel, Frank Wilczek, Robert Provine, Susan Blackmore, Haim Harari, Andy Clark, William Poundstone, Peter Norvig, Rodney Brooks, Jonathan Gottschall, Arnold Trehub, Giulio Boccaletti, Michael Shermer, Chris DiBona, Aubrey De Grey, Juan Enriquez, Satyajit Das, Quentin Hardy, Clifford Pickover, Nicholas Humphrey, Ross Anderson, Paul Saffo, Eric J. Topol, M.D., Dylan Evans, Roger Highfield, Gordon Kane, Melanie Swan, Richard Nisbett, Lee Smolin, Scott Atran, Stanislas Dehaene, Stephen Kosslyn, Emanuel Derman, Richard Thaler, Alison Gopnik, Ernst Pöppel, Luca De Biase, Margaret Levi, Terrence Sejnowski, Thomas Metzinger, D.A. Wallach, Leo Chalupa, Bruce Sterling, Kevin Kelly, Martin Seligman, Keith Devlin, S. Abbas Raza, Neil Gershenfeld, Daniel Everett, Douglas Coupland, Joshua Bongard, Ziyad Marar, Thomas Bass, Frank Tipler, Mario Livio, Marti Hearst, Randolph Nesse, Alex (Sandy) Pentland, Samuel Arbesman, Gerald Smallberg, John Mather, Ursula Martin, Kurt Gray, Gerd Gigerenzer, Kevin Slavin, Nicholas Carr, Timo Hannay, Kai Krause, Alun Anderson, Seth Lloyd, Mary Catherine Bateson, Steve Fuller, Virginia Heffernan, Barbara Strauch, Sean Carroll, Sheizaf Rafaeli, Edward Slingerland, Nicholas Christakis, Joichi Ito, David Christian, George Dyson, Paul Davies, Douglas Rushkoff, Tim O'Reilly, Irene Pepperberg, Helen Fisher, Stuart A. Kauffman, Stuart Russell, Tomaso Poggio, Robert Sapolsky, Maria Popova, Martin Rees, Lawrence M. Krauss, Jessica Tracy & Kristin Laurin, Paul Dolan, Kate Jeffery, June Gruber & Raul Saucedo, Bruce Schneier, Rebecca MacKinnon, Antony Garrett Lisi, Thomas Dietterich, John Markoff, Matthew Lieberman, Dimitar Sasselov, Michael Vassar, Gregory Paul, Hans Ulrich Obrist, Andrian Kreye, Andrés Roemer, N.J. Enfield, Rolf Dobelli, Nina Jablonski, Marcelo Gleiser, Gary Klein, Tor Nørretranders, David Gelernter, Cesar Hidalgo, Gary Marcus, Sam Harris, Molly Crockett, Abigail Marsh, Alexander Wissner-Gross, Koo Jeong-A, Sarah Demers, Richard Foreman, Julia Clarke, Georg Diez, Jaan Tallinn, Michael McCullough, Hans Halvorson, Kevin Hand, Christine Finn, Tom Griffiths, Dirk Helbing, Brian Knutson, John Tooby, Maximilian Schich, Athena Vouloumanos, Brian Christian, Timothy Taylor, Bruce Parker, Benjamin Bergen, Laurence Smith, Ian Bogost, W. Tecumseh Fitch, Michael Norton, Scott Draves, Gregory Benford, Chris Anderson, Raphael Bousso, Christopher Chabris, James Croak, Beatrice Golomb, Moshe Hoffman, Matt Ridley, Matthew Ritchie, Eduardo Salcedo-Albaran, Eldar Shafir, Maria Spiropulu, Tania Lombrozo, Bart Kosko, Joscha Bach, Esther Dyson, Anthony Aguirre, Steve Omohundro, Murray Shanahan, Eliezer Yudkowsky, Steven Pinker, Max Tegmark, Jon Kleinberg & Sendhil Mullainathan, Freeman Dyson, Brian Eno, W. Daniel Hillis, Katinka Matson

Thursday, December 11, 2014

Perspectives on Artificial Intelligence

Artificial intelligence.

Kevin Kelly  
 
Conversations at the Edge 2.3.14
 
* * * * * 

Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review

Will technology remain our slave? Caspar Henderson on two attempts to read the future for humanity

Caspar Henderson | The Guardian
Thursday 17 July 2014

* * * * *

What Your Computer Can’t Know


John R. Searle | New York Review of Books
October 9, 2014
The 4th Revolution: How the Infosphere Is Reshaping Human Reality
by Luciano Floridi
Oxford University Press, 248 pp., $27.95

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Oxford University Press, 328 pp., $29.95

* * * * *

Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong

By Lee Gomes | IEEE Spectrum
Posted 20 Oct 2014

* * * * *

The Myth Of AI

A Conversation with Jaron Lanier

Conversations at the Edge 11.14.14

* * * * *

Artificial Intelligence, Really, Is Pseudo-Intelligence


Alva Noë | NPR 13.7 Cosmos and Culture Blog
November 21, 2014

* * * * *

Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen wonders: If the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

By Kurt Andersen | Vanity Fair
November 26, 2014

* * * * *

Is AI a Myth?


By Rick Searle | IEET
Utopia or Dystopia
Nov 30, 2014

* * * * *

Stephen Hawking warns artificial intelligence could end mankind


By Rory Cellan-Jones
BBC News | 2 December 2014

Wednesday, September 24, 2014

Nick Bostrom: Superintelligence - Authors@Google

 

Nick Bostrom is the author of Superintelligence: Paths, Dangers, Strategies (September, 2014). Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.
Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the groundbreaking Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages. There have been more than 100 translations and reprints of his works.

For more, see www.nickbostrom.com

This is an excellent talk.

Nick Bostrom: Superintelligence - Authors@Google

Published on Sept 22, 2014


Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

This talk was hosted by Boris Debic.

Monday, August 18, 2014

A Thousand Cooperative, Self-Organising Robots - This Is How the End of Human Dominance on Earth Will Occur

Kilobots

I suspect I am not the only one who finds a swarm of cooperative, self-organizing robots to be a rather obvious foreshadowing of the end of human dominance on Earth. Welcome to the singularity.

This is an excellent review of the new study published in Science (pay-walled, of course). It comes from National Geographic's Phenomena blog, of which Not Exactly Rocket Science is a part.

A Swarm of a Thousand Cooperative, Self-Organising Robots

by Ed Yong | Not Exactly Rocket Science
August 14, 2014

In a lab at Harvard’s Wyss Institute, the world’s largest swarm of cooperative robots is building a star… out of themselves. There are 1024 of these inch-wide ‘Kilobots’, and they can arrange themselves into different shapes, from a letter to a wrench. They are slow and comically jerky in their movements, but they are also autonomous. Once they’re given a shape, they can recreate it without any further instructions, simply by cooperating with their neighbours and organising themselves.

The Kilobots are the work of Mike Rubenstein, Alejandro Cornejo and Radhika Nagpal, who were inspired by natural swarms, where simple and limited units can cooperate to do great things. Thousands of fire ants can unite into living bridges, rafts and buildings. Billions of unthinking neurons can create the human brain. Trillions of cells can fashion a tree or a tyrannosaur. Scientists have tried to make artificial swarms with similar abilities, but building and programming them is expensive and difficult. Most of these robot herds consist of a few dozen units, and only a few include more than a hundred. The Kilobots smash that record.

They’re still a far cry from the combiner robots of my childhood cartoons: they arrange themselves into two-dimensional shapes rather than assembling Voltron-style into actual objects. But they’re already an impressive achievement. “This is not only the largest swarm of robots in the world but also an excellent test bed, allowing us to validate collective algorithms in practice,” says Roderich Gross from the University of Sheffield, who has bought 900 of the robots himself to use in his own experiments.

“This is a staggering work,” adds Iain Couzin, who studies collective animal behaviour at Princeton University. “It offers a vision of the future where robot groups could form structures on demand as, for example, in search-and-rescue in dangerous environments, or even the formation of miniature swarms within the body to detect and treat disease.”

“And I’ll form… the wrench!” Credit: Michael Rubenstein, Harvard University.

To create their legion, the team had to rethink every aspect of a typical robot. “If you have a power switch, it takes four seconds to push that, so it’ll take over an hour to turn on a thousand robots,” says Rubenstein. “Charging them, turning them on, sending them new instructions… everything you do with a thousand robots has to be at the level of all the robots at once.”

They also have to be cheap. Fancy parts might make each bot more powerful, but would turn a swarm into a budget-breaker. Even wheels were out. Instead, the team used simpler vibration motors. If you leave your phone on a table and it vibrates, it will also slide slightly: that’s how the Kilobots move. They have two motors: if either vibrates individually, the robot rotates; if both vibrate, it goes straight.
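As a toy illustration of that movement rule (my own sketch, not the team's firmware; the motor names and the straight/rotate mapping are taken from the description above):

# Toy sketch of the two-motor movement rule described above. This is
# not the Kilobot firmware; names and behaviours follow the article.
def drive(left_vibrates: bool, right_vibrates: bool) -> str:
    if left_vibrates and right_vibrates:
        return "straight(ish)"   # both motors: slide roughly forward
    if left_vibrates or right_vibrates:
        return "rotate"          # a single motor: pivot in place
    return "stopped"             # neither motor: stay put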

Well, straight-ish, anyway. The tyranny of cost-efficiency meant that the team had to lose any sensors that might tell the robots their bearings or positions. They can’t tell where they are, or if they’re going straight. But each one can shoot infrared beams to the surface below it, and sense the beams reflecting from its neighbours. By measuring how bright the reflections are, it can calculate its distance from other Kilobots.
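The article doesn't give the calibration the robots actually use, but as a toy illustration: if reflected brightness fell off with the square of distance, a robot could invert that relationship to estimate range. A minimal sketch under that inverse-square assumption (the real hardware is calibrated empirically):

import math

# Toy model: reflected IR brightness falls off roughly with the square
# of distance (an assumption for illustration only).
def estimate_distance(brightness, brightness_at_unit=1.0):
    # brightness ~ brightness_at_unit / d**2  =>  d ~ sqrt(ref / brightness)
    return math.sqrt(brightness_at_unit / brightness)

# Averaging several noisy readings smooths out sensor noise.
def smoothed_distance(readings, brightness_at_unit=1.0):
    estimates = [estimate_distance(b, brightness_at_unit) for b in readings]
    return sum(estimates) / len(estimates)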

This combination of stilted motion and dulled senses meant that each robot costs just $20. It also meant that “the robots were even more limited than we expected,” says Rubenstein. “The way they sense distance is noisy and imprecise. You can tell them to move and they won’t, and they’ll have no idea that they’re not moving.”

Fortunately, they have each other. A stuck Kilobot can’t tell if it’s stuck on its own, but it can communicate with its neighbours. If it thinks it’s moving but the distances from its neighbours change, it can deduce that something is wrong. And if neighbours estimate the distances between them and use the average, they can smooth out individual errors.
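A toy version of that mutual error-checking (my own illustration, not the published code): a robot that believes it is moving, but whose measured distances to its neighbours stay put, can infer that it is probably stuck.

# If none of the neighbour distances changed while we thought we were
# moving, something is likely wrong (e.g. a stuck motor).
def probably_stuck(distances_before, distances_after, tolerance=0.05):
    drift = [abs(a - b) for a, b in zip(distances_before, distances_after)]
    return max(drift) < tolerance  # nothing moved relative to neighbours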

Using these principles, the team created a simple program that allows the robots to independently assemble into different shapes using just three behaviours. First, they move by skirting along the edges of a group. Second, they create gradients as a crude way of noting their position in the swarm. (A nominated source robot gets a gradient value of 0. Any adjacent robot that can see it sets its gradient value to 1. Any robot that sees 1 but not 0 sets its gradient to 2, and so on.) Finally, although they have no GPS, they can triangulate their position by talking to their neighbours. As long as the team nominates some robots as seeds, effectively turning them into the zero-point on an invisible graph, the rest of the swarm can then work out where they are.
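The gradient rule, at least, is simple enough to sketch in a few lines. Here is a minimal toy simulation of that one behaviour (my own illustration, not the published code): each robot repeatedly sets its value to one more than the smallest gradient it can see among neighbours within communication range.

import math

def form_gradient(positions, source_index, comm_radius):
    # Each robot starts with an "infinite" gradient except the source.
    n = len(positions)
    gradient = [float("inf")] * n
    gradient[source_index] = 0

    def neighbours(i):
        xi, yi = positions[i]
        return [j for j in range(n) if j != i and
                math.hypot(positions[j][0] - xi, positions[j][1] - yi) <= comm_radius]

    # Repeat until no robot can lower its value: each robot sets its
    # gradient to 1 + the smallest gradient among its neighbours.
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i == source_index:
                continue
            vals = [gradient[j] for j in neighbours(i)]
            if vals and min(vals) + 1 < gradient[i]:
                gradient[i] = min(vals) + 1
                changed = True
    return gradient

# Five robots in a line, one unit apart, source at the left end:
# the gradients come out [0, 1, 2, 3, 4].
print(form_gradient([(i, 0) for i in range(5)], source_index=0, comm_radius=1.5))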

Every Kilobot runs on the same program. The team only has to give them a shape and nominate four of them as seeds. Once that’s done, the rest slowly pour into the right pattern, in an endearingly life-like way. It takes them around 12 hours, but they do it all without any human intervention. And although the final shapes are always a little warped, that’s life-like too. Fire ants don’t have a Platonic ideal of what a bridge or raft should look like; they just work with their neighbours to get the job done.

Stills from movies showing the Kilobots assembling into a K and a star. 
Credit: Michael Rubenstein, Harvard University.

Scientists have long been able to simulate huge swarms of life-like virtual particles in computers, using very simple rules. But the real world is full of pesky physics, inconvenient noise, and temperamental circuitry. Stuff goes wrong. By building an actual swarm, the team can address these problems and make their programs more robust. They’ve already had to deal with a litany of failed motors, stalled robots, collisions, and traffic jams. “The more times you run it, the more likely some random thing will show up that you don’t expect,” says Rubenstein. “That’s the problem with 1,000 robots: even rare things can happen very frequently.”

The next step will be to build robots that actually self-assemble by attaching to each other, says Marco Dorigo from the Free University of Brussels. “We did so with tens of robots,” he says. “It will not be easy with one thousand.” Rubenstein agrees: “Physical connection is always difficult. If you have a dock, you tend to design the rest of the robot around that dock. It has a huge impact.”

Eventually, he also wants to get to a position where the robots can sense their environment and react accordingly, rather than just slide into some pre-determined shape. Like fire ants, when they get to a body of water, they wouldn’t have to be fed the image of a bridge; they would just self-assemble into one. “That’s a whole other level of intelligence, and it’s not really understood how to do that in robotics,” says Rubenstein. “But nature does it well.”


Reference:
Rubenstein, M., Cornejo, A., & Nagpal, R. (2014). Programmable self-assembly in a thousand-robot swarm. Science, 345(6198), 795-799. http://dx.doi.org/10.1126/science.1254295


Thursday, June 12, 2014

Kurzweil Does Not Accept Victory in the Turing Test Bet

 

The other day, Kevin Warwick and his team at the University of Reading reported that a chatbot named Eugene Goostman had become the first artificial intelligence to pass the Turing Test.

For those who follow such things: inventor, futurist, and Google director of engineering Ray Kurzweil has a standing $20,000 wager with Mitch Kapor that a computer will pass the Turing Test by 2029. Based on the report cited above, it would appear Kurzweil has won the bet.

The only problem is that Kurzweil does not think so, which is not good news for the researchers and their bot.

Here is Kurzweil's statement from his blog:

Response by Ray Kurzweil to the Announcement of Chatbot Eugene Goostman Passing the Turing test

June 10, 2014 by Ray Kurzweil
Eugene Goostman chatbot. (credit: Vladimir Veselov and Eugene Demchenko)

Two days ago, on June 8, 2014, the University of Reading announced that a computer program “has passed the Turing test for the first time.”

University of Reading Professor Kevin Warwick described it this way:
“Some will claim that the test has already been passed. The words ‘Turing test’ have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s test was passed for the first time on Saturday.” — Kevin Warwick, PhD
I have had a long-term wager with Mitch Kapor in which I predicted that a computer program would pass the Turing test by 2029 and he predicted that this would not happen, see links below.

This was the first long-term wager on the “Long Now” website. The bet called for $20,000 to be donated from us to the charity of the winner’s choice.

As a result, messages have been streaming in from around the world congratulating me for having won the bet.

However, I think this is premature. I am disappointed that Professor Warwick, with whom I agree on many things, would make this statement. There are several problems that I describe below, including a transcript of a conversation that I had with Eugene Goostman, the chatbot in question.

In my 2005 book The Singularity Is Near, I anticipated that there would be premature announcements of this kind:
The Singularity Is Near, page 295 | Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed. In 2002 I negotiated the rules for a Turing test wager with Mitch Kapor on the Long Now website.
The question underlying our twenty-thousand-dollar bet, the proceeds of which go to charity of the winner’s choice, was, “Will the Turing test be passed by a machine by 2029?” I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager.
Simply defining “machine” and “human,” for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?
Because the definition of the Turing test will vary from person to person, Turing test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.
Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction. Moreover, the interactions were reportedly limited to five minutes each. There is a high likelihood of fooling naïve judges in such a brief period of time.

I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs. Here is the transcript.


Ray Kurzweil conversation with chatbot Eugene Goostman

Question: How old are you?
Eugene: I’m a little boy. 13 years old.
Question: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.
Question: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.
Question: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I did tell you where I live. I told you several times. Do you remember?
Eugene: Oh damn! Poor confused cyborg you are! Those malicious robots erased your memory! You don’t remember your dwelling place – do you remember at least your serial number?
Question: I remember. You are the one who did not remember.
Eugene: Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? :-))) If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: If I have three boxes with two marbles in each box how many marbles do I have?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. Some another topic?

This is fairly typical of the state of the art in chatbots, and I’ve seen better. We have a live chatbot named Ramona on the KurzweilAI website. Ramona can be amusing but we don’t claim that she passes the Turing test.
Ramona live chatbot on KurzweilAI website | “Chat with Ramona 4.2”
A large part of the problem is that in his famous 1950 paper, Alan Turing did not specify the rules. Here are the rules that Mitch Kapor and I painstakingly devised together, along with our essays on why we think each of us will win the wager.
The rules I devised with Mitch Kapor | “A Wager on the Turing test: the rules”
An explanation of rules behind the Turing test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.
Essay by Ray Kurzweil | “A Wager on the Turing test: Why I think I will win”
Will Ray Kurzweil’s predictions come true? He’s putting his money on it. Here’s why he thinks he will win a bet on the future of artificial intelligence. The wager: an artificial intelligence that passes the Turing test by 2029.
Essay by Mitch Kapor | “Why I think I will win”
Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.
Essay by Ray Kurzweil | “Response to Mitchell Kapor’s essay titled ‘Why I think I will win’”
Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will succeed, in this final counterpoint on the bet: an artificial intelligence will pass a Turing Test by 2029.
Apparently, we have now entered the era of premature announcements of a computer having passed Turing’s eponymous test. I continue to believe that with the right rules, this test is the right assessment of human-level intelligence in a machine.

In my 1989 book The Age of Intelligent Machines, I predicted that the milestone of a computer passing the Turing test would occur in the first half of the 21st century. I specified the 2029 date in my 1999 book The Age of Spiritual Machines. After that book was published, we had a conference at Stanford University and the consensus of AI experts at that time was that it would happen in hundreds of years, if ever.

In 2006 we had a conference called “AI at 50” at Dartmouth College, celebrating the 50th anniversary of the 1956 Dartmouth conference that gave artificial intelligence its name. We had instant polling devices, and the consensus among AI experts at that time was 25 to 50 years. Today, my prediction appears to be the median view. So, I am gratified that a growing group of people now think that I am being too conservative.


Tuesday, June 10, 2014

Computer Passes 'Turing Test' for the First Time After Convincing Users it Is Human


This would appear to be a huge breakthrough for artificial intelligence, so let's get a little deeper background to understand what this means, if anything, about machine intelligence.

Here is the basic definition of the Turing test (via Wikipedia):
The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2]

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[1][6]
And here is a little more, including some criticisms:

Weaknesses of the test


Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behavior with human behavior. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behavior and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.


Human intelligence vs intelligence in general 

The Turing test does not directly test whether the computer behaves intelligently - it tests only whether the computer behaves like a human being. Since human behavior and intelligent behavior are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behavior is unintelligent
The Turing test requires that the machine be able to execute all human behaviors, regardless of whether they are intelligent. It even tests for behaviors that we may not consider intelligent at all, such as the susceptibility to insults,[70] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviors in detail it fails the test. This objection was raised by The Economist, in an article entitled "Artificial Stupidity" published shortly after the first Loebner prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.[71]
Some intelligent behavior is inhuman
The Turing test does not test for highly intelligent behaviors, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test. Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used in order to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.[72]
Real intelligence vs simulated intelligence
See also: Synthetic intelligence
The Turing test is concerned strictly with how the subject acts — the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

John Searle has argued that external behavior cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)

Turing anticipated this line of criticism in his original paper,[73] writing:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[74]
Naivete of interrogators and the anthropomorphic fallacy

In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivete of the questioner.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]

Shah & Warwick (2009b) show that experts are fooled, and that interrogator strategy, "power" vs "solidarity" affects correct identification, the latter being more successful.

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[75]

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: they talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history. This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.
With that background, you can make sense of this new study as fits your worldview.

In my worldview, this does not mean much about computer intelligence. It does advance the foundation for future research.
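For concreteness, the pass criterion used in the Reading event reported below boils down to a simple proportion test: the program "passes" if it fools more than 30 per cent of the judges in five-minute text conversations. A minimal sketch of that criterion (my framing, not the organisers' code):

# Minimal sketch of the reported pass criterion: a program "passes"
# if more than 30% of judges mistake it for a human.
def passes_event(judge_fooled, threshold=0.30):
    """judge_fooled: list of booleans, True where a judge was fooled."""
    return sum(judge_fooled) / len(judge_fooled) > threshold

# Eugene Goostman reportedly fooled 33% of judges: 0.33 > 0.30, a "pass".
print(passes_event([True] * 33 + [False] * 67))  # True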

Computer passes 'Turing Test' for the first time after convincing users it is human

A "super computer" has duped humans into thinking it is a 13-year-old boy, becoming the first machine to pass the "iconic" Turing Test, experts say

By Hannah Furness
08 Jun 2014


Alan Turing Photo: AFP


Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was ''thinking''.

No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.

But ''Eugene Goostman'', a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.

Professor Kevin Warwick, from the University of Reading, said: ''In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test.

''It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.''

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.

Mr Veselov said: ''It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots.''

Prof Warwick said there had been previous claims that the test was passed in similar competitions around the world.

''A true Turing Test does not set the questions or topics prior to the conversations,'' he said.

''We are therefore proud to declare that Alan Turing's test was passed for the first time.''

Prof Warwick said having a computer with such artificial intelligence had ''implications for society'' and would serve as a ''wake-up call to cybercrime''.

The event on Saturday was poignant as it took place on the 60th anniversary of the death of Dr Turing, who laid the foundations of modern computing.

During the Second World War, his critical work at Britain's code-breaking centre at Bletchley Park helped shorten the conflict and save many thousands of lives.

Instead of being hailed a hero, Dr Turing was persecuted for his homosexuality. After his conviction in 1952 for gross indecency with a 19-year-old Manchester man, he was chemically castrated.

Two years later, he died from cyanide poisoning in an apparent suicide, though there have been suggestions that his death was an accident.

Last December, after a long campaign, Dr Turing was given a posthumous Royal Pardon.