Showing posts with label intelligent machines.

Sunday, January 18, 2015

EDGE Question 2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

It's time again for the annual EDGE question. For 2015 it is: What do you think about machines that think? Among this year's respondents are many authors and thinkers who work in psychology, neuroscience, philosophy, and consciousness research. Here are a few:

Stanislas Dehaene, Alison Gopnik, Thomas Metzinger, Bruce Sterling, Kevin Kelly, Sam Harris, Daniel Dennett, Andy Clark, Michael Shermer, Nicholas Humphrey, Gary Marcus, George Dyson, Paul Davies, Douglas Rushkoff, Helen Fisher, Stuart A. Kauffman, Robert Sapolsky, Maria Popova, Steven Pinker, and many others - 186 in all.

2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?



"Dahlia" by Katinka Matson | www.katinkamatson.com
_________________________________________________________________

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."   
THE EDGE QUESTION—2015 
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies: Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
_________________________________________________________________


[182 Responses—126,000 words:] Pamela McCorduck, George Church, James J. O'Donnell, Carlo Rovelli, Nick Bostrom, Daniel C. Dennett, Donald Hoffman, Roger Schank, Mark Pagel, Frank Wilczek, Robert Provine, Susan Blackmore, Haim Harari, Andy Clark, William Poundstone, Peter Norvig, Rodney Brooks, Jonathan Gottschall, Arnold Trehub, Giulio Boccaletti, Michael Shermer, Chris DiBona, Aubrey De Grey, Juan Enriquez, Satyajit Das, Quentin Hardy, Clifford Pickover, Nicholas Humphrey, Ross Anderson, Paul Saffo, Eric J. Topol, M.D., Dylan Evans, Roger Highfield, Gordon Kane, Melanie Swan, Richard Nisbett, Lee Smolin, Scott Atran, Stanislas Dehaene, Stephen Kosslyn, Emanuel Derman, Richard Thaler, Alison Gopnik, Ernst Pöppel, Luca De Biase, Margaret Levi, Terrence Sejnowski, Thomas Metzinger, D.A. Wallach, Leo Chalupa, Bruce Sterling, Kevin Kelly, Martin Seligman, Keith Devlin, S. Abbas Raza, Neil Gershenfeld, Daniel Everett, Douglas Coupland, Joshua Bongard, Ziyad Marar, Thomas Bass, Frank Tipler, Mario Livio, Marti Hearst, Randolph Nesse, Alex (Sandy) Pentland, Samuel Arbesman, Gerald Smallberg, John Mather, Ursula Martin, Kurt Gray, Gerd Gigerenzer, Kevin Slavin, Nicholas Carr, Timo Hannay, Kai Krause, Alun Anderson, Seth Lloyd, Mary Catherine Bateson, Steve Fuller, Virginia Heffernan, Barbara Strauch, Sean Carroll, Sheizaf Rafaeli, Edward Slingerland, Nicholas Christakis, Joichi Ito, David Christian, George Dyson, Paul Davies, Douglas Rushkoff, Tim O'Reilly, Irene Pepperberg, Helen Fisher, Stuart A. Kauffman, Stuart Russell, Tomaso Poggio, Robert Sapolsky, Maria Popova, Martin Rees, Lawrence M. Krauss, Jessica Tracy & Kristin Laurin, Paul Dolan, Kate Jeffery, June Gruber & Raul Saucedo, Bruce Schneier, Rebecca MacKinnon, Antony Garrett Lisi, Thomas Dietterich, John Markoff, Matthew Lieberman, Dimitar Sasselov, Michael Vassar, Gregory Paul, Hans Ulrich Obrist, Andrian Kreye, Andrés Roemer, N.J. 
Enfield, Rolf Dobelli, Nina Jablonski, Marcelo Gleiser, Gary Klein, Tor Nørretranders, David Gelernter, Cesar Hidalgo, Gary Marcus, Sam Harris, Molly Crockett, Abigail Marsh, Alexander Wissner-Gross, Koo Jeong-A, Sarah Demers, Richard Foreman, Julia Clarke, Georg Diez, Jaan Tallinn, Michael McCullough, Hans Halvorson, Kevin Hand, Christine Finn, Tom Griffiths, Dirk Helbing, Brian Knutson, John Tooby, Maximilian Schich, Athena Vouloumanos, Brian Christian, Timothy Taylor, Bruce Parker, Benjamin Bergen, Laurence Smith, Ian Bogost, W. Tecumseh Fitch, Michael Norton, Scott Draves, Gregory Benford, Chris Anderson, Raphael Bousso, Christopher Chabris, James Croak, Beatrice Golomb, Moshe Hoffman, Matt Ridley, Matthew Ritchie, Eduardo Salcedo-Albaran, Eldar Shafir, Maria Spiropulu, Tania Lombrozo, Bart Kosko, Joscha Bach, Esther Dyson, Anthony Aguirre, Steve Omohundro, Murray Shanahan, Eliezer Yudkowsky, Steven Pinker, Max Tegmark, Jon Kleinberg & Sendhil Mullainathan, Freeman Dyson, Brian Eno, W. Daniel Hillis, Katinka Matson

Tuesday, June 10, 2014

Computer Passes 'Turing Test' for the First Time After Convincing Users it Is Human


This would appear to be a huge breakthrough for artificial intelligence, so let's get a little deeper background to understand what this means, if anything, about machine intelligence.

Here is the basic definition of the Turing test (via Wikipedia):
The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2]

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[1][6]
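The imitation-game protocol described above is concrete enough to sketch in a few lines of Python. This is only an illustration: the function names are mine, and the "does no better than chance" pass criterion is a deliberate simplification of "cannot reliably tell", not Turing's own formulation.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions, rng):
    """One session: a judge questions two hidden parties over a
    text-only channel, then names the channel it thinks is the machine."""
    machine_channel = rng.choice(["A", "B"])   # hidden assignment
    transcripts = {"A": [], "B": []}
    for q in questions:
        for channel in ("A", "B"):
            responder = machine_reply if channel == machine_channel else human_reply
            transcripts[channel].append(responder(q))
    guess = judge(transcripts)                 # judge returns "A" or "B"
    return guess == machine_channel            # True = correct identification

def machine_passes(correct_identifications, sessions):
    # "Cannot reliably tell" is modeled here as: judges do no better
    # than chance. The 50% cutoff is an illustrative simplification.
    return correct_identifications / sessions <= 0.5
```

Note that the judge sees only the transcripts, never the parties themselves: that text-only separation is the whole point of the setup, since it screens off voice, appearance, and everything else except conversational behavior.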
And here is a little more, including some criticisms:

Weaknesses of the test


Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behavior with human behavior. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behavior and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.


Human intelligence vs intelligence in general 

The Turing test does not directly test whether the computer behaves intelligently - it tests only whether the computer behaves like a human being. Since human behavior and intelligent behavior are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behavior is unintelligent
The Turing test requires that the machine be able to execute all human behaviors, regardless of whether they are intelligent. It even tests for behaviors that we may not consider intelligent at all, such as the susceptibility to insults,[70] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviors in detail it fails the test. This objection was raised by The Economist, in an article entitled "Artificial Stupidity" published shortly after the first Loebner prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.[71]
Some intelligent behavior is inhuman
The Turing test does not test for highly intelligent behaviors, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test. Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used in order to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.[72]
Real intelligence vs simulated intelligence
See also: Synthetic intelligence
The Turing test is concerned strictly with how the subject acts — the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

John Searle has argued that external behavior cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)

Turing anticipated this line of criticism in his original paper,[73] writing:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[74]
Naivete of interrogators and the anthropomorphic fallacy

In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivete of the questioner.

Turing did not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]

Shah & Warwick (2009b) show that even experts can be fooled, and that interrogator strategy ("power" vs. "solidarity") affects correct identification, with the "solidarity" strategy being the more successful.

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[75]

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history. This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.
With that background, you can make sense of this new study as fits your worldview.

In my worldview, this does not mean much about computer intelligence. It does advance the foundation for future research.

Computer passes 'Turing Test' for the first time after convincing users it is human

A "super computer" has duped humans into thinking it is a 13-year-old boy, becoming the first machine to pass the "iconic" Turing Test, experts say

By Hannah Furness
08 Jun 2014


Alan Turing Photo: AFP

Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was ''thinking''.

No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.

But ''Eugene Goostman'', a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.
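The Reading organisers' pass criterion quoted above reduces to a simple proportion check. A trivial sketch (the function name is mine, not the organisers'):

```python
def cleared_reading_threshold(judges_duped, total_judges, threshold=0.30):
    """The Reading organisers' criterion: a machine 'passes' if more
    than 30% of interrogators misidentify it as human."""
    return judges_duped / total_judges > threshold
```

So a program that dupes 10 of 30 judges (33%) clears the bar, while one that dupes exactly 9 of 30 (30%) does not, since the criterion is strictly greater than 30 per cent. Note how much weaker this is than Turing's own framing: 33% of judges fooled still means two-thirds identified the machine correctly.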

Professor Kevin Warwick, from the University of Reading, said: ''In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test.

''It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.''

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.

Mr Veselov said: ''It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots.''

Prof Warwick said there had been previous claims that the test was passed in similar competitions around the world.

''A true Turing Test does not set the questions or topics prior to the conversations,'' he said.

''We are therefore proud to declare that Alan Turing's test was passed for the first time.''

Prof Warwick said having a computer with such artificial intelligence had ''implications for society'' and would serve as a ''wake-up call to cybercrime''.

The event on Saturday was poignant as it took place on the 60th anniversary of the death of Dr Turing, who laid the foundations of modern computing.

During the Second World War, his critical work at Britain's code-breaking centre at Bletchley Park helped shorten the conflict and save many thousands of lives.

Instead of being hailed a hero, Dr Turing was persecuted for his homosexuality. After his conviction in 1952 for gross indecency with a 19-year-old Manchester man, he was chemically castrated.

Two years later, he died from cyanide poisoning in an apparent suicide, though there have been suggestions that his death was an accident.

Last December, after a long campaign, Dr Turing was given a posthumous Royal Pardon.

Thursday, June 05, 2014

Who's Afraid of Robots? [UPDATED]

 

Once again, from Bookforum's Omnivore blog, here is a cool collection of links on all things robotic, from killer robots to ethical robots (U.S. military!). Oh, and it seems you would probably f**k a robot, according to Gawker. I don't know, it would at least have to buy me dinner and get me drunk....

UPDATE: This morning Aeon Magazine posted an interesting and highly related article on its site, "Sexbot slaves: Thanks to new technology, sex toys are becoming tools for connection - but will sexbots reverse that trend?" by Leah Reich. Here is a little of the article:
‘Right now, we’re at an inflection point on the meaning of sexbot,’ says Kyle Machulis, the California-based world expert on sex technology. ‘Tracing the history of the term will lead you to a fork: robots for sex (idealised version: Jude Law in the movie AI), and people that fetishise being robots (clockworks, etc). There was a crossover of these in the days of alt.sex.fetish.robots, but I see less and less people fetishising the media/aesthetics, and more talking about actually having sex with robots.’
Strange times we live in, eh?

Who's afraid of robots?

Jun 4 2014
9:00AM

Thursday, February 28, 2013

Bookforum Omnivore - The Age of Moral Machines

From Bookforum's Omnivore blog, a new collection of links that offer various perspectives on intelligent machines, from Ray Kurzweil's new project with Google to the not-too-far-away future of robots as "autonomous weapons," i.e., drone variations without the need for human navigators.


The age of moral machines

FEB 25 2013 
12:00PM


  • From Technology Review, Ray Kurzweil plans to create a mind at Google — and have it serve you (and more). 
  • From Transhumanity, Mark Waser on the “wicked problem” of existential risk with AI (artificial intelligence).
  • Colin Allen reviews The Machine Question: Critical Perspectives on AI, Robots, and Ethics by David J. Gunkel. 
  • Killer instinct: Advances in neuroscience and technology could lead to the mind becoming the ultimate weapon. 
  • Stephen Pincock on the rise of the (mini) machines: Mimicking nature, nanotechnology is creating machines that can self-assemble and take charge of their environment. 
  • Our robot children: At what point will we trust robots to kill? 
  • Killer robots must be stopped, say campaigners: “Autonomous weapons”, which could be ready within a decade, pose grave risk to international law. 
  • The age of moral machines: An interview with Josh Storrs Hall on nanotech, AI and the Singularity.

Tuesday, December 11, 2012

Intelligence and Machines: Creating Intelligent Machines by Modeling the Brain with Jeff Hawkins


Here is another video lecture from Jeff Hawkins on the creation of intelligent machines through the modeling of the human brain. This lecture comes via UCTV and was delivered at UC Berkeley.

Jeffrey Hawkins is the founder of Palm Computing and Handspring. He has since turned to neuroscience full-time, founding the Redwood Center for Theoretical Neuroscience in 2002. He is the author of On Intelligence, written with Sandra Blakeslee (a science writer who gets her name on the cover, too).

For the record, and for the Nth time: we will never model the human brain in any real sense unless we can also model the human body in which that brain resides. Personality, identity, and consciousness are not confined to the brain; we are embodied creatures embedded in biopsychosocial environments. We can't even fully understand the body (or the brain, for that matter), so there is little chance we will model it anytime soon.

Granted, this does not preclude creating intelligent machines, but they will be intelligent in a far different way than we are - and that might not be for the best.


Intelligence and Machines: Creating Intelligent Machines by Modeling the Brain with Jeff Hawkins
Are intelligent machines possible? If they are, what will they be like? Jeff Hawkins, an inventor, engineer, neuroscientist, author and entrepreneur, frames these questions by reviewing some of the efforts to build intelligent machines. He posits that machine intelligence is only possible by first understanding how the brain works and then building systems that work on the same principles. He describes Numenta's work using neocortical models to understand the torrent of machine-generated data being created today. He will conclude with predictions on how machine intelligence will unfold in the near and long term future and why creating intelligent machines is important for humanity. Series: "UC Berkeley Graduate Council Lectures" [12/2012]

Tuesday, December 04, 2012

Alva Noë - We Are the Singularity

From NPR's 13.7 Cosmos and Culture blog, philosopher Alva Noë argues that the singularity everyone in the transhumanism community is waiting for has already arrived.

 The human machine in action.
Will machines one day take over the world?

Yes. In fact, they already have.

I don't mean auto-trading computers on Wall Street, unmanned weapons systems, Deep Blue and the World Wide Web.

I mean us. We are machines.

From the dawn of our history we have amplified ourselves with tools. We have cultivated skills that take our basic body schema and extend it out into the world with sticks and rakes and then arrows and guns and rails and phones. Where do you find yourself? Spread out all over the universe. And it has always been so, really.

We don't just change our body-power — what we can do, how we can move, where we can go, how fast, how far, how strong. No, we change our mind-power. Or rather, we ourselves are the result of processes of artificial reorganization of cognition and consciousness.

Artificial intelligence? That's us. Naturally artificial. We think with tokens, symbols, artifacts. Words, spoken and written, are our medium, and together with pictures, moving and still, and other kinds of images, they structure our minds. Look inward, reach deep inside and you find a universe within that is made out of the currency of our shared, social lives together.

Ask again: where do you find yourself? Spread out all over the universe.

Futurists and physicists (on this blog!) see the singularity coming — a future in which intelligent machines take over. But the singularity has already happened. We are the singularity.


You can keep up with more of what Alva Noë is thinking on Facebook and on Twitter: @alvanoe

Monday, December 03, 2012

UCTV - On Intelligence with Jeff Hawkins


Conversations with History host Harry Kreisler welcomes Jeff Hawkins, founder of both Palm Computing and Handspring and creator of the Redwood Neuroscience Institute, which promotes research on memory and cognition. Hawkins traces his intellectual journey, focusing on his lifelong passion to develop a theory of the brain. Hawkins explicates the brain's operating principles and explores the implications of human intelligence for engineering intelligent machines, the goal of his new company Numenta. Hawkins (along with Sandra Blakeslee) wrote On Intelligence (2004).