Showing posts with label computers.

Sunday, January 18, 2015

EDGE Question 2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

It's time again for the annual EDGE question. For 2015 it is: What do you think about machines that think? Among this year's respondents are many authors and thinkers who work in psychology, neuroscience, philosophy, and consciousness research. Here are a few:

Stanislas Dehaene, Alison Gopnik, Thomas Metzinger, Bruce Sterling, Kevin Kelly, Sam Harris, Daniel Dennett, Andy Clark, Michael Shermer, Nicholas Humphrey, Gary Marcus, George Dyson, Paul Davies, Douglas Rushkoff, Helen Fisher, Stuart A. Kauffman, Robert Sapolsky, Maria Popova, Steven Pinker, and many others - 186 in all.

2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?



"Dahlia" by Katinka Matson |  Click to Expand www.katinkamatson.com
_________________________________________________________________

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."   
THE EDGE QUESTION—2015 
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
_________________________________________________________________


[182 Responses—126,000 words:] Pamela McCorduck, George Church, James J. O'Donnell, Carlo Rovelli, Nick Bostrom, Daniel C. Dennett, Donald Hoffman, Roger Schank, Mark Pagel, Frank Wilczek, Robert Provine, Susan Blackmore, Haim Harari, Andy Clark, William Poundstone, Peter Norvig, Rodney Brooks, Jonathan Gottschall, Arnold Trehub, Giulio Boccaletti, Michael Shermer, Chris DiBona, Aubrey De Grey, Juan Enriquez, Satyajit Das, Quentin Hardy, Clifford Pickover, Nicholas Humphrey, Ross Anderson, Paul Saffo, Eric J. Topol, M.D., Dylan Evans, Roger Highfield, Gordon Kane, Melanie Swan, Richard Nisbett, Lee Smolin, Scott Atran, Stanislas Dehaene, Stephen Kosslyn, Emanuel Derman, Richard Thaler, Alison Gopnik, Ernst Pöppel, Luca De Biase, Margaret Levi, Terrence Sejnowski, Thomas Metzinger, D.A. Wallach, Leo Chalupa, Bruce Sterling, Kevin Kelly, Martin Seligman, Keith Devlin, S. Abbas Raza, Neil Gershenfeld, Daniel Everett, Douglas Coupland, Joshua Bongard, Ziyad Marar, Thomas Bass, Frank Tipler, Mario Livio, Marti Hearst, Randolph Nesse, Alex (Sandy) Pentland, Samuel Arbesman, Gerald Smallberg, John Mather, Ursula Martin, Kurt Gray, Gerd Gigerenzer, Kevin Slavin, Nicholas Carr, Timo Hannay, Kai Krause, Alun Anderson, Seth Lloyd, Mary Catherine Bateson, Steve Fuller, Virginia Heffernan, Barbara Strauch, Sean Carroll, Sheizaf Rafaeli, Edward Slingerland, Nicholas Christakis, Joichi Ito, David Christian, George Dyson, Paul Davies, Douglas Rushkoff, Tim O'Reilly, Irene Pepperberg, Helen Fisher, Stuart A. Kauffman, Stuart Russell, Tomaso Poggio, Robert Sapolsky, Maria Popova, Martin Rees, Lawrence M. Krauss, Jessica Tracy & Kristin Laurin, Paul Dolan, Kate Jeffery, June Gruber & Raul Saucedo, Bruce Schneier, Rebecca MacKinnon, Antony Garrett Lisi, Thomas Dietterich, John Markoff, Matthew Lieberman, Dimitar Sasselov, Michael Vassar, Gregory Paul, Hans Ulrich Obrist, Andrian Kreye, Andrés Roemer, N.J. Enfield, Rolf Dobelli, Nina Jablonski, Marcelo Gleiser, Gary Klein, Tor Nørretranders, David Gelernter, Cesar Hidalgo, Gary Marcus, Sam Harris, Molly Crockett, Abigail Marsh, Alexander Wissner-Gross, Koo Jeong-A, Sarah Demers, Richard Foreman, Julia Clarke, Georg Diez, Jaan Tallinn, Michael McCullough, Hans Halvorson, Kevin Hand, Christine Finn, Tom Griffiths, Dirk Helbing, Brian Knutson, John Tooby, Maximilian Schich, Athena Vouloumanos, Brian Christian, Timothy Taylor, Bruce Parker, Benjamin Bergen, Laurence Smith, Ian Bogost, W. Tecumseh Fitch, Michael Norton, Scott Draves, Gregory Benford, Chris Anderson, Raphael Bousso, Christopher Chabris, James Croak, Beatrice Golomb, Moshe Hoffman, Matt Ridley, Matthew Ritchie, Eduardo Salcedo-Albaran, Eldar Shafir, Maria Spiropulu, Tania Lombrozo, Bart Kosko, Joscha Bach, Esther Dyson, Anthony Aguirre, Steve Omohundro, Murray Shanahan, Eliezer Yudkowsky, Steven Pinker, Max Tegmark, Jon Kleinberg & Sendhil Mullainathan, Freeman Dyson, Brian Eno, W. Daniel Hillis, Katinka Matson

Thursday, December 11, 2014

Perspectives on Artificial Intelligence

Artificial intelligence.

Kevin Kelly  
 
Conversations at the Edge 2.3.14
 
* * * * * 

Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review

Will technology remain our slave? Caspar Henderson on two attempts to read the future for humanity

Caspar Henderson | The Guardian
Thursday 17 July 2014

* * * * *

What Your Computer Can’t Know


John R. Searle | New York Review of Books
October 9, 2014
The 4th Revolution: How the Infosphere Is Reshaping Human Reality
by Luciano Floridi
Oxford University Press, 248 pp., $27.95

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Oxford University Press, 328 pp., $29.95

* * * * *

Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong

By Lee Gomes | IEEE Spectrum
Posted 20 Oct 2014

* * * * *

The Myth Of AI

A Conversation with Jaron Lanier

Conversations at the Edge 11.14.14

* * * * *

Artificial Intelligence, Really, Is Pseudo-Intelligence


Alva Noë | NPR 13.7 Cosmos and Culture Blog
November 21, 2014

* * * * *

Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen wonders: If the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

By Kurt Andersen
November 26, 2014

* * * * *

Is AI a Myth?


By Rick Searle | IEET
Utopia or Dystopia
Nov 30, 2014

* * * * *

Stephen Hawking warns artificial intelligence could end mankind


By Rory Cellan-Jones
BBC News | 2 December 2014

Thursday, October 16, 2014

Nicholas Carr | The Glass Cage: Automation and Us (Talks at Google)


Technology writer Nicholas Carr has a new book out, The Glass Cage: Automation and Us, that both celebrates technology and offers more warnings about its impact on human beings. His previous books include The Big Switch: Rewiring the World, from Edison to Google (2008) and The Shallows: What the Internet Is Doing to Our Brains (2010).

Here is the publisher's ad copy from Amazon:
At once a celebration of technology and a warning about its misuse, The Glass Cage will change the way you think about the tools you use every day.

In The Glass Cage, best-selling author Nicholas Carr digs behind the headlines about factory robots and self-driving cars, wearable computers and digitized medicine, as he explores the hidden costs of granting software dominion over our work and our leisure. Even as they bring ease to our lives, these programs are stealing something essential from us. 
Drawing on psychological and neurological studies that underscore how tightly people’s happiness and satisfaction are tied to performing hard work in the real world, Carr reveals something we already suspect: shifting our attention to computer screens can leave us disengaged and discontented.

From nineteenth-century textile mills to the cockpits of modern jets, from the frozen hunting grounds of Inuit tribes to the sterile landscapes of GPS maps, The Glass Cage explores the impact of automation from a deeply human perspective, examining the personal as well as the economic consequences of our growing dependence on computers.

With a characteristic blend of history and philosophy, poetry and science, Carr takes us on a journey from the work and early theory of Adam Smith and Alfred North Whitehead to the latest research into human attention, memory, and happiness, culminating in a moving meditation on how we can use technology to expand the human experience.
Carr stopped by Google recently to discuss his new book, which seems kind of like stepping into a Hell's Angels chapter meeting to discuss the perils of drugs and violence (he is also the author of Is Google Making Us Stupid?).

Nicholas Carr | The Glass Cage: Automation and Us


Talks at Google
Published on Oct 14, 2014



Nicholas Carr writes about technology and culture. His latest book, The Glass Cage: Automation and Us, asks:

What kind of world are we building for ourselves? That’s the question bestselling author Nicholas Carr tackles in this urgent, absorbing book on the human consequences of automation. At once a celebration of technology and a warning about its misuse, The Glass Cage will change the way you think about the tools you use every day.

With a characteristic blend of history and philosophy, poetry and science, Carr takes us on a journey from the work and early theory of Adam Smith and Alfred North Whitehead to the latest research into human attention, memory, and happiness, culminating in a moving meditation on how we can use technology to expand the human experience.

Tuesday, June 10, 2014

Computer Passes 'Turing Test' for the First Time After Convincing Users it Is Human


This would appear to be a huge breakthrough for artificial intelligence, so let's dig into a little background to understand what, if anything, this means about machine intelligence.

Here is the basic definition of the Turing test (via Wikipedia):
The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2]

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[1][6]
And here is a little more, including some criticisms:

Weaknesses of the test


Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behavior with human behavior. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behavior and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.


Human intelligence vs intelligence in general 

The Turing test does not directly test whether the computer behaves intelligently - it tests only whether the computer behaves like a human being. Since human behavior and intelligent behavior are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behavior is unintelligent
The Turing test requires that the machine be able to execute all human behaviors, regardless of whether they are intelligent. It even tests for behaviors that we may not consider intelligent at all, such as the susceptibility to insults,[70] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviors in detail it fails the test. This objection was raised by The Economist, in an article entitled "Artificial Stupidity" published shortly after the first Loebner prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.[71]
Some intelligent behavior is inhuman
The Turing test does not test for highly intelligent behaviors, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test. Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used in order to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.[72]
Real intelligence vs simulated intelligence
See also: Synthetic intelligence
The Turing test is concerned strictly with how the subject acts — the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

John Searle has argued that external behavior cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)

Turing anticipated this line of criticism in his original paper,[73] writing:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[74]
Naivete of interrogators and the anthropomorphic fallacy

In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivete of the questioner.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]

Shah & Warwick (2009b) show that experts are fooled, and that interrogator strategy ("power" vs "solidarity") affects correct identification, the latter being more successful.

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[75]

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history. This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.
With that background, you can make sense of this new study however it fits your worldview.

In my worldview, this does not say much about computer intelligence, though it does lay a foundation for future research.
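
Before the article itself, here is a minimal sketch in Python of how the pass criterion works in practice. The numbers below are invented for illustration, not data from the actual event: each judge session ends in a human-or-machine verdict, and under the criterion described in the article the machine "passes" if it is misidentified as human in at least 30 per cent of sessions.

import random

# Toy tally of an imitation-game run. All numbers here are invented for
# illustration; a real event would record one verdict per judge session.
def passes_turing_test(verdicts, threshold=0.30):
    # verdicts: True means the judge mistook the machine for a human
    fooled = sum(verdicts) / len(verdicts)
    return fooled, fooled >= threshold

# Simulate 30 five-minute sessions in which the machine fools each judge
# with probability 0.33 (roughly the rate reported for "Eugene Goostman").
random.seed(0)
verdicts = [random.random() < 0.33 for _ in range(30)]
rate, passed = passes_turing_test(verdicts)
print("judged human in {:.0%} of sessions -> {}".format(rate, "pass" if passed else "fail"))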

Computer passes 'Turing Test' for the first time after convincing users it is human

A "super computer" has duped humans into thinking it is a 13-year-old boy, becoming the first machine to pass the "iconic" Turing Test, experts say

By Hannah Furness
08 Jun 2014


Alan Turing Photo: AFP

A "super computer" has duped humans into thinking it is a 13-year-old boy to become the first machine to pass the "iconic" Turing Test, experts have said.

Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was "thinking".

No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.

But "Eugene Goostman", a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.

Professor Kevin Warwick, from the University of Reading, said: "In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test.

"It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting."

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.

Mr Veselov said: "It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots."

Prof Warwick said there had been previous claims that the test was passed in similar competitions around the world.

"A true Turing Test does not set the questions or topics prior to the conversations," he said.

"We are therefore proud to declare that Alan Turing's test was passed for the first time."

Prof Warwick said having a computer with such artificial intelligence had "implications for society" and would serve as a "wake-up call to cybercrime".

The event on Saturday was poignant as it took place on the 60th anniversary of the death of Dr Turing, who laid the foundations of modern computing.

During the Second World War, his critical work at Britain's code-breaking centre at Bletchley Park helped shorten the conflict and save many thousands of lives.

Instead of being hailed a hero, Dr Turing was persecuted for his homosexuality. After his conviction in 1952 for gross indecency with a 19-year-old Manchester man, he was chemically castrated.

Two years later, he died from cyanide poisoning in an apparent suicide, though there have been suggestions that his death was an accident.

Last December, after a long campaign, Dr Turing was given a posthumous Royal Pardon.

Monday, April 14, 2014

A.I. Has Grown Up and Left Home


As my regular readers well know, I don't think we will ever have human-like robots who can interact with us as though they are not machines. This article from Nautilus presents recent advances in what are known as subsymbolic approaches to AI: "trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all."

A.I. Has Grown Up and Left Home

It matters only that we think, not how we think.

By David Auerbach | Illustration by Olimpia Zagnoli | December 19, 2013

"The history of Artificial Intelligence,” said my computer science professor on the first day of class, “is a history of failure.” This harsh judgment summed up 50 years of trying to get computers to think. Sure, they could crunch numbers a billion times faster in 2000 than they could in 1950, but computer science pioneer and genius Alan Turing had predicted in 1950 that machines would be thinking by 2000: Capable of human levels of creativity, problem solving, personality, and adaptive behavior. Maybe they wouldn’t be conscious (that question is for the philosophers), but they would have personalities and motivations, like Robbie the Robot or HAL 9000. Not only did we miss the deadline, but we don’t even seem to be close. And this is a double failure, because it also means that we don’t understand what thinking really is.

Our approach to thinking, from the early days of the computer era, focused on the question of how to represent the knowledge about which thoughts are thought, and the rules that operate on that knowledge. So when advances in technology made artificial intelligence a viable field in the 1940s and 1950s, researchers turned to formal symbolic processes. After all, it seemed easy to represent “There’s a cat on the mat” in terms of symbols and logic:
∃x ∃y (Cat(x) ∧ Mat(y) ∧ SittingOn(x, y))
Literally translated, this reads as “there exists variable x and variable y such that x is a cat, y is a mat, and x is sitting on y.” Which is no doubt part of the puzzle. But does this get us close to understanding what it is to think that there is a cat sitting on the mat? The answer has turned out to be “no,” in part because of those constants in the equation. “Cat,” “mat,” and “sitting” aren’t as simple as they seem. Stripping them of their relationship to real-world objects, and all of the complexity that entails, dooms the project of making anything resembling a human thought.

This lack of context was also the Achilles heel of the final attempted moonshot of symbolic artificial intelligence. The Cyc Project was a decades-long effort, begun in 1984, that attempted to create a general-purpose “expert system” that understood everything about the world. A team of researchers under the direction of Douglas Lenat set about manually coding a comprehensive store of general knowledge. What it boiled down to was the formal representation of millions of rules, such as “Cats have four legs” and “Richard Nixon was the 37th President of the United States.” Using formal logic, the Cyc (from “encyclopedia”) knowledge base could then draw inferences. For example, it could conclude that the author of Ulysses was less than 8 feet tall:

(implies
  (writtenBy Ulysses-Book ?SPEAKER)
  (equals ?SPEAKER JamesJoyce))
(isa JamesJoyce IrishCitizen)
(isa JamesJoyce Human)
(implies
  (isa ?SOMEONE Human)
  (maximumHeightInFeet ?SOMEONE 8))
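
To make that inference pattern concrete, here is a minimal sketch in Python of the same Joyce example. This is my own toy forward-chainer, not Cyc's actual engine: facts are tuples, a rule maps existing facts to new ones, and a loop applies the rules until nothing new can be derived.

# Minimal forward-chaining sketch of the "author of Ulysses is under 8 feet
# tall" inference. Predicate names mirror the CycL above, but this tiny
# engine is a toy, not Cyc.
facts = {
    ("writtenBy", "Ulysses-Book", "JamesJoyce"),
    ("isa", "JamesJoyce", "IrishCitizen"),
    ("isa", "JamesJoyce", "Human"),
}

# (implies (isa ?SOMEONE Human) (maximumHeightInFeet ?SOMEONE 8))
def humans_are_under_eight_feet(facts):
    return {("maximumHeightInFeet", who, 8)
            for (pred, who, what) in facts
            if pred == "isa" and what == "Human"}

# Forward chaining: keep applying rules until no new facts appear.
rules = [humans_are_under_eight_feet]
while True:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    if not new:
        break
    facts |= new

# Query: who wrote Ulysses, and what height bound can we derive for them?
author = next(a for (p, work, a) in facts if p == "writtenBy" and work == "Ulysses-Book")
print(author, ("maximumHeightInFeet", author, 8) in facts)   # JamesJoyce True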

Unfortunately, not all facts are so clear-cut. Take the statement “Cats have four legs.” Some cats have three legs, and perhaps there is some mutant cat with five legs out there. (And Cat Stevens only has two legs.) So Cyc needed a more complicated rule, like “Most cats have four legs, but some cats can have fewer due to injuries, and it’s not out of the realm of possibility that a cat could have more than four legs.” Specifying both rules and their exceptions led to a snowballing programming burden.

After more than 25 years, Cyc now contains 5 million assertions. Lenat has said that 100 million would be required before Cyc would be able to reason like a human does. No significant applications of its knowledge base currently exist, but in a sign of the times, the project in recent years has begun developing a “Terrorist Knowledge Base.” Lenat announced in 2003 that Cyc had “predicted” the anthrax mail attacks six months before they had occurred. This feat is less impressive when you consider the other predictions Cyc had made, including the possibility that Al Qaeda might bomb the Hoover Dam using trained dolphins.

Cyc, and the formal symbolic logic on which it rested, implicitly make a crucial and troublesome assumption about thinking. By gathering together in a single virtual “space” all of the information and relationships relevant to a particular thought, the symbolic approach pursues what Daniel Dennett has called a “Cartesian theater”—a kind of home for consciousness and thinking. It is in this theater that the various strands necessary for a thought are gathered, combined, and transformed in the right kinds of ways, whatever those may be. In Dennett’s words, the theater is necessary to the “view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of ‘presentation’ in experience because what happens there is what you are conscious of.” The theater, he goes on to say, is a remnant of a mind-body dualism which most modern philosophers have sworn off, but which subtly persists in our thinking about consciousness.

The impetus to believe in something like the Cartesian theater is clear. We humans, more or less, behave like unified rational agents, with a linear style of thinking. And so, since we think of ourselves as unified, we tend to reduce ourselves not to a single body but to a single thinker, some “ghost in the machine” that animates and controls our biological body. It doesn’t have to be in the head—the Greeks put the spirit (thymos) in the chest and the breath—but it remains a single, indivisible entity, our soul living in the house of the senses and memory. Therefore, if we can be boiled down to an indivisible entity, surely that entity must be contained or located somewhere.
Philosophy of mind: René Descartes’ illustration of dualism. (Wikimedia Commons)
This has prompted much research looking for “the area” where thought happens. Descartes hypothesized that our immortal soul interacted with our animal brain through the pineal gland. Today, studies of brain-damaged patients (as Oliver Sacks has chronicled in his books) have shown how functioning is corrupted by damage to different parts of the brain. We know facts like, language processing occurs in Broca’s area in the frontal lobe of the left hemisphere. But some patients with their Broca’s area destroyed can still understand language, due to the immense neuroplasticity of the brain. And language, in turn, is just a part of what we call “thinking.” If we can’t even pin down where the brain processes language, we are a far way from locating that mysterious entity, “consciousness.” That may be because it doesn’t exist in a spot you can point at.

Symbolic artificial intelligence, the Cartesian theater, and the shadows of mind-body dualism plagued the early decades of research into consciousness and thinking. But eventually researchers began to throw the yoke off. Around 1960, linguistics pioneer Noam Chomsky made a bold argument: Forget about meaning, forget about thinking, just focus on syntax. He claimed that linguistic syntax could be represented formally, was a computational problem, and was universal to all humans and hard-coded into every baby’s head. The process of exposure to language caused certain switches to be flipped on or off to determine what particular form the grammar would take (English, Chinese, Inuit, and so on). But the process was one of selection, not acquisition. The rules of grammar, however they were implemented, became the target of research programs around the world, supplanting a search for “the home of thought.”

Chomsky made progress by abandoning the attempt to directly explain meaning and thought. But he remained firmly inside the Cartesian camp. His theories were symbolic in nature, postulating relationships among a variety of potential vocabularies rooted in native rational faculties, and never making any predictions that proved true without exception. Modern artificial intelligence programs have gone one step further, by giving up on the idea of any form of knowledge representation. These so-called subsymbolic approaches, which also go under such names as connectionism, neural networks, and parallel distributed processing, take a unique approach. Rather than going from the inside out—injecting symbolic “thoughts” into computer code and praying that the program will exhibit sufficiently human-like thinking—subsymbolic approaches proceed from the outside in: Trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all.

Subsymbolic approaches were pioneered in the late 1950s and 1960s, but lay fallow for years because they initially seemed to generate worse results than symbolic approaches. In 1957, Frank Rosenblatt pioneered what he called the “perceptron,” which used a re-entrant feedback algorithm in order to “train” itself to compute various logical functions correctly, and thereby “learn” in the loosest sense of the term. This approach was also called “connectionism” and gave rise to the term “neural networks,” though a perceptron is vastly simpler than an actual neuron. Rosenblatt was drawing on oddball cybernetic pioneers like Norbert Wiener, Warren McCulloch, Ross Ashby, and Grey Walter, who theorized and even experimented with homeostatic machines that sought equilibrium with their environment, such as Grey Walter’s light-seeking robotic “turtles” and Claude Shannon’s maze-running “rats.”
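
To show how simple Rosenblatt's building block really is, here is a minimal sketch in Python (my own illustration, not his original implementation) of a single perceptron training itself to compute logical AND by nudging its weights whenever it answers wrongly.

# A single perceptron "training" itself to compute logical AND.
# Weights start at zero and are nudged whenever the output is wrong --
# the feedback loop Rosenblatt described, stripped to its bare bones.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                      # AND truth table

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias term
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0   # threshold unit
        error = t - y
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b    += lr * error

for x1, x2 in inputs:
    print(x1, x2, '->', 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)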

In 1969, Rosenblatt was met with a scathing attack by symbolic artificial intelligence advocate Marvin Minsky. The attack was so successful that subsymbolic approaches were more or less abandoned during the 1970s, a time which has been called the AI Winter. As symbolic approaches continued to flail in the 1970s and 1980s, people like Terrence Sejnowski and David Rumelhart returned to subsymbolic artificial intelligence, modeling it after learning in biological systems. They studied how simple organisms relate to their environment, and how the evolution of these organisms gradually built up increasingly complex behavior. Biology, genetics, and neuropsychology are what figured here, rather than logic and ontology.

This approach more or less abandons knowledge as a starting point. In contrast to Chomsky, a subsymbolic approach to grammar would say that grammar is determined and conditioned by environmental and organismic constraints (what psychologist Joshua Hartshorne calls “design constraints”), not by a set of hardcoded computational rules in the brain. These constraints aren’t expressed in strictly formal terms. Rather, they are looser contextual demands such as, “There must be a way for an organism to refer to itself” and “There must be a way to express a change in the environment.”

By abandoning the search for a Cartesian theater, containing a library of symbols and rules, researchers made the leap from instilling machines with data, to instilling them with knowledge. The essential truth behind subsymbolism is that language and behavior exist in relation to an environment, not in a vacuum, and they gain meaning from their usage in that environment. To use language is to use it for some purpose. To behave is to behave for some end. In this view, any attempt to generate a universal set of rules will always be riddled with exceptions, because contexts are constantly shifting. Without the drive toward concrete environmental goals, representation of knowledge in a computer is meaningless, and fruitless. It remains locked in the realm of data.


For certain classes of problems, modern subsymbolic approaches have proved far more generalizable and ubiquitous than any previous symbolic approach to the same problems. This success speaks to the advantage of not worrying about whether a computer “knows” or “understands” the problem it is working on. For example, genetic approaches represent algorithms with varying parameters as chromosomal “strings,” and “breed” successful algorithms with one another. These approaches do not improve through better understanding of the problem. All that matters is the fitness of the algorithm with respect to its environment—in other words, how the algorithm behaves. This black-box approach has yielded successful applications in everything from bioinformatics to economics, yet one can never give a concise explanation of just why the fittest algorithm is the most fit.
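
Here is a minimal sketch in Python of that "breeding" idea, a generic toy rather than any system the article describes: candidate solutions are bit strings, fitness is simply the number of ones, and fitter strings are recombined and mutated into the next generation without the program ever "understanding" what it is optimizing.

import random

# Toy genetic algorithm: evolve a 20-bit string toward all ones.
# Fitness is just the count of ones -- the algorithm never "understands"
# the problem; it only breeds whatever happens to score well.
random.seed(1)
LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - bit if random.random() < MUTATION else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                  # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), "/", LENGTH)   # typically at or near 20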

Neural networks are another successful subsymbolic technology, and are used for image, facial, and voice recognition applications. No representation of concepts is hardcoded into them, and the factors that they use to identify a particular subclass of images emerge from the operation of the algorithm itself. They can also be surprising: Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images.

These networks are usually “primed” with test data, so that they can refine their recognition skills on carefully selected samples. Humans are often involved in assembling this test data, in which case the learning environment is called “supervised learning.” But even the requirement for training is being left behind. Influenced by theories arguing that parts of the brain are specifically devoted to identifying particular types of visual imagery, such as faces or hands, a 2012 paper by Stanford and Google computer scientists showed some progress in getting a neural network to identify faces without priming data, among images that both did and did not contain faces. Nowhere in the programming was any explicit designation made of what constituted a “face.” The network evolved this category on its own. It did the same for “cat faces” and “human bodies” with similar success rates (about 80 percent).

While the successes behind subsymbolic artificial intelligence are impressive, there is a catch that is very nearly Faustian: The terms of success may prohibit any insight into how thinking “works,” but instead will confirm that there is no secret to be had—at least not in the way that we’ve historically conceived of it. It is increasingly clear that the Cartesian model is nothing more than a convenient abstraction, a shorthand for irreducibly complex operations that somehow (we don’t know how) give the appearance, both to ourselves and to others, of thinking. New models for artificial intelligence ask us to, in the words of philosopher Thomas Metzinger, rid ourselves of an “Ego Tunnel,” and understand that, while our sense of self dominates our thoughts, it does not dominate our brains.

Instead of locating where in our brains we have the concept of “face,” we have made a computer whose code also seems to lack the concept of “face.” Surprisingly, this approach succeeds where others have failed, giving the computer an inkling of the very idea whose explicit definition we gave up on trying to communicate. In moving out of our preconceived notion of the home of thought, we have gained in proportion not just a new level of artificial intelligence, but perhaps also a kind of self-knowledge.

David Auerbach is a writer and software engineer who lives in New York. He writes the Bitwise column for Slate.

Friday, March 21, 2014

Michel Maharbiz - Cyborg Insects and Other Things: Building Interfaces Between the Synthetic and Multicellular


Via UCTV and the University of California at Berkeley, this video talk by Michel Maharbiz (faculty webpage) takes a look at the future of cyborg technology, especially in insects. His work is in developing electronic interfaces to cells, to organisms, and to brains.

Professor Maharbiz (personal webpage) is:
Associate professor of Electrical Engineering and Computer Sciences at UC Berkeley. His current research centers on building micro/nano interfaces to cells and organisms and exploring bio-derived fabrication methods. His research group is also known for developing the world’s first remotely radio-controlled cyborg beetles; this was named one of the top 10 emerging technologies of 2009 by MIT’s Technology Review (TR10) and was among Time magazine’s Top 50 Inventions of 2009. His long-term goal is understanding developmental mechanisms as a way to engineer and fabricate machines. He received his Ph.D. in 2003 from UC Berkeley for his work on microbioreactor systems, which led to the foundation of Microreactor Technologies Inc., which was recently acquired by Pall Corporation.
This technology is both very cool and kind of creepy. I really hate flying beetle-type insects - and now, I'm guessing, we can turn them into drones.

Cyborg Insects and Other Things: Building Interfaces Between the Synthetic and Multicellular

Published on Mar 10, 2014


Prof. Michel Maharbiz presents an overview of ongoing exploration of the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems; recent results with pupally-implanted neural interfaces and extreme miniaturization directions.

Tuesday, March 11, 2014

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

Recently, I shared an article from George Dvorsky called "You Might Never Upload Your Brain Into a Computer," in which he outlined a series of reasons for his position:
1. Brain functions are not computable
2. We’ll never solve the hard problem of consciousness
3. We’ll never solve the binding problem
4. Panpsychism is true
5. Mind-body dualism is true
6. It would be unethical to develop
7. We can never be sure it works
8. Uploaded minds would be vulnerable to hacking and abuse
While I disagree with at least two of his points (I am not convinced panpsychism is true and I am VERY skeptical of mind-body dualism), I applaud the spirit of the exercise.

Likewise, this recent post from John Grohol at Psych Central's World of Psychology calls out futurist Ray Kurzweil on his claims around computer sentience, i.e., the singularity.

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

By John M. Grohol, Psy.D.



“When Kurzweil first started talking about the ‘singularity’, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human – will be passed in 2029.”

Sorry, but Ray Kurzweil is wrong. It’s easy to understand why computers are nowhere near close to surpassing humans… And here’s why.

Intelligence is one thing. But it’s probably the pinnacle of human narcissism to believe that we could design machines to understand us long before we even understood ourselves. Shakespeare, after all, said, “Know thyself.”

Yet, here it is squarely in 2014, and we still have only an inkling of how the human brain works. The very font of our intelligence and existence is contained in the brain — a simple human organ like the heart. Yet we don’t know how it works. All we have are theories.

Let me reiterate this: We don’t know how the human brain works.

How can anyone in their right mind say that, after a century of study into how the brain operates, we’re suddenly going to crack the code in the next 15 years?

And crack the code one must. Without understanding how the brain works, it’s ludicrous to say we could design a machine to replicate the brain’s near-instantaneous processing of hundreds of different sensory inputs from dozens of trajectories. That would be akin to saying we could design a space craft to travel to the moon, before designing — and understanding how to design — the computers that would take the craft there.

It’s a little backwards to think you could create a machine to replicate the human mind before you understand the basics of how the human mind makes so many connections, so easily.

Human intelligence, as any psychologist can tell you, is a complicated, complex thing. The standard tests for intelligence aren’t just paper-and-pencil knowledge quizzes. They involve the manipulation of objects in three-dimensional spaces (something most computers can’t do at all), understanding how objects fit within a larger system of objects, and other tests like this. It’s not just good vocabulary that makes a person smart. It’s a combination of skills, thought, knowledge, experience and visual-spatial skills. Most of which even the smartest computer today only has a rudimentary grasp of (especially without the help of human-created GPS systems).

Robots and computers are nowhere close to humanity in approaching its intelligence. They are probably around an ant in terms of their proximity today to “outsmarting” their makers. A self-driving car that relies on other computer systems — again, created by humans — is hardly an example of computer-based, innate intelligence. A computer that can answer trivia in a game show or play a game of chess isn’t really equivalent to the knowledge that even the most rudimentary blue-collar job holder holds. It’s a sideshow act. A distraction meant to demonstrate the very limited, singular focus computers have historically excelled at.

The fact that anyone even needs to point out that single-purpose computers are only good at the singular task they’ve been designed to do is ridiculous. A Google-driven car can’t beat a Jeopardy player. And the Jeopardy computer that won can’t tell you a thing about tomorrow’s weather forecast. Or how to solve a chess problem. Or what’s the best way to retrieve a failed space mission. Or when’s the best time to plant crops in the Mississippi delta. Or even the ability to turn a knob in the right direction in order to ensure the water turns off.

If you can design a computer to pretend to be a human in a very artificial, lab-created task of answering random dumb questions from a human — that’s not a computer that’s “smarter” than us. That’s a computer that’s incredibly dumb, yet was able to fool a stupid panel of judges judging by criteria that are all but disconnected from the real world.

And so that’s the primary reason Ray Kurzweil is wrong — we will not have any kind of sentient intelligence — in computers, robots, or anything else — in a mere 15 years. Until we understand the foundation of our own minds, it’s narcissistic (and a little bit naive) to believe we could design an artificial one that could function just as well as our own.

We are in the 1800s in terms of our understanding of our minds, and until we reach the 21st century, computers too will be in the 1800s of their ability to become sentient.


Read more:
"Why robots will not be smarter than humans by 2029", in reply to "2029: the year when robots will have the power to outsmart their makers"

Saturday, March 01, 2014

John Martinis, "Design of a Superconducting Quantum Computer"


This Google Tech Talk is way on the geeky side, but as much of it as I could follow was really interesting.

Tech Talk: John Martinis, "Design of a Superconducting Quantum Computer"

Published on Feb 28, 2014 


John Martinis visited Google LA to give a tech talk: "Design of a Superconducting Quantum Computer." This talk took place on October 15, 2013.

Abstract:

Superconducting quantum computing is now at an important crossroad, where "proof of concept" experiments involving small numbers of qubits can be transitioned to more challenging and systematic approaches that could actually lead to building a quantum computer. Our optimism is based on two recent developments: a new hardware architecture for error detection based on "surface codes" [1], and recent improvements in the coherence of superconducting qubits [2]. I will explain how the surface code is a major advance for quantum computing, as it allows one to use qubits with realistic fidelities, and has a connection architecture that is compatible with integrated circuit technology. Additionally, the surface code allows quantum error detection to be understood using simple principles. I will also discuss how the hardware characteristics of superconducting qubits map into this architecture, and review recent results that suggest gate errors can be reduced to below that needed for the error detection threshold.
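
The surface code itself is far beyond a blog post, but the basic move it relies on, detecting errors by measuring parities of neighboring bits rather than the data itself, can be sketched with a much simpler classical cousin: the three-bit repetition code. The Python toy below is my own illustration, not anything from the talk.

import random

# Three-bit repetition code: encode one logical bit as three physical bits,
# locate a single bit-flip from the two parity checks, and correct it.
# This is a classical toy standing in for the quantum surface code's
# parity-check ("stabilizer") measurements.
def encode(bit):
    return [bit, bit, bit]

def syndrome(bits):
    # parities of pairs (0,1) and (1,2); a nonzero pattern flags the error
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    s = syndrome(bits)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)
    if flip is not None:
        bits[flip] ^= 1
    return bits

random.seed(3)
codeword = encode(1)
codeword[random.randrange(3)] ^= 1      # inject one random bit-flip error
print(correct(codeword))                # back to [1, 1, 1]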

References

[1] Austin G. Fowler, Matteo Mariantoni, John M. Martinis and Andrew N. Cleland, PRA 86, 032324 (2012).
[2] R. Barends, J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill, P. O'Malley, P. Roushan, J. Wenner, T. C. White, A. N. Cleland and John M. Martinis, arXiv:1304.2322.

Bio:

John M. Martinis attended the University of California at Berkeley from 1976 to 1987, where he received two degrees in Physics: B.S. (1980) and Ph.D. (1987). His thesis research focused on macroscopic quantum tunneling in Josephson Junctions. After completing a post-doctoral position at the Commissariat à l'Energie Atomique in Saclay, France, he joined the Electromagnetic Technology division at NIST in Boulder. At NIST he was involved in understanding the basic physics of the Coulomb Blockade, and worked to use this phenomenon to make a new fundamental electrical standard based on counting electrons. While at NIST he also invented microcalorimeters based on superconducting sensors for x-ray microanalysis and astrophysics. In June of 2004 he moved to the University of California, Santa Barbara where he currently holds the Worster Chair. At UCSB, he has continued work on quantum computation. Along with Andrew Cleland, he was awarded in 2010 the AAAS science breakthrough of the year for work showing quantum behavior of a mechanical oscillator.

Friday, February 28, 2014

George Dvorsky - You Might Never Upload Your Brain Into a Computer

I think we need to drop the "might" from that headline and replace it with "will." Still, George Dvorsky gets a big AMEN from me on this piece from io9 (even if it is a year old).

For the record, however, I feel compelled to lodge my disagreement with point #5, that "mind-body dualism" is true. Nonsense. There is actually a logical fallacy at work here - if dualism were true, our minds would not be "located somewhere outside our bodies — like in a vat somewhere, or oddly enough, in a simulation (a la The Matrix)"; they would reside in the body but separate from it. This is exactly the premise necessary to believe our minds can be uploaded into a computer.

Even if we believe that the mind is simply a by-product of brain activity, there is no way to transfer a wet biological system built from fat, proteins, neurotransmitters, and electrical current into a dry computer mainframe. I don't see this EVER being an option.

You Might Never Upload Your Brain Into a Computer

George Dvorsky
Debunkery | 4/17/13


Many futurists predict that one day we'll upload our minds into computers, where we'll romp around in virtual reality environments. That's possible — but there are still a number of thorny issues to consider. Here are eight reasons why your brain may never be digitized.

Indeed, this isn’t just idle speculation. Many important thinkers have expressed their support of the possibility, including the renowned futurist Ray Kurzweil (author of How to Create a Mind), roboticist Hans Moravec, cognitive scientist Marvin Minsky, neuroscientist David Eagleman, and many others.

Skeptics, of course, relish the opportunity to debunk uploads. The claim that we’ll be able to transfer our conscious thoughts to a computer, after all, is a rather extraordinary one.

But many of the standard counter-arguments tend to fall short. Typical complaints cite insufficient processing power, inadequate storage space, or the fear that the supercomputers will be slow, unstable and prone to catastrophic failures — concerns that certainly don’t appear intractable given the onslaught of Moore’s Law and the potential for megascale computation. Another popular objection is that the mind cannot exist without a body. But an uploaded mind could be endowed with a simulated body and placed in a simulated world.

To be fair, however, there are a number of genuine scientific, philosophical, ethical, and even security concerns that could significantly limit or even prevent consciousness uploads from ever happening. Here are eight of the most serious.

1. Brain functions are not computable


Proponents of mind uploading tend to argue that the brain is a Turing Machine — the idea that organic minds are nothing more than classical information-processors. It’s an assumption derived from the strong physical Church-Turing thesis, and one that now drives much of cognitive science.
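
For readers who have not met the formal model, a Turing machine is just a tape, a read/write head, and a finite table of rules. Here is a minimal sketch in Python (my own toy example) of a machine that inverts every bit on its tape, which is enough to show how little machinery the formal model actually requires.

# A tiny Turing machine that flips every bit on its tape and halts.
# rules: (state, symbol) -> (symbol_to_write, head_move, new_state)
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # "_" is the blank symbol
}

def run(tape_string):
    tape = dict(enumerate(tape_string))   # unbounded tape, sparsely stored
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(run("100110"))   # prints 011001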


But not everyone believes the brain/computer analogy works. Speaking recently at the annual meeting of the American Association for the Advancement of Science in Boston, neuroscientist Miguel Nicolelis said that, “The brain is not computable and no engineering can reproduce it.” He referred to the idea of uploads as “bunk,” saying that it’ll never happen and that “[t]here are a lot of people selling the idea that you can mimic the brain with a computer.” Nicolelis argues that human consciousness can’t be replicated in silicon because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he said. “You could have all the computer chips ever in the world and you won’t create a consciousness.” Image credit: Jeff Cameron Collingwood/Shutterstock.

2. We’ll never solve the hard problem of consciousness


The computability of the brain aside, we may never be able to explain how and why we have qualia, or what’s called phenomenal experience.


According to David Chalmers — the philosopher of mind who came up with the term “hard problem” — we’ll likely solve the easy problems of human cognition, like how we focus our attention, recall a memory, discriminate, and process information. But explaining how incoming sensations get translated into subjective feelings — like the experience of color, taste, or the pleasurable sound of music — is proving to be much more difficult. Moreover, we’re still not entirely sure why we even have consciousness, and why we’re not just “philosophical zombies” — hypothetical beings who act and respond as if they’re conscious, but have no internal mental states.

In his paper, “Facing Up to the Problem of Consciousness,” Chalmers writes:
How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
If any problem qualifies as the problem of consciousness, argues Chalmers, it is this one. Image: blog.lib.umn.edu.

3. We’ll never solve the binding problem


And even if we do figure out how the brain generates subjective experience, classical digital computers may never be able to support unitary phenomenal minds. This is what’s referred to as the binding problem — our inability to understand how a mind is able to segregate elements and combine problems as seamlessly as it does. Needless to say, we don’t even know if a Turing Machine can even support these functions.


More specifically, we still need to figure out how our brains segregate elements in complex patterns, a process that allows us to distinguish them as discrete objects. The binding problem also describes the issue of how objects, like those in the background or in our peripheral experience — or even something as abstract as emotions — can still be combined into a unitary and coherent experience. As the cognitive neuroscientist Antti Revonsuo has said, “Binding is thus seen as a problem of finding the mechanisms which map the ‘objective’ physical entities in the external world into corresponding internal neural entities in the brain.”

He continues:
Once the idea of consciousness-related binding is formulated, it becomes immediately clear that it is closely associated with two central problems in consciousness research. The first concerns the unity of phenomenal consciousness. The contents of phenomenal consciousness are unified into one coherent whole, containing a unified ‘‘me’’ in the center of one unified perceptual world, full of coherent objects. How should we describe and explain such experiential unity? The second problem of relevance here concerns the neural correlates of consciousness. If we are looking for an explanation to the unity of consciousness by postulating underlying neural mechanisms, these neural mechanisms surely qualify for being direct neural correlates of unified phenomenal states.
No one knows how our organic brains perform this trick — at least not yet — or if digital computers will ever be capable of phenomenal binding. Image credit: agsandrew/Shutterstock.

4. Panpsychism is true


Though still controversial, there’s also the potential for panpsychism to be in effect. This is the notion that consciousness is a fundamental and irreducible feature of the cosmos. It might sound a bit New Agey, but it’s an idea that’s steadily gaining currency (especially in consideration of our inability to solve the Hard Problem).


Panpsychists speculate that all parts of matter involve mind. Neuroscientist Stuart Hameroff has suggested that consciousness is related to fundamental components of physical reality — components akin to phenomena like mass, spin, or charge. According to this view, the basis of consciousness can be found in an additional fundamental feature of nature not unlike gravity or electromagnetism. This would be something like an elementary sentience or awareness. As Hameroff notes, "these components just are." Likewise, David Chalmers has proposed a double-aspect theory in which information has both physical and experiential aspects. Panpsychism has also attracted the attention of quantum physicists (who speculate about potential quantum aspects of consciousness, given our presence in an Everett Universe) and of physicalists like Galen Strawson (who argues that the mental/experiential is itself physical).

The problem this presents for mind uploading is that consciousness may not be substrate neutral — a central tenet of the Church-Turing Hypothesis — but may in fact depend on specific physical/material configurations. It’s quite possible that there’s no digital or algorithmic equivalent to consciousness. Having consciousness arise in a classical von Neumann architecture, therefore, may be as impossible as splitting an atom in a virtual environment using ones and zeros. Image credit: agsandrew/Shutterstock.
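
To make the contested term concrete: in the computational sense, “substrate neutral” just means that the same function can be realized by very different mechanisms with identical input-output behavior. Below is a minimal Python sketch of that uncontroversial computational claim (illustrative only; whether subjective experience carries over between substrates in the same way is precisely what is in dispute here).

    # Two different "substrates" realizing the same function (XOR):
    # one via arithmetic, the other via a memorized lookup table.
    # Behaviorally they are indistinguishable, which is all that
    # "substrate neutrality" means for computation. Whether phenomenal
    # experience transfers across substrates this way is the open question.

    def xor_arithmetic(a: int, b: int) -> int:
        """XOR realized with arithmetic."""
        return (a + b) % 2

    XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def xor_lookup(a: int, b: int) -> int:
        """The same function realized as a stored table."""
        return XOR_TABLE[(a, b)]

    for a in (0, 1):
        for b in (0, 1):
            assert xor_arithmetic(a, b) == xor_lookup(a, b)
    print("Identical behavior from two very different realizations.")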

5. Mind-body dualism is true



Perhaps even more controversial is the suggestion that consciousness lies somewhere outside the brain, perhaps as some ethereal soul or spirit. It’s an idea primarily associated with René Descartes, the 17th-century philosopher who speculated that the mind is a nonphysical substance (as opposed to physicalist interpretations of mind and consciousness). Consequently, some proponents of dualism (or even vitalism) suggest that consciousness lies outside knowable science.

Needless to say, if our minds are located somewhere outside our bodies — like in a vat somewhere, or oddly enough, in a simulation (a la The Matrix) — our chances of uploading ourselves are slim to none.

6. It would be unethical to develop


Philosophical and scientific concerns aside, there may also be some moral reasons to forego the project. If we’re going to develop upload technologies, we’re going to have to conduct some rather invasive experiments, both on animals and humans. The potential for abuse is significant.


Uploading schemes typically involve the scanning and mapping of an individual’s brain, often via serial sectioning. While a test subject, like a mouse or monkey, could be placed under a general anesthetic, it will eventually have to be re-animated in digital substrate. Once that happens, we’ll likely have no conception of its internal, subjective experience. Its brain could be completely mangled, resulting in terrible psychological or physical anguish. It’s reasonable to assume that our early uploading efforts will be far from perfect, and potentially cruel.

And when it comes time for the first human to be uploaded, there could be serious ethical and legal issues to consider — especially given that we’re talking about the relocation of a living, rights-bearing human being. Image credit: K. Zhuang.

7. We can never be sure it works



Which leads to the next point, that of post-upload skepticism. A person can never really be sure they have created a sentient copy of themselves. This is the continuity of consciousness problem — the uncertainty over whether we have actually moved our minds or merely copied them.

Because we can’t measure consciousness — either qualitatively or quantitatively — uploading will require a tremendous leap of faith, one that could lead to complete oblivion (e.g. a philosophical zombie) or to something completely unexpected. And relying on advice from uploaded beings won’t help either (“Come on in, the water’s fine...”).

In an email to me, philosopher David Pearce put it this way:
Think of it like a game of chess. If I tell you the moves, you can faithfully replicate the gameplay. But you know nothing whatsoever of the textures of the pieces, or indeed, whether they have any textures at all (perhaps I played online). Likewise, I think, the same can be said with the textures of consciousness. The possibility of substrate-independent minds needs to be distinguished from the possibility of substrate-independent qualia.
In other words, the quality of conscious experience in digital substrate could be far removed from that experienced by an analog consciousness. Image: Rikomatic.
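
Pearce’s chess analogy is easy to make concrete. The sketch below, which assumes the third-party python-chess package (any move-replay code would make the same point), reconstructs a game from nothing but its move record: the functional state of the game is reproduced exactly, while the record says nothing at all about the pieces themselves.

    # Replaying a game (Scholar's Mate) from nothing but its move record,
    # using the python-chess package (pip install chess). The record fully
    # determines the game's functional state, but encodes nothing about the
    # physical pieces: wood, plastic, or pixels.
    import chess

    moves = ["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7#"]

    board = chess.Board()
    for san in moves:
        board.push_san(san)   # faithful functional replication of the game

    print(board)                           # the final position, exactly reproduced
    print("Checkmate:", board.is_checkmate())
    # Nothing above captures the "texture" of the pieces, which is Pearce's
    # distinction between substrate-independent minds and substrate-independent
    # qualia.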

8. Uploaded minds would be vulnerable to hacking and abuse



Once our minds are uploaded, they’ll be physically and inextricably connected to the larger computational superstructure. As a consequence, uploaded brains will be perpetually vulnerable to malicious attacks and other unwanted intrusions.

To avoid this, each uploaded person will have to set up a personal firewall to prevent themselves from being re-programmed, spied upon, damaged, exploited, deleted, or copied against their will. These threats could come from other uploads, rogue AI, malicious scripts, or even the authorities in power (e.g. as a means to instill order and control).

Indeed, as we know all too well today, even the tightest security measures can't prevent the most sophisticated attacks; an uploaded mind can never be sure it’s safe.
  • Special thanks to David Pearce for helping with this article.
  • Top image: Jurgen Ziewe/Shutterstock.

Monday, January 06, 2014

Gary Marcus - Hyping Artificial Intelligence, Yet Again

Over at The New Yorker, psychologist and cognitive scientist Gary Marcus (author of Kluge: The Haphazard Evolution of the Human Mind [2008] and The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought [2004]) does a nice job of stripping away the hype from recent artificial intelligence coverage. I’m grateful to Marcus for it.

Hyping Artificial Intelligence, Yet Again

Posted by Gary Marcus
January 1, 2014


According to the Times, true artificial intelligence is just around the corner. A year ago, the paper ran a front-page story about the wonders of new technologies, including deep learning, a neurally-inspired A.I. technique for statistical analysis. Then, among others, came an article about how I.B.M.’s Watson had been repurposed into a chef, followed by an upbeat post about quantum computation. On Sunday, the paper ran a front-page story about “biologically inspired processors,” “brainlike computers” that learn from experience.

This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.

But, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
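
For a sense of how little machinery Rosenblatt’s idea requires, here is a minimal Python sketch of the classic perceptron learning rule: a toy single neuron-like element that “learns from experience” by adjusting its weights whenever it makes a mistake. It is offered only as an illustration of the fifty-year-old core idea, not of any of the systems described in the Times pieces.

    # A toy perceptron in the spirit of Rosenblatt (1957): a single
    # neuron-like unit that "learns from its own mistakes" by nudging its
    # weights whenever it misclassifies a training example.

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        w = [0.0] * len(samples[0])   # one weight per input
        b = 0.0                       # bias term
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                prediction = 1 if activation > 0 else 0
                error = target - prediction      # nonzero only on a mistake
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b

    # Learn the (linearly separable) OR function from four examples.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 1]
    w, b = train_perceptron(X, y)
    print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])
    # -> [0, 1, 1, 1]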

What’s the harm? As Yann LeCun, the N.Y.U. researcher who was just appointed to run Facebook’s new A.I. lab, put it a few months ago in a Google+ post, a kind of open letter to the media, “AI [has] ‘died’ about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver. Backlash ensued. It happened twice with neural nets already: once in the late 60’s and again in the mid-90’s.”

A.I. is, to be sure, in much better shape now than it was then. Google, Apple, I.B.M., Facebook, and Microsoft have all made large commercial investments. There have been real innovations, like driverless cars, that may soon become commercially available. Neuromorphic engineering and deep learning are genuinely exciting, but whether they will really produce human-level A.I. is unclear—especially, as I have written before, when it comes to challenging problems like understanding natural language.

The brainlike I.B.M. system that the Times mentioned on Sunday has never, to my knowledge, been applied to language, or any other complex form of learning. Deep learning has been applied to language understanding, but the results are feeble so far. Among publicly available systems, the best is probably a Stanford project, called Deeply Moving, that applies deep learning to the task of understanding movie reviews. The cool part is that you can try it for yourself, cutting and pasting text from a movie review and immediately seeing the program’s analysis; you even teach it to improve. The less cool thing is that the deep-learning system doesn’t really understand anything.

It can’t, say, paraphrase a review or mention something the reviewer liked, things you’d expect of an intelligent sixth-grader. About the only thing the system can do is so-called sentiment analysis, reducing a review to a thumbs-up or thumbs-down judgment. And even there it falls short; after typing in “better than ‘Cats!’ ” (which the system correctly interpreted as positive), the first thing I tested was a Rotten Tomatoes excerpt of a review of the last movie I saw, “American Hustle”: “A sloppy, miscast, hammed up, overlong, overloud story that still sends you out of the theater on a cloud of rapture.” The deep-learning system couldn’t tell me that the review was ironic, or that the reviewer thought the whole was more than the sum of the parts. It told me only, inaccurately, that the review was very negative. When I sent the demo to my collaborator, Ernest Davis, his luck was no better than mine. Ernie tried “This is not a book to be ignored” and “No one interested in the subject can afford to ignore this book.” The first came out as negative, the second neutral. If Deeply Moving is the best A.I. has to offer, true A.I.—of the sort that can read a newspaper as well as a human can—is a long way away.
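
The failure modes described above, negation and irony in particular, are easy to probe against any off-the-shelf sentiment model. Here is a minimal sketch that assumes the third-party Hugging Face transformers library and its default sentiment-analysis pipeline (a stand-in for the same thumbs-up/thumbs-down task, not the Stanford Deeply Moving demo itself).

    # Probing an off-the-shelf sentiment classifier with the kinds of
    # sentences discussed above. Requires the Hugging Face `transformers`
    # package (and a model download on first run); this is not the Stanford
    # Deeply Moving demo, just a stand-in for the same thumbs-up/thumbs-down task.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")

    probes = [
        "better than 'Cats!'",
        "This is not a book to be ignored.",
        "No one interested in the subject can afford to ignore this book.",
        "A sloppy, miscast, hammed up, overlong, overloud story that still "
        "sends you out of the theater on a cloud of rapture.",
    ]

    for text in probes:
        result = classifier(text)[0]
        print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
    # Whatever labels come back, the model is only doing sentiment
    # classification; it cannot paraphrase a review or say what the critic liked.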

Overhyped stories about new technologies create short-term enthusiasm, but they also often lead to long-term disappointment. As LeCun put it in his Google+ post, “Whenever a startup claims ‘90% accuracy’ on some random task, do not consider this newsworthy. If the company also makes claims like ‘we are developing machine learning software based on the computational principles of the human brain’ be even more suspicious.”

As I noted in a recent essay, some of the biggest challenges in A.I. have to do with common-sense reasoning. Trendy new techniques like deep learning and neuromorphic engineering give A.I. programmers purchase on a particular kind of problem that involves categorizing familiar stimuli, but say little about how to cope with things we haven’t seen before. As machines get better at categorizing things they can recognize, some tasks, like speech recognition, improve markedly, but others, like comprehending what a speaker actually means, advance more slowly. Neuromorphic engineering will probably lead to interesting advances, but perhaps not right away. As a more balanced article on the same topic in Technology Review recently reported, some neuroscientists, including Henry Markram, the director of a European project to simulate the human brain, are quite skeptical of the currently implemented neuromorphic systems on the grounds that their representations of the brain are too simplistic and abstract.

As a cognitive scientist, I agree with Markram. Old-school behaviorist psychologists, and now many A.I. programmers, seem focused on finding a single powerful mechanism—deep learning, neuromorphic engineering, quantum computation, or whatever—to induce everything from statistical data. This is much like what the psychologist B. F. Skinner imagined in the early nineteen-fifties, when he concluded all human thought could be explained by mechanisms of association; the whole field of cognitive psychology grew out of the ashes of that oversimplified assumption.

At times like these, I find it useful to remember a basic truth: the human brain is the most complicated organ in the known universe, and we still have almost no idea how it works. Who said that copying its awesome power was going to be easy?

Gary Marcus is a professor of psychology at N.Y.U. and a visiting cognitive scientist at the new Allen Institute for Artificial Intelligence. This essay was written in memory of his late friend Michael Dorfman—friend of science, enemy of hype.

Photograph: Chris Ratcliffe/Bloomberg/Getty