
Tuesday, March 11, 2014

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

Recently, I shared an article from George Dvorsky called "You Might Never Upload Your Brain Into a Computer," in which he outlined a series of reasons for his position:
1. Brain functions are not computable
2. We’ll never solve the hard problem of consciousness
3. We’ll never solve the binding problem
4. Panpsychism is true
5. Mind-body dualism is true
6. It would be unethical to develop such technology
7. We can never be sure it works
8. Uploaded minds would be vulnerable to hacking and abuse
While I disagree with at least two of his points (I am not convinced panpsychism is true, and I am VERY skeptical of mind-body dualism), I applaud the spirit of the piece.

Likewise, this recent post from John Grohol at Psych Central's World of Psychology calls out futurist Ray Kurzweil on his claims around computer sentience, i.e., the singularity.

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

By John M. Grohol, Psy.D.



“When Kurzweil first started talking about the ‘singularity’, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human – will be passed in 2029.”

Sorry, but Ray Kurzweil is wrong. Computers are nowhere near surpassing humans, and here’s why.

Intelligence is one thing. But it’s probably the pinnacle of human narcissism to believe that we could design machines to understand us long before we even understand ourselves. The ancients, after all, admonished: “Know thyself.”

Yet here it is, squarely in 2014, and we still have only an inkling of how the human brain works. The very font of our intelligence and existence is contained in the brain, a human organ just like the heart. Yet we don’t know how it works. All we have are theories.

Let me reiterate this: We don’t know how the human brain works.

How can anyone in their right mind say that, after a century of study into how the brain operates, we’re suddenly going to crack the code in the next 15 years?

And crack the code one must. Without understanding how the brain works, it’s ludicrous to say we could design a machine to replicate the brain’s near-instantaneous processing of hundreds of different sensory inputs from dozens of sources. That would be akin to saying we could design a spacecraft to travel to the moon before designing (and understanding how to design) the computers that would take the craft there.

It’s a little backwards to think you could create a machine to replicate the human mind before you understand the basics of how that mind makes so many connections so easily.

Human intelligence, as any psychologist can tell you, is a complicated, multifaceted thing. The standard tests for intelligence aren’t just paper-and-pencil knowledge quizzes. They involve the manipulation of objects in three-dimensional space (something most computers can’t do at all), understanding how objects fit within a larger system of objects, and other such tasks. It’s not just a good vocabulary that makes a person smart. It’s a combination of reasoning, knowledge, experience, and visual-spatial skills, most of which even the smartest computer today grasps only rudimentarily (especially without the help of human-created GPS systems).

Robots and computers are nowhere close to matching human intelligence. Today they are probably at about the level of an ant in their proximity to “outsmarting” their makers. A self-driving car that relies on other computer systems (again, created by humans) is hardly an example of innate, computer-based intelligence. A computer that can answer trivia on a game show or play a game of chess isn’t really the equal of even the most rudimentary blue-collar worker’s knowledge. It’s a sideshow act, a distraction meant to demonstrate the very limited, singular focus at which computers have historically excelled.

The fact that anyone even needs to point out that single-purpose computers are only good at the singular task they’ve been designed for is ridiculous. A Google self-driving car can’t beat a Jeopardy player. And the Jeopardy computer that won can’t tell you a thing about tomorrow’s weather forecast. Or how to solve a chess problem. Or the best way to retrieve a failed space mission. Or when’s the best time to plant crops in the Mississippi delta. Or even which way to turn a knob to make sure the water shuts off.

If you can design a computer to pretend to be a human in a very artificial, lab-created task of answering random, dumb questions from a human, that’s not a computer that’s “smarter” than us. That’s a computer that’s incredibly dumb, yet able to fool a stupid panel of judges applying criteria that are all but disconnected from the real world.

And so that’s the primary reason Ray Kurzweil is wrong: we will not have any kind of sentient intelligence (in computers, robots, or anything else) in a mere 15 years. Until we understand the foundation of our own minds, it’s narcissistic (and a little bit naive) to believe we could design an artificial one that could function just as well as our own.

We are in the 1800s in terms of our understanding of our own minds, and until that understanding reaches the 21st century, computers will likewise remain in the 1800s in their capacity to become sentient.


Read more:
Why robots will not be smarter than humans by 2029, in reply to “2029: the year when robots will have the power to outsmart their makers”
