The Columbia Journalism Review has a great article that examines what little we know about neuroscience and how the media tends to get it wrong. I'm totally on board with this review. I think the "singularity" is a load of crap. I think Ray Kurzweil, while brilliant, is also nuts if he thinks he will be able to download his consciousness into a machine within his lifetime.
What we know about how the brain works is fragmentary at best -- and what we know about consciousness is even less coherent. One of the take-home lessons from the Toward a Science of Consciousness conference last week is that no one knows what the hell consciousness is or how it emerges. Taking that into account, we are FAR away from building a conscious machine.
Here is the beginning of the CJR article:
Within the last month, Wired magazine’s Mark Anderson and author Tom Wolfe, in an interview published in the San Francisco Chronicle, did something rarely seen in the manic world of neuroscience reporting. They broke rank with the chorus of hypesters, saying, in essence, that we barely know what we think we know about the human brain. It was a stark departure from the usual drumbeat of flackery and dubious extrapolation common to the topic.
In the Wolfe interview by Steve Heilig, Wolfe appeared to backtrack from his own hyping of neuroscience in Hooking Up, his 2000 book about attraction in America, by saying he is fascinated by evidence of how little we actually know about the brain. In his words, theorists, and the reporters who give them ink, “are writing literature, which doesn’t mean they are wrong, but they don’t have a scientific leg to stand on. They literally don’t know what they are talking about.”
Call it a moment of clarity for a newspaper more prone to run articles like February's "Stressed at work? Rewire your brain!" That article, by Chris Colin, was basically an advertorial posing as a news story about the Napa, California "peak performance" enhancement company ProAttitude -- which uses "neuroscience … guided imagery … cognitive behavior therapy, humanistic psychology, positive psychology, and a form of learned optimism" to reduce workplace stress. The story concluded with, "ProAttitude is hosting a three-day workshop in Mill Valley."
Similarly, Wired’s April issue is a bit schizo itself. Anderson’s criticism of neuroscience is contained in a succinct sidebar that injects a dose of reality into a fawning feature by Gary Wolf about eternal life guru Ray Kurzweil. “Almost nothing is known about how the brain produces awareness, and current models of brain function don’t accord with the little that is known,” Anderson writes; he then offers a point-by-point takedown of the accompanying feature and neuroscience hype in general. Specifically, Anderson rebuts the notion that brains are like computers and that advances in neuro- and computer science will enable the sixty-year-old Kurzweil to download his consciousness into a machine and extend his life to some time past 2030.
Wired’s parroting of neuro-hype is more in tune with the almost-daily strains of flackery and extrapolation found in some of the nation’s top newspapers.
Read the whole article.
This piece mentioned an article from the April issue of Wired that I quite liked but never got around to posting. It was a sidebar to the fawning article they ran on Ray Kurzweil, debunking many of his claims and beliefs. I want to post the whole thing because it's worth reading -- especially since Stuart Hameroff's work with anesthesia, which is noted here, totally confounds the idea that consciousness arises from the function of neurons in the brain.
Never Mind the Singularity, Here's the Science

Many computer scientists take it on faith that one day machines will become conscious. Led by futurist Ray Kurzweil, proponents of the so-called strong-AI school believe that a sufficient number of digitally simulated neurons, running at a high enough speed, can awaken into awareness. Once computing speed reaches 10^16 operations per second — roughly by 2020 — the trick will be simply to come up with an algorithm for the mind. When we find it, machines will become self-aware, with unpredictable consequences. This event is known as the singularity.
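A quick aside from me, not the Wired piece: the "by 2020" figure comes from the kind of straight-line exponential extrapolation sketched below. The baseline hardware figure and the doubling period are my own illustrative assumptions, not numbers from the article.

```python
# Illustrative only: the kind of straight-line extrapolation behind claims like
# "10^16 ops/sec by roughly 2020". The baseline figure and doubling period are
# assumptions chosen for the sketch, not numbers from the article.
import math

def year_target_reached(baseline_ops, baseline_year, target_ops, doubling_years=1.5):
    """Project the year a target speed is reached under steady exponential growth."""
    doublings_needed = math.log2(target_ops / baseline_ops)
    return baseline_year + doublings_needed * doubling_years

# Assume ~10^13 ops/sec available around 2007, doubling every 1.5 years.
print(round(year_target_reached(1e13, 2007, 1e16)))  # roughly 2022
```

Notice how sensitive the answer is to the assumed doubling period; stretch it to two years and the target slips by another five years or so. That fragility is part of the point the sidebar goes on to make.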
These techno-utopians should pay closer attention to developments in neuroscience. Sure, artificial intelligence techniques like neural networks have led to better spam filters. But research suggests that the current approach to AI won't result in a conscious machine on anything like Kurzweil's timeline. The latest evidence shows that, when it comes to consciousness, the brain simply doesn't work the way computer scientists think it does. Almost nothing is known about how the brain produces awareness, and current models of brain function don't accord with the little that is known.
Singularitarians would respond by predicting that exponentially growing scientific progress will fill the gap. This notion sweeps under the rug a messy philosophical problem: An algorithm is only a set of instructions, and even the most sophisticated machine executing the most elaborate instructions is still an unconscious automaton. Philosophy aside, a constellation of recent scientific findings indicates that no matter how fast CPUs become in future decades, they'll be no more aware than a toaster. Building a conscious machine will likely require paradigm shifts in brain science — conceptual leaps that, by definition, won't come on a schedule. Here, then, are five reasons why the singularity is not near.
The mind is synchronized, but no one knows how. New York University neurologist E. Roy John has established that the hallmark of consciousness is a regular electrical oscillation, or gamma wave, readily detected by electrodes attached to the scalp. More recently, Wolf Singer and his colleagues at the Max Planck Institute for Brain Research in Frankfurt, Germany, confirmed that brain cells flicker in time with the gamma wave. This flickering takes place among widely dispersed neurons throughout the brain with no apparent spatial pattern. What keeps these ever-shifting, widely distributed groups of cells in sync? Neurochemical reactions take place too slowly to explain the phenomenon. This mystery alone seems to demand a wholesale rethinking of AI's underpinnings.
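To make the gamma-wave idea concrete (my aside, not part of the Wired sidebar), here is a minimal sketch of how gamma-band activity in the roughly 30-80 Hz range might be pulled out of a scalp recording. The signal is synthetic, and the sampling rate and band edges are assumptions for illustration.

```python
# A synthetic illustration of isolating gamma-band (~30-80 Hz) activity from a
# scalp trace. Sampling rate, band edges, and the signal itself are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                            # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)       # two seconds of "recording"
# A 40 Hz "gamma" oscillation buried in slower alpha-like activity plus noise.
eeg = (0.5 * np.sin(2 * np.pi * 40 * t)
       + np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.random.randn(t.size))

# Band-pass filter to the gamma range and estimate its power.
b, a = butter(4, [30 / (fs / 2), 80 / (fs / 2)], btype="band")
gamma = filtfilt(b, a, eeg)
print("gamma-band power:", np.mean(gamma ** 2))
```

Detecting the oscillation is the easy part; as the sidebar says, what keeps widely separated neurons locked to it is the open question.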
Current brain maps are of little use in explaining awareness. For more than a century, the brain cell, or neuron, has been seen as a tiny switching station with multiple signals coming in through many input wires, known as dendrites, but only one signal going out through a single output wire, or axon. AI is based on this circuitry model. When it comes to consciousness, though, the model has its wires crossed. Singer has discovered that gamma waves — the indicators of consciousness — issue from the neuron's supposed inputs, not its output. Confusing matters further, researchers, including Takaichi Fukuda and Toshio Kosaka of Japan's Kyushu University, have revealed that many inputs interconnect, forming an altogether different set of networks. In other words, the vast strides made by neuroscientists in their attempt to map the brain may reveal little about consciousness.
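For readers who haven't seen it spelled out, the "circuitry model" the sidebar refers to is the classic artificial neuron: many weighted inputs collapsed into a single all-or-nothing output. The toy sketch below is my illustration, not the article's, and it shows how little of the dendrite-level complexity described above that picture captures.

```python
# The textbook "switching station" unit AI is built on: many weighted inputs,
# one all-or-nothing output. A toy illustration, not a model of real neurons.
import numpy as np

def artificial_neuron(inputs, weights, threshold=0.0):
    """Classic McCulloch-Pitts-style unit: a single output from many inputs."""
    activation = float(np.dot(inputs, weights))   # sum over the "dendrites"
    return 1 if activation > threshold else 0     # the lone "axon" output

# Three inputs, three weights, one output.
print(artificial_neuron(np.array([1, 0, 1]), np.array([0.6, -0.2, 0.5])))  # 1
```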
The brain is faster than singularity theorists think. AI assumes that the neuron is analogous to a single computer bit. But it turns out that each neuron is supported by a supercomputer's worth of additional circuitry. MIT bioengineer Andreas Mershin and UCLA psychologist Nancy Woolf have independently confirmed the importance of microtubules, the scaffolding that undergirds each neuron, in animal memory and learning. At the University of Alberta, physicist Jack Tuszynski has developed computational models suggesting that these supposedly dumb structures could be smarter than previously recognized. Stuart Hameroff at the University of Arizona argues that trillions of computations per second take place in the microtubules of each neuron. If he's right, the brain's speed is 10^28 operations per second — a trillion times faster than is generally thought — which pushes the vaunted singularity back by decades.
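A quick back-of-the-envelope check on that claim, which is my own arithmetic: the conventional whole-brain figure of roughly 10^16 operations per second is an assumption on my part, since the article gives only Hameroff's 10^28 figure and the "trillion times faster" comparison. Closing a trillion-fold gap takes about forty extra hardware doublings, which at a doubling every year and a half or so really does work out to decades.

```python
# My own back-of-the-envelope check. The conventional whole-brain figure of
# ~10^16 ops/sec is an assumption; the article states only Hameroff's 10^28
# figure and the "trillion times faster" comparison.
import math

conventional = 1e16          # commonly cited whole-brain estimate (assumed)
hameroff = 1e28              # figure quoted in the sidebar
ratio = hameroff / conventional
print(f"ratio: {ratio:.0e}")                 # 1e+12 -- a trillion

# Extra hardware doublings needed to close that gap, at ~1.5 years per doubling.
doublings = math.log2(ratio)
print(f"about {doublings:.0f} extra doublings, ~{doublings * 1.5:.0f} more years")
```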
The on/off switch isn't where it's supposed to be. As it happens, doctors have a handy way to flick the switch of consciousness: anesthesia. When you're under, awareness is disabled, but everything else in the brain operates normally. So how does anesthesia work? Hameroff has come up with a simple model in which anesthetic drugs interact almost exclusively with microtubules; the rest of the neuron plays only a marginal role. This model is the closest anyone has come to a unified theory of anesthesia — yet it flatly contradicts the notion that consciousness arises from firing neurons.
Understanding consciousness may require new physics. In his 1989 book, The Emperor's New Mind, Oxford physicist Roger Penrose proposed that the classical physics ruling neurobiology can't explain consciousness. The mind, he declared, relies on the baffling mechanics of quantum physics. Although his point remains controversial, evidence in its favor is accumulating. Most recently, physicist Efstratios Manousakis at Florida State University showed that certain confounding quirks of visual perception are most easily explained by quantum mechanics. If consciousness is indeed a quantum phenomenon, then AI becomes an entirely new game. The singularity will have to wait for engineers to catch up.
The article in the CJR concludes with this:
Journalists should continue to incite the public’s sense of awe about the brain and the exciting research in the field. But they should do so with humility given neuroscience’s infancy, imperfection, immense financial motives, limited applicability of any one study, and our own natural desire to believe reductionist explanations of cognitive phenomena. Or maybe they simply shouldn’t rely as much on those multi-colored maps of the brain to attract readers. As researchers at the University of Colorado have shown, “presenting brain images with articles summarizing cognitive neuroscience research resulted in higher ratings of scientific reasoning for arguments made in those articles, as compared to articles accompanied by bar graphs, a topographical map of brain activation, or no image.” In other words, neuroscience isn’t as simple as a colorful graphic, and reporters should never present it that way.
I agree completely. As I noted yesterday, what we know about consciousness -- even within a limited area such as decision-making -- is sketchy at best.
We still have a long way to go to even define consciousness, let alone understand it.