Thursday, July 15, 2010

Building One Big Brain - Robert Wright

Robert Wright, author of The Evolution of God, among other books, had an interesting article in the New York Times the other day looking at Nicholas Carr, evolution, technology, and Kevin Kelly's forthcoming book, What Technology Wants (due out October 14, 2010; the pre-order price is one-third off). Are we heading toward a superconscious technological superorganism?

Building One Big Brain

By ROBERT WRIGHT

For your own sake, focus on this column. Don’t think about your Facebook feed or your inbox. Don’t click on the ad above or the links to the right. Don’t even click on links within the column.

Failing to focus — succumbing to digital distraction — can make you lose your mind, fears Nicholas Carr, author of the much-discussed book “The Shallows.” At least, it can make you lose little parts of your mind. The Internet, Carr suspects, “is chipping away my capacity for concentration and contemplation.”

He’s not alone in his fears. Since his book came out, there have been lots of ruminations — including one or two or three in The Times alone — on whether online technology is friend or foe, good for our brains or bad.

But maybe the terms of the debate — good for us or bad for us? — are a sign that we’re missing the point. Maybe the essential thing about technological evolution is that it’s not about us. Maybe it’s about something bigger than us — maybe something big and wonderful, maybe something big and spooky, but in any event something really, really big.

Don’t get me wrong. I join other humans in considering human welfare — and the welfare of one human in particular — very important. But if we’re going to reconcile human flourishing with the march of technology, it might help to understand what technology is marching toward.

This autumn will see the publication of a book that promises to help us out here: “What Technology Wants,” by Kevin Kelly, a long-time tech-watcher who helped launch Wired magazine and was its executive editor back in its young, edgy days.

Don’t let the title of Kelly’s book terrify you. He assures us that he doesn’t think technology is conscious — at least, not “at this point.” For now, he says, technology’s “mechanical wants are not carefully considered deliberations but rather leanings.”

So relax; apparently we have a few years before Keanu Reeves gets stuffed into a gooey pod by robotic overlords who use people as batteries. Still, it’s notable that, before Reeves played that role in “The Matrix,” the movie’s directors gave him a copy of Kelly’s earlier book, “Out of Control,” as preparation. And Kelly does say in “What Technology Wants” that technology is increasingly like “a very complex organism that often follows its own urges.”

Well, I don’t know about the “urges” part, but it’s true that technology is weaving humans into electronic webs that resemble big brains — corporations, online hobby groups, far-flung N.G.O.s. And I personally don’t think it’s outlandish to talk about us being, increasingly, neurons in a giant superorganism; certainly an observer from outer space, watching the emergence of the Internet, could be excused for looking at us that way. In fact, the superorganism scenario is in a sense just the cosmic flip side of the diagnosis offered by Carr and other techno-skeptics.

To begin with, note that the new technologies, though derided by some of these skeptics for eroding the simple social bonds of yesteryear, are creating new social bonds. We’re not just being lured away from kin and next-door neighbors by machines; we’re being lured away by other people — people on Facebook, people in our inbox, people who write columns about giant superorganisms.

And, as the author Steven Johnson recently noted, these social connections, though so distracting that it’s hard to focus on any task for long, nonetheless bring new efficiencies. In a given hour of failing to focus, you may: 1) check your e-mail and receive key input from a colleague as well as a lunch confirmation from a friend; 2) check Facebook and be led by a friend to an article that bears on your political passions, while also checking out the Web site of a group that harnesses that passion, giving you a channel for activism; and 3) yes, waste some time reading or watching something frivolous.

But frivolity isn’t a recent invention. On balance, technology is letting people link up with more and more people who share a vocational or avocational interest. And it’s at this level, the social level, that the new efficiencies reside. The fact that we don’t feel efficient — that we feel, as Carr puts it, like “chronic scatterbrains” — is in a sense the source of the new efficiencies; the scattering of attention among lots of tasks is what allows us to add value to lots of social endeavors. The incoherence of the individual mind lends coherence to group minds.

No wonder Carr finds technology oppressive. Its needs trump ours! We’re just cells, and the organism’s the main thing.

If it’s any consolation, we’re not the first humans to go cellular. The telephone (and for that matter the postal system before it) let people increase the number of other brains they linked up with. People spent less time with their few inherited affiliations — kin and neighbors — and more time with affiliations that reflected vocational or avocational choices.

Of course, having more affiliations meant having more superficial affiliations — and this led earlier social observers to conclusions that resonate with Carr’s thesis. In the 1950 sociology classic “The Lonely Crowd,” David Riesman and two colleagues argued that the “inner-directed” American, guided by values shared with a small and stable group of kin and friends, was giving way to an “other-directed” American. Other-directed people had more social contacts, and shallower contacts, and they had more malleable values — a flexibility that let them network with more kinds of people.

In other words, Riesman, like Carr, noted a loss of coherence within the individual. He saw a loss of normative coherence — a weakening of our internal moral gyroscope — and Carr sees a loss of cognitive coherence. But in both cases this fragmenting at the individual level translates, however ironically, into broader and more intricate cohesion at the social level — cohesion of an increasingly organic sort. We’ve been building bigger social brains for some time.

Could it be that, in some sense, the point of evolution — both the biological evolution that created an intelligent species and the technological evolution that a sufficiently intelligent species is bound to unleash — has been to create these social brains, and maybe even to weave them into a giant, loosely organized planetary brain? Kind of in the way that the point of the maturation of an organism is to create an adult organism?

Unlike many other card-carrying Darwinians, I’ve long considered this prospect compatible with Darwinism and with scientific materialism broadly — but this isn’t the place to hash that issue out. (And don’t be distracted by my video argument with Daniel Dennett about this question or by our subsequent argument about the argument or by my less contentious written exchange with Steven Pinker on the subject. And avoid this like the plague.) Instead, let’s focus on the issue at hand: If we grant the superorganism scenario for the sake of argument, is it spooky? Is it bad news for humans if in some sense the “point” of the evolutionary process is something bigger than us, something that subsumes us?

I have to admit that I’m not totally loving the life of a cell. I’m as nostalgic as the next middle-aged guy for the time when focus was easier to come by, and I do sometimes feel, after a hard day of getting lots of tiny little things more-or-less done, that the superorganism I’m serving is tyrannical — as if I’m living that line in Orwell’s “1984”: “Can you not understand, Winston, that the individual is only a cell? The weariness of the cell is the vigor of the organism.”

But at least the superorganism that seems to be emerging, though in some ways demanding, isn’t the totalitarian monster that Orwell feared; it’s more diffuse, more decentralized, more reconcilable — in principle, at least — with liberty.

And that’s good news, because I do think we ultimately have to embrace a superorganism of some kind — not because it’s inevitable, but because the alternative is worse. If technological progress grinds to a halt, it will be because chaos has engulfed the world; and if we don’t use technology to weave people together and turn our species into a fairly unified body, chaos will probably engulf the world — because technology offers so much destructive power that a sharply divided human species can’t flourish.

If you accept that premise, then the questions are: What sort of human existence is implied by the ongoing construction of a social brain; and, within the constraints of that brain, how much room is there to choose our fate?

I have my own views on this, and some of them are upbeat, but they’re hard to summarize without sounding comically cosmic. (For example: keeping the superorganism project on track — that is, not letting planet Earth dissolve into chaos — will mean getting closer to moral truth, I think.)

As for Kevin Kelly’s view: I’ll let Kelly speak for himself as the timely publication of his fascinating book approaches. But it’s safe to say that he’s upbeat. He writes of technology “stitching together all the minds of the living, wrapping the planet in a vibrating cloak of electronic nerves” and asks, “How can this not stir that organ in us that is sensitive to something larger than ourselves?”

No doubt some of his critics will think of ways. But the question he’s asking strikes me as the right long-term question: Not so much how do we reconcile ourselves to technology, but how do we reconcile ourselves to — and help shape — the very big thing that technology seems devoted to building?
