
Monday, March 31, 2008

Douglas Hofstadter - I Am a Strange Loop


I've recently started reading Douglas Hofstadter's I Am a Strange Loop. This is the first of his books I've read since Gödel, Escher, Bach, back in college. Unfortunately, that book is merely a vague memory; fortunately, he wrote this new book to explain why everyone got the first one so wrong.

From Wikipedia (which DH doesn't much like):

Hofstadter had previously expressed disappointment with how Gödel, Escher, Bach was received. In the preface to the twentieth-anniversary edition, Hofstadter laments that his book has been misperceived as a hodge-podge of neat things with no central theme. He states: "GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"[1]

He sought to remedy this problem in I Am a Strange Loop, by focusing on and expounding upon the central message of Gödel, Escher, Bach. He seeks to demonstrate how the properties of self-referential systems, demonstrated most famously in Gödel's Incompleteness Theorem, can be used to describe the unique properties of minds.[2][3]


Scientific American reviewed the book back when it came out. Susan Blackmore offers a less sympathetic review. Here is a brief Q&A with Hofstadter from Wired last spring.

WIRED: How is your new book different from Gödel, which touched on physics, genetics, mathematics, and computer science?

HOFSTADTER: This time I’m only trying to figure out “What am I?”

Well, given the book’s title, you seem to have found out. But what is a strange loop?

One good prototype is the Escher drawing of two hands sketching each other. A more abstract one is the sentence I am lying. Such loops are, I think anyone would agree, strange. They seem paradoxical and even strike some people as dangerous. I argue that such a strange loop, paradoxical or not, is at the core of each human being. It is an abstract pattern that gives each of us an “I,” or, if you don’t mind the term, a soul.

Does this insight increase your understanding of yourself?

Of course. I believe that a soul is an abstract pattern, and we can therefore internalize in our brain the souls of other people.

You have a great line: “I am a mirage that perceives itself.” If our fundamental sense of what is real — our own existence — is merely a self-reinforcing mirage, does that call into question the reality of the universe itself?

I don’t think so. Even though subatomic particles engage in a deeply recursive process called renormalization, they don’t contain a self-model, and everything I talk about in this book — consciousness — derives from a self-model.

Strange Loop describes the soul as a self-model that is very weak in insects and stronger in mammals. What happens when machines have very large souls?

It’s a continuum, and a strange loop can arise in any substrate.

Thinking about different sizes of souls led you to vegetarianism. Would you hesitate to turn off the small soul of Stanley, the autonomous robot that found its way across the desert during the Darpa Grand Challenge?

Why not? Stanley doesn’t have a model of itself of any significance, let alone a persistent self-image built up over time. Unlike you and I, Stanley is no strange loop.

What if Stanley had as much self-awareness as a chicken?

Then I wouldn’t eat it, just as I wouldn’t eat a chicken.

In Loop, you shy away from speculating about the souls or the intelligence of computers, yet you’ve been working in AI for 30 years.

I avoid speculating about futuristic sci-fi AI scenarios, because I don’t think they respect the complexity of what we are thanks to evolution.

But isn’t your research all about trying to bring about such scenarios?

Thirty years ago, I didn’t distinguish between modeling the human mind and making smarter machines. After I realized this crucial difference, I focused exclusively on using computer models to try to understand the human mind. I no longer think of myself as an AI researcher but as a cognitive scientist.

One of the attractions of your writing is the wordplay, a fascination with the kind of recursions that appeal to programmers and nerds.

It is ironic because my whole life I have felt uncomfortable with the nerd culture that centers on computers. I always hope my writings will resonate with people who love literature, art, and music. But instead, a large fraction of my audience seems to be those who are fascinated by technology and who assume that I am, too.


DH is a strange man in some ways, but at least it's a good kind of strange -- he loves wordplay and puns, self-referential humor (which goes along with his idea that humans are abstract self-referential creatures), and strange analogies. I'm with him on futuristic AI claims (the singularity ain't coming in my lifetime or Ray Kurzweil's), but I'm not down with the whole vegetarian thing.

Speaking of the singularity nonsense, here is Hofstadter at their conference in April of 2007 (the talk was called Thinking Rationally About the Singularity).



Anyway, back to the book, where he presents a brief section on his [outdated] reasoning for having been a partial vegetarian (he did not eat mammals). It was based solely on levels and depth of consciousness. This is actually an argument I make for why I'll eat cow, but not pig (I prefer to call the meat what it is rather than saying pork and beef, terms that let us dissociate from what we are actually consuming).

In his model, humans are at the top and atoms are at the bottom. The inverted cone moves from little or no consciousness at the bottom, to some (but not much) consciousness in the middle, to lots of consciousness at the top. Here is the hierarchy, from the top down:

normal adult humans
mentally retarded, brain-damaged, and senile humans
dogs
bunnies
chickens
goldfish
bees
mosquitoes
mites
microbes
viruses
atoms

If it were my chart, cows would be between chickens and bunnies, although having been around all three, I'd put chickens higher than both cows and bunnies. And I'd place pigs, ravens, and elephants higher than dogs.

Essentially, what he is trying to do here is create a hierarchical model of interiority, the degree to which creatures are self-aware. From that, he allows himself to make judgments about the right he has to take such a life (no problem with mosquitoes, but it isn't cool to kill vertebrates).

In the scope of the book, this is an aside, but it is still an important idea. What he is really concerned with in this volume is the meaning and qualities of the soul. This is from the SciAm review:

Think of your eyes as that video camera, but with a significant upgrade: a mechanism, the brain, that not only registers images but abstracts them, arranging and constantly rearranging the data into mental structures--symbols, Hofstadter calls them--that stand as proxies for the exterior world. Along with your models of things and places are symbols for each of your friends, family members and colleagues, some so rich that the people almost live in your head.

Among this library of simulations there is naturally one of yourself, and that is where the strangeness begins. "You make decisions, take actions, affect the world, receive feedback from the world, incorporate it into yourself, then the updated 'you' makes more decisions, and so forth, round and round," Hofstadter writes. What blossoms from the Gödelian vortex--this symbol system with the power to represent itself--is the "anatomically invisible, terribly murky thing called I." A self, or, to use the name he favors, a soul.


He even proposes a unit of measurement for soul size -- one of his central concerns -- which he equates with the degree of interiority an organism possesses:

Souls, as Hofstadter puts it, come in "different sizes." In a whimsical moment, he even suggests that soulness might be measured--in units called "hunekers," after an American music critic, James Huneker, who once wrote of a certain Chopin étude that "small-souled men" should not attempt it. The scale might start with a mosquito, with a tiny fraction of a huneker, ascending to 100 for an average human and upward to maybe 200 for Mahatma Gandhi.

He is not concerned with neurons, neurotransmitters, synapses, columns, or even structures in the brain (at least not in the way neuroscientists are). What he is concerned with is concepts and analogies. What does the concept of love look like in the brain? What does hope look like? What does an I, a soul, look like?

This is from American Scientist:

Electrical signals and neurochemicals, or the porridge-like matter inside the skull, seem distinctly unpromising as origins of mind or meaning. Indeed, Hofstadter scorns John Searle's suggestion that neuroprotein constitutes "the right stuff" for intentionality and consciousness, whereas silicon or old beer cans obviously do not. What's important is not the stuff in itself, but the looping patterns of activity that emerge from it—whatever its chemistry happens to be. So whereas many philosophers despair of there being any scientific, naturalistic explanation of meaning, Hofstadter does not. But he doesn't accept the currently popular neuroscientific reductionism either. In his view, neuroscience can never capture the essence of mind. Indeed, the neuroscientific details are in an important sense irrelevant—even though they are, at base, what makes mind possible.

If a brain were all that one needed, then a newborn baby would greet the world with a mind ready-formed, albeit nearly empty. Indeed, many people assume that each human individual is equipped with a special inner essence at birth, perhaps even from the moment of conception. On the contrary, says Hofstadter, the mind-pattern develops gradually. Newborn babies are human beings only genetically, biologically or potentially. They don't yet have human minds, still less reflective human selves. Such patterns take many years to emerge.

The self, in short, is a lifelong construction. Up to a point, it's amenable to deliberate (reflexive) self-molding. It's a unifying pattern that enables its subpatterns—our desires, beliefs, plans and actions—to cohere and to advance toward freely (that is, personally) chosen ends. Hofstadter stresses the reality, and even the necessity, of the self. Far from being an arbitrary pattern, it emerges naturally from our neural activity, much as the video image on the screen emerges from the physics of the self-looping video camera. And it's a pattern without which the person concerned simply couldn't exist, because for that self to exist at all just is for that pattern to be instantiated....

Most of what I have covered here is in the first 30 pages or so of the book, so this promises to be a wild ride. The first half of the book largely serves its role of clarifying Gödel, Escher, Bach, and does so quite well (especially for those of us who haven't read it in ages). The second half details the painful and debilitating sudden death of his wife in 1993 -- and its aftermath in his life.

I'm really enjoying this book.


1 comment:

  1. Bill--
    Thank you for reviewing Hofstadter's book. I met Hofstadter and his dad Robert, a Nobel Prize-winning physicist, back in the early '80s at a lecture Hofstadter delivered at my university. I had read "GEB" a few years before and was thrilled to see its author in person along with his dad, the first Nobel Prize winner I had ever seen in person. I agree with you that Douglas Hofstadter is kind of a strange guy but in a good way. He's brilliant and thought-provoking, and he's written some great books and articles. I'm going to order "Strange Loop" right now.
