Monday, February 08, 2010

ShrinkWrapped - Empathy and AI (five parts)

I think the development of AI (artificial intelligence) is great; I just don't think it will EVER be competitive with human consciousness in any way at all. The single greatest reason this is true is that a computer will never have a fully flesh-and-blood human body, and consciousness is not a mere by-product of brain function but, rather, a construct of body (including the brain), culture, and the subjective intersection of the two. So much of what we experience as subjectivity is a result of the senses and perceptions of the body.

Anyway, even though I hold that view, I find discussions about the possibility of AI to be interesting. And this is the first time I have seen the issue of empathy addressed in an AI context.

From ShrinkWrapped:

Empathy and AI: Part V

In Empathy and AI: Part I I discussed the possibility of coding for empathy in our imagined AI offspring.

In Empathy and AI: Part II I wondered about how to perform the mirroring function for an AI. Most AI researchers believe that the AI's mind will evolve rather than spring into being fully formed. As such, encoding for empathy becomes a significant issue.

In Empathy and AI: Part III I briefly described some of the factors involved in the development of a coherent sense of self and how such development depends upon an empathic connection to another's mind.

In Empathy and AI: Part IV I speculated on how the development of a self in an AI could go awry in ways analogous to disorders of the self in humans.

In this final post in the series I want to discuss the higher order functions that a healthy mind must contain and my concern that there may be a relative neglect of such higher order functions in the field of AI.

Fortuitously, J. Storrs Hall, who blogs on AI at the Foresight Institute, wrote today about the father of AI, Marvin Minsky, whose ideas have been somewhat neglected in the field but are germane to the discussion. Minsky was speculating about the organization of a mature AI mind:

The first AI blog

The first AI blog was written by a major, highly respected figure in the field. It consisted, as a blog should, of a series of short essays on various subjects relating to the central topic. It appeared in the mid-80s, just as the ARPAnet was transforming into the internet.

The only little thing I forgot to mention was that it didn’t actually appear in blog form, which of course hadn’t been invented. The WWW didn’t appear until the next decade. It appeared in book form, albeit a somewhat unusual one since it was, as mentioned, a series of short essays, one to a page. It was, of course, Marvin Minsky’s Society of Mind.

Of course, you’re reading a blog about AI right now. The difference is that that was Minsky, and this is merely me. If you haven’t read SoM, put down your computer and go read it now.

Good. You’re back. Here’s why SoM is relevant to our subject of whether and how soon AI is possible:

It remains a curious fact that the AI community has, for the most part, not pursued Society of Mind-like theories. It is likely that Minsky’s framework was simply ahead of its time, in the sense that in the 1980s and 1990s, there were few AI researchers who could comfortably conceive of the full scope of issues Minsky discussed—including learning, reasoning, language, perception, action, representation, and so forth. Instead the field has shattered into dozens of subfields populated by researchers with very different goals and who speak very different technical languages. But as the field matures, the population of AI researchers with broad perspectives will surely increase, and we hope that they will choose to revisit the Society of Mind theory with a fresh eye. (Push Singh — further quotes from the same source)

In other words, here’s a comprehensive theory of what an AI architecture ought to look like that is the summary of the lifework of one of the founders and leaders of the field, and yet no one has seriously tried to implement it. (When I say serious, I mean put as much effort into it as has gone into, say, Grand Theft Auto.) (There has been a serious effort to implement the theoretical approach of the CMU wing of classical AI, namely SOAR.)

Part of the reason for this is that SoM is in some sense only half a theory:

Minsky sees the mind as a vast diversity of cognitive processes each specialized to perform some type of function, such as expecting, predicting, repairing, remembering, revising, debugging, acting, comparing, generalizing, exemplifying, analogizing, simplifying, and many other such ‘ways of thinking’. There is nothing especially common or uniform about these functions; each agent can be based on a different type of process with its own distinct kinds of purposes, languages for describing things, ways of representing knowledge, methods for producing inferences, and so forth.

To get a handle on this diversity, Minsky adopts a language that is rather neutral about the internal composition of cognitive processes. He introduces the term ‘agent’ to describe any component of a cognitive process that is simple enough to understand, and the term ‘agency’ to describe societies of such agents that together perform functions more complex than any single agent could.

J. Storrs Hall points out that the field has, at most, pursued discrete elements (agents) of Minsky's fractionated model of the mind, and that Minsky avoided discussing the actual internal composition of such agents. Neuroscience has increasingly supported the notion that our minds, despite our subjective experience of their unitary character, are composed of many distinct modules, analogous to agents, that operate somewhat independently.

This fits nicely with the psychoanalytic theory of mind, i.e., that our minds contain multiple desires, wishes, prohibitions, and inhibitions which conflict with each other in myriad ways. In psychoanalysis we explicitly focus on increasing the capacity of our patients first to identify, and then to integrate and synthesize, the various conflicting strands within their minds. In other words, the competing outputs of the mind's various modules must be summed and integrated with one another in order to produce an outcome acceptable to the executive apparatus, the ego/self/mind. The most effective minds also have the highest capacity for synthesis, taking old conflicts or contents and reworking them to produce novel, more adaptive outcomes.
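To make the agent/agency picture, and the "summing" of competing modules described above, a little more concrete, here is a minimal Python sketch. It is purely illustrative: the Agent and Agency classes, the Proposal structure, and the weighted-vote arbitration rule are my own assumptions, not anything specified by Minsky, Singh, or Hall. Each agent proposes an action with some strength, and the agency integrates the competing proposals by summing support for each action and letting the strongest total win — a crude stand-in for the much richer synthesizing function discussed here.

```python
# A minimal, hypothetical sketch of Minsky-style "agents" and an "agency"
# that integrates their competing outputs. Names and the arbitration rule
# are illustrative assumptions, not part of Society of Mind itself.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Proposal:
    action: str      # what the agent wants the system to do
    strength: float  # how strongly the agent advocates for it


class Agent:
    """A small, specialized process: it looks at the situation and proposes an action."""

    def __init__(self, name: str, policy: Callable[[Dict], Proposal]):
        self.name = name
        self.policy = policy

    def propose(self, situation: Dict) -> Proposal:
        return self.policy(situation)


class Agency:
    """A society of agents whose competing proposals are integrated into one outcome.

    The integration step here is deliberately crude: proposals for the same
    action are summed, and the strongest total wins.
    """

    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def decide(self, situation: Dict) -> str:
        totals: Dict[str, float] = {}
        for agent in self.agents:
            p = agent.propose(situation)
            totals[p.action] = totals.get(p.action, 0.0) + p.strength
        # The "executive" simply picks the action with the largest summed support.
        return max(totals, key=totals.get)


if __name__ == "__main__":
    hunger = Agent("hunger", lambda s: Proposal("eat", s.get("hours_since_meal", 0) * 0.5))
    caution = Agent("caution", lambda s: Proposal("wait", 2.0 if s.get("food_unfamiliar") else 0.5))
    curiosity = Agent("curiosity", lambda s: Proposal("eat", 1.0 if s.get("food_unfamiliar") else 0.2))

    mind = Agency([hunger, caution, curiosity])
    print(mind.decide({"hours_since_meal": 6, "food_unfamiliar": True}))  # -> "eat"
```

In this toy run the hunger and curiosity agents jointly outvote the caution agent, so the agency settles on "eat"; a real synthesizing function would obviously have to do far more than add up numbers, which is exactly the gap the next paragraph worries about.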

The integrative and synthetic functions of the mind are among the most sophisticated, probably the most recent evolutionary advances, and the most sensitive to disruption. I suspect our computer scientists will have great success in developing the various agents that Minsky describes, and their energies will be devoted to making those agents as powerful as possible. The greatest, and last, test of our ability to determine the friendliness of our AIs will lie in how such agents are integrated, and in whether or not they are able to contain or develop a synthesizing function that balances the various agents in such a way as to enable the kind of conscience that would foster friendliness. J. Storrs Hall implies that current work on AI does not pay particular attention to integrating and synthesizing the various discrete agents being developed; this does not seem promising, since it neglects what may well be the most important determinants of the future friendliness of our offspring.
