Friday, July 03, 2009

The Three Minds Argument by Jamie Cullen

I'm not a fan of AI, and it seems John Searle isn't either. But this paper aims to refute Searle's best-known objection to artificial intelligence.

The Three Minds Argument

Jamie Cullen

Artificial Intelligence Laboratory

The University of New South Wales

jsc@cse.unsw.edu.au

Journal of Evolution and Technology – Vol. 20, Issue 1 – June 2009 – pp. 51-60

http://jetpress.org/v20/cullen.htm

Abstract

Searle (1980, 2002, 2006) has long maintained that non-biologically based machines (including robots and digital computers), no matter how intelligently they may appear to behave, cannot achieve “intentionality” or “consciousness,” cannot have a “mind,” and so forth. Standard replies to Searle’s argument, as commonly cited by researchers in Artificial Intelligence and related communities, are sometimes considered unsatisfactory by readers outside of such fields. One possible reason for this is that the Chinese Room Argument makes a strong appeal to some people’s intuitions regarding “understanding” and the necessary conditions for consciousness. Rather than contradict any such intuitions or conditions, I present what in my view is an independent and largely compatible intuition: if Searle’s argument is sound, then surely a human placed under testing conditions similar to those of a non-biological machine should succeed where the machine would allegedly fail. The outcome is a new rebuttal to the Chinese Room that is ideologically independent of one’s views on the necessary and sufficient conditions for having a “mind.”


1 Introduction

Searle’s Chinese Room Argument (CRA) claims to examine and reject the assertion that:

[The] appropriately programmed computer really is a mind, in the sense that computers given the right programs can literally be said to understand and have other cognitive states. (Searle 1980)

The CRA has been redescribed with many variations over the years, and it is perhaps the most frequently cited argument against the possibility of “Artificial General Intelligence” (AGI) and related notions.1 While many in the Artificial Intelligence (AI) community may readily dismiss Searle’s claims, perhaps citing well-known replies such as the Robot Reply or the Systems Reply (described later), Searle has provided counterarguments to many of the better-known replies (for example, in his original 1980 paper). Regardless of whether one accepts those counterarguments, after more than twenty-five years of intense debate the CRA apparently still refuses to die.
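For readers who have not met the thought experiment, it may help to see how little machinery the "room" actually requires. The following is a minimal sketch, not anything from Cullen's or Searle's texts, and its rulebook entries are invented placeholders; it illustrates the kind of purely syntactic symbol lookup that the person in the room (or the computer) performs without understanding any of the symbols:

```python
# A toy "rulebook" pairing input symbols with output symbols.
# These entries are invented placeholders, not from the paper.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Map input symbols to output symbols by rule lookup alone.

    Nothing here understands Chinese: the function matches
    uninterpreted strings, which is exactly the purely syntactic
    manipulation the thought experiment targets.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # emits the rule-dictated reply
```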

In my opinion, a significant amount of this continued debate stems from two interrelated factors: (a) some opponents of the CRA are possibly unaware of the key belief underlying Searle’s argument; and (b) a strong appeal is frequently made to a “commonsense” intuition that sometimes misleads people into incorrectly accepting the CRA.

The dual purpose of this paper is to (a) draw explicit attention to the aforementioned underlying belief, and (b) provide an alternative appeal to a commonsense intuition that makes the sleight of hand underlying the Chinese Room more readily apparent, whilst neither affirming nor contradicting that belief. I hope that the argument presented here is found to be intuitive by people both inside and outside of the Artificial Intelligence research community.

I will conclude the paper by discussing the implications of the presented argument, and by re-examining the roles of the various possible participants in the CRA. I will then draw some conclusions regarding the structure of the CRA, and the relevance (or lack thereof) of “intentionality” and related philosophical topics that are commonly raised in connection with the argument.
