
This is an unfinished paper, but is included in this collection because it provides ideas which have yet to be published fully. It is based on an extended abstract submitted to the European Computers and Philosophy Conference, ECAP 2007, but not selected. It suggests that Dooyeweerd's radical non-Cartesian notion of subject-object relationship can provide new ways to address the artificial intelligence question of whether computers can be like human beings, as epitomized by John Searle's thought experiment of the Chinese Room. Debate has polarized into two camps, but Dooyeweerd can provide fresh insight, because he gives priority to meaning rather than to being or causality. A short version of this can be found in chapter V of the author's book, 'Philosophical Frameworks for Understanding Information Systems'.

Fresh light thrown into the Chinese room

Andrew Basden

Searle's [1981] Chinese Room thought experiment was designed to show the impossibility, or at least the meaninglessness, of the claims of strong artificial intelligence. A person is inside a room and, through a hole in the wall, receives and sends pieces of paper bearing squiggles that happen to be Chinese characters. The squiggles the person sends depend on the ones just received and on all those previously received, in accordance with a book of rules that tells the person what squiggles to respond with in every possible situation. The person is supposed to abide strictly by the rules and never to employ common sense or any other knowledge they might have from previous experience. The Chinese Room is a metaphor for a computer, with its processor (the person), memory (the pieces of paper already received) and program (the rule book). Searle asks: where is the understanding of Chinese? He takes the question to be rhetorical, in that it seems obvious that the person has no genuine understanding of Chinese. Hence, by analogy, a computer running a symbol-manipulation program can never understand the knowledge-level content of that program.
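
A minimal code sketch may make the analogy concrete. It is illustrative only: the table RULES and the sample characters are invented for the example, and a real rule book would be vastly larger. What it shows is that the mechanism consults only the shapes of the symbols, never their meaning.

    # A toy 'Chinese Room': pure symbol manipulation with no semantics.
    # The rule book is a lookup table from input strings to output strings;
    # the entries are invented placeholders, not a real conversation model.
    RULES = {
        "你好吗": "我很好",  # the 'processor' never knows that these mean
        "你是谁": "我是人",  # 'How are you?' / 'I am fine', and so on
    }

    memory = []  # the pieces of paper already received

    def room(squiggles: str) -> str:
        """Respond to squiggles strictly by the rule book; no step here
        consults the meaning of the characters, and the function would
        work identically on any other set of symbols."""
        memory.append(squiggles)
        return RULES.get(squiggles, "?")  # '?' where the book is silent

    print(room("你好吗"))  # behaves like a Chinese speaker, yet understands nothing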

Various attempts have been made to find some way in which there is genuine understanding of Chinese in the room, but Searle [1990] has answered them all.

  • What is called the systems reply is that "while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese." Searle's argument against this is that even if the individual memorizes the whole rule book and everything else, he still does not understand Chinese.

  • What is called the robot reply suggests that we merely need to attach a TV camera for eyes and motorized appendages for arms and legs, on the grounds that genuine understanding involves physical activity such as eating and hammering in nails. Searle counters this by saying that if some of the Chinese characters came from the camera and some went to control the limb motors, unbeknown to the person, this would still not enable him to understand Chinese.

  • The brain simulator reply is that if the rule book (program) actually simulates the (biotic) activity of the nerve cells of a Chinese speaker, then the system can understand Chinese. Apart from noting that this reply is against the spirit of AI and simulates mental activity at the wrong level, Searle counters it in a similar way to the first: even if the inhabitant memorizes all the rules of nerve-cell behaviour, he still cannot be said to understand Chinese.

  • The combination reply combines all the above: a robot that simulates brain activity. Searle counters by saying that the inhabitant of the room still only knows how to perform mechanical operations on meaningless symbols.

  • The other minds reply is that we cannot know whether people truly understand Chinese except by their behaviour, which is also how we would judge the machine; so if the behaviour of the latter is like that of people, then we attribute understanding to it. Searle counters this with: "The problem ... is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I [do so]. ... In cognitive sciences one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical systems."

  • The many mansions reply is that eventually we will be able to build types of computing devices that have the causal processes which Searle believes humans have but current computer technology lacks. Searle counters this by agreeing with it, but then saying that it avoids the issue by redefining it so widely as to be virtually irrelevant to the claims of strong AI.

Searle maintains that what he calls biological causality is necessary for intentionality and that, since computers possess a different type of causality, they can never be said to possess intentionality. In short, Searle maintains that a computer has syntax but no semantics.

Boden [1990] then tries to provide two more replies to Searle. First, she suggests that Searle makes a category mistake: in a person, it is not the brain that has intentionality but the person. Therefore Searle's argument that the Searle-in-the-room character cannot possess intentionality is irrelevant, because the character is likened to a brain rather than to a person. I have some sympathy with Boden's argument here. But her second argument is weaker. It has two parts. First, she argues that following the rule book does involve understanding - an understanding of the English in which the rules are written. But she seems to miss the obvious rejoinder that this still does not enable the person to understand Chinese in the way a human being would. Second, she argues against Searle's statement that a computer has syntax but no semantics, on the grounds that for the internal rule-following program to run there must be some procedural element, and that this constitutes semantics - the semantics of the syntax of the program that follows the rules. This is in line with a common usage of the word 'semantics' in programming communities: the action that is undertaken in response to the symbols of the program. But, as with the first part, she seems to miss Searle's point. What Searle meant by semantics was the semantics of the Chinese characters, not the procedural element of the rule-following program.

A rather obvious answer to Searle's original question, "Where is the understanding of Chinese?", is "In the rule book." That is, what makes a native Chinese speaker respond to Chinese utterances in a meaningful way might be said to be the 'rule book' inside them. Strange to say, this answer does not seem ever to have been discussed. (If it has, please let me know where, and by whom.) This paper examines the idea that it is the rule book that has the understanding of Chinese (we assume that this is not a category error). We discuss why this answer has not been proposed, and suggest that this is because the mainstream philosophical stances on offer would not allow it even to be posed. The paper then discusses a philosophical stance from which this answer can be meaningful and not a category error.

There might be several reasons why this answer has not been proposed, but a likely one is that the presuppositions made by the community that includes both Searle and his critics do not allow the question to be answered in such a way. It seems to be presupposed that a valid answer must mention the activity of some entity, and that that entity must act as an understanding subject. First, the rule book, being static, cannot act in any way. Second, the argument between Searle and his critics is over what or who might be an understanding subject in the Cartesian sense of that term. The book, being in Cartesian terms an object, cannot act as understanding subject. For either reason or both, it becomes difficult to propose that the understanding of Chinese lies in the rule book.

But this answer has some merit. The rule book (program of the computer) has been written by somebody, somebody who understands Chinese. The rules in the book are an expression of their understanding. Whatever mechanism is in the room to act on the rules - be it a human being or some machine - does not matter. In either case, the understanding of Chinese within the system resides in the rule book.

This is not the same as the systems reply, which postulates that the understanding comes about by virtue of a conjunction of parts ("the system is more than the sum of its parts"). It is common in the systems community to treat intentionality as an emergent property of such conjunctions. Rather, the understanding of Chinese does not 'emerge' but is designed into (part of) the system by somebody external to it.
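
Continuing the earlier sketch (with the same invented names), the point can be put in programming terms: the interpreter that executes the rules is interchangeable and blind, while the rule table is data authored by someone who does understand Chinese. Whatever competence in Chinese the system exhibits was designed in at authoring time rather than emerging at run time.

    # The executing mechanism is interchangeable: a person, a CPU, or this
    # trivial interpreter all perform the same blind lookup.
    def interpreter(rules: dict, squiggles: str) -> str:
        return rules.get(squiggles, "?")

    # Authored by a Chinese speaker (placeholder entries, as before): the
    # author's understanding is expressed here, as data, not in the mechanism.
    AUTHORED_RULES = {"你好吗": "我很好"}

    print(interpreter(AUTHORED_RULES, "你好吗"))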

There are not many philosophers who can provide a foundation for this view, whether of the modern, mediaeval or Greek period, and the paper will discuss some briefly. But one philosophy that can provide a foundation is that of the late Herman Dooyeweerd [1955]. He questioned the presuppositions that underlie 2,500 years of Western thinking and started from different ones.

The paper explains Dooyeweerd's view and how it makes it meaningful for us to say that the rule book 'knows' Chinese. The argument rests on the very different, non-Cartesian notion of subject and object proposed by Dooyeweerd. The book 'knows' Chinese by virtue of having been authored by a human being who knows Chinese. The rule book's 'knowing' of Chinese cannot stand alone but must always be understood in the context of its author. This 'knowing' differs in fundamental ways from human knowing, not in terms of mechanism but in terms of its being object-functioning in the aspect of knowing (the lingual aspect), in contradistinction to the subject-functioning in this aspect by its author.

But it happens that what the author wrote in the rule book is not an expression of their full knowledge of Chinese. It is limited to their knowledge of the syntax of Chinese speech, so the book only 'knows' Chinese syntax and not Chinese as a whole. Moreover, the knowledge is written in another language. The person in the room who obeys the rules is also subject-functioning in the lingual aspect, but only in respect of the language in which the rules are written, not in respect of Chinese.

Fundamental to the cosmos, in Dooyeweerd's view, is not its entities but the law-side. This is constituted of a number of modal aspects that are also spheres of law - such as the physical, biotic, logical, lingual, aesthetic and juridical aspects. These aspectual laws constitute a framework that makes all that we do possible. An entity (such as a person) responds to aspectual laws (without necessarily having explicit knowledge of them) as subject to those laws. In this approach, Dooyeweerd brings together the two English-language meanings of 'subject': being subject to law, and subject as actor or initiator. Humans are subject to the laws of all aspects (Dooyeweerd delineated fifteen aspects, though he acknowledged there might be others), whereas animals, plants and lifeless things are subject only to an increasingly restricted range of aspects. In particular, a lifeless thing such as our rule book is subject only to the laws of the physical, kinematic, spatial and quantitative aspects.

However, the rule book may function as object in any aspect, as part of human subject-functioning. In this case, it functions as object within the lingual aspect of symbolic signification. In Dooyeweerd's scheme, such object-functioning is as much functioning as our subject-functioning, since the object-functioning is not something that is 'done to' the object, but rather a response that the object makes to the subject functioning of the human. Not only do I see the screen in front of me (subject-functioning in psychic aspect), but, in a very real way, the screen allows itself to be seen (object-functioning in the psychic aspect).

This enables us to say meaningfully that the book 'knows' Chinese. It is not subject-functioning-to-know but object-functioning-to-know. But object-functioning cannot stand alone, and can only be understood as part of some other entity's subject-functioning. This means that the rule book only 'knows' Chinese by virtue of its author's subject-functioning in knowing Chinese.

The Dooyeweerdian view is at once stronger and weaker than Searle's. If we talk in Dooyeweerdian terms about subject- and object-functioning, then strong AI claims that the computer can function as subject in all the aspects a human can, and specifically in the physical to lingual aspects. Searle disagrees: he holds that whereas the computer can function as subject in the physical to formative aspects (the latter referring to having syntax), it does not function as subject in the lingual aspect (semantics). However, the Dooyeweerdian approach holds that the only aspect in which the book can function as subject is the physical, as shown in the following table:

Aspect                   Dooyeweerd   Searle   Strong AI
Lingual (semantics)      No           No       Yes
Formative (structure)    No           Yes      Yes
Analytic (basic data)    No           Yes      Yes
Psychic (bits, cells)    No           Yes      Yes
Physical (materials)     Yes          Yes      Yes

However, if we are to consider functioning of any kind, whether subject or object, we obtain a different picture as seen in the following table:

Aspect                   Dooyeweerd   Searle   Strong AI
Lingual (semantics)      Yes          No       Yes
Formative (structure)    Yes          Yes      Yes
Analytic (basic data)    Yes          Yes      Yes
Psychic (bits, cells)    Yes          Yes      Yes
Physical (materials)     Yes          Yes      Yes

Dooyeweerd's notion of two types of functioning is interesting. Both Searle and strong AI presuppose only one type of functioning: functioning as subject, that is, functioning in and of itself.

In terms of subject-functioning in the various aspects, Dooyeweerd allows the rule book to function as subject only in the physical aspect whereas ==== to be completed.

References

Boden MA (1990) "Escaping from the Chinese Room", pp. 89-104 in Boden MA (ed.) The Philosophy of Artificial Intelligence, Oxford University Press.


Created: 2007 by Andrew Basden.

Last updated: 12 December 2007