From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert Tue Mar 24 09:57:10 EST 1992
Article 4586 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese room miscellanea
Message-ID: <1992Mar18.233403.15340@mp.cs.niu.edu>
Date: 18 Mar 92 23:34:03 GMT
References: <6417@skye.ed.ac.uk> <1992Mar17.235343.26537@mp.cs.niu.edu> <6434@skye.ed.ac.uk>
Organization: Northern Illinois University
Lines: 42

In article <6434@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>I think people on the AI side do their cause a disservice by
>linking it to something they're not in a position to show,
>namely that the system understands.

  Then I'm afraid your interpretation of the systems reply is very
different from mine.  Here is my interpretation:

	Searle:

		(1)  For the sake of attempting to show a contradiction,
		     assume that strong AI is possible.

		(2)  Lots of discussion, ending with: "The human doesn't
		     understand, thus there is a contradiction, so
		     strong AI is not possible."

	Systems reply:

		(a)  Understanding exists.  This is part of Searle's
		     hypothesis (1).

		(b)  The human should not be expected to understand.  Given
		     that there is understanding as hypothesized by Searle,
		     this understanding is in the system, not in the
		     individual.  Thus no contradiction has been
		     demonstrated, and Searle's proof fails.

It is you who are linking the systems reply to a requirement that understanding
by the system be demonstrated.  No such demonstration is necessary, because
the understanding is part of the CR hypothesis.  When the pro-AI group
asserts that the system understands, they are not claiming anything
that needs proof.  They are essentially saying: "Well of course the human
doesn't understand - we never said he would.  Of course the understanding
hypothesized in (1) is in the system, not the human."

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940
