From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Tue Mar 24 09:57:04 EST 1992
Article 4577 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: What comes after the Systems Reply?
Message-ID: <1992Mar18.221543.6924@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Mar16.171520.15584@psych.toronto.edu> <1992Mar17.020503.9967@bronze.ucs.indiana.edu> <1992Mar18.035719.3394@psych.toronto.edu> <6428@skye.ed.ac.uk>
Date: Wed, 18 Mar 1992 22:15:43 GMT

In article <6428@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
.....
>but on the whole the pro-AI argument seems to be stuck at the Systems
>Reply and the Turing Test.  I don't think there's much to be said
>about the Systems Reply beyond "Searle is just the CPU, so how would
>he know?", and the Turing Test is just a way to avoid facing the
>question of what mechanisms are required (because -- they say --
>anything that generates the right behavior will do).
>
I can't speak for others, but I am certainly not the only one who falls back
on the TT not because it 'is just a way to avoid facing the question of what
mechanisms are required (because -- they say -- anything that generates
the right behavior will do)', but because it is the way in which we judge
understanding in other people, and because there is nothing else available. If
you could provide another criterion, I'd be only too happy to try it, and I am
sure many others would too. You yourself also avoid the question of what other
mechanisms are required: you have never said what mechanisms a machine should
have, besides giving correct responses, before you would accept that it has
understanding.

>
>-- jeff


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
