From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Mon May 25 14:06:51 EDT 1992
Article 5811 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: The Systems Reply I
Message-ID: <1992May21.165249.2895@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992May18.120933.1683@oracorp.com> <6728@skye.ed.ac.uk> <1992May20.223911.20396@mp.cs.niu.edu>
Date: Thu, 21 May 1992 16:52:49 GMT

In article <1992May20.223911.20396@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <6728@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <1992May18.120933.1683@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>
>>>                   completely closed yourself off to any
>>>progress in the area of the philosophy of AI. All of the promising
>>>areas for investigation you have declared are uninteresting. You don't
>>>want to know how humans understand, you only want to know whether
>>>humans understand. You don't want to discuss the "other minds problem"
>>>when it relates to humans, but you insist on the "other minds problem"
>>>when it relates to computers. You refuse to consider "skeptical
>>>possibilities" when they relate to humans, and insist on them when
>>>they relate to computers. You refuse to make definitions, but it is
>>>only through rigorous definitions that science (and understanding) can
>>>make any progress.
>>
>>This is almost completely wrong, and insulting besides.
>
>  Funny you should say that.  I thought Daryl had it almost completely
>right!  If he is so completely wrong, you must be somehow misstating your
>position in such a way that we reach this type of interpretation.
>
Funny, but this is exactly my feeling too!

>>For instance, I am interested in how humans understand.  I just don't
>>think we have to know this before we can reach any conclusions about
>>computers.
>
>  One way of finding out how humans understand is to try to create the
>equivalent ability in a computer.  You don't have to succeed in order
>to learn.  Indeed the manner in which the attempt fails can be quite
>revealing.  In my opinion failed attempts at creating AI have already
>contributed considerably to our understanding of the nature of mind.
>
>  When you insist on coming to premature conclusions about computers, you
>effectively shut out this method of investigation.
>
Couldn't agree more.


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca