From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum Thu Feb 20 15:22:10 EST 1992
Article 3871 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum
From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: Re: Evidence that would falsify strong AI.
Message-ID: <1992Feb19.155900.5064@ccu.umanitoba.ca>
Date: 19 Feb 92 15:59:00 GMT
References: <6185@skye.ed.ac.uk> <1992Feb14.221018.22990@gpu.utcs.utoronto.ca> <6206@skye.ed.ac.uk>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Lines: 42

In article <6206@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Feb14.221018.22990@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>In article <6185@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>Moreover, since a lot of people have little trouble in rejecting
>>>Searle's conclusion, without resolving exactly what "understand"
>>>means, I find it hard to see why it's such a problem.
>>>
>>May be some of them reject Searle's conclusion because  they feel that the
>>whole argument is using notions which are not sufficiently well defined.
>
>And is Searle exploiting this in some way?  If there's some
>equivocation in Searle's argument, why not point it out?
>
This is Searle's argument --> There are pink elephants floating around
the room right now! But you cannot see them, and there is no experiment
that can tell whether they are there or not!!! Sure, you can construct a
room that is functionally equivalent but lacks the pink elephants, but that
room would be missing the key ingredient that goes into making rooms!

This is Searle's argument on drugs --> (read the above, but replace pink
elephants with consciousness, and replace room with brain!)

If anyone would like to reply - please include in your reply some way for
me (in particular) to tell whether there is a brain simulation going
on without consciousness! I would really like to know if I am conscious
like the rest of you. (Oh yes: since you are not aware of any physical
similarity between you and me, please do not use the 'argument from
similarity.' I may look like you, or I may not!)

>Or do these people think the notions are so ill-defined that we
>can't conclude anything one way or the other?
>
>Or do they think only behavioral definitions can be sufficiently
>well-defined?
>
>There are many possibilities.  People who want definitions should
>say why.
>
>-- jd


AZ - "All opinions expressed are my brain's" --