Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum
From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <1992Mar18.072634.9259@ccu.umanitoba.ca>
Date: 18 Mar 92 07:26:34 GMT
References: <1992Mar11.201637.21875@psych.toronto.edu> <1992Mar12.001918.2564@ccu.umanitoba.ca> <6423@skye.ed.ac.uk>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Lines: 77

You keep dancing around, but I would really like a straight
answer out of you! Please quit suggesting that you know
something that I don't, and spill the beans already!

In article <6423@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Mar12.001918.2564@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>>In relation to the above, what would the anti-AI crowd require
>>for a symbol to have a reference (semantics)? If you do not
>>know what it means to have semantics, then how is it possible
>>for you to argue that something does not have it?
>
>You might start by showing how "cats" lines up with cats and
>not with cherries.  For which see the Putnam discussion now
>lost in the noise and perhaps abandoned.
>
>>If entity X acts as if it has semantics, does it? Or does it require
>>something else? (WHAT)
>
>It certainly requires something more than behavior.  To anyone
>who thinks that behavior is all that is required, I don't think
>there's anything more to say.
>
Perhaps we can agree on this point, but I really need to know:
what is that *thing* that is required for understanding?

*** What else does it require? *** (Understanding? Ha!)
>>In what way do people attach meaning to symbols? For instance,
>>how do you learn what a word means?
>
>We know humans can do it.  And there are arguments that machines
>can't (just by running the right program).  The correctness of
>those arguments does not depend on knowing how humans do it.
>
All of those arguments boil down to this: "machines can't because
they are not people!" What if I showed you that people are
machines?

>>Take the word 'BLUE' (the color): what does it mean to say that you
>>know what 'blue' means? Does this mean that you have something in
>>your brain that is blue,
>
>Are you serious?
>
Dead serious! Now please answer the question: how does one
understand blue?

I accept that humans have understanding, but my dispute
with you is over just what constitutes this *understanding*.
Please, no rhetoric; I just want a straight answer.
>>In other words, please explain to me in concrete terms what it is
>>that a machine is missing (but that people have) that enables people
>>to have knowledge of meaning!
>
>That is simply not necessary.
>
I simply do not understand your reasoning! You are claiming that
a machine cannot have understanding, yet you do not know what
constitutes understanding!
Conclusion: you do not know what it is that the machine cannot have!

example:
	I have X!
	You cannot have X!
	I do not know what X is!
	- draw your own conclusion!
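
To pin that schema down, here is the same argument in LaTeX-style
predicate-logic notation. This is my own rendering, nothing Jeff
has agreed to; U ("has understanding") and M ("is a machine") are
just my labels for the disputed predicates:

	P1. U(\text{me})                       % "I have X!"
	P2. \forall x\,(M(x) \to \neg U(x))    % "You cannot have X!"
	P3. (admitted) the satisfaction conditions of U are unstated.

P2 is a universal claim about the extension of U, while P3
concedes that the extension of U was never fixed. Whoever asserts
P2 while granting P3 is making a claim about a predicate he
cannot evaluate.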

I do believe, however, that you have proved that a machine
cannot have *your* personal understanding! But then again,
neither could any other person!

Please! It is necessary to know what understanding is
to argue about it!
-- 
*****************************************************************
*   AZ    -- zirdum@ccu.umanitoba.ca                            *
*     " The first hundred years are the hardest! " - W. Mizner  *
*****************************************************************


