From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:30 EST 1992
Article 4525 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6423@skye.ed.ac.uk>
Date: 18 Mar 92 00:32:11 GMT
References: <6374@skye.ed.ac.uk> <1992Mar11.201637.21875@psych.toronto.edu> <1992Mar12.001918.2564@ccu.umanitoba.ca>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 47

In article <1992Mar12.001918.2564@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>In relation to the above, what would the ANTI AI crowd require
>so that a symbol has a reference (semantics)? If you do not
>know what it means to have semantics, then how is it possible
>for you to argue that something does not have it?

You might start by showing how "cats" lines up with cats and
not with cherries.  For which see the Putnam discussion now
lost in the noise and perhaps abandoned.

>If entity X acts as if it has semantics, does it? Or does it require
>something else? (WHAT)

It certainly requires something more than behavior.  To anyone
who thinks that behavior is all that is required, I don't think
there's anything more to say.

>In what way do people attach meaning to symbols; for instance, how
>do you learn what a word means?

We know humans can do it.  And there are arguments that machines
can't (just by running the right program).  The correctness of
those arguments does not depend on knowing how humans do it.

>Take the word 'BLUE' (color): what does it mean to say that you know
>what 'blue' means? Does this mean that you have something in your
>brain that is blue,

Are you serious?

>In other words, please explain to me in concrete terms what it is
>that a machine is missing (but that people have) that enables them
>to have knowledge of meaning!

That is simply not necessary.

>(Do not say that humans have understanding, that is merely to beg
>the question, 'a computer doesn't have understanding because
>it doesn't understand')

What?  To say that humans have understanding isn't even addressing
the question of whether computers can have it too, much less begging
the question.

BTW, I have no interest whatsoever in proving that humans can
understand.  If you're not willing to accept that they do, then
there's no point in going further.
