From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue Jan 28 12:18:23 EST 1992
Article 3193 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <385@tdatirv.UUCP>
Date: 27 Jan 92 22:03:31 GMT
References: <11906@optima.cs.arizona.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 47

In article <11906@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
|In article  <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> Andrzej Pindor writes:
|]On some other occasion I've tried to coax people to spell more clearly what is
|]meant by 'semantical processing', but there were no takers.
|
|Semantical processing of a sentence about X involves thinking about X,
|not about the sentence.  Syntactical processing of a sentence about X
|involves only the sentence and not X.  However, if you didn't know
|that already without me saying it, then you almost certainly do not
|have the background to understand it (I know I didn't when I took my
|first course in pragmatics.)

O.K., so what does it mean to be 'thinking about X'?
And how does it differ from considering a set of prior 'sentences'
involving the term X?  (Taking the internal encoding of prior direct
experience with X as equivalent to sentences about X.)


[Prior experience *must* be encoded in some way, since the experience itself
is no longer available; much evidence suggests that all memories are
*reconstructions*, not direct recall, and prior experience can generally be
expressed as real sentences.]

|]... All this talk about self-awareness, feelings, pain etc. etc.
|]is a waste of time till we have _objective_ ways of detecting them.
|
|It is the pro-AIers who are causing this waste of time by claiming
|that the external appearance of internal experiences _is_ an objective
|way of detecting them.

It is the best we have available at present.  Until a better method is
developed, we are stuck with it.

Of course, as someone else suggested, it is likely that in the process of
creating a system which passes the current set of tests we will arrive at a
much better concept of what it means to 'understand' something.  Then we
will have a different, better test (at least if it can be applied to humans
as a way of calibrating it, and of making sure it captures the appropriate
scope of performance).

[I insist that the test be applicable to humans, because otherwise we are still
at the 'how can you tell that humans don't do it that way?' level.]

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
