From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!tdatirv!sarima Fri Jan 31 10:27:13 EST 1992
Article 3289 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <391@tdatirv.UUCP>
Date: 29 Jan 92 21:35:36 GMT
References: <385@tdatirv.UUCP> <es90eB2w164w@depsych.Gwinnett.COM>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 85

In article <es90eB2w164w@depsych.Gwinnett.COM> rc@depsych.Gwinnett.COM (Richard Carlson) writes:
|sarima@tdatirv.UUCP (Stanley Friesen) writes:
|
|> O.K., so what does it mean to be 'thinking about X'? 
|> And how does it differ from considering a set of prior 'sentences'
|> involving the term X?  (Including the internal encoding of prior direct
|> experience with X as being equivalent to sentences about X)
|
|It is most likely some melange of "text" (semantic elements),
|graphics (imagery) and behavioral intentions ("attitudes").

O.K., now given that these things must be encoded in some manner to be
instantiated in any processing system, be it brain or computer, how are
"text", and "graphics" and "attitudes" different than symbols?

Once encoded, they are reduced to *representations*; they are no longer
the things themselves.  And mental manipulation of these representations seems
to me little different from symbol manipulation, even in my own brain.

This is where I have problems with the Searle approach.  I can see no way
of instantiating things like these, in any hardware, that does not reduce to
symbolization.

As far as I can see we have come full circle, right back to symbol
manipulation.  We are just at a different *level* now.
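
To make that concrete, here is a tiny sketch.  (Python is just my choice
of notation here; the values are invented for illustration, and nobody is
claiming the brain works this way.)  Once "text", an image, and an
"attitude" are instantiated in a machine, each is a structure of discrete
symbols, and anything the machine does with them is symbol manipulation:

    # My own toy illustration, not a model of the brain.
    text     = "the cat sat on the mat"                 # semantic elements
    image    = [[0, 1, 1, 0],                           # imagery: a tiny bitmap,
                [1, 0, 0, 1]]                           # i.e. a grid of numbers
    attitude = ("approach", {"object": "cat", "strength": 0.8})  # a labelled tuple

    # At the level of the hardware, all three reduce to the same kind of thing:
    for name, rep in [("text", text), ("image", image), ("attitude", attitude)]:
        encoded = repr(rep).encode("ascii")             # a string of symbols
        print(name, "->", encoded[:30], "...")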

|> [Prior experience *must* be encoded in some way, since the experience itself
|> is no longer available, and much evidence suggests that all memories are
|> *reconstructions* not direct recall, and prior experience can generally be
|> expressed as real sentences].
|
|A lot of forensic research done on the use of hypnosis and other
|techniques to help eye witnesses recall details indeed suggests
|that memories are reconstructions (not complete recordings as
|Freud, among others, speculated) but the focus on verbal encoding
|into sentences hasn't emerged from the research.  (You can alter a
|person's recall with verbal suggestions.  Any competent hypnotist
|can convince a witness he did or did not see something.)

The last phrase simply refers to the fact that one can generally talk
about one's memories.  And to the fact that most people seem to think
in language (whether written or spoken).  Thus there does not seem to be
any functional difference between linguistic memory and non-linguistic
memory.  At most they simply come from different loci in the brain.

Also, neurobiology has, so far, only found one mechanism for memory, and
only one encoding scheme for internalized data in the brain.  Thus, again,
there seems to be only one memory system, and only one internal encoding.

Thus it seems silly to give special status to non-linguistic data traces,
and treat them as 'causal', 'non-symbolic', or 'semantic'.  I think this
kind of dichotomy is artificial.

|> It is the best we have available currently.  Until a better method is developed
|> we are stuck with it.
|> 
|> Of course, as someone else suggested it is likely that in the process of
|> creating a system which passes the current set of tests we will achieve a
|> much better concept of what it means to 'understand' something.  Then we
|> will have a different, better test (at least if it can be applied to humans
|> as a way of calibrating it, and making sure it captures the appropriate
|> scope of performance).
|
|I was one of the persons who suggested that.  And the Turing test
|does serve that purpose.  I now think it would be even more useful
|if we added the Houdini test and assumed until proved otherwise
|that if a computer appeared to be conscious and conversing with us
|we should look for the trick the programmer is using.

Perhaps.  I guess it is just a matter of how likely we think such a trick is
in actual practice.

So far all of the systems that used tricks have seemed just plain silly to me.
[Not the hypothetical ones, just the real ones.]  Eliza is a meaningless
example; it does not even come close to what I mean when I say something
appears to understand.  I detected the inadequacy of Eliza in mere minutes
using some quite trivial means.
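
For what it is worth, the core of an Eliza-style trick is only a few lines.
The fragment below is my own toy reconstruction (not Weizenbaum's actual
code) of the kind of keyword reflection involved; the trivial follow-up in
the last line is exactly the sort of probe that exposes it:

    import re

    # A crude Eliza-style responder: it never models the subject matter,
    # it only reflects keywords back at the user.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
        (re.compile(r"\bmy (\w+)", re.I),  "Tell me more about your {0}."),
    ]

    def respond(sentence):
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(*match.groups())
        return "Please go on."        # the default when no keyword fires

    print(respond("I am worried about my exams"))
    # -> Why do you say you are worried about my exams?
    print(respond("What did I just tell you?"))
    # -> Please go on.    (no memory, no model of the conversation)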

It is just that I look at the complexity of human discourse, especially
discourse intended to test for knowledge, and find it difficult to conceive
of an actually constructible cheater that could duplicate it.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



