Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!swrinde!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0GI5y.ALo@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <CzFr3J.990@cogsci.ed.ac.uk> <3bu54g$t1i@agate.berkeley.edu> <HPM.94Dec5014556@cart.frc.ri.cmu.edu> <3c0vo1$si4@news1.shell>
Date: Wed, 7 Dec 1994 19:58:46 GMT
Lines: 60

In article <3c0vo1$si4@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
........
>Is it "conscious again" each time you use the HLT?  I don't know if this
>is a meaningful question.  The simulated consciousness has already
>experienced everything we are doing, in full detail.  There is no way for
>it to "tell" that it is being run or not; in a timeless sense it has
>lived, is living, will live.
>
I am not sure how you have reached these conclusions. Responses of the
"simulated consciousness" (I take it you mean one based on the HLT?) have to
depend on its past history, so in what sense has it "experienced everything
we are doing"? And what do you mean by 'There is no way for it to "tell" that
it is being run or not'? Ask it!
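
The point that an HLT's responses must be keyed on the whole conversation so
far, not just the latest input, can be made concrete with a toy sketch (the
table entries below are invented purely for illustration; an actual HLT would
need an entry for every possible finite conversation):

```python
# Toy Humongous Lookup Table: each key is the ENTIRE conversation
# history, so the reply depends on past history, as argued above.
# Entries are invented for illustration only.
hlt = {
    ("Hello",): "Hi there.",
    ("Hello", "Hi there.", "What did I just say?"): "You said 'Hello'.",
}

def respond(history):
    """Look up the reply for the full history-so-far."""
    return hlt.get(tuple(history), "<no entry>")

history = ["Hello"]
reply = respond(history)                  # "Hi there."
history += [reply, "What did I just say?"]
print(respond(history))                   # prints "You said 'Hello'."
```

Note that the same last utterance ("What did I just say?") would map to a
different reply under a different prefix, which is exactly why the table must
be indexed by history rather than by single inputs.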

(Hans Moravec)
>>If one draws no arbitrary limits on the interpretation allowed in
>>seeing mind and intelligence in physical (and abstract) events, it
>>does in principle become possible to see minds in blank walls and
>>rocks.  In fact, it is quite possible to talk with a rock or a wall,
>>and imagine what it says in return, and such conversations are often
>>helpful in solving one's problems.  But in those cases, there is no
>>easy interpretation, and any imputed mind is as good as any other. Our
>>interpretations are more narrowly channeled when we imagine a
>>conversation with an evocative painting or statue, yet more so when we
>>consider a conversation with a strongly characterized personage from a
>>novel or a film, and even more when the character becomes interactive
>>in a computer game.  The Turing test passer forces on us an
>>interpretation of mind as sharp as any human communicating solely by
>>email.  The Humongous Lookup Table represents that mind as well as any
>>other implementation of the same behavior.
>
>I still see a difference between the HLT and the wall.  Nobody ran an AI
>simulation to create the wall; its existence does not constrain the
>universe to contain a mind which has had certain thoughts.  All these
>examples except the Turing test passers are like the wall.  A video game
>character is not conscious.  Nobody feels pain when a Mortal Kombat
>character gets his head ripped off.  Only the TT cases show us a mind.
>
Although Hans Moravec seems to suggest that a blank wall (or a rock) may
legitimately be interpreted as having a mind to the same extent as a
TT-passing program based on an HLT, the crucial point is in his last
sentence - "the same behavior". A wall does not have the same behavior,
does it? I think this is a very good example of the importance of
verification (which came up in another thread) - if we cannot have a
meaningful interaction with an entity, the problem of its "intelligence",
mind, etc. is completely moot. There may be no difference (in some sense)
between the HLT and the wall, but the HLT itself is not a mind - a program
utilizing the HLT may eventually be said to instantiate a mind if it shows
the proper _behavior_ (failure of some peripheral devices notwithstanding,
as in the Helen Keller case).
>Hal Finney
>hfinney@shell.portal.com

Andrzej
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
