Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0pr0p.D54@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <3c68og$ql8@agate.berkeley.edu> <vlsi_libD0KsFu.LyB@netcom.com> <jqbD0LHE2.GKv@netcom.com> <3cdefc$8f1@news1.shell>
Date: Mon, 12 Dec 1994 19:48:23 GMT
Lines: 67

In article <3cdefc$8f1@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
>Jim Balter's posting did not arrive here, so I am replying to it as
>quoted by Gerard Malecki (or was it Shankar?).  I hope it is all here.
>
>I, Hal, wrote:
>  These internal mental states that are apparently missing from an HLT can
>  be observed to some extent in the brain.  With electrical probes we can
>  observe states of arousal, moments of decision, and other correlates of
>  the subjective aspects of consciousness.  But I maintain that there is no
>  way even in principle to observe these phenomena in the HLT, because they
>  are not there.

You are mistakenly assuming that producing an answer with the help of an
HLT would be a straightforward, linear process. Note, however, that at
any stage of a conversation there are a number of possible, acceptable
answers. Hence the program has to make a lot of _decisions_ about which
branch to take. These will be your (or rather the HLT's) moments of
decision. They may be shorter than in the case of humans, for hardware
reasons, but should quickness of decision-making matter? The answers
would also have to depend on the conversation history, i.e. on the state
of the system (an analogue of a mental state). People who advocate
HLT-type arguments trivialize the complexity of an HLT-based program,
concentrating only on the static part of the database.
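To make the point concrete, here is a minimal sketch (my own toy
illustration, not anyone's proposed implementation) of a lookup-table
conversationalist. All names and the table contents are hypothetical;
what matters is that replies are keyed on the _entire_ conversation
history (the system's state), and where several replies are acceptable
the program must make a branching decision among them:

```python
import random

# Static part of the database: conversation history -> acceptable
# next utterances. Keys are tuples of all prior utterances; values
# list the possible branches at that state.
HLT = {
    (): ["Hello.", "Hi there."],
    ("Hello.", "How are you?"): ["Fine, thanks.", "Can't complain."],
    ("Hi there.", "How are you?"): ["Fine, thanks.", "Not bad at all."],
}

def reply(history):
    """Look up the current state (the full history) and pick a branch."""
    options = HLT.get(tuple(history), ["I don't follow."])
    # Selecting among acceptable answers is the "moment of decision".
    return random.choice(options)

history = []
opening = reply(history)        # depends on the (empty) history
history.append(opening)
history.append("How are you?")  # interlocutor's turn
answer = reply(history)         # depends on the whole history so far
```

Even in this trivial form the program's output is a function of its
dynamic state, not of the static table alone.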

>Jim Balter replied:
>  If it is these phenomena that are essential to being conscious "in the sense
>  that you and I are" (I don't know, because no one seems to be willing to
>  say what aspects are essential; they just point to the whole and say "that's
>  it" and then claim that there's a fact of the matter as to whether some thing
>  or the other is "really" in that "natural category" or somesuch), then they
>  can be added simply by adding a bunch of intermediate states to the HLT;
>  entries that contain utterances or partial utterances, but aren't final
>  output states, they just lead to states that are.  Perhaps we could throw
>  in some delay loops ("moments of decision"). Is that what's needed
>  to ascribe consciousness?  Would that satisfy *you*?  What would, short of
>  an exact replica of a human brain?
>
>It is not so important what would satisfy *me*, it is what would
>address the argument.  These things are apparently missing in the HLT,

Firstly, please note that what you consider "addressing the argument"
may not be considered as such by someone else, and vice versa.
Consequently, Jim's question is not out of place, as you seem to be
implying. And have my comments addressed the argument to your
satisfaction?

>and you are proposing to add them.  Yes, that would address the
>argument in this case; such an augmented HLT could no longer be claimed
>not to be conscious because it didn't have intermediate states.
>
>However, the original argument would still have force.  It claimed that
>the HLT which lacked these augmentations would pass the TT but not be
>conscious.  Do you really think you need to add them to make it
>conscious, or would you maintain that the original un-augmented HLT was
>conscious in and of itself?
>
As I have argued above, it is not necessary to add anything to an
HLT-based TT-passing program: to pass the TT it must already have
features which are analogues of the ingredients you find necessary for
an "artificial mind" to have before you are ready to grant it
consciousness.
>Hal Finney
>hfinney@shell.portal.com


-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
