Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uhog.mit.edu!bloom-beacon.mit.edu!world!decwrl!netcomsv!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <vlsi_libD0KsFu.LyB@netcom.com>
Organization: VLSI Libraries Incorporated
References: <3c68og$ql8@agate.berkeley.edu> <3c7dli$a5m@news1.shell> <jqbD0KMxu.Mw2@netcom.com>
Date: Sat, 10 Dec 1994 03:31:05 GMT
Lines: 65

In article <jqbD0KMxu.Mw2@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <3c7dli$a5m@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
>>jerrybro@uclink2.berkeley.edu (Gerardo Browne) writes:
>>
>>>Hal (hfinney@shell.portal.com) wrote:
>>
>>>: I maintain that reasonable people will
>>>: believe that such a beast is not conscious in the sense that you and I
>>>: are, and that they are not necessarily confused or in error.
>>
>>>But why?  What is the basis for this judgement?  To me it seems at this
>>>point a mere visceral discomfort.
>>
>>(I want to clarify that I did not mean that _all_ reasonable people would
>>disbelieve in the conscious HLT, only that some would.)
>>
>>The HLT seems to have no representation of the richness of our internal
>>mental life.  It is little more than a tape recording of all possible
>>conversations.
>>
>>These internal mental states that are apparently missing from an HLT can
>>be observed to some extent in the brain.  With electrical probes we can
>>observe states of arousal, moments of decision, and other correlates of
>>the subjective aspects of consciousness.  But I maintain that there is no
>>way even in principle to observe these phenomena in the HLT, because they
>>are not there.
>
>If it is these phenomena that are essential to being conscious "in the sense
>that you and I are" (I don't know, because no one seems to be willing to
>say what aspects are essential; they just point to the whole and say "that's
>it" and then claim that there's a fact of the matter as to whether some thing
>or the other is "really" in that "natural category" or somesuch), then they
>can be added simply by adding a bunch of intermediate states to the HLT;
>entries that contain utterances or partial utterances, but aren't final
>output states; they just lead to states that are.  Perhaps we could throw
>in some delay loops ("moments of decision").  Is that what's needed
>to ascribe consciousness?  Would that satisfy *you*?  What would, short of
>an exact replica of a human brain?
>
>>These missing internal mental states are what justify denying that
>>the HLT has a mind.  The fundamental structural difference between the
>>HLT and biological minds (which are the only things we really know to
>>be conscious) give reason to hesitate in extrapolating from our
>>personal experiences of consciousness to the assumption that a
>>recording of conversations could be conscious.


Hal may have a point here. The HLT completely ignores the subjective
thought processes that went on during the conversation. Don't we all
agree that two people who reply identically to the same set of
questions may nevertheless have different things in mind? If so, why
can't the same logic be extended to computers?
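To make the computer case concrete, here is a toy sketch (the programs
and their names are my own illustration, not anything from the thread):
two "conversants" that answer every question identically, one by rote
table lookup and one by actually computing the answer. Their
input/output behavior cannot distinguish them, yet what goes on inside
is quite different.

```python
# Two programs with identical I/O behavior but different internals.

def table_answer(question):
    # The lookup-table approach: every reply is pre-stored.
    table = {"What is 2+2?": "4", "What is 3*3?": "9"}
    return table[question]

def computed_answer(question):
    # The computing approach: parse the question and work it out.
    # (Requires Python 3.9+ for str.removeprefix.)
    expr = question.removeprefix("What is ").rstrip("?")
    return str(eval(expr))

for q in ("What is 2+2?", "What is 3*3?"):
    assert table_answer(q) == computed_answer(q)
```

No set of question-and-answer pairs will tell the two apart, which is
exactly the sense in which the HLT "passes" while lacking any process
behind its replies.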

The HLT is functionally identical to the program(s) it represents only
in the way a sorted list represents the output of a generic sorting
program. Yet we have different sorting algorithms (bubble sort,
quicksort, shellsort, etc.) which, while functionally equivalent to
one another, have their own "characteristics" important enough to be
studied in their own right in computer science (time complexity, space
complexity, average case, worst case, and so on). Hence it seems
preposterous to suggest that consciousness is uniquely determined by
functional equivalence alone.

Shankar Ramakrishnan
shankar@vlibs.com
