Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!agate!darkstar.UCSC.EDU!news.hal.COM!decwrl!netcomsv!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <vlsi_libD0o5I5.58r@netcom.com>
Organization: VLSI Libraries Incorporated
References: <vlsi_libD0KsFu.LyB@netcom.com> <jqbD0LHE2.GKv@netcom.com> <3cdefc$8f1@news1.shell>
Date: Sun, 11 Dec 1994 23:06:04 GMT
Lines: 62

>I, Hal, wrote:
>  These internal mental states that are apparently missing from an HLT can
>  be observed to some extent in the brain.  With electrical probes we can
>  observe states of arousal, moments of decision, and other correlates of
>  the subjective aspects of consciousness.  But I maintain that there is no
>  way even in principle to observe these phenomena in the HLT, because they
>  are not there.
>Jim Balter replied:
>  If it is these phenomena that are essential to being conscious "in the sense
>  that you and I are" (I don't know, because no one seems to be willing to
>  say what aspects are essential; they just point to the whole and say "that's
>  it" and then claim that there's a fact of the matter as to whether some thing
>  or the other is "really" in that "natural category" or somesuch), then they
>  can be added simply by adding a bunch of intermediate states to the HLT;
>  entries that contain utterances or partial utterances, but aren't final
>  output states, they just lead to states that are.  Perhaps we could throw
>  in some delay loops ("moments of decision"). Is that what's needed
>  to ascribe consciousness?  Would that satisfy *you*?  What would, short of
>  an exact replica of a human brain?
>
>It is not so important what would satisfy *me*, it is what would
>address the argument.  These things are apparently missing in the HLT,
>and you are proposing to add them.  Yes, that would address the
>argument in this case; such an augmented HLT could no longer be claimed
>not to be conscious because it didn't have intermediate states.
>
>However, the original argument would still have force.  It claimed that
>the HLT which lacked these augmentations would pass the TT but not be
>conscious.  Do you really think you need to add them to make it
>conscious, or would you maintain that the original un-augmented HLT was
>conscious in and of itself?
>
The HLT is the equivalent of a timeless program trace. Is a program
trace as conscious as the execution itself? I raised this question
in another thread, but even strong AI supporters seem to be divided
on the issue.
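To make the trace-versus-execution distinction concrete, here is a toy
sketch (my own hypothetical example, not anything proposed in the
thread): a function computed step by step, versus a recorded table of
its input/output pairs that merely replays the final answers.

```python
def factorial(n):
    """Live execution: passes through intermediate states."""
    result = 1
    for i in range(2, n + 1):  # each loop iteration is an intermediate state
        result *= i
    return result

# A "trace" in the HLT sense: only input -> final-output pairs,
# with all intermediate states discarded.
trace = {n: factorial(n) for n in range(10)}

def replay(n):
    """Replaying the trace: identical outputs, no computation in between."""
    return trace[n]

# For any recorded input, the two are indistinguishable from outside:
assert all(replay(n) == factorial(n) for n in range(10))
```

The question in the thread is whether whatever matters about the live
execution also belongs to the replay, given that their outputs agree.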

As for Jim Balter's suggestion of faking intermediate states (e.g., by
adding delay loops), I can only say that a fake never equals the original.

There is certainly nothing mystical about intermediate states. These
states really do exist in the program, in the form of internal variables.
The TT concerns itself only with what comes out on stdout. But a program
can also emit messages on stderr, write other files, or even produce
sounds or any number of other physical effects. If we discard all of
that information (as the TT does), there is no way to uniquely
determine what is going on inside. To claim otherwise is in direct
conflict with information theory.
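A minimal sketch of the point (hypothetical code of my own, not from
the thread): two implementations whose stdout is identical, even though
one has rich internal state and a stderr side channel while the other
is a bare lookup. A judge reading only stdout cannot tell them apart.

```python
import sys

def impl_a(question):
    # Rich intermediate states: builds internal variables first,
    # and leaks some of them on stderr (which the TT never sees).
    tokens = question.lower().split()            # internal state
    mood = "engaged" if "?" in question else "flat"
    sys.stderr.write(f"debug: mood={mood}, tokens={tokens}\n")
    return "I am fine."

def impl_b(question):
    # Bare lookup: no intermediate states, no side channels.
    return {"How are you?": "I am fine."}.get(question, "I am fine.")

q = "How are you?"
print(impl_a(q))   # stdout: I am fine.
print(impl_b(q))   # stdout: I am fine.
```

Restricted to stdout, the two streams are bit-for-bit the same; only
the discarded channels distinguish the implementations.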

The above argument applies to humans too. What a person says does not
uniquely determine what he or she has in mind; otherwise there would
be no need for lie detectors or attention to body language.

Therefore, the mental states corresponding to the same TT output may
be implementation-dependent. It is also quite possible that some
implementations have no mental states at all (although I do not wish
to speculate about the HLT at this point).

Shankar Ramakrishnan
shankar@vlibs.com
