Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Strong AI and consciousness
Message-ID: <vlsi_libCzqJzE.HpA@netcom.com>
Organization: VLSI Libraries Incorporated
References: <1994Nov22.121521.27633@oxvaxd> <3atltt$428@mp.cs.niu.edu> <3auja3$7hf@news1.shell>
Date: Wed, 23 Nov 1994 19:40:25 GMT
Lines: 54

In article <3auja3$7hf@news1.shell> hfinney@shell.portal.com (Hal) writes:
>rickert@cs.niu.edu (Neil Rickert) writes:
>
>>In <1994Nov22.121521.27633@oxvaxd> econrpae@vax.oxford.ac.uk writes:
>>>      The point is, I think, that the very formulation of the strong AI thesis
>>>is waiting on some clear understanding of what it is for an object to be
>>>running a program. Please fill in the blanks:
>
>>>object x is executing program p iff __________________________________
>
>>I doubt that you will ever see the blanks satisfactorily filled in.
>>The difficulty is that "x is executing p" is a subjective
>>interpretation, while you are asking for an objective definition.
>
>But, if a conscious AI results from a computer executing the proper
>program, then how can it be a subjective question whether this is
>happening?  I am conscious, and no one's opinion or subjectivity can
>change that.  Presumably the question of whether others are conscious is
>equally objective (if difficult to establish).  It does not seem that
>this would be a subjective issue, and therefore it is hard to see where
>subjectivity arises in the computer-execution question.
>
>Hal Finney
>hfinney@shell.portal.com

I agree with Neil Rickert that whether a computer is executing 
a program can be a matter of subjective interpretation. For example, 
what if the program being run is heavily encrypted, so that the trace 
makes sense only to the person who holds the key, and is garbage to 
everyone else? Or consider this gedanken experiment. We can take a 
simple binary counter as our computer. We interpret its states through 
a translation table which maps the computer (counter) output at each 
clock instant to the internal states of a program trace. 
On this reading, the counter is a maximally general-purpose computer:
it can execute several programs with totally different semantic 
contents at the same time (a punning computer?), through the use of 
appropriate translation tables which may exist only in the minds of 
the evaluators. The scenario is quite similar to the tale of the four 
blind men and the elephant, where each of the four men, after feeling 
a different part of the elephant's anatomy, imagines the elephant to 
look different. Except that in the computer case, all happen to be right.
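The punning computer is easy to sketch in code. Below is a minimal
illustration: one counter, two observer-supplied translation tables,
two incompatible program traces from the same physical states. The
tables and their "programs" are my own inventions for the example;
nothing here is specific to any real machine.

```python
# The "computer" is nothing but a counter ticking through states.
counter_states = list(range(6))  # 0, 1, 2, 3, 4, 5

# Observer A's translation table reads each (state, tick) as a step
# in a Fibonacci computation.
fib_table = {0: "a=0", 1: "b=1", 2: "a+b=1",
             3: "a+b=2", 4: "a+b=3", 5: "a+b=5"}

# Observer B's table reads the very same states as a countdown loop.
count_table = {0: "n=5", 1: "n=4", 2: "n=3",
               3: "n=2", 4: "n=1", 5: "halt"}

# Identical physical history, two different program traces.
trace_a = [fib_table[s] for s in counter_states]
trace_b = [count_table[s] for s in counter_states]

print(trace_a)
print(trace_b)
```

Neither trace is privileged by anything in the counter itself; the
"program being executed" lives entirely in the evaluator's table.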

Clearly there is something wrong with strong AI's definition of 
consciousness, since it makes consciousness dependent on an evaluator. 
But then the evaluator must itself be conscious (or else another 
evaluator becomes necessary to determine what the first evaluator has 
in mind, and so on, ad infinitum). The only terminating condition 
seems to be another conscious entity, leading to a circular definition 
of consciousness. Or, to borrow Neil Rickert's terminology from a 
previous posting, 'strong AI defines consciousness to be a caricature 
that it could shred apart in debate'.

Shankar Ramakrishnan
shankar@vlibs.com
