Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!jobone!newsxfer.itd.umich.edu!zip.eecs.umich.edu!caen!math.ohio-state.edu!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <D0I9n9.528@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <CzzuEu.F48@gpu.utcc.utoronto.ca> <3c2vvm$8pk@news1.shell> <jqbD0F4yH.E7v@netcom.com> <3c4u97$hch@news1.shell>
Distribution: inet
Date: Thu, 8 Dec 1994 18:49:57 GMT
Lines: 153
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97476 comp.ai.philosophy:23393 sci.philosophy.meta:15414

In article <3c4u97$hch@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
................
>>>Suppose on looking inside we find the famous Humongous Lookup Table,
>>>which holds a good response to all possible conversations.  Many people
>>>would refuse to ascribe consciousness to such a program.  This is not
>>>exactly the position I would take, but it is certainly common enough.
>
>>What is the relevance of commonness?  It can be expected given the confusion
>>over what we *mean* by consciousness.  If consciousness is defined as "the
>>sense of it we humans have" then of course people are reluctant to ascribe it
>>to things that *seem*, in a fuzzy intuitive way, to be far from "humanness",
>>whatever that is.  But I claim that the position rests upon an error in our
>>understanding of what it means to be conscious.  Assuming that there is *some*
>>program that can be conscious (I think Jeff accepts that assumption) there are
>>a whole set of isomorphisms from incredibly complexly structured algorithms to
>>"simple" lookup programs for any such program (given that there are only a
>>finite number of "possible conversations"), and yet it seems wrong to me that
>>whether a program is conscious should depend upon its *representation*.  Is
>>that so hard to understand?
>
>An HLT is not isomorphic to the program that generated it.  An
>isomorphism must be bidirectional.  But in an HLT, information about
>the internal states of the program is lost.  Even if we recorded all
>possible conversations which the program might produce there would
>still be internal mental states which would be left indeterminate by
>this.  Hence it is possible to have a conversation with something which
>is not isomorphic to the original program but which contains only its
>recorded output ("How are you?  I'm fine, how are you?").  So the issue
>is not just the representation of a program since the HLT is not a
>representation of the AI program which produced it.

Your mistake is assuming that what is relevant in an HLT-based program is the
HLT itself, and that the search algorithm is just a trivial add-on. This is not
so. It has been pointed out, for example, that a convincing conversation has
to depend on past history, not to mention that the dimensionality of the table
makes the search not trivial at all. More about additional complexities
below.
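The history-dependence point can be made concrete with a little arithmetic.
In this sketch (the function name and numbers are purely illustrative), an
HLT that answers convincingly must be keyed on the entire conversation so
far, not just the last utterance, so the number of entries it needs grows
exponentially with the length of the exchange:

```python
def hlt_key_count(utterances_per_turn, turns):
    """Number of distinct conversation histories the table must cover,
    assuming a fixed repertoire of possible utterances at each turn."""
    return utterances_per_turn ** turns

# Even a toy vocabulary of 1000 possible utterances per turn blows up
# over a ten-turn exchange:
print(hlt_key_count(1000, 10))  # 1000**10 entries, i.e. 10**30
```

Indexing into a table of that dimensionality is hardly a "trivial add-on".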

>>>It just isn't clear how simply looking up a few words in a (big) table
                           ??????
For a convincing conversation there is nothing "simple" about it!

>>>could produce consciousness.
>
>>Nor is it clear how simply a few simple chemical reactions or quantum
>>interactions could produce consciousness (well, perhaps it is clear to you).
>>But what makes you think that consciousness is stuff that is produced, rather
>>than a description of a level of relatedness?  IMO, you have the wrong model
>>for looking at a HLT.  An HLT must capture a huge set of *relations* between
>>words and sentences.  The entirety of semantics about humans and their world
>>must be captured as relationships among the entries of the HLT in order for it
>>to pass for a conscious being.  The problem here isn't what views are common
>>or easy to understand.  The problem is how much deep, careful, detailed,
>>sophisticated thinking it takes to try to grasp the problems in our
>>understanding of such things as consciousness.  Whether or not Dalton's
>>position is easy to understand is not relevant.
>
>Where are the internal mental states in an HLT?  What if he considered
>saying A but instead said B?  Are you claiming that the HLT contains
>or implies those rejected alternatives?
>
Of course it has to contain those rejected alternatives! How could it be
otherwise? The HLT is supposed to contain _all_ possible conversations, isn't it?
Those "rejected" alternatives are possible answers too, aren't they?
So why wouldn't they be in it as well? Consequently, at some stage the program
has to decide somehow which alternative to use, and there may be quite a few
of them. If you want to suggest that the choice might be made randomly, note
that this is not the case for humans. A conversation usually shows a certain
pattern which we ascribe to personal traits. If we could not detect something
akin to a personality, we would certainly be reluctant to treat a source
of answers as a "person".
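The selection step can be sketched as follows (the table contents, trait
weights, and function names are all hypothetical, chosen only to illustrate
the point): when the table maps one history to several candidate replies,
something must pick among them, and a purely random pick gives no consistent
"personality", while a fixed preference ordering does.

```python
import random

# One history, several possible answers -- the "rejected alternatives"
# are sitting right there in the table.
TABLE = {
    ("How are you?",): ["Fine, thanks.", "Terrible, as always.", "Why do you ask?"],
}

def reply_random(history):
    """Random selection: no stable pattern a listener could call a personality."""
    return random.choice(TABLE[tuple(history)])

def reply_with_traits(history, traits):
    """Selection biased by fixed personal traits: a consistent pattern emerges.
    Candidates absent from `traits` score 0; ties fall to table order."""
    candidates = TABLE[tuple(history)]
    return max(candidates, key=lambda r: traits.get(r, 0))

grumpy = {"Terrible, as always.": 2}
print(reply_with_traits(["How are you?"], grumpy))  # Terrible, as always.
```

The second selector is what makes repeated exchanges show the stable pattern
we read as a person; the table alone does not supply it.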
.........
>Dalton has said, I believe, that he believes that future knowledge of and
>understanding of how consciousness and intelligence works may give us
>guidelines to use in judging whether a program is conscious or just
>a mimic.  In this view, it is not possible today to give a detailed
>answer to your question.  Broadly speaking, we might expect to look for
>certain data structures and algorithms which could be mapped to the
>mental states which precede speech.  We would not find these in the HLT.

Not true. As I have said before, you are ignoring the very complex task of
selecting a proper answer. This selection process would map to the mental
states.
..........
>This, from Pindor, has been widely quoted:
>
>>This particular evidence was picked by Turing, so as to isolate ourselves
>>from human biases. Otherwise, you would be suggesting that classifying
>>someone as 'conscious' depends on what he/she looks like, whether he/she
>>has acceptable body language etc. This brings out in force a multitude of
>>cultural biases (if someone is black, can he/she be really conscious? Or
>>makes totally inappropriate gestures and body sounds? How about severely
>>deformed humans?). There is naturally also a danger of cultural biases
>
(sorry for re-quoting myself)

>I view this as an insinuation that this view leads naturally to racism
>and other abhorrent attitudes.  

Your view is your personal privilege. Some people can see an "insinuation"
about themselves in almost everything. However, my intention was to indicate
the dangers of allowing evidence which is very prone to activating our biases.
If I had thought that Jeff judges people's intelligence on the basis of their
skin colour, this would not be a good example, would it?

>...................................You yourself have frequently asked
>whether Dalton wants to choose his tests based on how creatures "look".
>You should be aware of how close this comes to suggestions that the
>appearance of _people_ also should determine our views of their
>mentality.  Along these lines, Pindor wrote:
>
>>Are you suggesting that one day we may have a better method of deciding if
>>something is intelligent than TT/behavior, right? In other words, no matter
>>what a person does, how she/he behaves, we will test him/her using these
>>new criteria and pass a judgment, perhaps give out certificates, deny some
>>privileges etc?
>
>Here we have "persons" being given certificates and denied privileges.
>This sounds a lot like racist behavior.
>
Absolutely, but if I had expected Jeff to be in favor of the above measures,
how could what I said be an argument against criteria for intelligence/
understanding/consciousness other than some sort of TT? Precisely because
such criteria could lead to the above abhorrent things, I see looking for
such criteria as misplaced. If I had thought that Jeff did not find the above
abhorrent, what force would my argument (or rather 'intuition pump' in this
case :-)) have? Why don't you look more carefully at the context instead of
accusing me of insulting Jeff?

>Besides, the emphasis on appearance is misleading.  Dalton wants to
>decide based on whether the entity in question is human or not.  He may
>have many ways of determining this other than sight.  He could be told,
>he could feel it to see if it is flesh or metal, he could put its brain
>through a CAT scan.  The point is that he wants to know if it is human
>or not because he knows that he is conscious, he adopts the stance that
>others are conscious based both on their behavior and their obvious
>structural similarities to him, but he is not so sure about the case
>when dealing with something which is structurally very different.
>
If we do not see clearly why looks should be connected to what we are trying
to judge (intelligence/consciousness, etc.) and we know all too well that
judging by looks is very bias-prone, shouldn't we refrain from using looks
as a criterion? In my view we certainly should.

>Hal Finney
>hfinney@shell.portal.com

Andrzej
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
