Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD0F4yH.E7v@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzzuEu.F48@gpu.utcc.utoronto.ca> <D0CxoG.25F@cogsci.ed.ac.uk> <D0EL7t.69E@gpu.utcc.utoronto.ca> <3c2vvm$8pk@news1.shell>
Distribution: inet
Date: Wed, 7 Dec 1994 02:15:53 GMT
Lines: 91
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97357 comp.ai.philosophy:23305 sci.philosophy.meta:15371

In article <3c2vvm$8pk@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
>You know, I have been reading this debate about whether the TT is enough
>or the best test, and I don't see that Dalton's position is nearly as
>incomprehensible as Pindor and Balter make it out to be.  This is strange
>because they seem to have been reading him for a long time while I have
>only been doing so for a few days.

What makes you think I find Dalton's position incomprehensible?  I find it
confused, riddled with bad logic, and given to reaching conclusions from
imprecise terms where such conclusions could only follow from precise ones,
but I don't find it, *as a whole*, incomprehensible, if it is even meaningful
to speak of such a whole.  Jeff makes many individual points and arguments,
and I respond with detailed comments on them.  You have not addressed or
rebutted anything specific I have said.  "as incomprehensible as ... Balter
make[s] it out to be" is just innuendo.

>Dalton wants to look inside the computer which passes the Turing test
>before he is willing to pronounce it intelligent.  Is this so hard to
>understand?  I think it is a very common position.

I want to look inside you before I pronounce you intelligent.  Is this
so hard to understand?  Understanding Jeff's position isn't hard; what seems
hard to me is *justifying* it.  In practice, we determine intelligence
operationally.  The very concept of intelligence, above and beyond many
other concepts (e.g., "love", "hunger", "consciousness") seems operational.

What evidence do you have that I'm having trouble understanding what Jeff wants
to do?  Why ask whether it is hard to understand?

No one denies that it is a popular position.  Why bother pointing that out?  

>Suppose on looking inside we find the famous Humongous Lookup Table,
>which holds a good response to all possible conversations.  Many people
>would refuse to ascribe consciousness to such a program.  This is not
>exactly the position I would take, but it is certainly common enough.

How did we get from intelligence to consciousness?  Rosenfelder complains
that TT defenders leap from intelligence to consciousness, but in my experience
it is the TT attackers who facilely slip between intelligence and understanding
and consciousness.

What is the relevance of commonness?  It can be expected given the confusion
over what we *mean* by consciousness.  If consciousness is defined as "the
sense of it we humans have" then of course people are reluctant to ascribe it
to things that *seem*, in a fuzzy intuitive way, to be far from "humanness",
whatever that is.  But I claim that the position rests upon an error in our
understanding of what it means to be conscious.  Assuming that there is *some*
program that can be conscious (I think Jeff accepts that assumption), there is
a whole set of isomorphisms from incredibly complexly structured algorithms to
"simple" lookup programs for any such program (given that there are only a
finite number of "possible conversations"), and yet it seems wrong to me that
whether a program is conscious should depend upon its *representation*.  Is
that so hard to understand?
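The representation point can be made concrete with a toy sketch.  Given a
finite enumeration of "possible conversations", any algorithmic responder,
however complexly structured internally, can be mechanically flattened into a
lookup table with identical input/output behavior.  All names and the
three-conversation enumeration below are hypothetical illustrations, not
anyone's actual program:

```python
# Toy sketch: flattening an algorithmic responder into a lookup table.
# The two responders have different representations but are isomorphic
# in behavior over the (finite) space of possible conversations.

def algorithmic_responder(conversation):
    """A 'complexly structured' responder: it computes its reply."""
    words = sum(len(line.split()) for line in conversation)
    return f"I count {words} words so far."

# A finite enumeration of possible conversations (hypothetical).
possible_conversations = [
    (),
    ("Hello.",),
    ("Hello.", "How are you?"),
]

# Build the behaviorally identical "simple" lookup program.
lookup_table = {
    conv: algorithmic_responder(conv) for conv in possible_conversations
}

def table_responder(conversation):
    """Same input/output mapping, different representation."""
    return lookup_table[tuple(conversation)]

# The two representations agree on every possible conversation.
for conv in possible_conversations:
    assert algorithmic_responder(conv) == table_responder(conv)
```

The question the sketch poses is exactly the one above: if consciousness is
ascribed to the first representation but denied to the second, it is being
made to depend on representation rather than behavior.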

>It just isn't clear how simply looking up a few words in a (big) table
>could produce consciousness.

Nor is it clear how a few simple chemical reactions or quantum
interactions could produce consciousness (well, perhaps it is clear to you).
But what makes you think that consciousness is stuff that is produced, rather
than a description of a level of relatedness?  IMO, you have the wrong model
for looking at a HLT.  An HLT must capture a huge set of *relations* between
words and sentences.  The entirety of semantics about humans and their world
must be captured as relationships among the entries of the HLT in order for it
to pass for a conscious being.  The problem here isn't what views are common
or easy to understand.  The problem is how much deep, careful, detailed,
sophisticated thinking it takes to try to grasp the problems in our
understanding of such things as consciousness.  Whether or not Dalton's
position is easy to understand is not relevant.
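The point that an HLT is not "looking up a few words" can be sketched with a
hypothetical three-entry fragment: the table's keys are entire conversation
histories, and the entries only make sense jointly, which is where the
relations live.  The entries here are invented for illustration:

```python
# Hypothetical HLT fragment.  Each key is the *whole* conversation so
# far; the entries jointly encode relations (here, that a name given in
# one exchange constrains what is sensible in later ones).

hlt = {
    ("What is your name?",):
        "I'm called Hal.",
    ("What is your name?", "Do you like it?"):
        "Yes, Hal suits me.",
    ("Do you like it?",):
        "I don't know what 'it' refers to yet.",
}

def respond(conversation):
    # Pure lookup: no computation beyond indexing the table.
    return hlt[tuple(conversation)]
```

Note that the second entry is coherent only given the first; that coherence is
a global property of the table, not of any single row, which is the sense in
which the entirety of the semantics must be captured as relationships among
entries.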

>So if you take the position that an HLT could pass the Turing test but
>not be conscious, then it is perfectly reasonable to want to look at the
>program before ascribing consciousness to any computer which would pass
>the test.

Yes, it is perfectly reasonable to get from one erroneous and confused position
to another.  But what do you expect to see there that will permit you to ascribe
consciousness?  "human-consciousness-equivalence-subprogram"?  What?

>This does not mean that you are a racist, a humanist, that
>you want to pull the plug on people in comas, or any of the other moral
>repugnancies that Dalton seems to be accused of just for espousing this
>position!

Where does this garbage come from?  No one accused Dalton of being a racist
outside of his own fevered imagination.  He is only accused of being confused
and of making bad arguments, accusations that we are all guilty of meriting
from time to time.  The issues are the specific confusions and specific bad
arguments.
Cut out the innuendo!
-- 
<J Q B>
