Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!Germany.EU.net!EU.net!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <CzsB5M.2n2@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <CzH78F.4Eq@gpu.utcc.utoronto.ca> <CzqHIB.1nA@cogsci.ed.ac.uk> <3b0pc9$i2g@news.u.washington.edu>
Distribution: inet
Date: Thu, 24 Nov 1994 18:24:58 GMT
Lines: 106
Xref: glinda.oz.cs.cmu.edu sci.skeptic:96282 comp.ai.philosophy:22592 sci.philosophy.meta:15007

In article <3b0pc9$i2g@news.u.washington.edu> forbis@cac.washington.edu (Gary Forbis ) writes:
>In article <CzqHIB.1nA@cogsci.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>|> My position is not that there *is* more to thinking, intelligence,
>|> or whatever, BTW.  OTOH, it seems plausible to me that some aspects
>|> of mental life (or indeed whether there is any mental life) might
>|> depend on how the TT-passing behavior is accomplished.
>|> 
>|> What in the TT shows anything that passes it *must* have subjective
>|> experience, qualia, an internal dialogue, or other such aspects of
>|> human mental life?  Is this question out of bounds for some reason?
>
>I hold a similar position... but ...
>
>There is a notion of identity such that a difference that makes no difference
>is no difference.  

Ok.  But a difference not detectable by us might still make a
difference (perhaps to something else).

I'm a realist about at least some "mental" properties.  That is,
I think there's a fact of the matter, whether or not we can tell
what it is.  For instance, I was just thinking of a cat, though
it may be that no one else could know this unless I told them.
Perhaps even then they couldn't know *for sure*.

However, in ordinary life, and in science (at least), we normally
adopt a weaker standard of proof.  We're willing to act on
conclusions that aren't "for sure", and rightly so, I might add.
I think we solve the other minds problem adequately in practice
so far as other humans are concerned, though it may still trouble
philosophers.  We're still working on the animal case, which
remains much in dispute.  At least there are many books arguing
that animals are conscious, have various rights, and so on, and
occasionally something that argues the other side.

I'm an optimist about our ability to make scientific progress on
questions of consciousness and the like.  I therefore expect that
we'll be in a better position to answer such questions in the future
than we are today.

Computers / machines are, like animals, a disputed case.
But it seems to me that we're worse off (so far) than w/ animals,
because we lack sufficiently capable machines.  We still don't
know all that much about what machines that pass the TT or are
"smart as a pig" (say) will be like.

In any case, I don't think our understanding of consciousness (and
related issues) is yet at a point where we can say that the
evidence available in an ordinary, teletype Turing Test is
sufficient.  It's difficult to discuss this here w/o people
bringing in the more general question of "differences that
make no difference" and so forth.  But we don't have to resolve
our differences there (concerning realism about mental properties,
etc), since the teletype TT clearly does not provide all the
evidence that could be available.

>As long as you can specify the criteria by which you will
>grant consciousness a machine can be built for which a mapping can be made
>between your criteria and physical change in the machine.

I don't know the criteria, yet.  That's something that has to be
worked out.

As part of my optimism about our ability to make scientific progress
on these issues, I expect that we will at some point find what
physical (or maybe functional, as in functionalism) properties are
significant (or, perhaps, find more of them, if we suppose we 
already have a handle on some).

But finding a mapping may be difficult.  For instance, mapping
passing-the-TT to physical changes.  (I realize, BTW, that you
can find mappings everywhere, a point that Moravec and McCullough
make.  But I don't find that a suitable answer any more than I
find Putnam's rock/FSA mappings to be one.  If you take "make
a mapping" in so general a sense, then I say it's irrelevant.)

>  If a specific
>internal dialogue is required in a particular situation the machine can 
>undergo state changes that map to that dialogue.  If a specific qualia is
>required under certain conditions, the machine's states can be mapped to
>that qualia.  etc.  I'm not sure I have a good reason for saying a machine
>isn't experiencing a qualia when it enters the set of states that are defined
>as mapping to those required under the conditions requiring it.

Then perhaps in discussion with you, we can't set aside the issue
of realism about mental properties, because finding *a* mapping
may be so easy that we can find one for rocks, trees, and so forth.

Now, I don't care in the end whether we say rocks, trees, and so
forth are conscious.  There's ultimately no point in fighting over
use of a word.  But if we end up being unable to find any interesting
difference in this area between, say, humans and mushrooms, I'll
be surprised.

>I wish I could find a definition of either "experience" or "qualia" that
>would let me objectively say the machine wasn't experiencing qualia when it
>entered the states previously mentioned.  Why do I assume two different brains
>experience the same (or similar qualia) under the same (or similar) conditions?

You seem to be inclined towards rather strong conclusions on the
basis of what may be only a temporary difficulty.  Why should it
matter that you can't find a definition now?

-- jeff


