From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke Fri Jan 31 10:26:37 EST 1992
Article 3225 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke
From: orourke@sophia.smith.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42349@dime.cs.umass.edu>
Date: 28 Jan 92 17:38:13 GMT
References: <11976@optima.cs.arizona.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 49

In article <11976@optima.cs.arizona.edu> David Gudeman writes:
>   In article  <42304@dime.cs.umass.edu> Joseph O'Rourke writes:
>   
>   ]1. Understanding (grasping meanings of) is impossible without
>   ]   consciousness.
>   ]
>   ]2. It is possible that consciousness does not require biological tissue.
>   ]
>   ]3. As a result of a deep Turing Test -like conversation with a machine,
>   ]   you have to admit that it seems the machine grasps meanings.
>   
>   I'm willing to accept all of those premises.
>   
>   ]4. Since you believe (1), you are led to wonder if perhaps the machine
>   ]   is conscious.
>   
>   Yes, it is a possibility to consider, but it is far from established (...).

At first I thought this was a big step from your earlier phrasings, such
as "I'm saying you have no reason at all to believe that a machine 
understands just because you can't stump it with hard questions,"
or "the test provides no evidence at all."  But later you say:

>   And your possibility 4, which was never more than a 50-50 proposition
>   anyway, ...

So it seems you *do* believe that (1)+(2)+(3) provide evidence for 
consciousness, but just rather weak evidence.  So weak that your belief 
in (7)+(8) below easily wins the day:

>   But now add the propositions
>   
>   7. The machine answers questions by purely syntactic manipulations.
>   
>   8. Consciousness doesn't seem to have any relationship to syntactic
>   manipulations.
>   
>   And your possibility 4, which was never more than a 50-50 proposition
>   anyway, becomes seriously doubtful.

The key seems to be how firmly you hold (8): clearly you hold it so
firmly that it overpowers any implication of (1)+(2)+(3).
	Having elucidated all this, it is now clear that neither of
us (nor other participants, I think) is making major logical errors:
our differences come down to the relative strengths with which we
hold certain dubious propositions.  I don't believe in (8) very firmly; 
nor am I so sure about (1).
	Maybe you could explain why you hold (8) so strongly that
it could overpower evidence to the contrary, i.e., (1)+(2)+(3)?


