Newsgroups: comp.ai.philosophy
From: Lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!demon!luptonpj.demon.co.uk!Lupton
Subject: Re: Bag the Turing test (was: Penrose and
References: <1994Dec22.163936.20405@oracorp.com>
Distribution: world
Organization: No Organisation
Reply-To: Lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.6
Lines:  87
X-Posting-Host: luptonpj.demon.co.uk
Date: Tue, 27 Dec 1994 20:28:38 +0000
Message-ID: <858509606wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

In article: <1994Dec22.163936.20405@oracorp.com>  daryl@oracorp.com (Daryl McCullough) writes:


Daryl presents three arguments:
 
   1. that thoughts don't occur in real time, so the TT cannot be
      criticised for failing to localize thoughts in time
   2. that the sort of transformations required for the production
      of an HLT (that is, the processes involved in AI->HLT 
      construction) need not be considered to be thinking.
   3. that the question of where thinking is done is either vague
      or involves 'magical' boundaries.

My responses are:

1. Thoughts don't occur in real time.

I agree with the examples Daryl gives - I just point out that they
don't substantiate his claim. Some thoughts certainly occur
over, say, a one- or five-minute period. I can, for example, right
now, think of something, and it is then clear enough that it was
thought about over the previous 30-second period. That is ample
granularity for the TT to fail to be a test for it, which is all my
case rests on.

If Daryl is claiming that no thoughts can ever be localised in time
to any degree of granularity, then we do fundamentally disagree,
as Daryl says.

2. The transformations involved in AI->HLT construction aren't 
thinking.

First of all, I would like to question whether Daryl even wants to 
make this argument. Surely, the AI->HLT system could be hooked up
to a teletype and would, if so hooked up, pass the TT. Surely Daryl
would wish to say of that system that it *was* thinking? In which
case, shouldn't the AI->HLT be thinking when its outputs aren't
connected to the teletype but, instead, being stored into the HLT?
My point here, of course, is not that I think that TT-passing
implies thinking, but simply to point out the inconsistency in
the position of someone who holds that the TT does constitute a test
for thinking, accepts that the HLT thinks, and yet denies thinking
to the AI->HLT construction program. 

Second, I would like to observe that the processes identified by Hans
Moravec certainly strike one as a form of program execution. Now I was 
aware at the time of reading Hans's article that he was presenting us 
with an 'intuition pump' by calling the program an 'optimizing 
compiler'. I didn't think anyone would have their intuition pumped 
quite as effectively as seems to be the case.

Certainly there is no shortage of 'silent transitions'. Indeed, if one 
actually takes the trouble to follow through on the operation of Hans's
transformations, one sees that there is an initial case analysis 
which serves to set out in parallel all possible execution sequences -
one of the things I claimed.

Then, in order to get the answers out, the sequences must be unfolded
and substitutions made. This requires a database of formulae of the 
form: 'x = 0 and y = 4 and ....', one entry for each variable. As one
passes one's cursor from one statement to the next, each entry of the
database may require updating (imagine y = x+1, which will change y 
from y = 4 to y = 1).
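To make the mechanism concrete, here is a toy sketch in Python of the
'database of formulae' idea just described. The names and the helper
function are my own illustration, not Moravec's actual construction;
it only shows how one database entry per variable gets updated as the
cursor passes an assignment statement.

```python
def step(bindings, var, expr):
    """Advance the cursor past the statement 'var = expr', returning an
    updated copy of the binding database. 'expr' is evaluated against
    the current bindings."""
    new = dict(bindings)
    new[var] = eval(expr, {}, bindings)  # e.g. 'x + 1' under x = 0
    return new

# Database entries of the form 'x = 0 and y = 4 and ...', as in the text:
db = {'x': 0, 'y': 4}

# Passing the cursor over the statement 'y = x + 1' changes the entry
# for y from y = 4 to y = 1, leaving x untouched:
db = step(db, 'y', 'x + 1')
print(db)   # {'x': 0, 'y': 1}
```

In a genuinely symbolic execution the entries would hold expressions
rather than concrete values, but the bookkeeping is the same.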

Indeed, the similarity between such transformations and program execution
has struck a number of authors before now - it has been called 
'symbolic execution' by them. 

On this account, I see no need to apologise for pointing out that the
process of converting the AI into an HLT involves processes sufficiently 
similar to all possible executions of the AI that, if we considered the
one to be thinking, we should wish to consider the other to be thinking 
all possible thoughts also.

3. Magical boundaries.

Here Daryl constructs a 'slippery slope' argument. I don't know what
use such arguments are, except to dissuade people who believe that
a certain pair of predicates divides things up into two disjoint
camps (in this case thinking/non-thinking). (You cannot map the
unit interval continuously from one component of a topological space 
to another). If Daryl ever finds such a person, perhaps Daryl's 
argument could be used against them.
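For what it's worth, the parenthetical topological point is just the
standard connectedness fact, which can be stated precisely:

```latex
% A continuous image of a connected space is connected. Since [0,1] is
% connected, any continuous f : [0,1] -> X has its image inside a single
% connected component of X -- it cannot run from one component to another.
\[
  f : [0,1] \to X \text{ continuous}
  \;\Rightarrow\; f([0,1]) \text{ connected}
  \;\Rightarrow\; f([0,1]) \subseteq C
  \text{ for some component } C \text{ of } X.
\]
```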

Cheers,
Pete Lupton
