Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Bag the Turing test (was: Penrose and
Message-ID: <1994Dec28.130840.6337@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Wed, 28 Dec 1994 13:08:40 GMT
Lines: 137

Lupton@luptonpj.demon.co.uk (Peter Lupton) writes:

>Daryl presents three arguments:
> 
>   1. that thoughts don't occur in real time, so the TT cannot be
>      criticised for failing to localize thoughts in time
>   2. that the sort of transformations required for the production
>      of an HLT (that is, the processes involved in AI->HLT 
>      construction) need not be considered to be thinking.

I have to disagree with this characterization. What I claimed was
that the AI->HLT transformation need not involve *execution* of
the code. The expansion of the original program into an HLT need
not be done in any specific order, and need not be done by making
inputs to the program and simply recording the results.


>   3. that the question of where thinking is done is either vague
>      or involves 'magical' boundaries.
>
>My responses are:
>
>1. Thoughts don't occur in real time.
>
>I agree with the examples Daryl gives - I just point out they
>don't substantiate his claim. Some thoughts certainly occur
>over, say, a one or five minute period. I can, for example, right
>now, think of something and then it is clear enough that it was
>thought about over the previous 30-second period. Ample granularity
>for the TT to fail to be a test for it, which is all my case rests
>on.

What case are you talking about? I'm not claiming that two programs
which both pass the TT must be identical---there are obviously
differences that cannot be detected by behavioral tests. What I am
disputing is the relevance of those differences to the question of
whether the entity is "really thinking".

>2. The transformations involved in AI->HLT construction aren't 
>thinking.

>First of all, I would like to question whether Daryl even wants to 
>make this argument.

I don't, and I didn't. What I said was that the AI->HLT transformation
need not involve executing the AI and recording the outputs.

>Surely, the AI->HLT system could be hooked up
>to a teletype and would, if so hooked up, pass the TT.

Surely not. The AI->HLT transformation takes *programs* as inputs
and produces programs as outputs. It couldn't possibly pass the TT,
since human beings don't naturally speak C++.

>Surely Daryl
>would wish to say of that system that it *was* thinking? In which
>case, shouldn't the AI->HLT be thinking when its outputs aren't
>connected to the teletype but, instead, being stored into the HLT?

Somehow, you must have missed a big section of my last posting.
I was talking about constructing the HLT by program transformations,
not by storing the outputs of a program execution.

[stuff deleted arguing against a position I don't hold]

>Second, I would like to observe that the processes identified by Hans
>Moravec certainly strike one as a form of program execution.

As I said, the sense in which it is not "program execution" is that
it need not be done in any particular order. It is not necessary to
start with an input and work forward through the program; one can
just as well start with the output and work backwards, or start in
the middle and work towards both ends. There need be no relationship
between the temporal order of steps in the AI->HLT transformation and
the order of steps involved in executing the AI.
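To make the order-independence concrete, here is a toy sketch (my own
illustration, not anything from the thread): a two-stage "AI" g(f(x))
is expanded into a lookup table by tabulating the *last* stage first,
then the first stage, then splicing the tables together---so no table
entry is ever produced by running an input through the whole program
in its normal temporal order.

```python
def f(x):            # first stage of the toy AI
    return (x * 3) % 8

def g(y):            # second stage
    return y + 1

DOMAIN = range(8)    # a bounded input domain, as an HLT requires

# "Start with the output and work backwards": tabulate g before f.
table_g = {y: g(y) for y in DOMAIN}
table_f = {x: f(x) for x in DOMAIN}

# Splice the partial tables into the finished lookup table.
hlt = {x: table_g[table_f[x]] for x in DOMAIN}

# The table agrees with ordinary forward execution, even though no
# entry was obtained by executing the pipeline end-to-end in order.
assert all(hlt[x] == g(f(x)) for x in DOMAIN)
```

Of course a real HLT would be astronomically large; the point of the
sketch is only that the construction steps need not mirror the
execution steps.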

[stuff deleted describing in more detail the AI->HLT transformation
process]

>Indeed, the similarity between such transformations and program
>execution has struck a number of authors before now - it has been
>called 'symbolic execution' by them.

As I have said, what I am objecting to is the assumption that the
transformation must be done by executing the program statements in
order. Maybe you are claiming that thinking simply involves "having
silent transitions", regardless of their order?

>On this account, I see no need to apologise for pointing out that the
>processes of converting the AI->HLT involves processes sufficiently 
>similar to all possible executions of the AI that if we considered one 
>to be thinking, we should wish to consider the other to be thinking 
>all possible thoughts also.

You have to be more precise about what you mean by an execution, and
about which aspects of an execution involve thoughts, for me to make
sense of this. I think you are barking up the wrong tree. Anyway, this
is getting way off the subject. The question was whether there are any
good reasons for believing that an HLT doesn't involve thinking. Your
response seemed to be that all the thinking was done in constructing
the HLT. Whether or not that is true, how does your conclusion follow?

>3. Magical boundaries.
>
>Here Daryl constructs a 'slippery slope' argument. I don't know what
>use such arguments are, except to dissuade people who believe that
>a certain pair of predicates divides things up into two disjoint
>camps (in this case thinking/non-thinking).  (You cannot map the
>unit interval continuously from one component of a topological space 
>to another). If Daryl ever finds such a person, perhaps Daryl's 
>argument could be used against them.

I can't see much relationship between this response and anything I
said. What I was saying was that the boundaries between "input
processing", "internal processing" and "output processing" can be
drawn pretty much anywhere, and so I don't see any reason for
considering one part "conscious" (having thoughts) and the others not.
In my opinion, it is the whole system together which implements
whatever consciousness is present, and to ask where the consciousness
resides is a mistake.

You seem to think that "silent transitions" are a critical component
of thinking, and I think that's fundamentally mistaken. As I
understand it, silent transitions are simply a way of gradually moving
our memories into a new state. Nature made use of silent transitions
because there was no easy way to make the change in one step. However,
I think that whatever thoughts we have are present in the memories,
and in their relationship to experience, and not in the silent
transitions that created those memories. The only aspects of the
transition process that are important for thinking are those that we
remember and can reason about.

Daryl McCullough
ORA Corp.
Ithaca, NY

