From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!uunet!psinntp!scylla!daryl Mon Mar  9 18:33:31 EST 1992
Article 4106 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Feb27.185327.2687@oracorp.com>
Date: 27 Feb 92 18:53:27 GMT
Organization: ORA Corporation
Lines: 86

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

[about the table lookup program]

>> Why do you say "no non-trivial thoughts are possible"? 

> Where are they taking place? What processes are involved?

I believe that the problems you are having stem from the fact that the
table is static; the information stored in the table never changes
with time, only the pointer indicating the current location within the
table. However, as special relativity teaches us, you can always view
*any* three-dimensional object that changes with time as a static,
four-dimensional object of which we only see a single slice each
moment. In the table, these future possibilities are simply made
explicit.

To me, the main elements of thinking are (a) memory, and (b) a
mechanism for producing future behavior influenced by memory and
inputs. The question "What is being X *currently* thinking?" does not
have a clear answer, even when X is a human. To see this, simply note
that thinking takes time, and so at any instant a thought is cut in
two pieces: the part that has already been thought, and the part that
will be thought in the future.  *Both* pieces can be found in the
current brain. The past part is in short-term memory. The future
part---the set of *possible* completions to the "current thought"---is
determined by the current state of the brain and the dynamics of brain
functioning. It is only when you put your thoughts into words that
your thoughts acquire a precise temporal order.

In the table lookup program, the current state of the program is
specified by a node in a huge conversation tree. (The top node of the
tree represents the start of the conversation.) The path from the top
node to the current node is the "memory" of the program; it records
what has gone on previously in the conversation. The subtree of
conversations that are beneath the current node are the possible
future conversations of the program.  This is the heart (or rather,
brain) of the program and it is this set of future conversations that
we have to look to in order to answer questions about the mental
properties of the table lookup program.
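The structure just described can be sketched in a few lines of code. This is only an illustration of the idea (the class and function names are mine, not anything from the actual table-lookup program): the current node is the program's state, the path walked from the root is its "memory", and the children of the current node are its possible futures.

```python
class Node:
    """One node in the conversation tree (illustrative sketch)."""
    def __init__(self, utterance, replies=None):
        self.utterance = utterance      # the program's canned response at this node
        self.replies = replies or {}    # maps each possible user input to a child node

def converse(root, user_inputs):
    """Walk the tree by pure lookup; the path taken is the 'memory'."""
    node = root
    memory = []                         # record of the conversation so far
    for said in user_inputs:
        node = node.replies[said]       # no computation on content, just lookup
        memory.append((said, node.utterance))
    # node.replies is now the subtree of *possible* future conversations
    return memory, node

# A tiny example tree:
root = Node("Hello.", {
    "What are you thinking?": Node("I'm wondering where this talk is going.", {
        "Goodbye.": Node("Goodbye."),
    }),
})

memory, current = converse(root, ["What are you thinking?"])
```

After the walk, `memory` holds the past half of the conversation and `current.replies` the future half, which is exactly the split the paragraph above attributes to thoughts in a brain.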

> There's really nothing mysterious about this.  Suppose we had a 
> C compiler that looked up the object code in a huge table. Now
> someone comes along and (without knowing how it works) asks
> whether it's executing a graph-coloring register allocation
> algorithm.

Bad example. The C compiler is not behaviorally equivalent to a
graph-coloring register allocation algorithm.

>> When I say that I believe the table-lookup program is conscious, I
>> mean that I believe that it has thoughts, thoughts as complex as you
>> or I have.

> I do not see how this can possibly be the case.  Can you give any
> argument for it other than "it's so complex anything is possible"?

I didn't make that argument. I don't believe that it is capable of
thoughts because it is complex; I don't think complexity has anything
(directly) to do with it. I believe that anything behaviorally
equivalent to a normal intelligent human being is intelligent
(conscious, or whatever), *regardless* of how complex or how simple it
is. The only reason I brought up complexity was that someone (I'm not
sure who) said that the table was too simple to be intelligent.
Whether or not complexity is important for consciousness, the
conclusion doesn't follow, since the table is not simple.

As for why I would believe that the table is capable of thoughts,
it is because if you ask it "What are you thinking?" it will give a
plausible answer. I have no more reason to doubt its answer than to
doubt your answer to the same question. (Or for that matter, my *own*
answer to the same question.)

>> To the extent that causal connections and correlations can't fix
>> reference, it doesn't get fixed, either in humans or in table-lookup
>> programs.

> And you'd be happy with that? That it doesn't get fixed?

No, I just think that the problems with reference that Putnam (and
others) point out apply to more than just machines; they apply to
every thinking being as well.

Daryl McCullough
ORA Corp.
Ithaca, NY
