From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:58:15 EST 1992
Article 4686 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6480@skye.ed.ac.uk>
Date: 23 Mar 92 20:17:34 GMT
References: <1992Feb27.185327.2687@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 59

In article <1992Feb27.185327.2687@oracorp.com> daryl@oracorp.com writes:
>jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>[about the table lookup program]

>To me, the main elements of thinking are (a) memory, and (b) a
>mechanism for producing future behavior influenced by memory and
>inputs. 

And, as you know, I think it matters what the mechanism is.
In humans, at least, there are thoughts-to-ourselves; we
create sentences that are never output.  But the table machine
doesn't.

>The question "What is being X *currently* thinking?" does not
>have a clear answer, even when X is a human. To see this, simply note
>that thinking takes time, and so at any instant a thought is cut in
>two pieces: the part that has already been thought, and the part that
>will be thought in the future.  *Both* pieces can be found in the
>current brain. The past part is in short-term memory. The future
>part---the set of *possible* completions to the "current thought"---is
>determined by the current state of the brain and the dynamics of brain
>functioning. It is only when you put your thoughts into words that
>your thoughts acquire a precise temporal order.

The future part can also depend on things that happen outside the
person; and whether the current state of the brain and the dynamics
of brain functioning are otherwise sufficient would depend on what
we decide about free will.

>                       I believe that anything behaviorally
>equivalent to a normal intelligent human being is intelligent
>(conscious, or whatever), *regardless* of how complex or how simple it
>is.

>As for why I would believe that the table is capable of thoughts,
>it is because if you ask it "What are you thinking?" it will give a
>plausible answer. I have no more reason to doubt its answer than to
>doubt your answer to the same question. (Or for that matter, my *own*
>answer to the same question.)

Perhaps we should just say we differ on what counts as a relevant
reason.

>>> To the extent that causal connections and correlations can't fix
>>> reference, it doesn't get fixed, either in humans or in table-lookup
>>> programs.
>
>> And you'd be happy with that? That it doesn't get fixed?
>
>No, I just think that the problems with reference that Putnam (and
>others) point out apply to more than just machines, they apply to
>every thinking being, as well.

There is always a tension when arguing for AI between saying computers
are as good as humans and saying humans are no better than machines.
I prefer conclusions that are naturally of the former sort.

-- jd