From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:40 EST 1992
Article 2945 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6029@skye.ed.ac.uk>
Date: 21 Jan 92 00:51:34 GMT
References: <1992Jan18.144220.11862@oracorp.com> <1992Jan18.195906.15800@news.media.mit.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 62

In article <1992Jan18.195906.15800@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:
>In article <1992Jan18.144220.11862@oracorp.com> daryl@oracorp.com writes:
>>David Chalmers writes:
>>
>(Discussion of lookup table, etc. omitted)
>
>>I agree that the giant lookup table is ridiculous as a way to
>>implement AI, but I don't understand why it is so obvious that such an
>>implementation would lack mentality. Your answer might be that it
>>would lack the internal states that real minds have, but I don't even
>>grant that: in the case of the lookup table, the internal state would
>>be coded as a location in the lookup table. It is certainly true that
>>this interpretation of internal state would not obey the same
>>transition rules as our own internal states, but what makes the one
>>"conscious processing" and the other not?

This whole point relies on "conscious processing" being something
mysterious so that, for all we know, our coffee cups are conscious
and just pretending not to be.  Or some very simple program, nowhere
near passing the Turing Test, is conscious, just not very bright.

If you want someone to prove it's not conscious, then maybe you're
right: no one could do it (yet).  But suppose we eventually have
a computational theory of mind that we think is adequate.  How
can you be so sure it will say table lookup is conscious?

If that's all that's required, then let's start dancing in the
streets.  No need to do all that hard cog sci work - just imagine
a very large table and wait for the hardware guys to catch up.
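For concreteness (a toy illustration of my own, not anything from Daryl's
or David's posts): the giant-table conversant reduces to a pure map from
(state, input) to (output, next state), with all the apparent "processing"
done once, at table-construction time.  A real version would need an
astronomical number of entries; this one has two.

```python
# Toy stand-in for the "giant lookup table" conversant.  The table maps
# (state, input) -> (output, next state); answering is pure indexing.
TABLE = {
    (0, "hello"): ("hi there", 1),
    (1, "how are you?"): ("fine, thanks", 2),
}

def respond(state, utterance):
    """Look up the reply and successor state -- no computation beyond
    the dictionary lookup itself."""
    return TABLE.get((state, utterance), ("huh?", state))

state = 0
reply, state = respond(state, "hello")
print(reply)   # -> hi there
```

The "internal state" Daryl mentions is just the table location we are
currently pointing at; the question is whether that is enough.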

>Umm, I agree with the conclusion, that the anti-conscious thesis gets
>no support.  But I don't see any reason to admit "it is certainly true
>that this .. would not obey the same transition rules as our own
>internal states."  To be sure, it might not.  However, a reasonable
>guess might be that the state-transition table for the internal
>location states must be -- what's the mathematical word for this -- a
>structure of which the simulated brain's transition semi-group is a
>homomorphism.

I suspect it has lots of states but only very simple transitions.
It shouldn't be the case that anything with enough states is good
enough, because it's trivial to make programs with as many states
as you'd like.
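A throwaway sketch (mine, not from the thread) of how cheaply states
come: a counter over a googol of states has a state space dwarfing any
brain's, with a transition rule of no interest whatsoever.

```python
# Illustrative only: a program with as many states as you like, whose
# transitions are utterly trivial -- raw state count clearly isn't
# sufficient for anything mind-like.
def make_counter(n_states):
    state = 0
    def step():
        nonlocal state
        state = (state + 1) % n_states   # one trivial transition rule
        return state
    return step

step = make_counter(10**100)   # a googol of states, no intelligence
print(step())   # -> 1
```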

>My point is that some skeptics could miss Daryl's point because of
>not realizing that an adequate such table-machine must indeed be so
>large that, as he says, the internal state-transition mechanism must
>indeed be of the same order of graph-complexity as the wiring of the brain!

Size and complexity are not quite the same.

>After all, the table itself has as many entries as the brain has
>states.  It would be rash indeed for a skeptic to feel confident that
>a machine of this magnitude -- it has perhaps 2**10**10 nodes, which
>is quite a few googols -- could "obviously" not be conscious, whatever
>that might (or might not) mean.

The main effect of such examples, in my opinion, is not to be an
air-tight proof of non-consciousness but rather to weaken the
intuition that anything that acts like it understands does.

-- jeff
