From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!mintaka.lcs.mit.edu!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Tue Jan 21 09:27:03 EST 1992
Article 2876 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!mintaka.lcs.mit.edu!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Intelligence Testing
Message-ID: <1992Jan18.195906.15800@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <1992Jan18.144220.11862@oracorp.com>
Date: Sat, 18 Jan 1992 19:59:06 GMT
Lines: 60

In article <1992Jan18.144220.11862@oracorp.com> daryl@oracorp.com writes:
>David Chalmers writes:
>
(Discussion of lookup table, etc. omitted)

>> I personally think that the case against 1 and 2 is made compellingly
>> by the example of the giant lookup table -- a ridiculous example,
>> impossible in practice but not in principle, but enough to make the
>> case.  I think that it's likely that any reasonable-in-practice
>> mechanism that has the right behaviour will have mentality, however.
>
>I agree that the giant lookup table is ridiculous as a way to
>implement AI, but I don't understand why it is so obvious that such an
>implementation would lack mentality. Your answer might be that it
>would lack the internal states that real minds have, but I don't even
>grant that: in the case of the lookup table, the internal state would
>be coded as a location in the lookup table. It is certainly true that
>this interpretation of internal state would not obey the same
>transition rules as our own internal states, but what makes the one
>"conscious processing" and the other not?
>
>Daryl McCullough
>ORA Corp.
>Ithaca, NY

Umm, I agree with the conclusion, that the anti-consciousness thesis
gets no support.  But I don't see any reason to admit "it is certainly
true that this ... would not obey the same transition rules as our own
internal states."  To be sure, it might not.  However, a reasonable
guess might be that the state-transition table for the internal
location-states must be -- what's the mathematical word for this -- a
structure of which the simulated brain's transition semigroup is a
homomorphic image.  Of course this isn't rigorous, because each human
will have lots of inaccessible states -- that is, ones which never
affect behavior -- hence the super-table could be simplified in those
respects.
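
To make that concrete, here is a minimal sketch in Python -- a toy
construction invented for this illustration, not code from the thread;
the little "brain" (delta, out), the history-indexed table, and the
map h are all made up.  The table-machine's "internal state" is just
its current location (the input history so far), and h sends each
location back to the brain state it encodes, commuting with the
transitions -- which is the homomorphic-image relation claimed above:

    from itertools import product

    # Toy "brain": 3 states, inputs 'a'/'b', binary outputs (all invented).
    INPUTS = ['a', 'b']
    def delta(state, symbol):          # the brain's transition function
        return (state + (1 if symbol == 'a' else 2)) % 3
    def out(state):                    # the brain's output function
        return state % 2

    def run(history, start=0):         # replay a history of inputs
        s = start
        for sym in history:
            s = delta(s, sym)
        return s

    # The lookup table: one canned output per input history (bounded at
    # N so the demo stays small; the "real" table would be astronomical).
    N = 4
    table = {hist: out(run(hist))
             for n in range(N + 1) for hist in product(INPUTS, repeat=n)}

    # h maps a table location (a history) onto the brain state it encodes.
    def h(location):
        return run(location)

    # Homomorphism check: stepping the table-machine commutes with delta.
    for location in table:
        if len(location) < N:
            for x in INPUTS:
                assert h(location + (x,)) == delta(h(location), x)
    print(f"{len(table)} table entries; homomorphism check passed")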

My point is that some skeptics could miss Daryl's point because they
do not realize that an adequate such table-machine must indeed be so
large that, as he says, the internal state-transition mechanism must
be of the same order of graph-complexity as the wiring of the brain!
After all, the table itself has as many entries as the brain has
states.  It would be rash indeed for a skeptic to feel confident that
a machine of this magnitude -- it has perhaps 2**10**10 nodes, which
is quite a few googols -- could "obviously" not be conscious, whatever
that might (or might not) mean.  To insist on that would simply
expose the weakness of (Searle's?) thesis, which, so far as I can see,
says something like:
  Let's assume that no machine can be conscious (or understand
anything, or have intentionality).
  Therefore the Chinese room machine cannot be conscious, etc.

A fine bit of logic, for sure, but a faulty bit of reasoning.
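
As a footnote on that magnitude -- back-of-the-envelope arithmetic
added here for illustration, not part of the original argument:

    import math

    # 2**(10**10) has about 10**10 * log10(2), i.e. roughly 3.0e9,
    # decimal digits, so it dwarfs a googol (10**100): it is on the
    # order of a googol raised to the thirty-millionth power.
    digits = 10**10 * math.log10(2)    # ~3.01e9 decimal digits
    print(f"{digits:.2e} digits, ~ googol**{digits / 100:.1e}")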
