Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence Testing
Message-ID: <1992Jan20.152751.6143@oracorp.com>
Organization: ORA Corporation
Date: Mon, 20 Jan 1992 15:27:51 GMT

Dave Chalmers writes (in response to Marvin Minsky):

> You seem to be thinking about a different kind of table -- one that has
> a single state for each brain state, with appropriate connections between
> these?  I'm certainly not arguing that this wouldn't be conscious.
> The one I'm talking about is the one whose internal structure consists
> entirely of a huge tree, representing a space of conversations.  At
> a given time, an input statement comes in, the system follows the
> appropriate branch labelled with that statement (a branch exists for
> every possible input), and finds at the new node a representation of the
> appropriate response, which it utters.
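
To make the structure concrete, here is a rough sketch of such a tree
machine in Python (the names and the toy tree are my own illustration,
not anyone's actual proposal):

    # Each node stores a canned response and one child per possible
    # input statement.  The toy tree below is tiny; a real one would
    # need on the order of 10^(6 million) nodes.
    class Node:
        def __init__(self, response, children=None):
            self.response = response        # what the machine utters here
            self.children = children or {}  # input statement -> next node

    def converse(root, statements):
        # One lookup per input: follow the branch labelled with that
        # statement, utter the response found at the new node.
        node = root
        for statement in statements:
            node = node.children[statement]
            print(node.response)

    root = Node(None, {
        "Hello.": Node("Hi there.",
                       {"How are you?": Node("Fine, thanks.")}),
    })
    converse(root, ["Hello.", "How are you?"])  # Hi there. / Fine, thanks.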

Marvin Minsky's assumption that there would have to be a node for each
brain state rests on the following hypothesis: given two different
brain states, if the difference is important, then there is some
conversation that will uncover the difference. That is, the assumption
is that differences in our brain states *can* (not must) show up as
differences in what we say.

> So there's only one state-transition between every input and output.
> As for the size of the table, assuming (conservatively) one million
> possible input statements on each step, and the capacity to handle a
> "Turing test" of a million steps (this has to last a lifetime, remember),
> it will have 10^(6 million) entries.  But its causal structure will be
> extremely simple, and it seems very implausible to me that this huge but
> trivial mechanism could support consciousness.
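
To spell out the arithmetic behind that figure: at each of the 10^6
steps there are 10^6 possible input statements, so the number of
distinct million-step conversations is

    (10^6)^(10^6) = 10^(6 x 10^6) = 10^(6 million),

and the tree needs a branch for every one of them.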

I don't think that there is anything trivial about a system with
10^(6 million) states. (That's the figure I came up with for the
number of possible conversations, as well). If the triviality is
simply due to the fact that there is a single transition between every
input and output, then consider the following thought experiment:
Augment the human brain with an electronic signalling device that
announces each brain transition (say, with a loud "beep"). For the
augmented brain, it is true that there is exactly one transition for
each output (although most of the outputs are "beeps"). Is the brain
less capable of understanding because of this?

Any measure of the complexity of the "causal structure" must take into
account the complexity of the data it uses. After all, every computer
program can be implemented on a universal Turing machine that treats
the program as data. In the case of the table-lookup machine *all* the
complexity is in the data, but I don't see, a priori, why that should
be important for understanding. Is there a Searle-style argument
showing that such a table-lookup machine is incapable of understanding?
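
As a toy version of that point, consider an interpreter whose own
causal structure is a trivial fetch-and-dispatch loop, with everything
interesting pushed into the data it is handed (the example and its
names are mine, just to show where the complexity sits):

    # A fixed, trivially simple interpreter loop; all of the
    # interesting behavior comes from the "program" given as data.
    def interpret(program, x):
        for op, arg in program:
            if op == "add":
                x = x + arg
            elif op == "mul":
                x = x * arg
        return x

    # The complexity lives here, in the data, not in the loop above.
    prog = [("mul", 5), ("add", 3)]
    print(interpret(prog, 2))   # 2*5 + 3 = 13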

Daryl McCullough
ORA Corp.
Ithaca, NY