Article 3812 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Intelligence Testing
Message-ID: <1992Feb17.213926.17908@oracorp.com>
Date: 17 Feb 92 21:39:26 GMT
Organization: ORA Corporation
Lines: 45

Jeff Dalton writes:

> If you want my reasons, they're essentially that the table lookup is
> too simple.  Do you seriously want to suggest that human understanding
> might just be a table lookup of that sort?

Of course not.

> If not, it shouldn't be hard to imagine a host of definitions that say
> humans understand, that the Table Lookup Machine doesn't, and that
> aren't set up just to rule out table lookup (ie, not "anything except
> table lookup that happens to produce the right behavior").

I can imagine ways that a table lookup program differs from a brain,
but I don't see why they should be considered relevant.

> Indeed, there will be many things in human understanding that
> do not correspond to anything in the TLM.  For instance, when
> the TLM determines what to say next, it just matches the most
> recent input against the labels on the arcs from its current
> state.  The matching is just string comparison, and doesn't
> involve any sort of internal sentences like "why did he say
> that?".

In the case of the table lookup, all of that "internal processing" is
done ahead of time, and goes into the building of the table. The
complexity of the table lookup must include that of the algorithm
(which is obviously trivial) and that of the data (obviously
enormously complex). So the program does things at "compile time" that
humans do at "run time". I agree that this is a difference, but why
should it be considered relevant to the question of whether a program
is conscious?
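To make the contrast concrete, here is a toy sketch of the machine Jeff
describes: the "run time" algorithm is a trivial string match, and all
of the apparent understanding lives in a precomputed table (the table
entries and names below are illustrative placeholders, not anything
from the discussion):

```python
# Toy Table Lookup Machine: a state plus an exact-match table.
# All the "internal processing" went into building the table ahead
# of time; the runtime step does nothing but look up a string pair.
# Table contents are made-up examples for illustration.
table = {
    ("start", "hello"): ("greeted", "Hi there."),
    ("greeted", "what is a tree?"): ("start", "A large woody plant."),
}

def tlm_step(state, utterance):
    # "Run time": one dictionary lookup, no reasoning of any kind.
    return table.get((state, utterance), ("start", "I don't follow."))

state, reply = tlm_step("start", "hello")
print(reply)  # -> Hi there.
```

A real TLM passing the Turing Test would need a table indexed by entire
conversation histories, which is where the enormous complexity sits.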

> Nor does it involve knowing that "trees" refers to trees.

*Your* use of the word "tree" doesn't involve real trees, it involves
electrochemical reactions in your brain. I will grant that the human
brain and sense organs are arranged so that our use of the word "tree"
correlates with real-world facts about trees.  But then, the same is
true for the table lookup (otherwise it wouldn't pass the Turing
Test).

Daryl McCullough
ORA Corp.
Ithaca, NY
