From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:21:52 EST 1992
Article 3841 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6200@skye.ed.ac.uk>
Date: 18 Feb 92 20:44:44 GMT
References: <1992Feb17.213926.17908@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 68

In article <1992Feb17.213926.17908@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>> If you want my reasons, they're essentially that the table lookup is
>> too simple.  Do you seriously want to suggest that human understanding
>> might just be a table lookup of that sort?
>
>Of course not.

Amazing.  Two people manage to agree on this!

>> If not, it shouldn't be hard to imagine a host of definitions that say
>> humans understand, that the Table Lookup Machine doesn't, and that
>> aren't set up just to rule out table lookup (ie, not "anything except
>> table lookup that happens to produce the right behavior").
>
>I can imagine ways that a table lookup program differs from a brain,
>but I don't see why they should be considered relevant.

You're not trying very hard.  Indeed, you're devoting all your
effort to the other side.

>> Indeed, there will be many things in human understanding that
>> do not correspond to anything in the TLM.  For instance, when
>> the TLM determines what to say next, it just matches the most
>> recent input against the labels on the arcs from its current
>> state.  The matching is just string comparison, and doesn't
>> involve any sort of internal sentences like "why did he say
>> that?".
>
>In the case of the table lookup, all of that "internal processing" is
>done ahead of time, and goes into the building of the table.

No.  Some other processing is done ahead of time.

But if you want to call it "consciousness" when no non-trivial
thoughts are even possible (if we're willing to call such string
matching thought at all), go ahead.  I can only hope that such
bizarre usage doesn't catch on.
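
For concreteness, here is a minimal sketch of everything the TLM
does at run time (in Python; the states, arc labels, and replies
are invented for illustration, not anyone's actual proposal):

    # A toy fragment of the lookup table: each state maps to a
    # list of (arc label, reply, next state) triples.
    arcs = {
        "start": [("Hello.", "Hi there.", "s1")],
        "s1":    [("Do you like trees?", "Yes, very much.", "s2")],
    }

    def tlm_step(state, utterance):
        # The machine's entire "thought process": compare the
        # most recent input, character for character, against
        # each arc label leaving the current state.  No internal
        # sentences, no interpretation -- just string equality.
        for label, reply, next_state in arcs[state]:
            if utterance == label:
                return reply, next_state
        raise KeyError("no matching arc")

Nothing in that loop corresponds to wondering "why did he say
that?".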

> The
>complexity of the table lookup must include that of the algorithm
>(which is obviously trivial) and that of the data (obviously
>enormously complex). 

The structure is not complex.  It's a table, remember.  
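
And while the data is staggeringly large, a back-of-envelope count
shows how flat that largeness is (the figures below are my own
loose assumptions, not anyone's measurements):

    # Rough size of a table covering every possible conversation.
    # Both numbers are illustrative assumptions.
    sentences_per_turn = 10**5   # admissible sentences per exchange
    exchanges = 100              # a generously long Turing Test
    entries = sentences_per_turn ** exchanges
    print(entries == 10**500)    # True: ~10**500 entries, against
                                 # roughly 10**80 atoms in the
                                 # observable universe

Complex structure and a very large number of structurally identical
entries are different things.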

>So the program does things at "compile time" that
>humans do at "run time". 

No, it doesn't.  It does different things at compile time.
I don't think you can be serious.

>> Nor does it involve knowing that "trees" refers to trees.
>
>*Your* use of the word "tree" doesn't involve real trees, it involves
>electrochemical reactions in your brain. I will grant that the human
>brain and sense organs are arranged so that our use of the word "tree"
>correlates with real-world facts about trees.  But then, the same is
>true for the table lookup (otherwise it wouldn't pass the Turing
>Test).

So far, no one in this group (except Zeleny) has seriously attempted
to answer Putnam's argument about cats and cherries (see his _Reason,
Truth and History_), and the related arguments that causal connections
can't fix reference.  Correlations have even less hope of doing so.

-- jd


