Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6187@skye.ed.ac.uk>
Date: 13 Feb 92 22:37:43 GMT
References: <1992Jan30.141236.18589@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 62

In article <1992Jan30.141236.18589@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes: (in response to Andrzej Pindor)
>
>>> Since you do not know how understanding arises in the human brain
>>> (the only system you are convinced has an ability to understand), how
>>> are you going to tell, by looking inside the machine, whether it
>>> understands or not?
>
>> You may recall that I said we don't yet know enough.
>
>and also
>
>> In my view, whether or not it "understands" will depend on how it
>> works.  If it works like humans do, in the relevant ways (which
>> I, but not Searle, think may be functional rather than physical),
>> then I'd say it understands.  If it works in some different way,
>> it would depend on just what that way was.  If it's table lookup,
>> for instance, I'd say it doesn't understand.
>
>Okay, Jeff, so don't go on about "we don't yet know enough". If you
>know enough to say definitely that the table lookup program doesn't
>understand, then you know enough for the purposes of this discussion.
>If you can explain why you are certain that the table lookup program
>is not capable of understanding, then we will at least have made some
>progress.

I didn't say I was certain.  I think there are good reasons for
concluding it doesn't, not that it's certain it doesn't.

>So don't worry about some hypothetical machine whose mechanism we
>haven't even thought of yet---of course you don't know enough about
>such a machine. Let's talk about why the table lookup program doesn't
>understand. You should get a lot of help, since just about all the
>principal players of the pro/anti-Searle discussion---you, Zeleny,
>Chalmers, McDermott, etc---seem to agree on this one.

But for different reasons.

If you just want some reasons, I think they have offered some.
If you want my reasons, they're essentially that the table lookup
is too simple.  Do you seriously want to suggest that human
understanding might just be a table lookup of that sort?
If not, it shouldn't be hard to imagine a host of definitions
that count humans as understanding, count the Table Lookup Machine
as not understanding, and aren't set up just to rule out table
lookup (i.e., not "anything except table lookup that happens to
produce the right behavior").

Indeed, there will be many things in human understanding that
do not correspond to anything in the TLM.  For instance, when
the TLM determines what to say next, it just matches the most
recent input against the labels on the arcs from its current
state.  The matching is just string comparison, and doesn't
involve any sort of internal sentences like "why did he say
that?".  Nor does it involve knowing that "trees" refers to
trees.  
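
To make the contrast concrete, here is a rough sketch, in Python,
of the entire mechanism of such a machine.  The states, arc labels,
and replies are my own invention for illustration; a TLM that held
up a real conversation would need an astronomically large table,
but the mechanism would be exactly this:

    # Sketch of a Table Lookup Machine.  Every arc is a
    # (current state, exact input string) pair mapping to a
    # canned reply and a next state.
    ARCS = {
        ("start", "Do you like trees?"): ("Yes, oaks especially.", "oaks"),
        ("oaks",  "Why oaks?"):          ("They outlive us all.", "start"),
    }

    def tlm_step(state, utterance):
        # Pick the next output by plain string comparison on the
        # arc labels: either the characters match an arc or they
        # don't.  The default arc stands in for the table's
        # catch-all entries.
        return ARCS.get((state, utterance), ("I see.", state))

    state = "start"
    reply, state = tlm_step(state, "Do you like trees?")
    print(reply)            # Yes, oaks especially.
    reply, state = tlm_step(state, "Why oaks?")
    print(reply)            # They outlive us all.

Nothing in the lookup answers to knowing what "trees" refers to;
the match succeeds or fails on the characters alone.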

This is still rather unsatisfactory, but it has to be.  As I said
before, if it works like humans do, in the relevant ways (which I,
but not Searle, think may be functional rather than physical), then
I'd say it understands.  But in order to work out the details, I need
to know more about many things than I do now.


