From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!kakwa.ucs.ualberta.ca!unixg.ubc.ca!ubc-cs!uw-beaver!zephyr.ens.tek.com!uunet!mcsu Tue Apr  7 23:22:00 EDT 1992
Article 4691 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:4691 sci.philosophy.tech:2413
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!kakwa.ucs.ualberta.ca!unixg.ubc.ca!ubc-cs!uw-beaver!zephyr.ens.tek.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Definition of understanding
Message-ID: <6493@skye.ed.ac.uk>
Date: 24 Mar 92 16:29:22 GMT
References: <1992Feb24.100036.9114@husc3.harvard.edu> <1992Feb24.180730.18355@psych.toronto.edu> <1992Feb25.035606.26557@u.washington.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 22

In article <1992Feb25.035606.26557@u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:
>If you accept the theoretical possibility of a lookup table that could be
>used to produce behavior indistinguishable from human behavior, then to the
>extent that human behavior indicates semantics, it has been reduced to
>syntax.  If humans have semantics but the lookup table does not, then the
>semantics humans have that cannot be reduced to syntax cannot be expressed.
>In this case semantics provides no explanatory power.

This depends very much on what you mean by "reduce to syntax".  What
you seem to mean is "the same type of behavior could be produced by
syntax".  But that doesn't mean _humans_ produce it by syntax.  To
explain what humans do, you have to explain how humans produce the
behavior, not how some other kind of entity produces the behavior.
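The point can be sketched in a few lines of code (a hypothetical illustration, not anything from the original exchange): two programs whose observable behavior is identical on some domain, one driven by a pre-stored lookup table and one that actually carries out a computation. All names here are made up for the example.

```python
# Two "agents" with identical observable behavior on a small domain,
# produced by entirely different internal mechanisms.

# Mechanism 1: a pure lookup table -- every answer is pre-stored,
# and no arithmetic happens at question time.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def table_agent(a, b):
    """Answer by retrieving a stored entry."""
    return LOOKUP[(a, b)]

# Mechanism 2: actually performing the addition when asked.
def computing_agent(a, b):
    """Answer by carrying out the computation."""
    return a + b

# From the outside, the two are indistinguishable on this domain...
assert all(table_agent(a, b) == computing_agent(a, b)
           for a in range(10) for b in range(10))
# ...but explaining *how* each one produces its answer requires
# describing two different processes.
```

That a lookup table *could* produce the same answers tells you nothing about which mechanism a given system (human or program) actually uses, which is the distinction the paragraph above is drawing.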

An important difference between the two sides of this debate on AI
is that most of the people on the pro-AI side are convinced that it
doesn't matter _at all_ how some behavior is produced.  Anyone
who has ever asked "How does your program work?  Does it do the same
thing as this one?" knows that this is not so, even when only
programs are concerned.

-- jd