From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Mon Dec  9 10:48:15 EST 1991
Article 1885 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <5798@skye.ed.ac.uk>
Date: 5 Dec 91 18:21:17 GMT
References: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu> <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 8

In article <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>Speed matters.

So if instead of a man in a room we have a very fast machine
following rules that are formally equivalent to those used by the
man in the room, suddenly we have understanding?  Seems bizarre
to me.  Moreover, this approach does nothing against Searle's
more general "syntax isn't enough for semantics" argument.
