From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill Tue Jun  9 10:07:59 EDT 1992
Article 6162 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill
From: bill@nsma.arizona.edu (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <BILL.92Jun8150837@cortex.nsma.arizona.edu>
Date: 8 Jun 92 22:08:37 GMT
References: <1992Jun6.153132.25456@Princeton.EDU>
	<1992Jun6.163918.24479@news.media.mit.edu>
	<BILL.92Jun6194350@ca3.nsma.arizona.edu>
	<1992Jun7.034525.16059@cs.ucf.edu>
	<BILL.92Jun7131519@ca3.nsma.arizona.edu> <60790@aurs01.UUCP>
Sender: news@organpipe.uug.arizona.edu
Organization: ARL Division of Neural Systems, Memory and Aging, University of
	Arizona
Lines: 42
In-Reply-To: throop@aurs01.UUCP's message of 8 Jun 92 17:09:42 GMT

throop@aurs01.UUCP (Wayne Throop) writes:

   -> bill@nsma.arizona.edu (Bill Skaggs):
   -> Rodney Brooks, who builds fantastic insect-like robots at MIT, claims
   -> that one of the greatest hindrances to AI has been the lack of
   -> "embodiment" of the systems that people have built; he argues that
   -> this leads to unrealistic conceptions of the nature of the problems
   -> that need to be solved.

   But surely this is talking about *engineering* or design problems,
   not any problems of TT-passing-without-embodiedness in *principle*.

   Right?

Rodney Brooks says:  "Turing advances a number of strawman arguments
against the case that a digital computer might one day be able to pass
this test, but he does not consider the need that the machine be fully
embodied.  In principle, of course, he is right.  But how a machine
might be then programmed is a question.  Turing provides an argument
that programming the machine by hand would be impractical, so he
suggests having it learn.  At this point he brings up the need to
embody the machine in some way.  He rejects giving it limbs, but
suspects that eyes would be good, although not entirely necessary.  At
the end of the paper he proposes two possible paths towards his goal
of a `thinking' machine.  The unembodied path is to concentrate on
programming intellectual activities like chess, while the embodied
approach is to equip a digital computer `with the best sense organs
that money can buy, and then teach it to understand and speak
English'.  Artificial Intelligence followed the former path, and has
all but ignored the latter approach."

[Note:  Brooks is referring to "Computing Machinery and Intelligence", in
particular to a section omitted from the excerpt reprinted in "The
Mind's I".]


In any case, I can't get too excited about "in principle" arguments.
As far as I can see, saying something is true in principle means
saying it would be true if you could change the inconvenient part of
reality.

	-- Bill
