From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!nuscc!maclane!smoliar Wed Feb  5 11:56:50 EST 1992
Article 3461 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!nuscc!maclane!smoliar
From: smoliar@maclane.iss.nus.sg (stephen smoliar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb4.013401.9599@nuscc.nus.sg>
Sender: usenet@nuscc.nus.sg
Reply-To: smoliar@iss.nus.sg (stephen smoliar)
Organization: Institute of Systems Science, NUS, Singapore
References: <1992Feb2.170040.6615@news.media.mit.edu> <1992Feb3.101341.13151@nuscc.nus.sg> <CHANDRA.92Feb3091111@cannelloni.cis.ohio-state.edu>
Date: Tue, 4 Feb 1992 01:34:01 GMT

In article <CHANDRA.92Feb3091111@cannelloni.cis.ohio-state.edu>
chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran) writes:
>In article <1992Feb3.101341.13151@nuscc.nus.sg> 
>smoliar@maclane.iss.nus.sg (stephen smoliar) writes (in response to Minsky's
>proposal about some sort of GPS mechanism acting in memory): 
>
>   ... what sort of difference reduction
>   goes on in Tinbergen's stickleback?  I have always worried that the
>   GPS model of goals and targets is a bit too simplistic.  Even if we
>   assign it to lower-level agents, can we even get it to accommodate
>   the sort of variety we encounter in animal behavior?  Perhaps a
>   back-of-the-envelope sketch might help me with this predicament.
>
>Actually I think just the opposite: that GPS, especially in its later
>incarnation as SOAR (Newell told me in a conversation, "SOAR is GPS
>done right"), is really a model of *deliberative activity*, not a model
>of anything that goes on in memory.

I am not sure that Newell is not pulling a fast one here.  There is no
question that SOAR is doing SOME KIND of deliberation.  Indeed, from one
particular level of abstraction, deliberation is pretty much ALL it is doing!
I also have no trouble with his disclaimer that he has not built SOAR as a
memory model.  Nevertheless, the deliberation is confined to SOAR's OWN memory,
so to speak;  and that is where I begin to have a bit of trouble.  In the
context of the article I wrote just before posting about Tinbergen, I raised
the issue of the relevance of motor behavior.  In particular, I wanted to
consider the possibility that "systems" for sensation, motor activity, and
cognition were too tightly coupled to be neatly separated.  When Newell comes
along and uses a word like "activity," my response is to question where motor
behavior fits in his story;  or is he just using the word "activity" because
he has used the word "processing" too much?
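
For concreteness, the difference-reduction loop I have in mind when I
talk about "the GPS model" is roughly the following.  This is a toy
sketch in Python with invented facts and operators;  it is my own
caricature of means-ends analysis, not anything Newell would recognize
as SOAR:

    # States and goals are sets of atomic facts; an operator is a
    # (preconditions, additions, deletions) triple.  All names invented.

    def achieve(state, goal, operators, depth=8):
        """Return (plan, resulting_state), or (None, state) at an impasse."""
        if depth < 0:
            return None, state              # guard against runaway subgoaling
        plan = []
        while not goal <= state:
            diff = sorted(goal - state)[0]  # pick one difference to reduce
            op = next((o for o in operators if diff in o[1]), None)
            if op is None:
                return None, state          # no operator reduces the difference
            pre, add, delete = op
            # Means-ends step: subgoal on the operator's preconditions.
            subplan, state = achieve(state, pre, operators, depth - 1)
            if subplan is None:
                return None, state
            state = (state - delete) | add
            plan += subplan + [op]
        return plan, state

    # Two operators suffice to "make tea" from a kettle and some leaves.
    ops = [({"have_kettle"}, {"have_hot_water"}, set()),
           ({"have_hot_water", "have_leaves"}, {"have_tea"}, set())]
    plan, _ = achieve({"have_kettle", "have_leaves"}, {"have_tea"}, ops)

Note the question I keep asking:  where, in a loop like this, would motor
behavior enter, except as one more operator application?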

>  Thus, with respect to animals,
>whatever difference reduction goes on is extremely minimal: within
>whatever problem spaces and operators the representation-poor beast can
>set up, it can try to reduce differences, but that is clearly a lot
>less than in higher-level animals, which bring increasing amounts of
>symbolic content to their deliberation.
>
Suppose those "representation-poor" beasts do not HAVE problem spaces and
operators.  Perhaps I should not have formulated my question in terms of
the stickleback.  Let us consider one of Brooks' robots instead.  (For the
sake of argument, we can deal with some of the case studies in "Intelligence
Without Representation," given the direction of this discussion.)  What I
would like to see is a suitable difference-reduction analysis of the behavior
of one of these devices (in the SOAR framework if that is how Newell now
believes we should be talking about the world).  Once I have that in my
pocket, I am probably going to want to know what role it will play in helping
me build more powerful devices.
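
To make the contrast vivid, here is a caricature (again in Python, with
sensor names and thresholds I have simply made up;  this is not Brooks'
actual code) of the sort of layered sense-act loop those case studies
describe.  Note that there is no problem space and no difference list
anywhere in it for a SOAR-style analysis to latch onto:

    # Each layer maps raw sonar readings directly to a motor command;
    # arbitration is a fixed priority, with avoidance pre-empting wandering.

    def avoid(sonar):
        """Layer 0: turn away from anything too close."""
        if min(sonar) < 0.3:
            return ("turn", -1.0)
        return None                      # defer to the next layer

    def wander(sonar):
        """Layer 1: otherwise just keep drifting forward."""
        return ("forward", 0.5)

    def control(sonar):
        # The "goals" an observer ascribes (do not crash, keep moving)
        # live in the coupling between these rules and the world, not
        # in any represented goal state.
        return avoid(sonar) or wander(sonar)

    print(control([0.8, 0.25, 1.4]))     # -> ('turn', -1.0)

The difference-reduction analysis I am asking for would have to tell us
what the differences and operators ARE for a device like this one.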
-- 
Stephen W. Smoliar; Institute of Systems Science
National University of Singapore; Heng Mui Keng Terrace
Kent Ridge, SINGAPORE 0511
Internet:  smoliar@iss.nus.sg
