Article 7503 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!netsys!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov3.232417.16154@spss.com>
Date: 3 Nov 92 23:24:17 GMT
References: <1992Oct30.143242.8130@news.media.mit.edu> <1992Oct30.195251.9573@spss.com> <27598@castle.ed.ac.uk>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 17

In article <27598@castle.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>In article <1992Oct30.195251.9573@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>I tend to equate grounding with the folk notion of "knowing what you're 
>>talking about."
>
>Fair enough, but don't forget that important class of systems, the
>"zombie" AI systems, which relate to the world via symbolic
>representations of it, but which can only metaphorically be said to
>"know what they are talking about", i.e., they can be correct in what
>they say, but there's `nobody at home', no consciousness.  It is
>reasonable to discuss how well some such system is grounded; which
>means we must not make consciousness (i.e. the Searlean sense of
>"know") a condition of groundedness.

I think you've pretty much answered your own objection.  As you say, we
say the symbols-only system knows what it's talking about only
"metaphorically"; it's grounded the same way (i.e., not much or not at all).


