From newshub.ccs.yorku.ca!torn!cs.utexas.edu!asuvax!gatech!usenet.ins.cwru.edu!agate!doc.ic.ac.uk!uknet!edcastle!cam Mon Nov  9 09:36:32 EST 1992
Article 7490 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!asuvax!gatech!usenet.ins.cwru.edu!agate!doc.ic.ac.uk!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <27598@castle.ed.ac.uk>
Date: 2 Nov 92 20:33:38 GMT
References: <markrose.720385670@spssig> <1992Oct30.143242.8130@news.media.mit.edu> <1992Oct30.195251.9573@spss.com>
Organization: Edinburgh University
Lines: 17

In article <1992Oct30.195251.9573@spss.com> markrose@spss.com (Mark Rosenfelder) writes:

>I tend to equate grounding with the folk notion of "knowing what you're 
>talking about."

Fair enough, but don't forget that important class of systems, the
"zombie" AI systems, which relate to the world via symbolic
representations of it, but which can only metaphorically be said to
"know what they are talking about", i.e., they can be correct in what
they say, but there's `nobody at home', no consciousness. It is
reasonable to discuss how well such a system is grounded, which
means we must not make consciousness (i.e., the Searlean sense of
"know") a condition of groundedness.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205