From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:07:14 EDT 1992
Article 5850 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Grounding and Symbols
Organization: Department of Psychology, University of Toronto
References: <1992May20.170019.26095@kbsw1> <1992May20.181548.7296@cs.ucf.edu> <1992May22.012530.19921@news.media.mit.edu>
Message-ID: <1992May22.183855.13048@psych.toronto.edu>
Date: Fri, 22 May 1992 18:38:55 GMT

In article <1992May22.012530.19921@news.media.mit.edu> nlc@media.mit.edu (Nick Cassimatis) writes:

>While this still leaves unclear as to *exactly* how I should interpret
>"grounding" I have to wonder why the property of being grounded is
>important at all.  Assume we have a robot that works roughly on the
>same level of competence as a human being.  Assume also that it was
>programmed in LISP code that doesn't look much different from the sort
>of LISP code you see today (this last clause is put in here to narrow
>the possibilities not to include things such as LISP implementations
>of neural nets).  Being a LISP program, it will have symbols in it.

*EQUIVOCATION ALERT!!!!*

To interpret "symbol" to mean "something that refers to something else"
is to beg the question.  Why not use "marks" instead, which carries fewer
problematic connotations?  After all, all that the program does is look
at the "shape" of the thing that it is manipulating, not its meaning.

>Suppose that we can find no adequate basis for them being grounded
>under any of the plausible definitions.  So what!  The robot works and
>"AI has been achieved."  Don't you all see (and it appears that some
>people do, including Fernando) that such a priori discussions over
>groundedness aren't accomplishing anything (positive)?

No, I don't.

>A few people (knowing my interest in AI and knowing that I read some
>philosophy) asked me what this Chinese Room thing is all about; they were
>completely astonished by my description of it.

Given your obvious position on the issue, I'm not sure you're the
most unbiased presenter.

>  One said: "I can't
>believe people even take bull*$&% like that seriously."

I feel sorry for your friend... 

>  I have to say
>that had that been my first introduction to philosophy, it would have
>been a long time until I read anything else.

...and you.

>I do enjoy reading Patricia and Paul Churchland, Dan Dennett and W.V.O.
>Quine

Again, not that surprising, since their positions are close to the one
you seem to be advocating.  Realize, however, that they are certainly not
the last word on the issue (and no, neither is Searle).

- michael
