From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!edcogsci!dlh Mon May 25 14:07:17 EDT 1992
Article 5855 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!edcogsci!dlh
From: dlh@cogsci.ed.ac.uk (Dominik Lukes)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <9444@scott.ed.ac.uk>
Date: 22 May 92 13:22:19 GMT
References: <1992May19.003821.9450@Princeton.EDU> <1992May19.221021.1619@psych.toronto.edu>
Organization: Centre for Cognitive Science, Edinburgh, UK
Lines: 71

In article <1992May19.221021.1619@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:


>1. I agree that syntax alone will not yield semantics.  Semantics must
>   be attached to the symbols in some manner.

From this point of view, I do not agree. In linguistics one sometimes
distinguishes two classes of morphemes: (1) autosemantic (car, cat,
table) and (2) synsemantic (but, or, and). However, even the (2)s
yield a consistent meaning; just consider a sentence like "I don't
want to hear any BUTs and ORs any more," taken from my mother. So my
point is that the symbols "pure syntax" consists of have meaning
derived from their use.
>
>2. I assert that a virtual reality can be made as indistinguishable
>   from the real world as one would like (more importantly, that
>   it can be made accurate well below the detectable level of
>   our transducers, or the transducers of any simulation of us).
>
>3. Given 2., I assert that a human put into an accurate enough
>   rendering of the real world by a virtual environment would
>   have *exactly* the same experiences as a human in the real
>   environment.  Specifically, such a human would possess semantics.
>   (This in and of itself may be enough to question the necessity of
>   grounding symbols in the actual physical world for the production
>   of semantics.)
>
>4. A program in the situation in 3. would not possess semantics, since
>   it is purely a syntactic system in contact with a simulated
>   (not real) world (this follows from 1.).
>   I take it that you agree with me here.
>
To make things comparable, you would have to strip your human of all
the groundings acquired during previous development, or else add some
to the computer. AI has so far failed because nobody knows how to do
this.



>The conclusion that I draw from the above is:
>
therefore incorrect...

>5. Statements 3. and 4. taken together seem to indicate that there
>   is a principled difference in the way that humans and programs could
>   obtain semantics.  Specifically, it points to a situation in which
>   what is sufficient for humans is not sufficient for a program.  The
>   grounding of symbols in the *virtual* world, while not sufficient
>   for a program, is sufficient for a human.  Therefore, symbol
>   grounding in the *physical* world cannot be a necessary cause of the
>   meaningfulness of symbols.

...because you take humans as software depending on its hardware,
which is right, but you do not consider the hardware in the case of
computers, which is wrong, unless you are able to simulate properly in
your program all the biases humans get through their bodies, which I
really doubt.

>
>I welcome your comments.
>
>- michael
And I do welcome yours!

Yours sincerely & faithfully,
Dominik.
======================================
    My spELling iS wobbly.              
It's goOd spelling bUt it wobbles       
   and tHe letters get iN               
      the wrOng plaCes.                 
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,


