From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!cis.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert Mon May 25 14:06:34 EDT 1992
Article 5779 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!cis.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <1992May20.191738.18644@mp.cs.niu.edu>
Date: 20 May 92 19:17:38 GMT
References: <1992May19.221021.1619@psych.toronto.edu> <1992May20.030811.13711@mp.cs.niu.edu> <1992May20.150243.25894@psych.toronto.edu>
Organization: Northern Illinois University
Lines: 51

In article <1992May20.150243.25894@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>In article <1992May20.030811.13711@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>>  How can a human be raised in a completely virtual reality?  What is
>>a virtual hunger pang?  What is a virtual runny nose?  How much
>>excitement will there be in a virtual adrenaline rush?  How can a
>>human learn to speak if he can neither hear his own voice nor feel the
>>movement of his facial muscles?

>Interesting point, but the practical difficulties are not really the key to
>the philosophical question.  In principle, you could have a "brain-in-a-vat".
>And, if you did, Harnad's proposal implies that such a brain would have no
>real semantics because, I take it, these things and feelings it thought it
>was having would all be the result of artificial stimulation (I suspect
>there're problems with this phrase, but I'll let someone else take them up).

  When you say 'In principle, you could have a "brain-in-a-vat"' you are
sweeping a great deal under the rug.  Perhaps you are sweeping all of
Harnad's ideas under the rug as well.

  Even if you ignore the practical problems of getting the brain into the
vat, it may not work.  I suspect we make a serious mistake by trying to
locate the mind within the brain.  There are probably information paths
which are not fully within the brain, yet are necessary for its function.
Thus a brain signal may stimulate an organ to release a hormone, which may
affect another organ, which may in turn affect blood chemistry and the
chemical environment of the brain.  Information paths of this form may well
be essential.

  Once we go to AI, however, the situation is different.  We are stuck
with the design of our body, but when we design machines we don't have to
build in the same kind of information pathways.  The only importance of
"grounding" should be to gather data of a quantity and complexity which
might not be attainable any other way.  But once the data is available, you
should be able to put the AI machine in a vat.

>That is, it would never see a cat, but only the image of a cat. Thus, its
>tokening of "cat" would not refer to cats.  It would never feel a scratch 
>on its arm, but only the "image" of a scratch on its arm. 

  It can be argued that you never see a cat now, either, but only the image
of a cat.  In other words, what you perceive as vision is perhaps already
better thought of as a virtual reality, created by the brain as a way of
integrating input from the two eyes, perhaps input from other sensory
organs, and information from memory.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940
