From newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!zaphod.mps.ohio-state.edu!wupost!spool.mu.edu!umn.edu!news Thu Oct  8 10:10:30 EDT 1992
Article 7050 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!zaphod.mps.ohio-state.edu!wupost!spool.mu.edu!umn.edu!news
From: debauche@psych.umn.edu (Brian DeBauche)
Subject: Re: Grounding
Message-ID: <1992Sep28.155628.7904@news2.cis.umn.edu>
Sender: news@news2.cis.umn.edu (Usenet News Administration)
Nntp-Posting-Host: text2.psych.umn.edu
Organization: University of Minnesota
References: <1992Sep24.011517.22127@Princeton.EDU>
Date: Mon, 28 Sep 1992 15:56:28 GMT
Lines: 76

In article <1992Sep24.011517.22127@Princeton.EDU> harnad@phoenix.Princeton.EDU  
(Stevan Harnad) writes:
 
> ... What I would say is that a *computer* cannot be grounded by
> connecting it to a Unix system (a virtual world, a real world, or
> anything). A grounded robot (i.e., a robot with the capacity to pass
> the Total Turing Test in the real world) would still be grounded even
> if its sensors and effectors were connected to the peripherals of a
> virtual world (generated by, say, a Unix system), just as WE (with our
> TTT capacity) would still be grounded if our senses and effectors were
> connected only to the I/O of a virtual world simulator. The distinction
> is critical. It is TTT-CAPACITY that the robot must have in order to be
> grounded. It makes no difference whatsoever how it acquired that
> capacity, as long as it has it. If it has it, it has it, regardless of
> what its sensors and actuators are connected to. What cannot be
> grounded, no matter what it's connected to, is a computer, which, by
> definition, cannot have TTT (robotic) capacity.
> 

     If I can restate this line of reasoning for clarification, the original  
contention from farther back in the pile was that
1. If a symbol manipulator is grounded in reality, then it is conscious.
     This is followed by
2. A robot is such a symbol manipulator.
     Which Harnad modifies thus:
3. Any robot will be grounded in reality;
  a) by definition, 'robotic' implies correct intersection with reality by  
transduction.
  b) passing the Total Turing Test indicates consciousness.

	I believe this reasoning to be faulty due to this last premise. By  
consciousness I assume one means the quality of introspection indicated by  
Descartes' "I think, therefore I am"; whereby there is at least one thing of  
which I can be sure, even if hooked to a VR simulating external reality, and  
that is that I think. In this thinking I am directly aware of my introspective  
qualia, as no one else is. The "I" so indicated is then my introspective self,  
which is present during dreams, remembrance, and waking activity.
     The Turing Test was never meant to indicate this sense of consciousness,  
as the author himself states;
     "The original question, 'Can machines think?' I believe to be too  
meaningless to deserve discussion. Nevertheless I believe that at the end of  
the century the use of words and general educated opinion will have altered so  
much that one will be able to speak of machines thinking without expecting to  
be contradicted. I believe further that no useful purpose is served by  
concealing these beliefs." (A.M. Turing, Mind, 59:433-460, 1950)
     What Turing suggests is that our understanding of introspective phenomena  
is predicated on the social construction of what it is to be a human animal-  
for millennia consciousness has been used as the essential characteristic of  
those deserving of moral import in society. The change in our assignation of  
value will come about when computers become functionally equivalent to  
ourselves (the TTT-capacity above). This does not in any way deny our  
individual introspective sense and its extension to others, whose experience we  
assume to be homologous.
     Solipsism for Turing was the consequence of considering one's mental  
experience in relation to the world. We have no consciometer, no objective  
indicator for the types of subjective experiences we daily undergo. Our basis  
for believing others to be subjectively isomorphic is merely behavioral   
similarity. This is why he describes our ascription of mind to other humans as  
a "polite convention". 
      Grounding in any sense is no more than another step toward functional  
equivalence. As has been stated, functional isomorphism is no guarantee of  
subjective experience. If you are willing to extend to any TTT robot the  
quality of mind, then there is an implication for any system exhibiting  
patterned response to the environment- that the internal states of such a  
system exhibit the same kind of processing one can expect in a human; and this  
leads plausibly to panpsychism, the view that all systems are conscious.
     The other path through this argument is to revise our concept of mind; by  
accepting intelligent systems into our moral community, we expand social  
responsibilities towards robots and animals. This opens up a number of  
problematic consequences for the way we treat the earth and each other.
And there would still exist no proof of this quality of belongingness, except  
as an assumption based on social convention.
