From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uwm.edu!linac!mp.cs.niu.edu!rickert Tue Jun  9 10:05:48 EDT 1992
Article 5992 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uwm.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <1992Jun1.014731.28528@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <9571@scott.ed.ac.uk> <1992May29.152559.226@mp.cs.niu.edu> <9597@scott.ed.ac.uk>
Date: Mon, 1 Jun 1992 01:47:31 GMT
Lines: 64

In article <9597@scott.ed.ac.uk> sharder@cogsci.ed.ac.uk (Soren Harder) writes:
>rickert@mp.cs.niu.edu (Neil Rickert) writes:

>                  But there still has to be _symbol_ grounding; the AI
>needs some previous experience with the world.

 I have never suggested otherwise.  But once the grounding is established
on one computer robot, you simply do a disk copy to establish it on the
next.  In other words, the only reason for real-world grounding is that
manual generation of the data would be too difficult.

>The problem is that the transducers don't need to translate the data
>back into the form they had in the digital representation of the
>virtual world.

  Again, I was responding to a context where they did.  But regardless,
digital computers are very powerful and, given enough computing power,
can probably perform all the same data transformations.

>       A transducer (a clever English speaking man) would then be able

  Unfair!  You are using "transducer" in a totally non-standard way, and
you are assuming your conclusion in your hypothesis if you call a human
translator a transducer.

>>                                                 I was discussing only
>>the claim that there is intelligence with transducers and that the
>>intelligence disappears - even though behaviour remains identical - once
>
>Why do you think the behaviour remains identical? (How could it?)

  I believe that was allowed by Harnad in his posting.

>>  Perhaps you can explain what you meant by that comment.  Are you
>>perhaps claiming that there is information (rising tone, falling tone, etc)
>>which is not present in the data on the CD?

>I mean that they are vastly different from the salient features in the
>representation on the CD.

 So what?  The other information is all perfectly computable.  As long
as it is present in the digital information there is, in principle, no
difficulty in extracting it.  It might take a lot of compute power, though.

>>  Well thank you for answering my rhetorical questions.  I am glad to see
>>you agree with me that there is no intelligence in the transducers, and
>>that you are thereby (perhaps unintentionally) supporting my claim that
>>the transducer argument is bogus.
>
>I might have misunderstood what you meant by a 'modem'. If you mean
>a 'transducer' then I retract one no. But I'm looking forward to the
>answer to your quiz: Where was the thinking and the intelligence, if
>not in the system?

  You should ask Harnad, not me.  Harnad assumed that there was intelligence
in his TTT.  I am just using that as a basis for further exploration.  All
I am claiming to show is that the intelligence is not in the transducers,
so it must be in what is left.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940
