From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!att!att!fang!tarpit!cs.ucf.edu!news Tue Mar 24 09:58:06 EST 1992
Article 4671 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!att!att!fang!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Newsgroups: comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar23.143844.26050@cs.ucf.edu>
Date: 23 Mar 92 14:38:44 GMT
References: <45216@dime.cs.umass.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 115

In article <45216@dime.cs.umass.edu> orourke@unix1.cs.umass.edu (Joseph  
O'Rourke) writes:
| In article <1992Mar20.142954.19624@cs.ucf.edu> 
| 	clarke@acme.ucf.edu (Thomas Clarke) writes:
| 
|  >I think everyone is making it too complicated.
|  >[...]
|  >Putnam then goes on to talk about "an object S which ...behaves... exactly
|  >as if it had a certain description D."  The same mathematical identification
|  >technique can be applied to S to establish that it realizes input/output
|  >automaton D.
| 
| This is a restatement of what Putnam says, yes.  But I'm still not sure
| I understand it.  If you do, I would appreciate a rephrasing that makes 
| it clearer. 
 
This is the way I understand it.  Pardon me for putting words in Putnam's  
mouth.

Systems Analyst:  We here at HAL, Inc. are very proud of our IBM 360.  Very  
shortly we will be able to use it to translate between any two human languages.

Putnam:  So you have implemented an Artificial Intelligence?!

Systems Analyst:  No.  Just a language translator.  Why do you bring up AI?  

Putnam: The argument is simple.  Observe this rock; it sits there seemingly  
unchanging.  This IBM 360 is powered up but otherwise disconnected; it sits  
there also seemingly unchanging.  The two are equivalent.

Systems Analyst:  Not so; I just started the 360 on an infinite loop of
self-diagnostic procedures.  The rock is doing nothing.

Putnam:  But is the rock not subject to bombardment by cosmic rays?  Is it not
bathed in radiation from TV and radio transmitters?  Is it not subject to
planetary tides?  Thus the rock is never truly in the same state from instant
to instant.  It is easy to establish a mathematical mapping between the
internal computational states of your 360 and the physical states of this rock.
In fact, since your diagnostics are periodic, the rock's states are potentially
equivalent to a more complex calculation than the one your computer is conducting.
   If you want your computer to be more than a doorstop, you'll have to make it
do something.
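
(A toy sketch of my own, with every name and the two-state diagnostic FSA
invented for illustration, of the kind of mapping Putnam is trading on here:
since cosmic rays and tides put the rock in a distinct physical state at each
instant, those states can simply be paired off, one per instant, against
whatever state sequence the 360 runs through.)

# Toy illustration (all names invented) of Putnam's mapping argument: because
# the rock occupies a distinct physical state at each instant, any run of a
# finite-state automaton can be "found" in it simply by pairing states.

def fsa_run(transition, start, inputs):
    """Return the sequence of states an FSA visits on the given inputs."""
    states = [start]
    for symbol in inputs:
        states.append(transition[(states[-1], symbol)])
    return states

# The 360's self-diagnostic loop, modeled as a two-state FSA.
diag = {("idle", "tick"): "check", ("check", "tick"): "idle"}
run = fsa_run(diag, "idle", ["tick"] * 4)        # idle, check, idle, check, idle

# The rock's physical states sampled at the same five instants; cosmic rays,
# radio, and tides guarantee that each sample is physically distinct.
rock = ["rock@t0", "rock@t1", "rock@t2", "rock@t3", "rock@t4"]

# The "mathematical identification": pair each sampled rock state with the
# state the 360 occupies at that instant.  Nothing constrains the choice,
# which is exactly why the identification is so cheap.
mapping = dict(zip(rock, run))
print(mapping)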

Systems Analyst:  Well, OK.  Here, I've connected the 360 to a terminal, and
I've started it running a simulation of one of Brooks' insect robots.  Now it's
not equivalent to a rock.

Putnam:  Congratulations, the 360 is now computationally equivalent to that ant
crawling across the window there.  Given that their behaviors are equivalent
(the ant in the world, the 360 in the simulated world of the terminal), I can
use the same sort of mathematical mapping to establish a correspondence between
the 360's digital states and the ant's bio-chemical-physical states.

Systems Analyst:  Hold on.  The ant is full of neurons, and is driven by
pheromones and other physico-chemical signals.  The 360 uses Lisp.  The two are
entirely different.

Putnam:  Nevertheless, they are computationally equivalent to the extent that
Brooks' robot behaves like an ant.  For every possible input, the robot and the
ant exhibit the same behavior.  In the robot simulation this means some
variables in the machine take on certain values, and these values fire
conditionals, giving the desired behavior.  In the ant, certain conjunctions of
inputs cause neurons to fire, moving muscles to give the desired behavior.  Just
as the states of the rock and the disconnected computer can be identified, the
variables and the neural firings can be identified.  This identification is of
course not unique.  The details of the mapping might be a good activity for a
graduate student.
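
(Again a made-up sketch of my own, just to show what "not unique" means here:
more than one grouping of the ant's micro-states onto the robot's variable
settings will serve, so long as each grouping respects the shared behavior.)

# Made-up micro-states of the ant and variable settings of the robot simulation.
ant_micro = ["fire-1", "fire-2", "fire-3", "fire-4"]     # neural firing patterns
robot_vars = ["seek-food", "avoid-light"]                # simulation variables

# Two different groupings of ant micro-states onto robot states.  Either one
# counts as an "identification", provided the micro-states grouped together
# really do yield the same behavior; nothing singles out one grouping as the
# mapping, which is why the details are left to the graduate student.
grouping_a = {"fire-1": "seek-food", "fire-2": "seek-food",
              "fire-3": "avoid-light", "fire-4": "avoid-light"}
grouping_b = {"fire-1": "seek-food", "fire-3": "seek-food",
              "fire-2": "avoid-light", "fire-4": "avoid-light"}

assert set(grouping_a.values()) == set(grouping_b.values()) == set(robot_vars)
print(grouping_a != grouping_b)   # True: two distinct, equally serviceable mappings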

Systems Analyst:  Granted.  Where does the AI come in?  I've got a meeting in
five.

Putnam:  We're almost there.  Name a system that can accept input, say in  
French, and produce a meaningful output in English.

Systems Analyst:  A human translator, of course.  Our machine translator is
almost ...

Putnam:  Now, would you expect a feral human, raised without culture or
language, to be a good translator?

Systems Analyst:  Of course not.  If the feral human were not irretrievably
damaged, they would have to learn language.

Putnam:  Would this be enough?  Could a human translate French idioms without  
some cultural background?

Systems Analyst:  No.  From what I remember of high school French, there are
phrasings that can't be translated literally from one language to another.

Putnam:  Thus, a translator acts not only as if he has knowledge of vocabulary
and grammar, but as if he had been exposed to human culture.

Systems Analyst:  Yes, but where are you going?

Putnam:  We're there.  Replace the "he" with "it".  Your machine translator, if
it works, behaves like the human translator, which behaves like a bilingual,
cultured human.  By an extension of the above mathematical argument, the
machine translator's internal states can be identified with the internal
bio-chemical-physical neural states of the human translator.
	Therefore, your machine will be able to pass the Turing test to the
extent that a translator can pass one.  Thus, most observers would call the
machine translator intelligent.  You have created an AI!

Systems Analyst:  No, not at all.  Our project is limited to translation.  The
broader goals of AI we leave to the academics at Harvard, MIT, etc.
	Look, I've got to go.  This meeting is about timing our announcement of
the translator.  <Leaves>

Putnam: <Picking up phone>.  Hello.  Is this my broker?  Sell my stock in HAL,  
please.