From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!sun-barr!cronkite.Central.Sun.COM!exodus!appserv!orfeo.Eng.Sun.COM!silber Sun Dec  1 13:06:53 EST 1991
Article 1777 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!sun-barr!cronkite.Central.Sun.COM!exodus!appserv!orfeo.Eng.Sun.COM!silber
From: silber@orfeo.Eng.Sun.COM (Eric Silber)
Newsgroups: comp.ai.philosophy
Subject: Teleology of Reference
Message-ID: <1209@appserv.Eng.Sun.COM>
Date: 25 Nov 91 22:10:50 GMT
Sender: news@appserv.Eng.Sun.COM
Organization: Sun Microsystems, Mt. View, Ca.
Lines: 19


 Now the man in the Chinese room who so faithfully executes
 his catechism of symbol pushing does not UNDERSTAND Chinese,
 and the "program" itself does not UNDERSTAND Chinese, because
 knowledge is not the same thing as UNDERSTANDING.  In a certain
 sense, UNDERSTANDING is knowledge in action/INTERACTION.
 Searle does not (I believe) focus on the difference between
 an AI program and a running instance of an AI program.
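
 The program/instance distinction can be made concrete in a short
 sketch (a hypothetical illustration, not anything from Searle; the
 `best_move` routine is invented here): the source text of a chess
 routine refers to nothing and does nothing until it is executed.

```python
# A "program" as a static artifact: just text; it performs no action.
source = """
def best_move(board):
    # trivially prefer the first legal move
    return board[0]
"""

# Nothing has happened yet: the string merely sits in memory,
# like a switched-off machine.

# A "running instance": compile and execute the text, then invoke it.
namespace = {}
exec(source, namespace)          # now the program is live
move = namespace["best_move"](["e4", "d4", "Nf3"])
print(move)                      # the running instance acts: prints e4
```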
 Dead humans, just like switched-off machines, do not make "references".
 The problem of transcending, or jump-starting, the
 referential system is bound up with the agency of the symbol: "x stands for
 y" does not mean only that x can substitute for y, or that it can point
 to y, or "refer" to y; "x stands for y" means that x can perform some
 teleological function on behalf of y.  Reference with respect to some
 goal-oriented process, perhaps.  Mind evolved as an agency; in AI,
 goal-oriented processes simulate such agencies.  HAL understands chess;
 gnu-chess does not!
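
 The contrast between mere substitution and teleological standing-for
 might be caricatured in code (a toy analogy of my own, not from the
 post; the `proxy_vote` function and its names are invented): a lookup
 table lets x stand in for y, but only a goal-directed process uses x
 to act on behalf of y.

```python
# Mere substitution: x can point to y in a lookup, but nothing is at stake.
substitution = {"x": "y"}
assert substitution["x"] == "y"   # x "refers to" y, and that is all

# Teleological standing-for: x acts on behalf of y in pursuit of a goal.
def proxy_vote(principal, ballot, choice):
    """A proxy (x) casts a vote for a principal (y): x performs a
    goal-directed function on y's behalf, not just denotation."""
    ballot[principal] = choice
    return ballot

ballot = proxy_vote("y", {}, "approve")
print(ballot)                     # prints {'y': 'approve'}
```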
