From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!usc!wupost!uunet!icmv!tricorder!degroff Tue Mar 24 09:54:37 EST 1992
Article 4373 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!usc!wupost!uunet!icmv!tricorder!degroff
From: degroff@tricorder.IntelliCorp.COM (Les Degroff)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Message-ID: <1992Mar05.182904.13232@icmv>
Date: 5 Mar 92 18:29:04 GMT
References: <1992Feb25.182526.12698@oracorp.com>
Lines: 70
Nntp-Posting-Host: tricorder

In article <1992Feb25.182526.12698@oracorp.com>, daryl@oracorp.com writes:
|> Christopher Green writes (in response to Stanley Friesen):
|> 
|> CG:
|>    3. Computer programs are entirely defined by their formal, or syntactical
|>       structure....true by definition [of a computer program]

|>   Conclusion 4. For any artefact that we might build which had mental states
|>               equivalent to human mental states, the implementation
|>               of a computer program would not by itself be sufficient.
|>               Rather, the artefact would have to have powers equivalent to 
|>               the powers of the human brain.

|>     As I have already stated, I question assumptions 2 and 3.
Ditto for me, especially 2.
|> CG: 
|> 
|> > I can't conceive of what you object to in 3. It doesn't need
|> > evidence.  It's utterly analytic. Learning to program, even a little,
|> > should convince you.
|> [...] (as is any formal description) does not mean that the implementations
|> of the program are purely syntactic.
[Good commentary on other issues deleted]
|>   
|> Daryl McCullough
|> ORA Corp.
|> Ithaca, NY
  Attempting to expand on Daryl's comments in a more concrete form: it is
comparatively easy to write input/output programs and simulations that
"RUN" (are manifested in reality on a specific machine, at a specific time
and place) such that, a priori, no purely analytic ability will let the
author, or visitors familiar with the program, predict its current
behavior.  The normal teaching of programming, and the standard attitude
toward machines, lead down another path: the goal is to make predictable
programs, and always to retain the ability to examine and manipulate data
and program.  Consistency and predictability are the goals.  A major part
of "recognized intelligence" is the "long history != long history" effect
(no two long histories are identical), weak mappings in communication, and
unexpected interaction effects.  Here is pseudo-code for a (clearly not
intelligent) program to illustrate the "history" / "not predictable" issue.

Begin
  Print "Please name me, I don't know my name"
  Take input and bind  Name = readinput
  Print "Thank you, my name is " Name
  m = 0
  Loop forever  (m = m+1)     (heartbeat; a clock is nice in a history system)
      Print "Tell me a magic word"
      Take input and store in array:  magicword(m) = readinput
      If magicword(m) equals Name
             Then Print "It's my name and I am magic"
             Else search the array to see whether magicword(m) equals an
                  already-entered magic word.
                  If magicword(m) equals some magicword(<m)
                     Print "I already know that one, tell me another!!"
                  Else
                     Print "Thank you, a wonderful new magic word"
   Loop up.
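To make the history dependence concrete, here is the loop above as a runnable
Python sketch (my translation, not part of the original post; the function
name and the finite input-list interface are my own choices, standing in for
the interactive "readinput"):

```python
def magic_word_session(inputs):
    """Run the 'magic word' loop over a finite list of inputs.

    The first input names the program; every later input is a candidate
    magic word.  Replies depend on the accumulated history of inputs,
    so runs with different input orders produce different transcripts.
    """
    transcript = []
    it = iter(inputs)
    name = next(it)                      # "Please name me, ..."
    transcript.append("Thank you, my name is " + name)
    seen = []                            # history: every magic word so far
    for word in it:
        if word == name:
            transcript.append("It's my name and I am magic")
        elif word in seen:
            transcript.append("I already know that one, tell me another!!")
        else:
            transcript.append("Thank you, a wonderful new magic word")
        seen.append(word)                # the history grows with each input
    return transcript
```

Running it with the inputs in a different order yields a different
transcript, which is the whole point: the syntax is fixed, the behavior is
a function of the run's history.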

   In Lisp, or another language with a dynamic, interpreted functional
environment, this kind of "take input" loop can be expanded into a program
that collects "symbols/tokens/words" and "functions/actions/semantic
bindings".
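One sketch of that expansion, in Python rather than Lisp (the
`define`/`evaluate` names and the dictionary representation are my
invention, not from the post): the program's vocabulary of tokens, and the
actions bound to them, grow at run time, so what a token "means" to the
system depends on its input history.

```python
# Table of token -> action bindings, built up as input arrives.
bindings = {}

def define(symbol, action):
    """Bind a token to a callable; the vocabulary grows with history."""
    bindings[symbol] = action

def evaluate(symbol, *args):
    """Apply the action bound to a token, if any binding has been made."""
    if symbol in bindings:
        return bindings[symbol](*args)
    return "unknown token: " + symbol

# Two runs that 'define' different tokens end up with different semantics.
define("greet", lambda who: "hello, " + who)
define("twice", lambda s: s + s)
```

A token the program has never been taught simply has no semantics yet,
which is the "take input loop" growing its own symbol/action bindings.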
  In this simple example we still have distinct, predictable syntax, but
if we run it, it builds a potentially unique history.  A weakness in
believing that the "system" of the Chinese room "understands Chinese" is
that human understanding is a "long history" effect, with a large
sensory/memory binding set beyond the token/symbol bindings.  I partly
believe that a "human mind" confined and limited to learning the rules of
a token language would not be intelligent or human, as a result of the
"unreality of its input" and the "context problem".
Les DeGroff (degroff@intellicorp.com)


