Article 4074 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum
From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb27.182302.5525@ccu.umanitoba.ca>
Date: 27 Feb 92 18:23:02 GMT
References: <1992Feb26.165452.7666@psych.toronto.edu> <1992Feb26.190407.5123@organpipe.uug.arizona.edu> <1992Feb27.025740.8034@a.cs.okstate.edu>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Lines: 39

In article <1992Feb27.025740.8034@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>In other words, can the system, non-Chinese-speaking, generate its own
>questions out of its own curiosity, or better still, can the system
>translate from its native tongue to Chinese?  It appears that
>this debate is only focusing on the fact that the inputs are already
>in Chinese and that the outputs are Chinese.  It seems to ignore that
>the system is incapable of translating from its own language to another
>(that of Chinese).
>
NOTE: The system's language *IS* Chinese!!!
Repeat after me slowly: "The system speaks Chinese, not English!"
>
>There is a creative component inherent in understanding that seems to
>be entirely ignored.  I could, for example, purchase a book on
>set theory and cite examples from that book over the internet and impress
>on people that I indeed understand set theory.  However, I do not know
>anything about set theory and, thus, I cannot claim to understand it.
>If I did know something about set theory, I could not claim to understand
>it until I was able to apply it creatively to a problem.  Passing tokens
>around blindly in no way indicates understanding; rather, application
>and origination of those tokens does.
>
Absolutely.  If the system only spouts sentences without being able
to answer questions *in an efficient, understandable way*, then
I would be the first to say that the machine is *NOT* intelligent!

There is no requirement for an intelligent person to be a causal
agent!  I am aware of several mathematical savants who would
never initiate an investigation, but when they are asked a question
and answer it, there is no doubt in anyone's mind
that there is *some* kind of vast intelligence at work!

>BCnya,
>  Charles O. Onstott, III
-- 
*****************************************************************
*   AZ    -- zirdum@ccu.umanitoba.ca                            *
*     " The first hundred years are the hardest! " - W. Mizner  *
*****************************************************************
