From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!convex!mips.mitek.com!spssig.spss.com!markrose Tue Mar 24 09:56:06 EST 1992
Article 4489 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!convex!mips.mitek.com!spssig.spss.com!markrose
>From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Definition of understanding
Message-ID: <1992Mar16.233438.45463@spss.com>
Date: Mon, 16 Mar 1992 23:34:38 GMT
References: <1992Mar10.171755.7458@psych.toronto.edu> <1992Mar11.122705.22342@neptune.inf.ethz.ch> <1992Mar11.185921.10347@psych.toronto.edu>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS Inc.
Lines: 39

In article <1992Mar11.185921.10347@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes (quoting Philip Santas):
>>You can do type checking even statically in this example:
>>
>>  PE GivePE(Distance displacementFromEquilibrium)
>>          {
>>           Distance x = displacementFromEquilibrium;
>>           SpringConstantType k = SpringConstant;  // this is a global variable 
>>           return 0.5 * k * power(x,2);
>>          };
>>
>>Relevant things you can do for capacitances, by changing the type
>>of the argument displacementFromEquilibrium.
>
>This does *not* ground the meaning of these terms.  How do these variables
>know that the numbers input are spring constant and displacement, rather
>than capacitance and potential?  Merely typing
>  
>Distance x = displacementFromEquilibrium
>
>does not tell the computer what "distance" and "displacement from equilibrium"
>*is*!  I could have just as easily typed:
>
>Qaatlus x = GwornsBleebArack
>
>and the program would *still* compute *both* spring potential energy *and*
>electrostatic energy.

Exactly.  I am afraid that some AI types confuse the names of symbols with
semantics.  The two expressions you wrote are strictly equivalent to the
computer (indeed, many compilers would generate identical object code from
them).  The first expression is intelligible to a human observer; this 
must not be confused with understanding on the part of the computer.

I don't think this makes artificial intelligence unattainable.  But 
if a computer understands a term, it will be because it can relate it
to an enormous mass of information, experience, and procedures (much as
happens in a human being), and not because the variables it uses
have names that resemble English words.
