From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:26 EST 1992
Article 4284 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar5.141610.20612@oracorp.com>
Message-ID: <1992Mar5.201538.1251@psych.toronto.edu>
Date: Thu, 5 Mar 1992 20:15:38 GMT

In article <1992Mar5.141610.20612@oracorp.com> daryl@oracorp.com writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>
>[About a system for calculating potential energy in springs]
>
>> You then say, "Aha, but the *system* that calculates potential energy
>> in a spring does [understand about springs]!" However, someone who
>> knows electrostatics says, "But wait!  That's also the formula for
>> calculating the electrostatic energy *in a capacitor*.  Simply
>> *interpret* k as C (capacitance) and x as V (potential across the
>> capacitor plates)."  *Now* what does the system "understand"?  Only
>> springs?  Only capacitors?  Springs *and* capacitors?  I'd vote
>> for neither, myself.
>
>Michael, I agree with your point here, but I don't see how it uncovers
>a difference between the situation for computers and the situation for
>human beings. You are supposing that there are two subjects: springs
>and capacitors, that are exactly isomorphic (when we restrict our
>attention to dynamics, anyway). Every statement about the one subject
>can be interpreted as a statement about the other. Before you take
>this as evidence of a fundamental difference between computers and
>people, you should consider how (and if) *people* avoid this
>difficulty.
>

The example above was not meant to demonstrate, in general, that
computers can't have reference.  It was merely meant to counter the
rather outlandish claim, made by an earlier poster, that such a system
would "understand" about potential energy.  The case above clearly shows
(assuming anyone needed reminding) that this claim is just ridiculous.
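The isomorphism at issue can be made concrete with a small sketch (the
function name and the particular numbers are mine, chosen only for
illustration).  The point is that one and the same computation serves
under either reading:

```python
def half_quadratic_energy(coeff, variable):
    """Compute (1/2) * coeff * variable**2 -- the formula shared by both domains."""
    return 0.5 * coeff * variable ** 2

# Read as a spring:  E = (1/2) k x^2, with k = 200 N/m, x = 0.05 m
spring_energy = half_quadratic_energy(200.0, 0.05)

# Read as a capacitor:  E = (1/2) C V^2, with C = 1e-6 F, V = 12 V
capacitor_energy = half_quadratic_energy(1e-6, 12.0)
```

Nothing in the computation itself picks out springs rather than
capacitors; the interpretation is supplied entirely by us.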

However, I *do* think this example shows how much we rely on
interpretation of computer behaviour in assigning meaning to it.  Many
people on the Net have argued that it is silly for Searle to claim that
shuffled Chinese symbols could instead be interpreted as chess moves.
The above case, I believe, points out that this is not as far-fetched as
folks might imagine, and that resting meaning on the criterion of "only
consistent interpretation" is questionable at best.

- michael
