From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor Mon Mar  9 18:35:43 EST 1992
Article 4307 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of understanding
Message-ID: <1992Mar6.145636.13539@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Mar5.141610.20612@oracorp.com> <1992Mar5.201538.1251@psych.toronto.edu>
Date: Fri, 6 Mar 1992 14:56:36 GMT
Lines: 61

In article <1992Mar5.201538.1251@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar5.141610.20612@oracorp.com> daryl@oracorp.com writes:
>>michael@psych.toronto.edu (Michael Gemar) writes:
>>
>>[About a system for calculating potential energy in springs]
>>
>>> You then say, "Aha, but the *system* that calculates potential energy
>>> in a spring does [understand about springs]!" However, someone who
>>> knows electrostatics says, "But wait!  That's also the formula for
>>> calculating the electrostatic energy *in a capacitor*.  Simply
>>> *interpret* k as C (capacitance) and x as V (potential across the
>>> capacitor plates)."  *Now* what does the system "understand"?  Only
>>> springs?  Only capacitors?  Springs *and* capacitors?  I'd vote
>>> for neither, myself.
>>
>>Michael, I agree with your point here, but I don't see how it uncovers
>>a difference between the situation for computers and the situation for
>>human beings. You are supposing that there are two subjects: springs
>>and capacitors, that are exactly isomorphic (when we restrict our
>>attention to dynamics, anyway). Every statement about the one subject
>>can be interpreted as a statement about the other. Before you take
>>this as evidence of a fundamental difference between computers and
>>people, you should consider how (and if) *people* avoid this
>>difficulty.
>>
>
>The point of the above example was not meant to be generally taken to
>demonstrate that computers can't have reference, but merely to counter
>the rather outlandish claim made by an earlier poster that such a system
>would "understand" about potential energy.  The case above clearly shows
>(assuming anyone needed reminding) that this is just ridiculous.
>
>However, I *do* think that this example does show how much we rely on
>interpretation of computer behaviour in assigning meaning to it.  Many

Since you are in a psychology department, you should know only too well how
much we rely on interpretation of other people's behaviour in assigning
meaning to what they say. How do you establish the meaning of what someone
says? Is it YOUR interpretation of what this person means, or is there a
better way?

>people on the Net have argued that Searle's claim that shuffled Chinese
>symbols could instead be interpreted as chess moves is silly.  The above 
>case, I believe, points out that it is not as far fetched as folks might
>imagine, and that resting meaning on the criterion of "only consistent
>interpretation" is questionable at best.
>
Doesn't this also apply to the interpretation of what humans do? Have you
heard about the different interpretations of works of literature? Were the
ancient Greek plays really about gods, or were they in fact about people?

>- michael


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


