From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!udel!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl Tue Mar 24 09:58:01 EST 1992
Article 4661 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!udel!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Definition of understanding
Message-ID: <1992Mar23.043056.8578@oracorp.com>
Organization: ORA Corporation
Date: Mon, 23 Mar 1992 04:30:56 GMT

michael@psych.toronto.edu (Michael Gemar) writes:

>> All I was saying is that Searle's argument that computers can never
>> understand can just as well apply to people as computers. Therefore,
>> either (a) there is something magical that exempts people from the
>> argument, (b) the argument is wrong, or (c) neither people nor
>> computers can understand. Searle believes (a), I believe (b), and
>> you are trying to stick me with (c).
>
> To equate "magical" with "non-Functionalist" is to go a long way toward
> assuming your conclusion.

I call that which humans are supposed to have but computers lack
"magical" because it has remarkable properties and because its nature
is left completely unspecified by the people who invoke it. If you have
some nonmagical and nonfunctionalist explanation of how humans acquire
semantics, I would certainly be interested in hearing about it.

>>Of course people are capable of understanding, but as I said, I
>>believe that this fact is due to the functional properties of the
>>brain and the way the brain is connected to the world through our
>>sense organs. What I don't believe is that there is some magical way
>>that our thoughts have "semantics" that is not available to computers.

>Then what you believe is Functionalism, but you have made no argument
>for it here (except labelling any competing hypothesis as "magical").

What competing hypothesis? Did someone propose a competing hypothesis
for how humans acquire semantics that I missed? The only real argument
for Functionalism is that it is a plausible hypothesis (at least to
me), and no other hypothesis is.

Functionalism fails to be convincing only because of one big gap: it
doesn't completely explain subjective experience. However, I don't see
how any objective theory can be satisfying in this regard. Suppose
that a theory makes a statement of the form "Any physical system with
physical property P will have subjective experience E". How could such
a statement be tested? The only evidence we could ever have would be
that whenever human beings have property P, they have experience E. I
don't see how we could ever get any information at all about the truth
of such a statement when the system in question is not a human being.
We can say, however, that every system with property P can be
interpreted as having experience E. I agree with you that this is not
completely satisfactory, but I don't know how you could possibly ask
for more.

You (in a different thread) have made much of the fact that
differential equations can be interpreted in more than one way. You
suggest that this would continue to be true, regardless of how
sophisticated the functional rules are. While I agree in principle, I
think that there is a practical limit to the ability to come up with
alternative interpretations. Take a textbook on electromagnetic
theory: you may be able to interpret isolated equations in more than
one way, but is it possible for you to come up with a second coherent
interpretation for the entire textbook? In particular, do you think
that you can successfully interpret an entire textbook on
electromagnetic theory to be about springs and harmonic oscillators?
It is theoretically possible to come up with such an alternative
interpretation, but it is practically impossible.
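The point about isolated equations admitting multiple readings can be made concrete. The following sketch (mine, not from the original post) uses the standard formal analogy between a damped mass-spring system and a series RLC circuit: the same second-order equation a*y'' + b*y' + c*y = 0 describes both, and nothing in the numbers decides between the two interpretations.

```python
# A minimal illustration of one equation with two interpretations.
# Reading 1 (mechanics): a = mass, b = damping, c = stiffness, y = position.
# Reading 2 (circuits):  a = inductance, b = resistance, c = 1/capacitance,
#                        y = charge.

def simulate(a, b, c, y0, v0, dt=0.001, steps=10000):
    """Integrate a*y'' + b*y' + c*y = 0 with simple forward-Euler steps."""
    y, v = y0, v0
    for _ in range(steps):
        acc = -(b * v + c * y) / a
        y += v * dt
        v += acc * dt
    return y

# Interpretation 1: mass 1 kg, damping 0.5, spring constant 4 N/m.
position = simulate(a=1.0, b=0.5, c=4.0, y0=1.0, v0=0.0)

# Interpretation 2: inductance 1 H, resistance 0.5 ohm, 1/C = 4.
charge = simulate(a=1.0, b=0.5, c=4.0, y0=1.0, v0=0.0)

# The trajectories are identical; the "meaning" lives entirely in the
# mapping from symbols to the world, not in the mathematics.
assert position == charge
```

For a single equation the reinterpretation is trivial, which is the post's point: it is only when an entire textbook's worth of interlocking equations, diagrams, and boundary conditions must be reinterpreted coherently that the alternative reading becomes practically impossible.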

I believe that the apparent uniqueness of the meanings of our own
thoughts is simply due to the enormous amount of inter-related
information in our memories. It is beyond *our* ability to come up
with a second plausible interpretation, and so for practical purposes,
the interpretation is unique. The same would hold of a sufficiently
detailed computer model; in practice, the interpretation is unique.

     In my opinion, the most that can be asked of an intelligent being
     (computer or human) is:

     1. Its internal processing produces the right relationships among its
     internal patterns.

     2. The being's [connections] to the world produce the right relationship
     between the internal patterns and the external world.

>In thinking it over, I am not sure if 1. is required by functionalism.
>As I understand it, the only functional relations that matter are the
>ones between the inputs to the system as a whole and the outputs from
>the system as a whole.  In this case we needn't talk about "internal
>patterns" as necessary for intelligence - if we did, then some
>implementations might be able to pass the Turing Test *without* the
>right "internal patterns" and thus not "really" be intelligent
>(Humongous Table Lookup, anyone?).  If I am wrong in characterising
>functionalism in this way, I am happy to hear the real story...

If you drop the requirement about "having the right internal
processing", then you get behaviorism. Actually, behaviorism is simply
a species of functionalism that says that any internal processing that
produces the right behavior is the right kind.

>>The syntax of a language is its intrinsic structural properties (the
>>rules saying what constitutes a term, how terms are combined to make
>>sentences, etc.) The semantics is the interpretation, which is the
>>mapping from terms and sentences in the language to objects and
>>relationships in the intended domain. We can classify sentences as
>>syntactically correct and semantically correct regardless of whether
>>they were uttered by a person, a computer, or a parrot.

>>The controversy is not really over syntax versus semantics, it is over
>>the question of whether the semantics is somehow "inherent" in the
>>system producing the language.

>...or alternatively how syntax can generate semantics. When you state
>the differences between the two of them as you have above, I simply
>cannot see how you can reduce the one to the other.

I am not saying that you can reduce semantics to syntax, I am saying
that a computer system can have both.
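The definitions above (syntax as structural rules, semantics as a mapping from terms and sentences to objects and relationships in a domain) can be sketched in a toy program. This is my illustration, not the post's: a tiny language whose sentences have the form "<term> beats <term>", interpreted in the rock-paper-scissors game.

```python
# Syntax: the structural rules saying what counts as a well-formed sentence.
TERMS = {"rock", "paper", "scissors"}

def syntactically_correct(sentence):
    """A sentence is exactly two known terms joined by 'beats'."""
    parts = sentence.split()
    return (len(parts) == 3 and parts[1] == "beats"
            and parts[0] in TERMS and parts[2] in TERMS)

# Semantics: the interpretation, mapping sentences to relationships that
# hold (or fail to hold) in the intended domain -- here, the game itself.
BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def semantically_correct(sentence):
    """A well-formed sentence is true iff the relationship holds."""
    if not syntactically_correct(sentence):
        return False
    subject, _, obj = sentence.split()
    return (subject, obj) in BEATS

# "paper beats rock"  -- well-formed and true
# "rock beats paper"  -- well-formed but false
# "beats rock paper"  -- not even well-formed
```

The program manipulates strings by purely structural rules, yet it also carries an interpretation relating those strings to a domain; whether that suffices for "real" semantics is, of course, exactly what the Searle debate is about.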

Daryl McCullough
ORA Corp.
Ithaca, NY