From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Dec 16 11:01:44 EST 1991
Article 2104 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec13.175039.16227@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1991Dec10.213907.17512@psych.toronto.edu> <am0TcB3w164w@elrond.toppoint.de>
Date: Fri, 13 Dec 1991 17:50:39 GMT

In article <am0TcB3w164w@elrond.toppoint.de> freitag@elrond.toppoint.de (Claus Schoenleber) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>
>> In article <u95kcB2w164w@elrond.toppoint.de> freitag@elrond.toppoint.de (Claus Schoenleber) writes:
>> >michael@psych.toronto.edu (Michael Gemar) writes:
>> >
>> >
>> >> [Some lines on holiday]
>> >> The strength of Searle's argument is that, contrary to what some may claim,
>> >> it does not rest on any particular way of telling the Chinese Room story.  The
>> >> argument simply is that it is impossible to generate semantics from a purely
>> >> syntactic system.  This, Searle argues, is a *logical* point, true simply by
>> >> virtue of what the words "syntax" and "semantics" mean.
>> >> [Following lines vanished]
>> >
>> >What, please, is the (your) meaning of "syntax" / "semantics"?
>> >(especially "semantics")
>> 
>> Syntax is the rule-based manipulation of marks due to their shape.
>> 
>
>O.k.
>
>> Semantics is the meaning that symbols have due to their reference to things
>> in the world (this is rough and ready, and would have to be qualified to cover
>> things like unicorns and numbers).
>> 
>> 
>
>O.k., that's the point I wanted to see: semantics = meaning of syntactical
>structures (right?)

Semantics is what symbols refer to, yes.

>Now we're in the same trouble as before. We have to define what meaning is,
>don't we?

Well, actually we have to define "reference".  A slight difference, to be
sure.

>Maybe "meaning" is association between a (more or less) complex symbol and
>some environmental event. 

What do you mean by "association"?  What kind of association counts?  When
I had the chicken pox, I got red spots on my skin.  Do red spots therefore
*mean* chicken pox in the same way that the word "horse" means horse?
I don't think so, and I think we need a much more sophisticated concept
of reference (meaning).

> Now, can't that be done by a sufficiently complex
>syntactical rule system? Nothing is said about whether the acting machine
>is intelligent or merely simulating; either way it would behave like an
>intelligent system.

See above.  
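
To make "purely syntactic" concrete, here is a toy sketch (mine, not
Searle's; the marks and rules are invented, in Searle's squiggle-squoggle
spirit) of a rule system that pairs marks with marks purely by shape.
No rule anywhere says what a mark refers to:

# A toy rule system: marks in, marks out, matched by shape alone.
# Nothing in it refers to anything in the world.
RULEBOOK = {
    "squiggle squoggle": "squoggle squiggle",
    "squoggle squiggle": "plonk",
}

def room(mark):
    # Emit the shape the rulebook pairs with the incoming shape,
    # or a default shape for anything unrecognized.
    return RULEBOOK.get(mark, "plonk plonk")

print(room("squiggle squoggle"))   # -> squoggle squiggle

You can grow the rulebook as large as you please; you get more behavior,
never reference.  Whether enough of this could ever *amount* to semantics
is exactly what is at issue between Searle and strong AI.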

>> >BTW, isn't it possible that there exists a level of complexity in syntax
>> >where the quantity of syntax changes to quality (i.e. semantics)? 2 cents? ;-)
>> 
>> This seems to be the assertion that strong AI makes.  It seems to me, given
>> Searle's argument, that it is up to his critics to demonstrate *how* such
>> a thing would be possible.  If Searle is correct, and his argument is
>> essentially true because of the definition of the terms, then we should no
>
>With the proper definitions you can prove almost anything. His definitions
>are the problem, in my eyes.

Well, as far as I know, they are the definitions used by most philosophers
and linguists.  It seems to be only AI people who have trouble believing
that semantics is not the same as syntax.

>> more expect lots of syntax to yield semantics than we should expect that
>> a whole bunch of bachelors put together would somehow yield some who aren't
>> unmarried males.
>> 
>
>Greetings from the infinite number of monkeys, writing Hamlet :-)

While you meant the above in jest, it only points out the truth of
what Searle is saying.  We certainly wouldn't say that the monkeys
*understand* that they've written Hamlet...  Strings of randomly
produced letters that happen to form words are not produced with
any understanding, even though we may assign such marks meaning in our
language.
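
(If you want to watch the monkeys at work, here is a quick sketch -- the
alphabet, the word list, and the typing volume are my own choices:)

# Monkeys at the typewriter: randomly produced marks that sometimes
# happen to spell English words, with zero understanding involved.
import random

WORDS = {"to", "be", "or", "not", "is", "the"}
monkey = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                 for _ in range(1000000))
print([w for w in monkey.split() if w in WORDS])
# Whatever English falls out, the meaning is assigned by us, not the monkeys.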

>
>(The following arguments are taken from the German edition of Scientific
>American, Spektrum der Wissenschaft, March 1990: Searle's article)
>
>Searle said computers can *only* manipulate symbols. That's similar to:
>"The Venus de Milo is only made from CaCO3 (marble)". It is in fact made
>from marble, but that's not all there is to say about it. So let us say:
>Computers can manipulate symbols. (No more, no less.)
>(BTW, the Churchlands made the same error: they say (as I read it)
>"semantics is _only_ syntax".)
>
>Searle said human thoughts have semantic content (Axiom 2). But he forgot
>to say what he means when he uses the term "semantic". So the problem was
>shifted, not solved.

No, I disagree.  You are right in that Searle does not give a fancy
definition of semantics.  But he does say that semantics, for him, is
essentially equivalent to understanding.  And he *knows* he understands,
by introspection.

>His first conclusion, that computers are not sufficient for mental ability,
>is therefore not valid, because he never had a sufficient (complete)
>premise.
>Now, with his Axiom 4 he did something strange: He said "brains cause minds".
>A few lines before, he said computers have nothing to do with the technology
>they are made of. O.k., let us say: a silicon brain can also cause mind.

No, you miss the point.  Computers are *defined* as being machine-independent.
One computer can, with the appropriate programming, *function* just
like another.  Therefore, computers have nothing to do with the technology
they are made of -- a computer made of beer cans and string and powered
by windmills (one of my favorite Searle images) could, in principle, be
*functionally* identical to a silicon computer.
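
To see what "functionally identical" comes to, a toy sketch (my example,
not Searle's): one and the same computation realized in two quite
different ways.  The second is in the beer-can spirit -- it does nothing
but line marks up and count them:

# Two realizations of one computation (addition).
def add_silicon(a, b):
    return a + b                  # the machine's own arithmetic

def add_beer_cans(a, b):
    cans = "o" * a + "o" * b      # set out one can per unit, in a row
    return len(cans)              # the answer is how many cans there are

# Every input gives the same output either way; that sameness of
# function is all that being "the same computer" requires.
assert all(add_silicon(a, b) == add_beer_cans(a, b)
           for a in range(50) for b in range(50))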

The fact that brains cause minds Searle takes to be incontrovertible,
at least for a materialist, which he is.

>But: Suddenly there is a new term ("brain"), never defined before. While there
>is no proper definition of semantics, no proper definition of brain and mind,
>why is he able to find conclusions?

Brains: those lumps of gray matter in people's heads.  What more definition
do you need???

Remember that the statement "brains cause minds" is merely meant to 
establish that we know *for certain* that *some* kinds of material
things cause minds, namely, brains.  This does *not* in and of itself
rule out *other* things causing minds (e.g., computers).

>I think this discussion is worth restarting, with that "only" eliminated
>from all arguments.

Which "only"?  From "computers can *only* manipulate symbols
syntactically?"  If you do that, then you have re-defined computers in
way that, it seems to me, is entirely indefensible.  The *definition* of
computers is that they are purely syntactic engines.
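
If it helps, here is roughly what that definition looks like written down:
a minimal Turing-style machine (the particular rule table, which merely
flips bits, is my invention).  Every clause is about shapes, writing, and
moving; no clause is about what the marks mean:

# A minimal Turing machine -- the canonical purely syntactic engine.
# (state, symbol seen) -> (symbol to write, head move, next state)
RULES = {
    ("run", "0"): ("1", +1, "run"),
    ("run", "1"): ("0", +1, "run"),
    ("run", " "): (" ",  0, "halt"),   # blank cell: stop
}

def run(tape):
    cells, pos, state = list(tape + " "), 0, "run"
    while state != "halt":
        write, move, state = RULES[(state, cells[pos])]
        cells[pos] = write
        pos += move
    return "".join(cells).strip()

print(run("0110"))   # -> 1001

Everything the machine does is fixed by shape-matched rules of that form.
That, and nothing more, is what "purely syntactic engine" means.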

>Another 2 cents of mine: There is no border between syntax and semantics;
>they belong to each other, never divisible, like space and time (the one
>part you believe you understand, the other is most difficult to understand).

You may believe this, but a mere assertion does not make it so, and most
philosophers and linguists would disagree.

>And that's another term that needs proper definition: "understand".

Agreed, although for now all Searle needs is the introspective definition.

>I understand Searle: If strong AI is right, then humans are alone; no hope
>that there is some power who cares for us. That is a serious philosophical
>(and/or psychological) problem, and I think almost all the strong-AI people
>know that and have their difficulties with it.

Huh?  Searle doesn't rule out God, merely HAL.  To be frank, I don't
particularly *want* a computer to care for me.  And I don't think it
is a philosophical problem in the least, except perhaps for existentialism.
I think that the philosophical problems with strong AI are *much* worse.

- michael
