From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!math.fu-berlin.de!uniol!tpki.toppoint.de!elrond.toppoint.de!freitag Mon Dec 16 11:01:19 EST 1991
Article 2059 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!math.fu-berlin.de!uniol!tpki.toppoint.de!elrond.toppoint.de!freitag
From: freitag@elrond.toppoint.de (Claus Schoenleber)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <am0TcB3w164w@elrond.toppoint.de>
Date: 12 Dec 91 10:57:21 GMT
References: <1991Dec10.213907.17512@psych.toronto.edu>
Organization: Claus Schoenleber, Kiel, Germany (3-926986)
Lines: 113

michael@psych.toronto.edu (Michael Gemar) writes:

> In article <u95kcB2w164w@elrond.toppoint.de> freitag@elrond.toppoint.de (Claus Schoenleber) writes:
> >michael@psych.toronto.edu (Michael Gemar) writes:
> >
> >
> >> [Some lines on holiday]
> >> The strength of Searle's argument is that, contrary to what some may claim,
> >> it does not rest on any particular way of telling the Chinese Room story. The
> >> argument simply is that it is impossible to generate semantics from a purely
> >> syntactic system.  This, Searle argues, is a *logical* point, true simply by
> >> virtue of what the words "syntax" and "semantics" mean.  
> >> [Following lines vanished]
> >
> >What, please, is the (your) meaning of "syntax" / "semantics"?
> >(especially "semantics")
> 
> Syntax is the rule-based manipulation of marks due to their shape.
> 

O.k.

> Semantics is the meaning that symbols have due to their reference to things
> in the world (this is rough and ready, and would have to be qualified to cover
> things like unicorns and numbers).
> 
> 

O.k., that's the point I wanted to see: semantics = meaning of syntactical
structures (right?)

Now we're in the same trouble as before. We have to define what meaning is,
don't we?

Maybe "meaning" is an association between a (more or less) complex symbol and
some environmental event. Now, can't that be done by a sufficiently complex
syntactic rule system? Nothing is said here about whether the acting machine
is intelligent or merely simulates intelligence; either way, it would behave
like an intelligent system.
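To make that idea concrete, here is a toy sketch (my own illustration, not
from the article): a purely syntactic rule table that associates symbol
strings, matched by shape alone, with tokens standing for environmental
events. All symbols and event names are invented for illustration.

```python
# Toy illustration of a purely syntactic rule system. Each rule maps a
# symbol (matched only by its shape, i.e. string identity) to another
# symbol standing for an "environmental event". The machine never "knows"
# what any symbol means; it only matches shapes against a rule table.

RULES = {
    "ni hao": "greeting-event",      # invented example rule
    "zai jian": "farewell-event",    # invented example rule
    "xie xie": "thanks-event",       # invented example rule
}

def respond(symbol: str) -> str:
    """Look up a symbol purely by shape; no reference to meaning."""
    return RULES.get(symbol, "unknown-symbol")

print(respond("ni hao"))     # -> greeting-event
print(respond("gibberish"))  # -> unknown-symbol
```

At this trivial scale the association is obviously "mere syntax"; the open
question in the thread is whether a sufficiently large rule system of this
kind would differ in any observable way from one that "understands".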


> >BTW, isn't it possible that there exists a level of complexity in syntax
> >where the quantity of syntax changes into quality (i.e. semantics)? 2 cents? ;-)
> 
> This seems to be the assertion that strong AI makes.  It seems to me, given
> Searle's argument, that it is up to his critics to demonstrate *how* such
> a thing would be possible.  If Searle is correct, and his argument is
> essentially true because of the definition of the terms, then we should no

With the right definitions you can prove almost anything. His definitions
are the problem, in my eyes.

> more expect lots of syntax to yield semantics than we should expect that
> a whole bunch of bachelors put together would somehow yield some who aren't
> unmarried males.
> 

Greetings from the infinite number of monkeys, writing Hamlet :-)

(The following arguments are taken from the German edition of Scientific
American, Spektrum der Wissenschaft, March 1990, Searle's article.)

Searle said that computers can *only* manipulate symbols. That is similar to
saying: "The Venus de Milo is only made of CaCO3 (marble)." It is in fact made
of marble, but that is not all there is to say about it. So let us say: computers
can manipulate symbols. (No more, no less.)
(BTW, the Churchlands made the same error: they say (as I read it)
"semantics is _only_ syntax".)

Searle said that human thoughts have semantic contents (his 2nd axiom). But he
forgot to say what he means when he uses the term "semantic". So the problem
was shifted, not solved.
His first conclusion, that computers are not sufficient for mental ability,
is therefore not valid, because he never had a sufficient (complete) premise.
Now, with his 4th axiom he did something strange: he said "brains cause minds".
A few lines earlier he had said that computers have nothing to do with the
technology they are made of. O.k., then let us say: a silicon brain can also
cause a mind.

But: suddenly there is a new term ("brain"), never defined before. With no
proper definition of semantics, and none of brain or mind, how is he able to
draw conclusions?

I think this discussion is worth restarting, with that "only" eliminated from
all arguments.

Another 2 cents of mine: there is no border between syntax and semantics; they
belong to each other, inseparable like space and time (the one part you believe
you understand, the other is mostly difficult to understand).

And that's another term that needs proper definition: "understand".

Regards,

Claus.

p.s.:
I understand Searle: if strong AI is right, then humans are alone; there is no
hope that some power exists who cares for us. That is a serious philosophical
(and/or psychological) problem, and I think most of the strong-AI people know
that and have their difficulties with it.



-----------------------------------------------------------------
Claus Schoenleber                      freitag@elrond.toppoint.de
2300 Kiel 1  
Germany					 +49 431 18863 (voice, Q)
=================================================================
        "And he that breaks a thing to find out what it is 
          has left the path of wisdom" (Gandalf the Grey)
=================================================================


