From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!ira.uka.de!math.fu-berlin.de!uniol!tpki.toppoint.de!elrond.toppoint.de!freitag Mon Dec 16 11:01:57 EST 1991
Article 2125 of comp.ai.philosophy:
From: freitag@elrond.toppoint.de (Claus Schoenleber)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <cqDycB3w164w@elrond.toppoint.de>
Date: 14 Dec 91 16:16:35 GMT
Organization: Claus Schoenleber, Kiel, Germany (3-926986)
Lines: 226

michael@psych.toronto.edu (Michael Gemar) writes:

>In article <am0TcB3w164w@elrond.toppoint.de> freitag@elrond.toppoint.de (Claus Schoenleber) writes:
>>michael@psych.toronto.edu (Michael Gemar) writes:
>>
>>> In article <u95kcB2w164w@elrond.toppoint.de> freitag@elrond.toppoint.de (Clau
>>> >michael@psych.toronto.edu (Michael Gemar) writes:
>>> >
>>> >
[some quotings in that article deleted]

>Semantics is to what symbols refer, yes.
>
>>Now we're in the same trouble as before. We have to define what meaning is,
>>haven't we?
>
>Well, actually we have to define "reference".  A slight difference, to be
>sure.
>

"Reference" can be used as a term for "relation" (in the mathematical sense).

>>Maybe "meaning" is association between a (more or less) complex symbol and
>>some environmental event.
>
>What do you mean by "association"?

A relation.
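A relation, in the mathematical sense, is just a set of ordered pairs. As a toy illustration (my own sketch with made-up entries, not a claim about real semantics), a "meaning" relation between symbols and referents might look like this:

```python
# A "meaning" relation as a set of (symbol, referent) pairs.
# A relation need not be a function: several symbols may relate
# to one referent, and one symbol to several referents.
meaning = {
    ("horse", "equus"),            # a name for a referent
    ("Pferd", "equus"),            # two names, one referent
    ("red spots", "chicken pox"),  # a symptom as a "symbol"
}

def refers_to(symbol, referent, relation):
    """True iff (symbol, referent) is in the relation."""
    return (symbol, referent) in relation

print(refers_to("horse", "equus", meaning))        # True
print(refers_to("horse", "chicken pox", meaning))  # False
```

Nothing here says *why* the pairs belong together, of course; that is exactly the open question about "meaning".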

>...What kind of association counts?  When
>I had the chicken pox, I got red spots on my skin.  Do red spots therefore
>*mean* chicken pox in the same way that the word "horse" means horse?

Yes, what else? If your doctor sees red spots on your skin, he'd say: "Uuh,
this man probably has chicken pox!" ;-)

Seriously: the "red spots" are a _predicate_ of chicken pox, while the word
"horse" is the _name_ of those four-legged mammals ridden by cowboys, like
"equus" or "Pferd". So both carry the "meaning" (whatever that means) of the
terms they belong to.


>..., and I think we need a much more sophisticated concept
>of reference (meaning).
>

That's what I said.

>>Using the proper definitions you can prove almost everything. His definitions
>>are the problem in my eyes.
>
>Well, as far as I know, they are the definitions used by most philosophers
>and linguists.  It seems to be only AI people that have trouble with
>believing semantics is not the same as syntax.
>

Oh, there are many AI people who believe that. But believing is not sufficient
for scientists.
(BTW, if 5 million flies choose a meal, it need not be a good one. :-) )

There is a great difference between the definitions philosophers and linguists
make and those used by mathematicians and logicians. And the first kind of
definition is not sufficient for research in AI.

>>Greetings from the infinite number of monkeys, writing Hamlet :-)
>
>While you meant the above in jest, it only points out the truth of
>what Searle is saying.  We certainly wouldn't say that the monkeys
>*understand* that they've written Hamlet...  Strings of randomly
>produced letters that happen to form words are not produced with
>any understanding, even though we may assign such marks meaning in our
>language.
>

Hmm, I think we don't speak about "randomly produced" patterns when we
speak of AI. So take it for what it was: a joke.

>>Searle said, computer can *only* manipulate symbols. That's similar to:
>>"The Venus of Milo is only made from CaCO3 (Marble)". It is in fact made
>>from marble, but that's not all to be said about. So let us say: Computers
>>can manipulate symbols. (Not more, not less)
>>(BTW, the Churchlands did make the same error: They say (as I read it)
>>"semantics are _only_ syntax".)
>>
>>Searle said, human thoughts have semantic contents. (2. Axiom) But he forgot
>>saying what he means if he uses the term "semantic". So the problem was
>>shifted, not solved.
>
>No, I disagree.  You are right in that Searle does not give a fancy
>definition of semantics.  But he does say that semantics, for him, is
>essentially equivalent to understanding.  And he *knows* he understands,
>by introspection.
>

As I said: the problem was shifted (to the term "understanding").

>>His first conclusion that computers are not sufficient for mental ability
>>is therefor not allowed, because he had never had a sufficient (complete)
>>premise.
>>Now, with his 4. Axiom he did something strange: He said "brains cause mind".
>>Some lines before he said, computers have nothing to do with the technology
>>they are made of. O.k., let us say: a silicon brain also can cause mind.
>
>No, you miss the point.  Computers are *defined* as being machine-independent.
>One computer can, with the appropriate programming, *function* just
>like another.  Therefore, computers have nothing to do with the technology
>they are made of -- a computer made of beer cans and string and powered
>by windmills (one of my favorite Searle images) could, in principle, be
>*functionally* identical to a silicon computer.

No, I didn't miss the point. I wrote "also", and the model of Turing machines
is not unknown to me either. I agree with Searle on this point: a technology
is a tool to build real computers, and there are many such technologies (beer
cans, for example). But if computers are independent of technology, why should
it be impossible to build an intelligent system using silicon technology?
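The machine-independence point can be made concrete with a toy sketch (mine, not Searle's): two "adders" built in completely different ways, one that computes and one that is just a "beer-can" lookup table, which are nevertheless functionally identical on their domain:

```python
# Two implementations of the same abstract function. One computes;
# the other is a precomputed "beer cans and string" lookup table.
def adder_silicon(a, b):
    return a + b

adder_beercans = {
    (0, 0): 0, (0, 1): 1,
    (1, 0): 1, (1, 1): 2,
}

# On the shared domain they are functionally indistinguishable.
for a in (0, 1):
    for b in (0, 1):
        assert adder_silicon(a, b) == adder_beercans[(a, b)]
print("functionally identical")
```

If "computer" is defined by the function computed, neither implementation is more of an adder than the other, which is exactly why ruling out one particular substrate needs an extra argument.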

>
>The fact that brains cause minds Searle takes to be incontrovertible,
>at least for a materialist, which he is.

(Is he?)

Yes, but with the above (and with Searle's own arguments) it might be possible
to rebuild the brain with another technology, e.g. silicon.

>>But: Suddenly there is a new term ("brain"), never defined before. While there
>>is no proper definition of semantics, no proper definition of brain and mind,
>>why is he able to find conclusions?
>
>Brains: those lumps of gray matter in people's heads.  What more defintion
>do you need???

That's the point: what makes those lumps of gray matter able to do this sort
of thinking? So even Searle might not mean that gray stuff itself, but rather
that stuff's ability to do the wonderful work we're discussing here. We are
speaking about mind, not matter.

>Remember that the statement "brains cause minds" is merely meant to
>establish that we know *for certain* that *some* kinds of material
>things cause minds, namely, brains.

For certain? And even if some material things cause minds, why not other
material things?

>       This does *not* in and of itself
>rule out *other* things causing brains (e.g., computers).
>

If computers have nothing to do with their technology, why then say that the
other things computers are made of cannot cause a brain?


>>I think, this disussion is it worth to eliminate that "only" from all
>>arguments and to be restarted.
>
>Which "only"?

The "only" from above: Searle said that computers could *only* manipulate
symbols. That is an unwarranted restriction. The fact is, computers can
manipulate symbols. I repeat: not more, not less.

>  The *definition* of
>computers is that they are purely syntactic engines.
>

No, it isn't.

If you think so, then you also have to say that the brain is a purely
electro-biochemical engine.

BTW, "purely" is another unwarranted restriction.
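The claim "computers can manipulate symbols, not more, not less" can be pictured with a toy string-rewriting system (my own sketch; the rules are deliberately meaningless):

```python
# Pure symbol manipulation: rewrite rules applied blindly to a
# string. The system has syntax only; the symbols "mean" nothing.
rules = [("ma", "n"), ("nn", "m")]

def step(s):
    """Apply the first matching rule once, or return s unchanged."""
    for lhs, rhs in rules:
        if lhs in s:
            return s.replace(lhs, rhs, 1)
    return s

def normalize(s):
    """Rewrite until no rule applies (each rule shortens s, so
    this terminates)."""
    while True:
        t = step(s)
        if t == s:
            return s
        s = t

print(normalize("manna"))  # prints "mna"
```

Whether such blind manipulation can ever *amount to* semantics is precisely what the whole Chinese Room debate is about; the sketch only shows what the uncontroversial part, the symbol shuffling, looks like.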
                                  
>>Another 2 cents of mine: There is no border between syntax and semantics,
>>they belong to each other, never dividable like space and time (the one part
>>you believe to understand, the other most is difficult to understand).
>
>You may believe this, but a mere assertion does not make it so, and most
>philosophers and linguists would disagree.
>

Thank you.
If you read it twice, you may discover that it was a question, and I wanted
to get others' opinions about that "assertion". So would you please tell me
your opinion instead of what other people might think? ;-)

>>I understand Searle: If strong AI is right, then humans are alone; no hope
>>that there is some power, who cares for us. That is a serious philosophical
>>(and/or psychological) problem, and I think almost the strong-AI-people know
>>that and have their difficulties with it.
>
>Huh?  Searle doesn't rule out God, merely HAL.  To be frank, I don't
>particularly *want* a computer to care for me.

That's what I said, Michael. And I also said that I understand it.
But strong AI destroys this hope or want. And that is a problem for some
AI people. One of my friends, an AI researcher, died of cancer this summer,
and he had that problem, for example.

>     And I don't think it
>is a philosophical problem in the least, except perhaps for existentialism.
>I think that the philosophical problems with strong AI are *much
>worse.

I'm curious what those worse problems are.

>
>- michael
>

Regards,

Claus.

p.s.: What's mind? No matter! What's matter? Never mind! :-)
      (Disclaimer: Sort of old cybernetics joke)

p.p.s: Next time a shorter reply, I promise. 

-----------------------------------------------------------------
Claus Schoenleber                      freitag@elrond.toppoint.de
2300 Kiel 1  
Germany					 +49 431 18863 (voice, Q)
=================================================================
        "And he that breaks a thing to find out what it is 
          has left the path of wisdom" (Gandalf the Grey)
=================================================================


