From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!europa.asd.contel.com!uunet!tdatirv!sarima Thu Jan 16 17:22:16 EST 1992
Article 2768 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!europa.asd.contel.com!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <377@tdatirv.UUCP>
Date: 15 Jan 92 19:15:17 GMT
References: <5907@skye.ed.ac.uk> <1992Jan08.230618.31038@spss.com> <5952@skye.ed.ac.uk> <1992Jan13.200632.36402@spss.com> <5984@skye.ed.ac.uk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 49

In article <5984@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
|In article <1992Jan13.200632.36402@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
|>
|>That's not the same thing.  What allows the brain to support understanding
|>might conceivably be its information-handling capabilities-- that is, 
|>something that can be duplicated rather than simulated on a computer.
|
|But Searle thinks he has shown it isn't that.  It's _something else_
|about the brain.  The something else that matters is the causal powers.

Yes, and *this* is the point of disagreement.  I have yet to see any
compelling evidence that any such 'something else' even exists.

Why should I assume that this 'something else' exists?
What line of reasoning does he follow in concluding that it exists?
What axioms or assumptions does he make in arriving at this conclusion?

|>By "causal powers" Searle presumably means "things about the brain that
|>support understanding AND are physical so an algorithm can't have any."
|>And the fact that if you postulate the existence of such things you come
|>to Searle's conclusions is neither surprising nor very significant.
|
|This is becoming pointless.  It may be possible to get every detail
|of an explanation of Searle's "causal powers" right, so that no new
|misunderstanding can occur, but I'm going to give up.

The point is that Searle *seems* to be treating the necessity of this
additional functionality in the brain as an axiom.  Or at least I cannot
understand the syllogism by which he derives it.  My philosophy is that I
will not postulate additional capacities without due cause, and Searle has
never shown me anything I can regard as sufficient reason to accept his
concept.

|For the last time, Searle doesn't assume some mysterious "causal
|powers" exist and use that to reach his conclusion that computers
|do not understand.

Accepted and understood.  Now, how does he reach the conclusion that these
'causal powers' even exist?

I think the main reason people tend to see these 'causal powers' as an
assumption is that they (we) cannot see how he gets to them.  And so,
since they seem (perhaps superficially) to be unsupported by evidence,
we tend to jump to the conclusion that their existence is an axiom to
Searle.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
