From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!wupost!think.com!paperboy.osf.org!hsdndev!husc-news.harvard.edu!zariski!zeleny Mon Dec 16 11:01:47 EST 1991
Article 2111 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2111 sci.philosophy.tech:1407
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!wupost!think.com!paperboy.osf.org!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Causes and Reasons (was re: Searle and the Chinese Room)
Summary: chalmers confuses causation and consequence
Message-ID: <1991Dec14.004745.6550@husc3.harvard.edu>
Date: 14 Dec 91 05:47:40 GMT
References: <1991Dec12.194529.28355@bronze.ucs.indiana.edu> 
 <1991Dec13.044040.20059@psych.toronto.edu> <1991Dec13.064817.13637@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 175
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec13.064817.13637@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec13.044040.20059@psych.toronto.edu> 
>michael@psych.toronto.edu (Michael Gemar) writes:

>>In article <1991Dec12.194529.28355@bronze.ucs.indiana.edu> 
>>chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

DC:
>>>The program is just marks on paper.  The implementation is a complex
>>>physical system with rich internal causal organization.

MG:
>>It would be helpful if you elaborated on this, especially the notion
>>of "causal organization".  

DC:
>I'm too lazy, but take a look at this old discussion if you like.

Your laziness will be seen as the cause of your ignorance anon.

>-----------
>From: dave@cogsci.indiana.edu (David Chalmers)
>Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
>Subject: Re: Can Machines Think?
>Message-ID: <31821@iuvax.cs.indiana.edu>
>Date: 19 Dec 89 06:35:34 GMT
>
>
>"Programs" do not think.
>Cognition is not "symbol-manipulation."
>The "hardware/software" distinction is unimportant for thinking about minds.
>
>However:
>
>Systems with an appropriate causal structure think.
>Programs are a way of formally specifying causal structures.

To give a mathematical analogy: proofs (in a sound axiomatic system) are a
way of formally illustrating the relation of logical consequence (though not
of specifying it, since for well-known reasons that relation cannot be
captured by purely syntactical means).
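To make the metamathematical point precise -- the following gloss via
G\"odel's first incompleteness theorem is an editorial illustration, not
part of the original exchange:

```latex
% Standard form of the point that consequence outruns syntax.
Let $T$ be any consistent, recursively axiomatizable theory extending
elementary arithmetic.  G\"odel's first incompleteness theorem yields a
sentence $G_T$ such that
\[
  \mathbb{N} \models G_T
  \qquad\text{yet}\qquad
  T \nvdash G_T ,
\]
so the set of $T$-provable sentences is a \emph{proper} subset of the
arithmetical truths:
\[
  \{\varphi : T \vdash \varphi\}
  \subsetneq
  \{\varphi : \mathbb{N} \models \varphi\}.
\]
Hence no effective, purely syntactical calculus coincides with the
semantic relation of consequence over arithmetic.
```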

>Physical systems which implement a given program *have* that causal structure,
>physically.  (Not formally, physically.  Symbols were simply an intermediate
>device.)

Computers which implement a theorem-proving program *have* the relation of
logical consequence physically.  This is nonsensical because of the last
adjective; yet even should you dispense with the claim of physical
embodiment, your computer is not going to have the relation of logical
consequence in *any* sense of "have", pace G\"odel.  Still, based on our
recent communication, I don't really expect you to understand such arcane
metamathematical reasoning; let's try something simpler.

>Physical systems which implement the appropriate program think.

Keep dreaming.

>From: dave@cogsci.indiana.edu (David Chalmers)
>Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
>Subject: Re: Can Machines Think?
>Message-ID: <31945@iuvax.cs.indiana.edu>
>Date: 21 Dec 89 05:15:13 GMT

>gene edmon writes:
>>...David Chalmers writes:
>>>Systems with an appropriate causal structure think.
>>
>>Could you elaborate on this a bit? 
>
>Well, seeing as you ask.  The basic idea is that "it's not the meat, it's
>the motion."  At the bottom line, the physical substance of a cognitive
>system is probably irrelevant -- what seems fundamental is the pattern of
>causal interactions that is instantiated.  Reproducing the appropriate causal
>pattern, according to this view, brings along with it everything that is
>essential to cognition, leaving behind only the inessential.  (Incidentally,
>I'm by no means arguing against the importance of the biochemical or the
>neural -- just asserting that they only make a difference insofar as they
>make a *functional* difference, that is, play a role in the causal dynamics
>of the model.  And such a functional difference, on this view, can be
>reproduced in another medium.)
>
>And yes, of course this is begging the question.  I could present arguments
>for this point of view but no doubt it would lead to great complications.
>Just let's say that this view ("functionalism", though this word is a dangerous
>one to sling around with its many meanings) is widely accepted, and I can't
>see it being unaccepted soon.  The main reason I posted was not to argue for
>this view, but to delineate the correct role of the computer and the program
>in the study of mind.

On the contrary, to see the arguments against functionalism advanced by its
inventor, Hilary Putnam, check out his book "Representation and Reality".
Of course, if one is to believe your bibliography, you must have read it
and found the arguments unworthy of your attention.

>The other slightly contentious premise is the one that states that computers
>can capture any causal structure whatsoever.  This, I take it, is the true
>import of the Church-Turing Thesis -- in fact, when I look at a Turing
>Machine, I see nothing so much as a formalization of the notion of causal
>system.  And this is why, in the philosophy of mind, "computationalism" is
>often taken to be synonymous with "functionalism".  Personally, I am
>a functionalist first, but accept computationalism because of the plausibility
>of this premise.  Some people will argue against this premise, saying that
>computers cannot model certain processes which are inherently "analog".  I've
>never seen the slightest evidence for this, and I'm yet to see an example of
>such a process.  (The multi-body problem, by the way, is not a good example --
>lack of a closed-form solution does not imply the impossibility of a
>computational model.)  Of course, we may need to model processes at a low,
>non-superficial level, but this is not a problem.

There are so many stupid assertions in the above paragraph that I don't
know where to begin unraveling the mess.  Perhaps a Schopenhauer quotation
would provide a good start: "To confuse a reason of knowledge, lying within
a given concept, with a cause acting from without, is always his
[Spinoza's] artifice, which he has learned from Descartes." ("On the
Fourfold Root of the Principle of Sufficient Reason", 8.)  If I were you,
I'd feel privileged: the closest you'll ever come to being a philosopher is
in recapitulating the fallacies of the great men.  Oh, you are arrogant
enough to state your views with force (see below); yet what you lack is
the humility that would make you take opposing views seriously.  Here's
an example: had you paid real attention to Church's
thesis, you might have noticed that its claim concerns the notion of
*effective* computability, and ipso facto, a notion of mechanical
provability, a syntactical counterpart of a *proper* part of the semantic
notion of logical consequence.  Now, as Schopenhauer will tell anyone who
would listen, the notion of *efficient* causation doesn't reduce to the
notion of logical consequence: the necessity involved in the former is of a
wholly different kind compared to the one involved in the latter.  Even if
programs could specify the notion of logical consequence, -- which, as I
explained above, is mathematically impossible, -- they wouldn't come close
to specifying the notion of efficient causation.  Given this consideration,
the kind of implementation involved in the construction of your AI is quite
irrelevant: from a Chinese room table lookup design to the most elaborate
neural net, your machines will invariably fail to specify, and a fortiori,
embody, any sort of causal structure but their own.  Finally, if, like many
retrograde philosophers, one is prepared to entertain the notion of *final*
causation, your nice reductionist argument doesn't even get off the ground.

In short, like most of your colleagues, you are doomed to spend your
professional life building an elaborate orrery, while the more open-minded
among us are dedicating our efforts to the development of a theory of gravity.
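On one narrow computational point in the quoted paragraph -- that the lack
of a closed-form solution does not preclude a computational model -- the
multi-body example can be made concrete.  The sketch below is an editorial
illustration, not anything from the thread; the masses, step size, and
initial conditions are arbitrary choices.

```python
# The three-body problem has no general closed-form solution, yet stepping
# Newton's equations numerically is routine.  Planar case, G = 1 units.

def accelerations(positions, masses, G=1.0):
    """Pairwise Newtonian gravitational acceleration on each body (2-D)."""
    acc = [[0.0, 0.0] for _ in positions]
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(positions, velocities, masses, dt, steps):
    """Integrate the N-body system with the symplectic leapfrog scheme."""
    acc = accelerations(positions, masses)
    for _ in range(steps):
        # half-kick, drift, recompute forces, half-kick
        velocities = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
                      for (vx, vy), (ax, ay) in zip(velocities, acc)]
        positions = [[x + dt * vx, y + dt * vy]
                     for (x, y), (vx, vy) in zip(positions, velocities)]
        acc = accelerations(positions, masses)
        velocities = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
                      for (vx, vy), (ax, ay) in zip(velocities, acc)]
    return positions, velocities

# Three equal masses on an equilateral triangle with tangential velocities.
masses = [1.0, 1.0, 1.0]
pos = [[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]]
vel = [[0.0, 0.3], [-0.26, -0.15], [0.26, -0.15]]
pos, vel = leapfrog(pos, vel, masses, dt=0.001, steps=1000)
```

Whether such a model thereby *embodies* the causal structure it simulates
is, of course, exactly what is in dispute above.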

>The other option for those who argue against the computational metaphor is to
>say "yes, but computation doesn't capture causal structure *in the right
>way*".  (For instance, the causation is "symbolic", or it has to be
>mediated by a central processor.)  I've never found much force in these
>arguments.

I told you once, and I'll tell you again: research is no substitute for
scholarship.  The trappings of your argument are like a sand castle resting
on an elementary confusion of cause and reason.  Once again, had your
knowledge of Locke been unmediated by a 30-page survey article of one
T. Natsoulas, you might have learned the distinction between active and
passive powers, and asked yourself just what sort of mechanism would endow
your Turing machine with the former sort of causal powers.

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


