From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!news.cs.indiana.edu!bronze!chalmers Mon Dec 16 11:01:38 EST 1991
Article 2094 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec13.064817.13637@bronze.ucs.indiana.edu>
Date: 13 Dec 91 06:48:17 GMT
References: <8dEvbVS00iUzA2j64r@andrew.cmu.edu> <1991Dec12.194529.28355@bronze.ucs.indiana.edu> <1991Dec13.044040.20059@psych.toronto.edu>
Organization: Indiana University
Lines: 185

In article <1991Dec13.044040.20059@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1991Dec12.194529.28355@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>>The program is just marks on paper.  The implementation is a complex
>>physical system with rich internal causal organization.

>It would be helpful if you elaborated on this, especially the notion
>of "causal organization".  

I'm too lazy to write it all out again, but take a look at this old
discussion if you like.

-----------
From: dave@cogsci.indiana.edu (David Chalmers)
Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
Subject: Re: Can Machines Think?
Message-ID: <31821@iuvax.cs.indiana.edu>
Date: 19 Dec 89 06:35:34 GMT


"Programs" do not think.
Cognition is not "symbol-manipulation."
The "hardware/software" distinction is unimportant for thinking about minds.

However:

Systems with an appropriate causal structure think.
Programs are a way of formally specifying causal structures.
Physical systems which implement a given program *have* that causal structure,
physically.  (Not formally, physically.  Symbols were simply an intermediate
device.)

Physical systems which implement the appropriate program think.
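
The sense in which a program "formally specifies" a causal structure can be
sketched with a toy state machine (a made-up example, not anything from the
thread): the transition table by itself is pure formal specification -- marks
on paper -- while any physical process that actually realizes those
transitions, in silicon or anything else, has the causal structure the table
describes.

```python
# A program as a formal specification of causal structure.  The table
# itself is just marks; a physical system that realizes these
# transitions HAS this causal structure.  States and stimuli here are
# invented for illustration.

TRANSITIONS = {
    ("idle", "poke"): "alert",
    ("alert", "poke"): "annoyed",
    ("alert", "wait"): "idle",
    ("annoyed", "wait"): "alert",
}

def step(state, stimulus):
    """One causal step: the next state depends only on the current
    state and the input, exactly as the table specifies."""
    return TRANSITIONS.get((state, stimulus), state)

def run(state, stimuli):
    """Drive the system through a sequence of inputs."""
    for s in stimuli:
        state = step(state, s)
    return state

print(run("idle", ["poke", "poke", "wait"]))  # -> alert
```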

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

---------
From: mbb@cbnewsh.ATT.COM (martin.b.brilliant)
Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
Subject: Re: Can Machines Think?
Message-ID: <6724@cbnewsh.ATT.COM>
Date: 19 Dec 89 16:57:30 GMT


From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
(David Chalmers)...

Slightly edited to make the bones barer:

  1. Systems with an appropriate causal structure think.
  2. Programs are a way of formally specifying causal structures.
  3. Physical systems implement programs.
  4. Physical systems which implement the appropriate program think.

I take it that (1) is an acceptable definition.  Does anybody think it
begs the question?

The weakest link here may be (2), the supposition that programs can
implement any causal structure whatever, even those that do what we
call thinking.

The software/hardware duality question is semantically resolved by (3).

The conclusion is (4), which seems to assert "strong AI."

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.


--------------
From: kp@uts.amdahl.com (Ken Presting)
Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
Subject: Re: Can Machines Think?
Message-ID: <2c0R02gv76O601@amdahl.uts.amdahl.com>
Date: 19 Dec 89 21:46:41 GMT


In article <6724@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>  1. Systems with an appropriate causal structure think.
>  2. Programs are a way of formally specifying causal structures.
>  3. Physical systems implement programs.
>  4. Physical systems which implement the appropriate program think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question?

I don't think so.  Presumably, humans think because of the way we're
built, and the mechanical/chemical/electrical structure determines the
causal structure of our brains.

>The weakest link here may be (2), the supposition that programs can
>implement any causal structure whatever, even those that do what we
>call thinking.

Agreed.  The multi-body problem of astrophysics is a clear case of a
causal system which cannot be precisely represented by an algorithm.
But the argument could succeed with a weaker version of (2), IF we could
figure out which causal structures are relevant to thought.

>The software/hardware duality question is semantically resolved by (3).

This is problematic.  Harnad's "symbol grounding problem" (and some of
Searle's objections, I think) point out the difficulty of claiming that
some object "thinks" strictly on the basis of its internal operation,
or even on the basis of its outputs.  Harnad would want to know how the
symbols found in the output are grounded, while Searle might claim that
the machine *simulated* thinking, but did not itself *think*.
  I agree that the software/hardware duality can only be resolved by the
concept of implementation used in (3).  I'm just repeating a familiar
(but important) theme.


--------------
From: dave@cogsci.indiana.edu (David Chalmers)
Newsgroups: comp.ai,talk.philosophy.misc,sci.philosophy.tech
Subject: Re: Can Machines Think?
Message-ID: <31945@iuvax.cs.indiana.edu>
Date: 21 Dec 89 05:15:13 GMT


gene edmon writes:
>...David Chalmers writes:
>>Systems with an appropriate causal structure think.
>
>Could you elaborate on this a bit? 

Well, seeing as you ask.  The basic idea is that "it's not the meat, it's
the motion."  At the bottom line, the physical substance of a cognitive
system is probably irrelevant -- what seems fundamental is the pattern of
causal interactions that is instantiated.  Reproducing the appropriate causal
pattern, according to this view, brings along with it everything that is
essential to cognition, leaving behind only the inessential.  (Incidentally,
I'm by no means arguing against the importance of the biochemical or the
neural -- just asserting that they only make a difference insofar as they
make a *functional* difference, that is, play a role in the causal dynamics
of the model.  And such a functional difference, on this view, can be
reproduced in another medium.)
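
The claim that a functional difference can be reproduced in another medium
can be sketched concretely (a hypothetical example, not from the thread):
the same causal organization -- here a trivial modulo-3 counter -- realized
in two different "media", an explicit lookup table and arithmetic.  What
matters functionally is the transition pattern, not the substrate carrying it.

```python
# Multiple realizability in miniature: one causal organization, two
# realizations.  Both define the same state-transition structure on
# the states {0, 1, 2}.

def counter_table(state):
    """Medium 1: the transitions stored as an explicit lookup table."""
    table = {0: 1, 1: 2, 2: 0}
    return table[state]

def counter_arith(state):
    """Medium 2: the same transitions computed arithmetically."""
    return (state + 1) % 3

# Same causal structure: identical behavior on every state.
for s in range(3):
    assert counter_table(s) == counter_arith(s)
print("functionally identical")  # -> functionally identical
```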

And yes, of course this is begging the question.  I could present arguments
for this point of view but no doubt it would lead to great complications.
Let's just say that this view ("functionalism", though this word is a dangerous
one to sling around with its many meanings) is widely accepted, and I can't
see it being unaccepted soon.  The main reason I posted was not to argue for
this view, but to delineate the correct role of the computer and the program
in the study of mind.

The other slightly contentious premise is the one that states that computers
can capture any causal structure whatsoever.  This, I take it, is the true
import of the Church-Turing Thesis -- in fact, when I look at a Turing
Machine, I see nothing so much as a formalization of the notion of causal
system.  And this is why, in the philosophy of mind, "computationalism" is
often taken to be synonymous with "functionalism".  Personally, I am
a functionalist first, but accept computationalism because of the plausibility
of this premise.  Some people will argue against this premise, saying that
computers cannot model certain processes which are inherently "analog".  I've
never seen the slightest evidence for this, and I've yet to see an example of
such a process.  (The multi-body problem, by the way, is not a good example --
lack of a closed-form solution does not imply the impossibility of a
computational model.)  Of course, we may need to model processes at a low,
non-superficial level, but this is not a problem.
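
The point about the multi-body problem can be made concrete with a sketch
(my own toy example, in arbitrary units with G = 1): no closed-form solution
exists for three gravitating bodies, yet stepping the causal dynamics
forward algorithmically is routine.  A serious simulation would use a better
integrator than plain Euler, but the point is only that a computational
model exists.

```python
# A toy 2-D n-body gravitational simulator.  There is no closed-form
# solution for n >= 3, but the dynamics can still be modeled
# computationally by stepping forward in time.

G = 1.0  # gravitational constant in arbitrary toy units

def accelerations(positions, masses):
    """Pairwise gravitational acceleration on each body."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def euler_step(positions, velocities, masses, dt):
    """One crude Euler step: update velocities, then positions."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt

# Three bodies: one heavy, two light test bodies in rough orbits.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
mass = [1.0, 0.001, 0.001]
for _ in range(100):
    euler_step(pos, vel, mass, dt=0.01)
```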

The other option for those who argue against the computational metaphor is to
say "yes, but computation doesn't capture causal structure *in the right
way*".  (For instance, the causation is "symbolic", or it has to be
mediated by a central processor.)  I've never found much force in these
arguments.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"
-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


