From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!blenko-tom Mon Dec 16 11:02:14 EST 1991
Article 2153 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2153 sci.philosophy.tech:1437
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!blenko-tom
From: blenko-tom@CS.YALE.EDU (Tom Blenko)
Subject: Re: Causes and Reasons
Message-ID: <1991Dec16.063352.24754@cs.yale.edu>
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: morphism.systemsz.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1991Dec14.181000.3907@bronze.ucs.indiana.edu> <1991Dec15.120726.6592@husc3.harvard.edu> <1991Dec15.201231.19710@bronze.ucs.indiana.edu>
Date: Mon, 16 Dec 1991 06:33:52 GMT
Lines: 51

In article <> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
|To be more precise, we formalize causal structure in a program; this
|program then has the property that any implementation of it will
|possess that causal structure.  By analogy: we formalize properties
|of cakes in a recipe; then any implementation of that recipe will
|possess the relevant properties.  If you don't like talk of
|"formalization" here, that's fine: substitute "specification" instead.
|The substantive point is unaffected.

"Any" implementation certainly cannot be expected to possess the
intended causal structure (and therefore to display the intended
behavior).  If you wish to say a "correct" implementation, then
providing such an implementation (from a specification of a correct
implementation) leaves you with precisely the difficulty you started
with.

Do you really think you can specify laws of economics, and that any
implementation of those laws will produce a real, working economic
system?  I think this is obviously and transparently absurd.  You may
be able to create a (real, working) economic system that conforms to
your specification, but many non-working systems will conform as well.
(And of course there will also be the working but non-conformant
systems).
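The point about weak specifications can be made concrete with a small
sketch (a hypothetical example of mine, not from the original
discussion): a specification that states only a superficial property
is satisfied both by a working implementation and by a non-working
one.

```python
# A deliberately weak specification: "sort(xs) returns a list of the
# same length as xs".  Both implementations below conform to it, yet
# only one of them is a working sort.

def spec_conforms(sort_fn, xs):
    """Check only what the (weak) specification demands."""
    return len(sort_fn(list(xs))) == len(xs)

def working_sort(xs):
    # A genuine sort: conformant AND working.
    return sorted(xs)

def broken_sort(xs):
    # Returns its input untouched: conformant but non-working.
    return list(xs)

sample = [3, 1, 2]
print(spec_conforms(working_sort, sample))  # True
print(spec_conforms(broken_sort, sample))   # True -- conformant, yet broken
print(working_sort(sample) == broken_sort(sample))  # False
```

Nothing in the specification itself distinguishes the two; that
distinction lives entirely in the implementations.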

People have successfully engineered economic systems.  It is possible
to specify an organization made up of real, physical,
causally-connected individuals with particular needs and abilities, and
various modes of interaction.  But then most of the structure, the
causal relationships, etc., is not in the specification; it is in the
actual physical entities that make up an implementation.  There is
nothing at all other than the best efforts of the engineer that
provides a connection between the specification and the working
system.

In practice, one consequence is that engineered systems never conform
to their specifications (unless, perhaps, the specifications are so
weak as to be useless).  Real engineers know this, and I'm quite sure
they will tell you that your claim that the specification suffices is
wildly naive.

So I think this connection between a specification and any of its
implementations is a great deal weaker than the causal connectedness
Searle and others claim is a necessary part of mental functioning in
the physical world (and I think Searle would agree).

One of the invariant features of AI as it is done today is the
fragility of the systems produced.  I do not understand why those who
take the venture seriously don't recognize this fragility as a
necessary consequence of their methodology.

	Tom


