Xref: newshub.ccs.yorku.ca comp.ai:4746 comp.ai.neural-nets:4677 comp.ai.philosophy:7225 sci.psychology:4813
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!bronze.ucs.indiana.edu!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <BvzuF7.6xr@usenet.ucs.indiana.edu>
Sender: news@usenet.ucs.indiana.edu (USENET News System)
Nntp-Posting-Host: bronze.ucs.indiana.edu
Organization: Indiana University
References: <BvytMD.9FC@cs.bham.ac.uk> <Bvz6GE.97F@usenet.ucs.indiana.edu> <BvzGnv.At2@cs.bham.ac.uk>
Date: Mon, 12 Oct 1992 05:40:19 GMT
Lines: 86

In article <BvzGnv.At2@cs.bham.ac.uk> axs@cs.bham.ac.uk (Aaron Sloman) writes:

>But not all its negative features: i.e. certain things can't happen
>to the multiprocessor system that can happen to the FSA
>implementation. Because you think only in terms of formalization you
>miss my main point, which is about properties of real
>implementations, not properties of formal systems representing those
>implementations. (J.L.Austin: "Fact is richer than diction" - actual
>systems can diverge from their specifications, and the serial
>implementation can diverge in ways in which the parallel one can't.)

No, I'm concerned with the real physical systems doing the causal
work.  The formal system is relevant insofar as it gives a
description of the physical system's causal structure.  The strong AI
thesis, recall, is something like: (there exist formal systems such
that) anything that implements that formal system has a mind.  Given
this notion of implementation, this comes to: anything with the
right causal structure has a mind.  This is a claim about physical
systems, not formal systems.

In your article you at least suggest that this is false, because
some implementations -- e.g. serial ones -- will be less robust
(i.e. can go wrong in worse ways).  Now it seems very dubious
that this kind of robustness could really make the huge difference
between mindedness and the lack thereof, but in any case, the important
point is that a system that was "going wrong" in its state-transitions
*would not be implementing the formal system at all*.  By definition,
an implementation of the formal system is something that makes its
state-transitions in such-and-such a way; if the system is going
wrong, it's making the transitions the wrong way, and isn't an
implementation.  Therefore such systems aren't relevant to the truth
of the strong AI thesis in question.
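
(To make the point concrete, here is a toy sketch in Python -- the
transition table, the traces, and the names `fsa` and `implements` are
all made up purely for illustration; nothing in the argument depends
on them.  A small FSA is just a transition table, and a run of some
device counts as implementing it only if every observed transition
matches the table, so a run that goes wrong anywhere simply isn't an
implementation:

    # Toy FSA as a transition table: (state, input) -> next state.
    fsa = {
        ('A', 0): 'A',
        ('A', 1): 'B',
        ('B', 0): 'A',
        ('B', 1): 'B',
    }

    def implements(trace, start='A'):
        # trace is a list of (input, observed_next_state) pairs read
        # off some physical device.  One wrong transition and the run
        # fails to implement the FSA at all.
        state = start
        for symbol, observed in trace:
            if observed != fsa[(state, symbol)]:
                return False
            state = observed
        return True

    print(implements([(1, 'B'), (0, 'A')]))            # True
    print(implements([(1, 'B'), (0, 'A'), (1, 'A')]))  # False: went wrong

The second run is exactly the sort of "going wrong" system that isn't
relevant to the thesis.)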

(Of course it's important to know about how systems can go wrong
in practice, and fail to perfectly implement the formal systems
that we had intended they implement.  Nothing I'm saying here
denies that.  The only point is that it's irrelevant to the
strong AI thesis in question.)

>> What's not clear is why these things should affect the cognitive
>> properties of the system (of course if things *do* go wrong it
>> will make a difference,
>> ...but in this case it will be failing
>> to implement the relevant computation, so this isn't directly
>> relevant to the strong AI thesis).
>
>This begs the question by assuming that mental states and processes
>are defined in terms of the computations they implement, rather than
>in terms of their causal powers.

No, it doesn't assume this at all.  The definition of mental states
isn't the point here.  All that counts are the definitions of
computation and implementation.  I claim that the system that makes
the wrong state-transitions isn't implementing the original formal
system -- which seems to be straightforwardly true.

>A counter argument to me might be that there might be a serial
>processor that is somehow, by magic or special physics, capable of
>being made inherently uninterruptable and totally reliable when it
>is doing the "detailed" state transitions. This would seem to
>require some kind of temporary causal isolation, and I have no
>reason to believe this is possible, or even that it really makes any
>sense in working systems, as opposed to formal systems.

Well, perhaps perfect reliability is practically impossible (you
can't guarantee that a nuclear bomb isn't going to drop on the
system), but there seem to be plenty of computers around that are
pretty damn reliable, only going wrong very rarely, despite the
need for these "detailed" internal state-transitions.  Maybe things
will go wrong very occasionally, but surely mindedness cannot stand
or fall depending on such very occasional mishaps.  Unless you
claim that *any* serial implementation of the relevant class of
computations would necessarily go wrong a large amount of the time,
it's not clear what the above comes to.

I don't deny that there are important concerns about the difference
between different ways of trying to implement the same computation
(e.g. serial vs. parallel, and the like), even ones that are
important in the foundations of cognitive science.  I just find
it implausible that this has much relevance to the kind of
anti-strong-AI argument that Penrose is putting forward.
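
(Again, just to fix ideas with a toy of my own devising -- not
anything Penrose has in mind, and the names `negate_serial` and
`negate_parallel` are invented for the example -- here is the same
trivial computation realized two ways, one serial and one parallel.
Formally they compute the same function; as physical processes they
differ in how the intermediate state-transitions are carried out, and
so in how they can be disturbed part-way through:

    # Same input-output computation, two implementations.
    from concurrent.futures import ThreadPoolExecutor

    def negate_serial(xs):
        out = []
        for x in xs:                        # one cell at a time, in order
            out.append(-x)
        return out

    def negate_parallel(xs):
        with ThreadPoolExecutor() as pool:  # all cells handled concurrently
            return list(pool.map(lambda x: -x, xs))

    assert negate_serial([1, 2, 3]) == negate_parallel([1, 2, 3])

The interesting questions are about differences like these; my claim
is only that they don't bear on the anti-strong-AI argument at issue.)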

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


