From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!caen!spool.mu.edu!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!uknet!news.cs.bham.ac.uk!axs Wed Oct 14 14:58:45 EDT 1992
Article 7222 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4743 comp.ai.neural-nets:4674 comp.ai.philosophy:7222 sci.psychology:4807
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!caen!spool.mu.edu!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!uknet!news.cs.bham.ac.uk!axs
From: axs@cs.bham.ac.uk (Aaron Sloman)
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Subject: Re: Human intelligence vs. Machine intelligence
Keywords: penrose, church-turing hypothesis
Message-ID: <BvzGnv.At2@cs.bham.ac.uk>
Date: 12 Oct 92 00:43:07 GMT
References: <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl> <1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU> <BvytMD.9FC@cs.bham.ac.uk> <Bvz6GE.97F@usenet.ucs.indiana.edu>
Sender: news@cs.bham.ac.uk
Organization: School of Computer Science, University of Birmingham, UK
Lines: 130
Nntp-Posting-Host: emotsun

I was previously asked to followup only to comp.ai.philosophy
but since others haven't, I'll continue the parallel threads, with
apologies, and make this as short as I can. I think the issues are
not just philosophical, but involve important design considerations.

chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

> Date: 11 Oct 92 21:02:37 GMT

(I claimed)
> > ...
> >there remain interesting differences between single processor and
> >multi-processor systems.
(Even though the former can simulate the latter).

(Dave comments, among other things)
> these differences can't come to much.
> .....
> In any case, if we formalize computation and implementation like
> this, then there's nothing that multiple processors can give you
> that single processors can't.

Well, I claimed that multiple processors give state transitions that
*avoid* some of the trajectories that single processors require. Put
another way: the multiple processors give you LESS than a single
processor, not more. Because of this they have fewer ways of going
wrong, or fewer ways of being re-directed by events of the sort that
can happen in the intermediate states required by the single
processor. The multi-processor systems also admit more options for
recovery from diversion, failure, etc. I.e. they are inherently more
robust. (That's why engineers rightly often prefer them.)
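For concreteness, here is a toy sketch (my own invented example, nothing
more): a parallel system performs a transfer as one indivisible
transition, while a serial simulation of it is forced through an extra
intermediate state in which the system's invariant is briefly false, and
where an interrupt or fault could find it.

```python
# Toy illustration (hypothetical, for this discussion only): move 10
# units from x to y while preserving the invariant x + y == 100.

def parallel_step(state):
    """One machine step of the parallel system: the whole transfer is a
    single indivisible state transition, so no intermediate state exists."""
    x, y = state
    return (x - 10, y + 10)

def serial_simulation(state):
    """A serial simulation of the same transfer: it must pass through an
    intermediate state the parallel system simply never occupies."""
    x, y = state
    trace = [(x - 10, y),        # intermediate state: invariant broken here
             (x - 10, y + 10)]   # final state agrees with parallel_step
    return trace

start = (60, 40)
trace = serial_simulation(start)
assert trace[-1] == parallel_step(start)   # same end point, but ...
assert sum(trace[0]) != 100                # ... an extra state in which
                                           # things can go wrong
```

The two agree on every start-to-finish mapping, so formally they compute
the same function; the difference shows up only in the counterfactuals
about what an interrupt arriving mid-transfer would observe.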

I regard a mind as essentially a control system. Thus robustness is
not just an irrelevant frill.

> ...If the causal relations between the multiple processors
> are determinate, then we can incorporate them into the specification
> of a single giant FSA that does the overall job, such that any
> implementation of this FSA will have the same pattern of causal
> structure.
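The construction Dave describes can be sketched as a product automaton.
A toy rendering (my own, not his): two small deterministic machines
stepped in lock-step become one FSA whose states are pairs.

```python
from itertools import product

# Two tiny deterministic FSAs (hypothetical "processors"), each given as
# a map from state to next state; both advance once per global tick.
fsa_a = {"a0": "a1", "a1": "a0"}               # period 2
fsa_b = {"b0": "b1", "b1": "b2", "b2": "b0"}   # period 3

# The single "giant FSA": its states are pairs, and one transition of
# the product performs both component transitions simultaneously.
product_fsa = {(sa, sb): (fsa_a[sa], fsa_b[sb])
               for sa, sb in product(fsa_a, fsa_b)}

state = ("a0", "b0")
for _ in range(6):                  # lcm(2, 3) = 6 ticks
    state = product_fsa[state]
assert state == ("a0", "b0")        # the joint system returns to its start
```

This captures the lock-step state transitions exactly; what it does not
capture is what can happen to a physical realisation between ticks.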

But not all its negative features: i.e. certain things can't happen
to the multiprocessor system that can happen to the FSA
implementation. Because you think only in terms of formalization you
miss my main point, which is about properties of real
implementations, not properties of formal systems representing those
implementations. (J.L.Austin: "Fact is richer than diction" - actual
systems can diverge from their specifications, and the serial
implementation can diverge in ways in which the parallel one can't.)
Unless I've failed to understand your argument?

> ......I'd argue that these serial implementations
> have all the causal structure that the parallel ones do.

Yes and more besides: the more being the unwanted bit!

> ....Unless you want to
> argue that slowing and adding detail to these causal relations while
> preserving overall causal structure can change some vital cognitive
> property, then it seems to me that nothing can rest on an
> implementation's being serial or parallel.

No. I regard speed as theoretically (if not practically) irrelevant
here. The point is not that the extra detail requires more time, but
that it changes the truth-values of (some) counterfactual
conditionals describing the system, even if you manage to get the
serial implementation fast enough.

> Certainly there will be differences in speed, and in the way
> things can go wrong, and in various other engineering concerns.

What my article tried to do was claim that engineering concerns have
been underrated by AI theorists discussing philosophical issues. And
also by Penrose. Of course people building working systems have to
consider them. Cognitive scientists are not usually trained as
engineers, alas.

> What's not clear is why these things should affect the cognitive
> properties of the system (of course if things *do* go wrong it
> will make a difference,

Aha! Things do go wrong in the real world.

> ...but in this case it will be failing
> to implement the relevant computation, so this isn't directly
> relevant to the strong AI thesis).

This begs the question by assuming that mental states and processes
are defined in terms of the computations they implement, rather than
in terms of their causal powers. Of course, you can define
(technical) terms any way you like, but what you've written
disregards the fact that for a cognitive scientist or philosopher
interested in mental properties defined in terms of their *causal
powers* (which I believe is the same as defining them in terms of
the counterfactual conditionals that are true of them), the
engineering differences are important.

They would be irrelevant to someone who thought of a mind as a giant
formal system, rather than as a control system.

If Searle and Penrose are attacking those who think of a mind as
simply a formal system, then I guess I am on their side. But they
are wrong in thinking that AI is in any sense committed to this
particular philosophical position. People building working systems
have to go beyond formalisation.

A counter-argument to me might be that a serial processor could
somehow, by magic or special physics, be made inherently
uninterruptable and totally reliable while it is doing the
"detailed" state transitions. This would seem to require some kind
of temporary causal isolation, and I have no reason to believe this
is possible, or even that it really makes any sense in working
systems, as opposed to formal systems.

For a philosopher who thinks the whole universe is just some kind of
formal system in the mind of some kind of God, my position would be
meaningless.

Oh dear, this has come out longer than I intended... Enough!

Cheers.
Aaron
PS
I'll not reply to your points about continuously varying parallel
systems. Everything you say is right - I merely mentioned the case
as a hopefully interesting aside.
-- 
Aaron Sloman, School of Computer Science,
The University of Birmingham, B15 2TT, England
EMAIL   A.Sloman@cs.bham.ac.uk  OR A.Sloman@bham.ac.uk
Phone: +44-(0)21-414-3711       Fax:   +44-(0)21-414-4281