From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!bronze.ucs.indiana.edu!chalmers Wed Oct 14 14:58:42 EDT 1992
Article 7216 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4737 comp.ai.neural-nets:4667 comp.ai.philosophy:7216 sci.psychology:4802
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!bronze.ucs.indiana.edu!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <Bvz6GE.97F@usenet.ucs.indiana.edu>
Keywords: penrose, church-turing hypothesis
Sender: news@usenet.ucs.indiana.edu (USENET News System)
Nntp-Posting-Host: bronze.ucs.indiana.edu
Organization: Indiana University
References: <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl> <1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU> <BvytMD.9FC@cs.bham.ac.uk>
Date: Sun, 11 Oct 1992 21:02:37 GMT
Lines: 114

[I cancelled my original article, which was long, and split it into
two parts, one on computation and implementation, and the other
on Godel.]

In an interesting article, Aaron Sloman writes:

>My argument was that "notwithstanding the theorem that parallelism
>cannot increase the class of functions that can be computed and
>notwithstanding the fact that some inherently parallel virtual
>machines may be implemented on a time-shared sequential processor"
>there remain interesting differences between single processor and
>multi-processor systems.

If one thinks of computation as inherently parallel, as I tend to, then
these differences can't come to much.  The idea is that we formalize
the notions of computation and implementation using complex finite
state automata -- by this I mean FSAs in which every state is a
complex combination of many substates, or "cells" if you like, and
in which evolution rules are stated in terms of these substates --
and stipulate that a physical system implements this computation
if there is an appropriate mapping from states of sub-parts of the
system to formal states of the cells of the FSA such that the
causal relations between the states of the subparts are isomorphic
to the formal relations between the states of the cells.  This, I
take it, is more or less a rational reconstruction of the notion
of implementation as it's usually thrown about, and is general
enough to handle any computational formalism -- standard FSAs,
Turing machines, neural networks, cellular automata, programming
languages, and so on.  (Of course for some of these formalisms one
needs an infinite number of cells in principle, but that's a
trivial change, and in any case it's not a factor in practice,
where all systems have finite storage.)
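The picture of a "complex" FSA -- a global state that is a vector of cell substates, with the evolution rule stated cell-by-cell -- can be sketched in a few lines of Python.  The ring-of-XOR-cells example, and every name in it, is my own illustration rather than part of the formalism:

```python
# A minimal sketch of a "complex" finite state automaton: the global
# state is a tuple of cell substates, and the evolution rule is stated
# in terms of the substates of each cell's neighbour cells.

def step(state, rule, neighbours):
    """Advance the whole FSA one step: every cell's next substate is a
    function of the current substates of its neighbour cells."""
    return tuple(rule(tuple(state[j] for j in neighbours[i]))
                 for i in range(len(state)))

# Toy example: three binary cells in a ring, each XORing its two neighbours.
neighbours = {0: (2, 1), 1: (0, 2), 2: (1, 0)}
xor_rule = lambda inputs: inputs[0] ^ inputs[1]

state = (1, 0, 0)
print(step(state, xor_rule, neighbours))   # → (0, 1, 1)
```

An implementation, on this view, is then a mapping from states of physical sub-parts onto the substate values of these cells, such that the physical causal relations mirror the formal `rule`.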

In any case, if we formalize computation and implementation like
this, then there's nothing that multiple processors can give you
that single processors can't.  Not just in I/O, but in causal
structure.  If the causal relations between the multiple processors
are determinate, then we can incorporate them into the specification
of a single giant FSA that does the overall job, such that any
implementation of this FSA will have the same pattern of causal
structure.  If they're not determinate, then we can simulate this
indeterminacy using a random-number generator, and do much the
same thing.
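The "single giant FSA" move can be made concrete: the joint state of two processors becomes one state of a bigger automaton, and any indeterminacy in their timing is absorbed by a random-number generator.  The two toy update rules below are my own invention, purely for illustration:

```python
import random

# Two communicating "processors" with determinate causal relations,
# folded into a single giant FSA whose state is the pair (a, b).
def giant_step(joint_state):
    a, b = joint_state
    # each processor's next state depends on its own state and the other's
    return ((a + b) % 5, (a * b + 1) % 5)

# If the timing is NOT determinate, simulate the indeterminacy with a
# random-number generator deciding which processor "fires" first.
def giant_step_indeterminate(joint_state, rng=random):
    a, b = joint_state
    if rng.random() < 0.5:
        a = (a + b) % 5          # A fires first; B then sees the new a
        b = (a * b + 1) % 5
    else:
        b = (a * b + 1) % 5      # B fires first; A then sees the new b
        a = (a + b) % 5
    return (a, b)

print(giant_step((2, 3)))   # → (0, 2)
```

Any implementation of the giant FSA, serial or parallel, then has the same state-to-state transition structure as the original multi-processor system.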

Of course, one whose notion of computation is inherently serial
might object.  But these FSAs can certainly be *implemented*
serially, if you like, and I'd argue that these serial implementations
have all the causal structure that the parallel ones do.  It's
just that it takes longer for a "single" time-step to occur
(you have to update the cells one-by-one), and the causal relations
between cells may be mediated by some central registers.  Neither
of these factors seems to lose anything in principle -- they just
add some detail to the causal relations in question.  (And any
causal relation between cells probably has plenty enough detail
as it is -- these things don't happen by magic.)  Unless you want to
argue that slowing and adding detail to these causal relations while
preserving overall causal structure can change some vital cognitive
property, then it seems to me that nothing can rest on an
implementation's being serial or parallel.
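The serial implementation of a parallel step can be exhibited directly: update the cells one at a time into a scratch buffer (playing the role of the mediating central register), then commit.  The ring-of-XOR-cells example here is again my own toy, self-contained for the purpose:

```python
# One "parallel" time-step, and its serial realisation: same transition,
# just slower and with an extra mediating register in the causal chain.

def parallel_step(state):
    # all cells update "at once": each XORs its two ring neighbours
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

def serial_step(state):
    n = len(state)
    buffer = list(state)          # central register mediating the update
    for i in range(n):            # one cell at a time, reading OLD substates
        buffer[i] = state[(i - 1) % n] ^ state[(i + 1) % n]
    return tuple(buffer)          # commit: one "single" time-step is complete

s = (1, 0, 1, 1)
assert parallel_step(s) == serial_step(s)
print(serial_step(s))   # → (1, 0, 1, 0)
```

The crucial detail is that the serial version reads the *old* substates throughout and commits at the end, which is what preserves the synchronous causal structure of the parallel step.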

>For example certain state-transitions that
>can occur in "one step" in a multi-processor implementation may
>require a complex trajectory through state space in the other.

Sure, but this "complex trajectory" is just the detail whereby
the "one step" of the causal relation happens in this system.
Even in the multi-processor, it's not as if this "step" occurs
by magic -- there's a whole lot of complex mediation by all
kinds of physical circuits and the like.  I'm not sure that
there's a relevant difference in kind here.

Certainly there will be differences in speed, and in the way
things can go wrong, and in various other engineering concerns.
What's not clear is why these things should affect the cognitive
properties of the system (of course if things *do* go wrong it
will make a difference, but in this case the system will be failing
to implement the relevant computation, so this isn't directly
relevant to the strong AI thesis).

>(Actually, I believe that if you have a non-synchronised collection of
>processors, and their speeds can vary continuously relative to one
>another, then they can together compute functions not computable by a
>Turing machine: the continuous variation makes impossible the
>simulation on a single Turing machine. I don't think this is of any
>interest to AI, since, unlike Penrose, Lucas, and others, I don't
>think super-Turing capabilities are crucial to the study of
>intelligence.)

In any case, it's not clear that this would be *effective* computation.
Presumably on any given run, the speed-relations will differ, so
that different results will come up.  The "extra" property that
this gives you isn't much different from the "extra" property
that a pure random-number-generator gives you -- certainly it
can produce uncomputable functions, but not effectively.  And
it seems that most of the interesting properties can be simulated
by a pseudo-random number generator.  Of course, you may be
arguing that the speed-relations are in fact *determinate*, but
continuous.  In that case, it's very unclear how we would in fact
have set those relations to the precise continuous values required
to give some interesting computational results, given the finite
precision of our instruments.  If it's just a matter of it being
*some* determinate continuous quantity, then this doesn't seem
to give any interesting kind of new computational power -- the
continuous values in question might as well be random -- or they
might as well be some nearby value that happens to be computable,
in which case we can do the simulation.  There's no doubt that
this sort of system can produce non-computable results, but it's
quite unclear how this could be exploited for any kind of effective
results.
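The simulation claim is easy to make concrete: on any given run, all that continuously varying relative speeds buy you is *which* processor happens to fire next, and that interleaving can be mimicked with a pseudo-random number generator.  The two counter "processors" below are my own toy example:

```python
import random

# Asynchronous interleaving of two "processors", with the continuously
# varying speed-relations replaced by a pseudo-random coin flip.
def run_async(steps, seed=0):
    rng = random.Random(seed)     # PRNG standing in for physical timing
    a = b = 0
    trace = []
    for _ in range(steps):
        if rng.random() < 0.5:    # processor A happens to be faster this tick
            a += 1
            trace.append('A')
        else:                     # processor B wins the race this tick
            b += 1
            trace.append('B')
    return a, b, trace

a, b, trace = run_async(10)
print(a + b, ''.join(trace))
```

Different seeds give different interleavings, just as different runs of the physical system would give different speed-relations -- which is exactly why the "extra" power is no more effective than that of a random-number generator.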

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


