From: axs@cs.bham.ac.uk (Aaron Sloman)
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Subject: Re: Human intelligence vs. Machine intelligence
Summary: eh?? Did I say that??
Keywords: penrose, church-turing hypothesis
Message-ID: <BvytMD.9FC@cs.bham.ac.uk>
Date: 11 Oct 92 16:25:24 GMT
References: <BvM75v.AEF@eis.calstate.edu> <26536@castle.ed.ac.uk> <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl> <1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU>
Sender: news@cs.bham.ac.uk
Organization: School of Computer Science, University of Birmingham, UK
Lines: 164
Nntp-Posting-Host: emotsun

ginsberg@t.Stanford.EDU (Matthew L. Ginsberg) writes:

> Date: 7 Oct 92 15:15:33 GMT
> Organization: Computer Science Department, Stanford University
>
> In article <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl>
> moffat@uvapsy.psy.uva.nl (Dave Moffat) writes about rebuttals to
> Penrose's book.
>
> >Now finally here's what I originally meant to say.
> >There's an extensive paper by Aaron Sloman in the latest AI journal
> >doing exactly this -- refuting Penrose, point by point.
>
> This paper is -- my opinion only, of course -- one of the worst
> responses to Penrose that has been published.  It is rife with
> technical inaccuracies (such as Sloman's belief that multiple
> processors somehow avoid the Church-Turing hypothesis), and
> essentially caves in to Penrose's attack on strong AI.  I found
> it an embarrassment.

I am terribly sorry to have embarrassed a defender of AI. But let me
reassure anyone who thinks that what they read in net news articles
is a reliable source of information that I did not claim

> .....that multiple
> processors somehow avoid the Church-Turing hypothesis)

On the contrary, I acknowledged explicitly that there's no
mathematical function that can be computed by a collection of multiple
(synchronised) processors that cannot be computed by a single Turing
machine, since it is obvious that a Turing machine could easily
(however slowly?) simulate any such collection of processors by
interleaving simulations of each of them.
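
For concreteness, here is a minimal sketch of that interleaving
idea, using Python; the generator-based "processors" are
illustrative toys of my own, not anything from the review. A single
sequential loop steps each "processor" in turn, standing in for the
one Turing machine simulating the whole collection.

    # Two "processors" expressed as generators, driven by one
    # sequential loop that takes a step from each in turn.
    def counter(name, limit):
        """A trivial 'processor' that counts up to limit."""
        for i in range(limit):
            yield (name, i)          # one primitive step

    def interleave(*processors):
        """Simulate several processors on one sequential machine."""
        live = [iter(p) for p in processors]
        while live:
            for p in list(live):
                try:
                    yield next(p)    # one simulated step of p
                except StopIteration:
                    live.remove(p)   # this processor has halted

    for step in interleave(counter("A", 3), counter("B", 3)):
        print(step)                  # A and B advance in lock-step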

My argument was that "notwithstanding the theorem that parallelism
cannot increase the class of functions that can be computed and
notwithstanding the fact that some inherently parallel virtual
machines may be implemented on a time-shared sequential processor"
there remain interesting differences between single processor and
multi-processor systems. For example certain state-transitions that
can occur in "one step" in a multi-processor implementation may
require a complex trajectory through state space in the other.
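
A toy example of that one-step versus trajectory contrast (again a
sketch on my own assumptions, not taken from the review): a parallel
machine can swap two registers atomically, while a sequential
implementation of the very same input-output mapping must pass
through intermediate states the parallel machine never occupies.

    # The parallel machine updates both registers "in one step".
    def parallel_swap(state):
        a, b = state
        return (b, a)             # simultaneous update, no trajectory

    # The sequential machine needs a temporary register, and its
    # state space therefore admits configurations -- recorded in
    # 'trace' -- that never occur in any parallel run.
    def sequential_swap(state, trace):
        a, b = state
        tmp = a
        trace.append((a, b, tmp))
        a = b
        trace.append((a, b, tmp))   # intermediate state: a == b here
        b = tmp
        trace.append((a, b, tmp))
        return (a, b)

    trace = []
    assert parallel_swap((1, 2)) == sequential_swap((1, 2), trace)
    print(trace)   # the sequential trajectory through state space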

These are *causal* differences that affect the kinds of states that
can occur in such systems. From an engineering point of view such
differences are of great practical importance, e.g. as regards
reliability. This issue is relevant to the analysis and explanation of
mental states if you believe (as I and other functionalists do) that
mental states need to be defined ultimately by their causal powers,
including their powers to cause other mental states. Systems with
different state spaces admitting different sets of state-transitions
will have different causal powers. Whether these differences settle
the philosophical discussions is not obvious: there is still
philosophical work to be done here. My review offered conjectures
only.

I don't think this has anything to do with the Church-Turing
hypothesis, which, as I understand it, is not about causal properties
of systems but about which mathematical functions are computable by
what sorts of engines. On this interpretation this is only a question
about what input-output mappings can be achieved, not a question about
issues like reliability, what intermediate states can occur, etc.

One of my aims in the review was to argue that concern about the
mathematical input-output properties of engines had little to do with
the hard problems in designing intelligent systems, since a totally
unintelligent system might accurately mimic the behaviour of an
intelligent system.
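
A toy illustration of that mimicry point (my own example, in the
same hedged spirit as the sketches above): one and the same finite
input-output mapping, realised once by a procedure that works the
answers out and once by a bare lookup table that works nothing out
at all.

    def add_by_counting(m, n):
        """Compute m + n the long way, by repeated successor steps."""
        total = m
        for _ in range(n):
            total += 1
        return total

    # Tabulate every answer in advance: behaviourally identical on
    # this finite domain, but no arithmetic goes on inside.
    TABLE = {(m, n): add_by_counting(m, n)
             for m in range(10) for n in range(10)}

    def add_by_lookup(m, n):
        return TABLE[(m, n)]

    assert all(add_by_counting(m, n) == add_by_lookup(m, n)
               for m in range(10) for n in range(10))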

(Actually, I believe that if you have a non-synchronised collection of
processors, and their speeds can vary continuously relative to one
another, then they can together compute functions not computable by a
Turing machine: the continuous variation makes impossible the
simulation on a single Turing machine. I don't think this is of any
interest to AI, since, unlike Penrose, Lucas, and others, I don't
think super-Turing capabilities are crucial to the study of
intelligence.)

Because I was uncertain about my conjectures I tried them out on
several mathematical and logical computer scientists before they were
published. I am still uncertain. Despite the vetting by experts
(including one ACM Turing award winner, and colleagues he copied the
paper to!), errors could have been missed, as the topics are very
slippery and busy people don't always read carefully. If my discussion
had technical flaws, as Matt claims, I would welcome detailed
information. I'd rather know the truth than ignorantly believe I was
right!


It is fascinating, after thinking I had demolished Penrose's
position by exposing his Platonism as based on a technical
misinterpretation of Godel, to find that Matt thinks that my review

> essentially caves in to Penrose's attack on strong AI.

I am not sure which Strong AI thesis (of the 8 or so distinguished
in the review) he
is referring to. If he means I abandon the strongest AI theses I am
unrepentant: they are so strong as to be silly, and I know nobody who
actually believes them. (E.g. they imply that certain numbers must be
intelligent because they are Godel-encodings of behaviours of the
algorithms that allegedly suffice for intelligence.)

I suspect he must be referring to my discussion of an error that I
claimed was common to Penrose and many of his critics. I.e. Penrose
and critics both think there is some way of discovering that Godel's
"undecidable" sentence (G(F)) says something *true* about the formal
system F that allegedly provides a basis for arithmetic.

Penrose thinks he can do this (i.e. see the truth) though a Turing
machine couldn't, whereas many of his critics argue that a Turing
machine could do it too.

My claim was that since the sentence was undecidable, adding either it
or its negation to F would produce a new consistent system F1 or F2,
and since both F1 and F2 are consistent both would have models, with
G(F) true in one set of models and false in the other set (the
so-called "non-standard" models). Hence nobody can rightly claim simply
to see that G(F) is true.

If they say they can because they know *which* model they are thinking
about, then they must explain *how* they identify that model. Godel's
result shows that it cannot be by means of a formal system. If there
are some other means, then they apparently support Penrose who is
claiming exactly that!

(I've ignored the difference between consistency and
omega-consistency: I don't think the difference matters here.)
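
Set out compactly (still assuming F consistent and suppressing
omega-consistency), the model-theoretic step is just:

    % Sketch: G(F) undecidable in the consistent system F.
    F \nvdash G(F) \;\text{and}\; F \nvdash \lnot G(F)
      \;\Longrightarrow\; F_1 = F \cup \{G(F)\} \text{ and }
                          F_2 = F \cup \{\lnot G(F)\}
                          \text{ are both consistent}
      \;\Longrightarrow\; \exists M_1 : M_1 \models F_1
                          \;\text{and}\;
                          \exists M_2 : M_2 \models F_2
                          \quad\text{(by completeness)}

so that G(F) is true in every model of F_1 and false in every model
of F_2, and "seeing" G(F) to be true requires singling out one class
of models, which is exactly what a formal system cannot do.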

Although several "experts" have looked at my argument without (as far
as I know) finding flaws, I am still uncertain about it, and said so
in the article. And that may be how Matt misread me as "caving in" to
Penrose (making me a traitor, I suppose?). I wrote:

   "If the argument (a), above, is incorrect, and there really is a
   way of seeing that G(F) is true despite its formal
   undecidability, then perhaps we have to accept Penrose's
   conclusion that there is something mathematicians do that does
   not correspond simply to deriving formulas in a formal system,
   and cannot be modelled by any algorithm."

A quick reading of this, not paying proper attention to the whole "If"
clause at the beginning and the "perhaps" in the middle, could lead to
the wrong interpretation. I really suspect that neither mathematicians
nor any algorithm can do what Penrose claims he can do. But I have not
proved this, and so my conclusions here are all tentative and
provisional. The review had some hypothetical discussion of possible
ways in which something like Penrose's position *might* be correct if
my argument was flawed.

As I said in response to a previous article in this thread, I am still
unsure about the arguments. People who want to see the world as
consisting of Goodies and Baddies won't like a review that tries to
steer a middle course, and claims that a critic of AI had something
interesting to say despite his errors. Too many AI responses to Searle
and Penrose are apparently based on the notion that they must be
completely wrong if they are wrong at all.

Apologies if this is the wrong forum for such comments.
Aaron
-- 
Aaron Sloman, School of Computer Science,
The University of Birmingham, B15 2TT, England
EMAIL   A.Sloman@cs.bham.ac.uk  OR A.Sloman@bham.ac.uk
Phone: +44-(0)21-414-3711       Fax:   +44-(0)21-414-4281


