From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!spool.mu.edu!yale.edu!jvnc.net!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!sun4nl!swi.psy.uva.nl!swi!moffat Mon Oct 19 16:59:04 EDT 1992
Article 7267 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!spool.mu.edu!yale.edu!jvnc.net!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!sun4nl!swi.psy.uva.nl!swi!moffat
From: moffat@uvapsy.uvapsy.psy.uva.nl (Dave Moffat)
Subject: Re: Human intelligence vs. Machine intelligence
In-Reply-To: ginsberg@t.Stanford.EDU's message of Wed, 7 Oct 1992 15:15:33 GMT
Message-ID: <MOFFAT.92Oct14153357@uvapsy.uvapsy.psy.uva.nl>
Sender: news@swi.psy.uva.nl (News Man)
Nntp-Posting-Host: uvapsy.psy.uva.nl
Organization: Faculty of Psychology, University of Amsterdam
References: <BvM75v.AEF@eis.calstate.edu> <26536@castle.ed.ac.uk>
	<MOFFAT.92Oct7105034@uvapsy.psy.uva.nl>
	<1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU>
Date: Wed, 14 Oct 1992 14:33:57 GMT
Lines: 103


In article <1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU> ginsberg@t.Stanford.EDU (Matthew L. Ginsberg) writes:

   In article <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl>
   moffat@uvapsy.psy.uva.nl (Dave Moffat) writes about rebuttals to
   Penrose's book.

   >Now finally here's what I originally meant to say.
   >There's an extensive paper by Aaron Sloman in the latest AI journal
   >doing exactly this -- refuting Penrose, point by point.

   This paper is -- my opinion only, of course -- one of the worst
   responses to Penrose that has been published.  It is rife with
   technical inaccuracies (such as Sloman's belief that multiple
   processors somehow avoid the Church-Turing hypothesis), and
   essentially caves in to Penrose's attack on strong AI.  I found
   it an embarrassment.


						   Matt Ginsberg

It's good that you felt free enough to express your
opinion on the paper so forcefully, and I can see how you
might have formed such a negative one.
But it wouldn't be fair (of me, having first mentioned it)
to let this go just like that, as it might give people the impression
that it's as bad as you say, and indefensibly so.
Anybody who read your article and was put off reading the paper
should still give it a look, I think,
as there's a lot in it worth their attention.

Regarding the technical issues you raise, you may be right.
This sure isn't the place to argue about that in detail.
But it would be wrong (I feel) to discount the paper
out of hand merely for a few technical inaccuracies.
Technical points can often be essential to the argument,
but they aren't always.
It can sometimes be damaging to focus unduly
on such things if it means you ignore the main
flow of the argument.

And what is the main flow?
Hard to summarise here, but my impression of the sections
where Sloman goes into Turing equivalence is that there's
quite a deep issue running under this ground.
The issue is in your (our) conception of AI;
is it very hard science, or is it engineering?

If you're a really hard-core mathematico-logician type
of AI person, you might think that Turing machines are
what it's all about, and not look beyond that type of computation.
But if you're more interested in the possibility of
building a real artificial intelligent system, or analysing a natural one,
then, says Sloman, you'll simply have to face other issues.
Like interaction with the environment.
Like parallelism.
Like several other things we don't have space for here.
A hard-core logician would say that's all irrelevant at the
highest level of abstraction, because of the ultimate
equivalence in a certain mathematical sense of all computers anyway.
But an AI engineer type would say that kind of abstraction,
as so often in mathematical modelling,
throws so much good stuff away in its drive to reduce the
problem to symbols that it ultimately makes its "solution" trivial.
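To make the logician's point concrete (my own toy illustration, nothing
from Sloman's paper, and the function names are mine): two "processors"
stepping in parallel can be simulated by one sequential machine that just
interleaves their steps, so parallelism adds nothing in the Church-Turing
sense -- while the simulation visibly throws away the real concurrency.

```python
def make_counter(limit):
    """A tiny 'process': counts up toward limit, one step at a time."""
    def step(state):
        return state + 1 if state < limit else state
    return step

def run_parallel(steps, states, rounds):
    """'Parallel' execution: every process advances once per round."""
    for _ in range(rounds):
        states = [step(s) for step, s in zip(steps, states)]
    return states

def run_interleaved(steps, states, rounds):
    """Sequential simulation: same steps, taken one at a time."""
    states = list(states)
    for _ in range(rounds):
        for i, step in enumerate(steps):
            states[i] = step(states[i])
    return states

procs = [make_counter(5), make_counter(10)]
# Same final states either way -- the mathematical equivalence holds...
assert run_parallel(procs, [0, 0], 7) == run_interleaved(procs, [0, 0], 7)
# ...but the simulation has discarded real time and genuine concurrency,
# which is exactly the "good stuff" the engineer cares about.
```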

Just one illustrative point.
What would a good engineering solution to the perfectly
reasonable notion of an *interrupt* look like
in the universal Turing machine?
Interrupts are, I take it, absolutely essential
to all thinking machines, as well as to the cheapest
computers you can buy these days.
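Here's a hypothetical sketch of why this is awkward (my own illustration,
not anybody's published construction): a Turing-style machine has no
asynchrony, so the best we can do is write a mark into a reserved tape
cell and make the machine poll that cell before every single step.

```python
def run(tape, interrupt_at=None, max_steps=20):
    """Step a trivial machine that writes 'x' left to right, but must
    check an 'interrupt cell' (tape[0]) before every single step."""
    log = []
    for step in range(max_steps):
        if interrupt_at is not None and step == interrupt_at:
            tape[0] = "!"            # the outside world raises a flag...
        if tape[0] == "!":           # ...which the machine must poll
            log.append("handled interrupt at step %d" % step)
            tape[0] = "_"            # acknowledge and clear the flag
        pos = 1 + step
        if pos >= len(tape):
            break
        tape[pos] = "x"
        log.append("wrote cell %d" % pos)
    return log

log = run(list("_" * 8), interrupt_at=3)
# The polling check taxes every step of the computation, whereas a real
# interrupt line costs nothing until it fires -- the asynchrony itself
# has no natural home in the formalism.
```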

And one more point about Turing - although he only had one name,
he was two men (and a woman too, incidentally).
There was the Turing of the Turing machine,
and the Turing of the Turing test,
and folks often seem to get confused about this.
In my opinion, of course.

If Turing himself had tried to make a machine
to pass the Turing test, he might have thought
more about the issues Sloman raises, and might
have suggested an extension of his notion
of computation from implementation of mathematical
functions (input -> output) to one including
formalizations for things like interaction
with the environment per se, which don't
require enormous cuts with a formal knife
to reduce things like interrupts to marks
on a single, semi-infinite tape.
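One way to picture the extension hinted at here (my formulation, not
Turing's or Sloman's): instead of a function f(input) -> output applied
in one shot, model the machine as a transducer that alternates moves
with its environment, consuming one input and emitting one output per
exchange, with state persisting across the dialogue. A Python generator
does this naturally:

```python
def running_sum_transducer():
    """A coroutine-style machine: state persists across interactions."""
    total = 0
    out = None
    while True:
        x = yield out          # wait for the environment's next move
        total += x
        out = total            # respond with the running sum so far

machine = running_sum_transducer()
next(machine)                  # prime the coroutine to its first yield
replies = [machine.send(x) for x in [1, 2, 3]]
# replies == [1, 3, 6]: each reply depends on the whole interaction
# history, something a single input -> output application cannot express
# without encoding the entire dialogue up front.
```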

In short, although the points you made about the
paper might all be valid, it seems to me that
there's more to it than you find.
Certainly I found the paper provocative
technically, but also thought-provoking,
and I'd encourage people in this newsgroup
to look at it -- I'd say it's a must,
actually, if you're interested in the philosophical
questions of AI.

Sorry for the length of this.
