From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!stanford.edu!CSD-NewsHost.Stanford.EDU!Xenon.Stanford.EDU!geddis Mon Oct 19 16:59:12 EDT 1992
Article 7278 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!stanford.edu!CSD-NewsHost.Stanford.EDU!Xenon.Stanford.EDU!geddis
From: geddis@Xenon.Stanford.EDU (Don Geddis)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <geddis.719107637@Xenon.Stanford.EDU>
Sender: news@CSD-NewsHost.Stanford.EDU
Reply-To: Geddis@CS.Stanford.Edu
Organization: CS Department, Stanford University, California, USA
References: <BvM75v.AEF@eis.calstate.edu> <26536@castle.ed.ac.uk> 	<MOFFAT.92Oct7105034@uvapsy.psy.uva.nl> 	<1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU> <MOFFAT.92Oct14153357@uvapsy.uvapsy.psy.uva.nl>
Date: 15 Oct 92 00:07:17 GMT
Lines: 50

moffat@uvapsy.uvapsy.psy.uva.nl (Dave Moffat) writes (replying to Ginsberg):
>If you're a really hard-core mathematico-logician type
>of AI person, you might think that Turing machines is
>what it's all about, and not look beyond that type of computation.
>But if you're more interested in the possibility of
>building a real artificial or analysing a natural AI system,
>then, says Sloman, you'll simply have to face other issues.
>Like interaction with the environment.
>Like parallelism.
>Like several other things we don't have space for here.
>A hard-core logician would say that's all irrelevant at the
>highest level of abstraction, because of the ultimate
>equivalence in a certain mathematical sense of all computers anyway.
>But an AI engineer type would say that kind of abstraction,
>as so often in mathematical modelling,
>throws so much good stuff away in its drive to reduce the
>problem to symbols that it ultimately makes its "solution" trivial.

This is a reasonable comment to make, but unfortunately it is irrelevant
to the main discussion.  This began way back with things like Searle's
Chinese Room argument, and the latest iteration was Penrose's "Emperor".
Sloman's article was intended to be a reply to Penrose.  _All_ of these
writings were on the topic of the philosophy of AI, specifically whether
Strong AI (that computers could _be_ intelligent) or Weak AI (that computers
could _act_ intelligent) was even theoretically possible.

That's why Turing Machines and Gödel arguments show up:  they all relate
to notions of what computers are ultimately capable of achieving.  Searle
and Penrose believe that such arguments prove that computers are incapable
in principle of becoming intelligent.

_Everyone_ accepts that, even if AI could succeed in principle, making it
succeed in practice requires a tremendous amount of effort, and that the
issues you and Sloman raise (parallel processing, reliability, interrupts)
are very important for engineering AI systems.  But all of that is
tangential to _this_ discussion, which is about whether AI can work at all
(surely an easier question than whether we humans can figure out how to
build it), and that's why the comments are inappropriate to this thread
and to Sloman's review.

In any case, I'd challenge you to find someone who believes that the
simplifying restriction to Turing machines has made the solution to AI
become "trivial".

	-- Don Geddis
-- 
Don Geddis (Geddis@CS.Stanford.Edu)
I think a good movie would be about a guy who's a brain scientist, but he
gets hit on the head and it damages the part of the brain that makes you want
to study the brain.
	--  Deep Thoughts by Jack Handey [SNL]