Newsgroups: sci.physics,talk.origins,alt.atheism,sci.logic,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!europa.chnt.gtegsc.com!news.sprintlink.net!simtel!harbinger.cc.monash.edu.au!lugb!ee.latrobe.edu.au!not-for-mail
From: khorsell@ee.latrobe.edu.au (Kym Horsell)
Subject: Re: How to get around Godel, Was: If God exists, what created
Sender: news@lugb.latrobe.edu.au (News System)
Message-ID: <3os40h$p4a@faraday.ee.latrobe.edu.au>
Date: Thu, 11 May 1995 04:32:17 GMT
Lines: 51
References: <95May8.214113edt.887@neuron.ai.toronto.edu> <799999838snz@longley.demon.co.uk> <3oo911$nja@mp.cs.niu.edu>
Organization: Department of Electronic Engineering, La Trobe University
Xref: glinda.oz.cs.cmu.edu sci.physics:121007 sci.logic:10760 comp.ai.philosophy:27924

In article <3oo911$nja@mp.cs.niu.edu>, Neil Rickert <rickert@cs.niu.edu> wrote:
>In <799999838snz@longley.demon.co.uk> David Longley <David@longley.demon.co.uk> writes:
>
>>My point is that AI is (*should be*?) concerned surely, not with modelling 
>>human performance, but going much  further in  implementing the  proposals
>>sketched  by  Leibniz  and Frege (for  whom,  psychologism  was  of course  
>>anathema). Let psychologists model human reasoning (and its biases) by all
>>means, but surely we should be looking to mathematics, logic & engineering
>>for developments in AI?, & for *practical* tools to help applied behaviour
>>scientists/technologists  such as  myself, work more effectively in public
>>service agencies.
>
>David,
>
>Here is a suggestion.  If you will try to not dictate to us what AI
>should be about, we will try to not dictate to you what psychology
>should be about.

I don't think David's remarks are that unreasonable. Certainly
in the early stages of most development the strategy is generally
to copy nature. But we should all know that, despite the "first
law of ecology", Nature doesn't always have the best solution
from our point of view -- jets don't propel themselves like
birds, for instance.

There certainly does seem to be a "pre-occupation" with modeling
something like human intelligence. The aim of meeting Turing at
least partway _does_ tend to limit the possibilities. Hence the
raging debates in even professional circles over the nature of
intelligence -- typically alleged to be some kind of goal of AI.

I don't know about others, but it generally irks me when I
not only have to solve a given problem, but have my hands tied
by being forced to solve it in a particular (and apparently
overly complicated and inefficient) way. I imagine David would
have some sympathy for this position wrt AI researchers.

But on the other hand, as numerous SF stories tend to illustrate
(and, yes, we can learn a lot from stories ;-) there are significant
problems in having "the machine" solve problems in non-human ways,
esp. since part of the problem-solving process involves having to
justify the answer to someone (presumably "limited" to human intelligence
and psychology).

A neat little story of the type I mean is Lem's "Golem XIV". It is of
interest not only to hard scientists, but to sociologists and philosophers
as well. (Not to mention re-introducing the art of writing introductions.) ;-)

-- 
R. Kym Horsell
khorsell@EE.Latrobe.EDU.AU              kym@CS.Binghamton.EDU 
