Newsgroups: rec.arts.books,comp.ai,sci.cognitive,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!tank.news.pipex.net!pipex!dispatch.news.demon.net!demon!uknet!newsfeed.ed.ac.uk!dcs.ed.ac.uk!newshost.dcs.ed.ac.uk!mxm
From: Mike Moran <mxm@dcs.ed.ac.uk>
Subject: Re: AI needs Lit to make Agents Intelligent
In-Reply-To: jorn@MCS.COM's message of 30 Aug 1995 11:52:44 -0500
X-Nntp-Posting-Host: hildasay.dcs.ed.ac.uk
Message-ID: <MXM.95Sep1191110@dcs.ed.ac.uk>
Sender: cnews@dcs.ed.ac.uk (UseNet News Admin)
Organization: Department of Computer Science, Edinburgh University
References: <41qn5n$jdf@Mars.mcs.com> <MXM.95Aug29211812@dcs.ed.ac.uk>
	<42250s$285@Venus.mcs.com>
Date: Fri, 1 Sep 1995 18:11:10 GMT
Lines: 53
Xref: glinda.oz.cs.cmu.edu comp.ai:33046 sci.cognitive:9349 sci.psychology.theory:499



In article <42250s$285@Venus.mcs.com> jorn@MCS.COM (Jorn Barger) writes:

jorn> Mike Moran  <mxm@dcs.ed.ac.uk> wrote:
>> Don't you think it is very anthropomorphic to assume that to make
>> the 'agents' (*) you describe interact in an advantageous way 
>> you require an understanding of human society?

jorn> Certainly, one should not *assume* this.  One should try building them,
jorn> and see what problems arise.  One group that's running up against the
jorn> anthropomorphic problem is at UMass <URL: http://dis.cs.umass.edu/ >

	I'll try to have a look at those pages. I, myself, err on the
	side of the 'try it and see what happens' approach.

>> I would agree that
>> it does help to view the agent interactions as those of a society,
>> but I would say a more appropriate society would be that of a
>> nest of ants or suchlike. 

jorn> You'd think so, but try it.  (Even our knowledge of ants has to be filtered
jorn> thru our self-knowledge.)

	Yes, that is true, but then there are not many things that
	can be understood without *any* self-knowledge.

>> Additionally, I would say that a general understanding of competition
>> (perhaps modelled by darwinism or some other prevalent competing
>> ( :-) ) model) would be more appropriate in this case.

jorn> Sure, but try implementing it.  It sounds simple, but it's not...

	I would agree that what I proposed above is not simple, but my
	basic point is that it is usually simpler to keep the extra
	task of modelling anything to do with humans out of the
	problem, unless, of course, you really have no choice.
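	A Darwinian competition model can likewise be sketched in a few
	lines with no human modelling at all (again in Python; the
	bit-string "agents" and count-the-ones fitness are stand-ins I
	have picked for illustration, not anything from a real system):

```python
import random

# Illustrative sketch of Darwinian competition among agents: each
# agent is a bit string, "fitness" is simply the number of 1-bits,
# and pairwise contests decide who reproduces (with mutation).

def fitness(agent):
    return sum(agent)

def mutate(agent, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ 1 if random.random() < rate else b for b in agent]

def evolve(pop_size=30, length=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new = []
        for _ in range(pop_size):
            # Competition: two agents meet, the fitter one reproduces.
            a, b = random.sample(pop, 2)
            winner = a if fitness(a) >= fitness(b) else b
            new.append(mutate(winner))
        pop = new
    return max(fitness(a) for a in pop)
```

	Even this toy version shows why "try implementing it" is fair
	comment: the interesting choices (what counts as fitness, how
	agents meet, how much mutation) are all outside the Darwinian
	idea itself.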

	What annoyed me about the original post was the tacit
	assumption that Artificial Intelligence is only about the
	human mind, and nothing else.
	
						Thanks,

							Mike

--

 MI   CH AE   LM "I don't the meaning of the word surrender! Well, ->   ,__o
 O R A N F O U R      I do, I'm not dumb,.. just not in this     -->  _-\_<,
 T  H  Y E  A  R              context" - The Tick	       ---> (*)'(*)
 A     I C     S Home: http://www.dcs.ed.ac.uk/home/mxm/ !eat emacs biscuits!
