Newsgroups: rec.arts.books,comp.ai,sci.cognitive,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!news.mathworks.com!tank.news.pipex.net!pipex!uknet!newsfeed.ed.ac.uk!dcs.ed.ac.uk!newshost.dcs.ed.ac.uk!mxm
From: Mike Moran <mxm@dcs.ed.ac.uk>
Subject: Re: AI needs Lit to make Agents Intelligent  (Was: Quantifying literary progress)
In-Reply-To: jorn@MCS.COM's message of 27 Aug 1995 16:13:27 -0500
X-Nntp-Posting-Host: fuday.dcs.ed.ac.uk
Message-ID: <MXM.95Aug29211812@dcs.ed.ac.uk>
Sender: cnews@dcs.ed.ac.uk (UseNet News Admin)
Organization: Department of Computer Science, Edinburgh University
References: <41qn5n$jdf@Mars.mcs.com>
Date: Tue, 29 Aug 1995 20:18:12 GMT
Lines: 77
Xref: glinda.oz.cs.cmu.edu comp.ai:32963 sci.cognitive:9321 sci.psychology.theory:471



In article <41qn5n$jdf@Mars.mcs.com> jorn@MCS.COM (Jorn Barger) writes:

jorn> In article <41qi7s$gms@Mars.mcs.com> on rec.arts.books, I wrote, in
jorn> answer to Tom Stanton:
>> Despite the glowing pictures that some AI gurus paint, progress
>> towards such intelligent agents is really stalled, and has been for
>> some years.
>> My argument is that in order to build agents that understand humans,
>> we have to build a model of the human personality [...]

jorn> Even the problem of building a computer with multiple independent
jorn> processors seems to require treating each processor as a sort of
jorn> 'agent' that's competing with others for resources-- but this requires
jorn> designers understand *human* competition better... it's a *society*
jorn> of CPUs.

jorn> Or, one of several processors in such a design may be supplying
jorn> results that aren't 100% reliable-- which is another sort of *social*
jorn> complication that has detailed parallels to human social psychology...

	Don't you think it is rather anthropomorphic to assume that
	making the 'agents' (*) you describe interact in an
	advantageous way requires an understanding of human society?
	I would agree that it helps to view agent interactions as
	those of a society, but I would say a more appropriate society
	would be that of a nest of ants or suchlike.
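
	(To make that concrete, here is a rough sketch, entirely my
	own invention with made-up names and numbers, of what
	ant-style interaction might look like in code: agents
	coordinate through a shared pheromone field and never model
	one another at all.)

import random

# Sketch of ant-style coordination (stigmergy).  Agents never model
# each other; they only read and write a shared pheromone field.
GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]

def step(ant):
    """Move to the neighbouring cell with the strongest pheromone
    (ties broken at random), then reinforce the trail just taken."""
    x, y = ant
    neighbours = [((x + dx) % GRID, (y + dy) % GRID)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    best = max(pheromone[nx][ny] for nx, ny in neighbours)
    nx, ny = random.choice(
        [n for n in neighbours if pheromone[n[0]][n[1]] == best])
    pheromone[nx][ny] += 1.0          # leave a mark for the others
    return (nx, ny)

def evaporate(rate=0.05):
    """Pheromone decays, so unused trails fade away."""
    for row in pheromone:
        for i in range(GRID):
            row[i] *= 1.0 - rate

ants = [(random.randrange(GRID), random.randrange(GRID))
        for _ in range(30)]
for _ in range(100):
    ants = [step(a) for a in ants]
    evaporate()
print(max(max(row) for row in pheromone))   # strength of the busiest trail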

	Additionally, I would say that a general understanding of
	competition (perhaps modelled by Darwinism or some other
	prevalent competing ( :-) ) model) would be more appropriate
	in this case.
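
	(Again just a sketch of what I mean, with all traits and
	numbers arbitrary: strategies reproduce in proportion to the
	resources they capture, Darwinian-style, with no human-style
	negotiation anywhere in sight.)

import random

# Each agent is reduced to one inherited trait ('greed'); payoff drops
# when too many agents crowd the same niche, and the fitter half
# reproduces with mutation.
def fitness(greed, population):
    crowd = sum(1 for g in population if abs(g - greed) < 0.1)
    return greed / crowd              # resources shared within the niche

population = [random.random() for _ in range(50)]
for generation in range(200):
    ranked = sorted(population, key=lambda g: fitness(g, population),
                    reverse=True)
    survivors = ranked[:25]           # selection
    offspring = [min(1.0, max(0.0, g + random.gauss(0.0, 0.05)))
                 for g in survivors]  # reproduction with mutation
    population = survivors + offspring
print(sum(population) / len(population))     # mean trait after selection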

	In general, what I'm trying to say is: automatically
	introducing the modelling of humans into the solution or
	investigation of a problem will tend to complicate rather
	than simplify that problem. Regarding the specific case of
	agents, they are really quite simple right now, and do not
	even begin to approach the utility or functionality of
	humans, whether looked at individually or in groups (possibly
	'societies'). Indeed, this is an advantageous position, since
	it lets us see whether any emergent behaviour can result from
	such simple connectivity.
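
	(A toy illustration of what I mean by emergence from simple
	connectivity, nothing to do with any particular agent system:
	an elementary cellular automaton in which each cell sees only
	its two neighbours, yet complex global patterns appear.)

import random

# Elementary cellular automaton, Rule 110: each cell updates from just
# itself and its two neighbours, using the rule number as a lookup
# table, yet complex global patterns appear.
RULE = 110
N = 64
cells = [random.randint(0, 1) for _ in range(N)]
for _ in range(32):
    print(''.join('#' if c else '.' for c in cells))
    cells = [(RULE >> (4 * cells[(i - 1) % N]
                       + 2 * cells[i]
                       + cells[(i + 1) % N])) & 1
             for i in range(N)]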

	To ask agents to approach the goal of emulating human
	reasoning and/or society is asking way too much; rather, we
	should take more bite-sized chunks (**), then sit back and
	watch them chew,

						Thanks,

							Mike

	(*)  Quotes added since the term 'agent' is used in many ways
	to mean different things, and I wish to indicate that the use
	here is but one of many.

	(**) Problems such as: two or more agents co-operating to push
	a block across a room to a desired site, or a small community
	of agents trying to convince other agents of their views, with
	the added spice of allowing them to lie.
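
	(For the second of these, a minimal sketch, with the agent
	design and all numbers entirely my own invention: a small
	community spreading opinions, a few liars mixed in.)

import random

class Agent:
    def __init__(self, belief, liar=False):
        self.belief = belief          # a value in [0, 1] the agent holds
        self.liar = liar

    def report(self):
        # a liar reports the opposite of what it actually believes
        return 1.0 - self.belief if self.liar else self.belief

    def listen(self, heard, trust=0.1):
        # shift belief a little toward whatever was heard
        self.belief += trust * (heard - self.belief)

# ten agents, the first three of them liars
agents = [Agent(random.random(), liar=(i < 3)) for i in range(10)]
for _ in range(1000):
    speaker, listener = random.sample(agents, 2)
    listener.listen(speaker.report())
print([round(a.belief, 2) for a in agents])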

--

 MI   CH AE   LM "I don't know the meaning of surrender! Well,     ->   ,__o
 O R A N F O U R      I do, I'm not dumb,.. just not in this     -->  _-\_<,
 T  H  Y E  A  R              context" - The Tick	       ---> (*)'(*)
 A     I C     S Home: http://www.dcs.ed.ac.uk/home/mxm/ !eat emacs biscuits!
