From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!unix.cis.pitt.edu!pitt!geb Mon Dec 16 11:01:51 EST 1991
Article 2115 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!unix.cis.pitt.edu!pitt!geb
From: geb@dsl.pitt.edu (gordon e. banks)
Newsgroups: comp.ai.philosophy
Subject: Re: Abstract question.
Message-ID: <12683@pitt.UUCP>
Date: 14 Dec 91 13:37:00 GMT
References: <1991Dec09.183012.5748@ecst.csuchico.edu> <12638@pitt.UUCP> <1991Dec11.151326.9932@saifr00.cfsat.honeywell.com>
Sender: news@cs.pitt.edu
Organization: Decision Systems Laboratory, Univ. of Pittsburgh, PA.
Lines: 17

In article <1991Dec11.151326.9932@saifr00.cfsat.honeywell.com> petersow@saifr00.cfsat.honeywell.com (Wayne Peterson) writes:
>Is true artificial intelligence something like real simulated pearls?  What does depression have to do with intelligence?  How often we confuse being humanlike with being intelligent.  How anthropomorphic.

The more pertinent question being asked was "what is depression?"
By simulating it, we might be able to learn something about what cognitive
elements it contains.  Does depression serve a cognitive purpose, perhaps even
a positive one in some cases?  I don't think we were confusing intelligence
with being human-like.  We were more interested in the human-like, in this
case, than in making the program do something brilliant.  It was intentionally
"anthropomorphic".


-- 
----------------------------------------------------------------------------
Gordon Banks  N3JXP      | "I have given you an argument; I am not obliged
geb@cadre.dsl.pitt.edu   |  to supply you with an understanding." -S.Johnson
----------------------------------------------------------------------------