Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Wants in AI Systems
Message-ID: <D0IA0I.5tw@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <3c5dd6$hc7@oahu.cs.ucla.edu>
Date: Thu, 8 Dec 1994 18:57:54 GMT
Lines: 28

In article <3c5dd6$hc7@oahu.cs.ucla.edu>,
Kenneth Colby <colby@oahu.cs.ucla.edu> wrote:
>   
>
>   If an aim of AI is to design and build an intelligent
>   artifact, it can be overlooked that what we interpret
>   as a mind is primarily a system of interests - of "wants"-
>   a wanter/thinker with intelligence being a means to an end
>   rather than an end in itself.
>
>   If our envisioned artifact is to be interpreted as a mind,
>   what wanted goals should this artifact seek to attain?
>   What are its "wants"?
>
>   Should they simply be the human goal-set the artifact is better
>   at achieving because it is more "intelligent"?

This might be too limiting, although there should be some common ground:
if the artifact's goal-set did not include its own survival, it would be
hard to call it "intelligent".

>	   KMC
Andrzej
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
