Newsgroups: comp.ai,comp.robotics,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!hudson.lm.com!news.pop.psu.edu!news.cac.psu.edu!howland.reston.ans.net!agate!darkstar.UCSC.EDU!nic.scruz.net!earth.armory.com!rstevew
From: rstevew@armory.com (Richard Steven Walz)
Subject: Re: Minsky's new article
Organization: The Armory
Date: Mon, 12 Dec 1994 10:21:55 GMT
Message-ID: <D0p0sr.3II@armory.com>
References: <3c78j8$b44@jetsam.ee.pdx.edu> <3ca2ji$og4@jetsam.ee.pdx.edu> <D0Lp9s.1tI@armory.com> <3ccjcg$a79@jetsam.ee.pdx.edu>
Sender: news@armory.com (Usenet News)
Nntp-Posting-Host: deepthought.armory.com
Lines: 41
Xref: glinda.oz.cs.cmu.edu comp.ai:25877 comp.robotics:16205 comp.ai.philosophy:23566

In article <3ccjcg$a79@jetsam.ee.pdx.edu>,
Marcus Daniels <marcus@ee.pdx.edu> wrote:
>rstevew@armory.com (Richard Steven Walz) writes:
>>You imagine that a sufficiently
>>programmed learning robot cannot foresee consequences or learn to foresee
>>new ones!??
>
>It's just that it may not fucking care what the consequences are.
>Its model may be quite good.
-----------------------------------------
You mean that you've never written a "fucking" goal-oriented program???
If not, then you simply have a completely insufficient background to be
discussing this topic without looking stupid! You really don't know how to
program such a simple thing? Honestly??? "Caring" is simply a "cognitive",
nested iterative search routine enabled in a program: it permutes its
database of candidates, juxtaposes them in an array, and tries to find the
one that suits the gambit so as to "win"! Human-like intelligences have a
"supervisor" program that enables the "best suited" member of the array
set, and which congratulates itself on being so "smart"!

Any deeply recursive, goal-oriented program will become aware, if it is
sufficiently complex. It's not such a big trick; it's just that we haven't
found the tricky arrangement yet. When we do, kids will likely play with
the structure in grade school to better understand learning and their own
awareness. I very much doubt that simple awareness requires much of a
database to manipulate as an environment, and animals are well known to
display simple self-awareness, which some of our best attempts border on
quite closely.

It's only a matter of time and the right basic algorithmic structure
before you have "living" entities, which we will probably shut off and
kill quite often without a second thought, and which we will at first
consider too simple and bloodless to be alive, until we find out how
simple it is to create something that is REALLY alive! And then we will
either be shocked by our own ruthlessness, or we shall lose respect for
so-called "intelligent-aware" beings, except ourselves, for obvious
reasons: *WE* don't want to get "turned off"!!! When we find out how
simple this thing we are actually is, we will puke at a lot of squicky
poetry that has lionized humans as such inimitable creations! We have
called ourselves a lot of highly vaunted things, and without a real shred
of proof or evidence! It will be a bit of a let-down, *I* am betting! And
it may humble us a bit more suitably than any "second coming" might
have!;->
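That "caring" loop is so simple you can sketch it in a dozen lines. Here's
a toy version (illustrative names and a toy string-matching goal of my own
invention, obviously -- not anybody's actual AI system): permute a
"database" of primitive moves into candidate plans, score each against the
goal, and let a "supervisor" enable the best-suited member of the set.

```python
# Toy sketch of "caring" as goal-oriented search: permute candidates,
# score them against a goal, supervisor enables the best-suited one.
# All names and the goal itself are illustrative.
from itertools import permutations

GOAL = "ACT"  # the state the program "cares" about reaching

def score(plan, goal=GOAL):
    """How well does this candidate plan suit the gambit?
    Count positions where the plan agrees with the goal."""
    return sum(1 for a, b in zip(plan, goal) if a == b)

def supervisor(candidates):
    """'Enable' the best-suited member of the array set."""
    return max(candidates, key=score)

# Permute the 'database' of primitive moves into candidate plans.
moves = "CAT"
candidates = ["".join(p) for p in permutations(moves)]
best = supervisor(candidates)
print(best, score(best))  # prints: ACT 3
```

Add a layer that congratulates itself on picking the winner and recurse it
on its own choices, and you've got the crude skeleton of the thing I'm
describing. The hard part isn't the loop; it's the tricky arrangement.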
-Steve Walz   rstevew@armory.com

