From newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!uunet!trwacs!erwin Tue Jun 23 13:21:09 EDT 1992
Article 6306 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <640@trwacs.fp.trw.com>
Date: 18 Jun 92 16:40:04 GMT
References: <1992Jun10.203412.19158@news.Hawaii.Edu> <6980@pkmab.se> 	<1992Jun17.132117.9273@Princeton.EDU> <BILL.92Jun17123907@ca1.nsma.arizona.edu>
Organization: TRW Systems Division, Fairfax VA
Lines: 17

Suppose my world scan is grounded because it is hard-wired and realistic,
and suppose my cognitive objects (used in scrutinizing, considering, and
acting) can operate on the grounded objects in my world scan, and suppose
effective action on the grounded objects is rewarded. Won't I learn
groundings for my cognitive objects?

Also, won't my world scan tend to be realistic, since unrealistic world
scans result in individuals who cannot survive? Further, the world scan
will be prior to cognitive function, since a combination of a realistic
world scan, genetically encoded stimulus-response loops, and some capacity
for calibrating those loops (AKA learning) is enough to survive.
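That last combination can be sketched in code. This is my own toy
construction, not anything from Erwin's post: the "world scan" is a fixed,
truthful reading of the state, the stimulus-response loop is an innate
proportional response, and the only thing learned is a calibration gain
nudged by reward. The target gain of 1.5 is an arbitrary stand-in for
whatever response the environment happens to reward.

```python
import random

def world_scan(state):
    # Hard-wired and realistic: reports the state as it is.
    return state

def calibrate(episodes=2000, lr=0.05, seed=0):
    """Calibrate an innate stimulus-response loop by reward alone."""
    rng = random.Random(seed)
    gain = 0.0     # the learnable calibration of the innate loop
    target = 1.5   # assumption: environment rewards responses near 1.5 * stimulus
    for _ in range(episodes):
        stimulus = world_scan(rng.uniform(-1, 1))
        response = gain * stimulus                       # innate S-R loop
        reward = -(response - target * stimulus) ** 2    # effective action rewarded
        # follow the reward gradient with respect to the gain
        grad = -2 * (gain - target) * stimulus ** 2
        gain += lr * grad
    return gain
```

Under these assumptions the gain converges to the rewarded value without
the agent ever being told what the stimulus "means" — the grounding is
carried by the hard-wired scan plus the reward signal, which is the shape
of the argument above.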

Cheers,
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com