From newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!uunet!trwacs!erwin Mon Aug 24 15:40:51 EDT 1992
Article 6621 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy
Subject: Re: Turing Test Myths
Message-ID: <693@trwacs.fp.trw.com>
Date: 14 Aug 92 15:38:41 GMT
References: <BILL.92Aug12122254@ca3.nsma.arizona.edu> 	<1992Aug13.024527.2079@news.media.mit.edu> 	<BILL.92Aug13130725@ca3.nsma.arizona.edu> 	<1992Aug13.230220.23021@news.media.mit.edu> <BILL.92Aug13201500@ca3.nsma.arizona.edu>
Organization: TRW Systems Division, Fairfax VA
Lines: 21

I suspect we can change the context of this question by looking at HDP
(heuristic dynamic programming) systems. Action networks are not
intelligent. Recordings of plans that can drive action networks are not
intelligent. Intelligence resides in the adaptive critic, particularly
when it creates a plan for the first time or when it adapts (tunes) an
existing plan to a new situation.
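The division of labor above can be sketched in miniature. Everything in
this example is illustrative rather than from the post: the environment is
a toy five-state chain, the "action network" is a fixed greedy one-step
lookahead over the critic's estimates, and the "adaptive critic" is a
value table tuned by temporal-difference updates.

```python
# Toy sketch of an adaptive-critic arrangement (all details illustrative).
# The action network merely executes; the critic does the adapting.

N_STATES = 5   # chain: states 0..4, reward only for reaching the right end
GAMMA = 0.9    # discount factor
ALPHA = 0.5    # critic learning rate

def step(state, action):
    """Toy environment: action +1 moves right, -1 moves left (clamped)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def greedy_action(values, state):
    """Action network: fixed policy, one-step lookahead on the critic."""
    def score(action):
        nxt, reward = step(state, action)
        return reward + GAMMA * values[nxt]
    return +1 if score(+1) >= score(-1) else -1

def train(episodes=200):
    values = [0.0] * N_STATES   # the adaptive critic's value estimates
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            action = greedy_action(values, state)
            nxt, reward = step(state, action)
            # Temporal-difference update: the critic tunes its evaluation
            # of the current state toward the observed one-step return.
            values[state] += ALPHA * (reward + GAMMA * values[nxt]
                                      - values[state])
            state = nxt
    return values

values = train()
```

The point of the sketch is that nothing in `step` or `greedy_action` ever
changes; all adaptation happens in the critic's value table, which is
where the post locates the intelligence.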

I also suspect that creating a plan is no big deal; it involves a chaotic
process. The big deal is in recognizing that a given plan might be
effective, or that an existing plan could be modified to be effective.

Bill McKellan's evidence that cultural innovation is similar to individual
innovation is of some value here, because it allows us to think about
individual intelligence by examining social group intelligence. (Social
groups store plans in somewhat the same way that the cerebellum does.) Ask
yourself: "When does a social group seem intelligent?"

Enjoy,
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com
