From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!portal!cup.portal.com!PLai Wed Feb  5 11:56:05 EST 1992
Article 3384 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!portal!cup.portal.com!PLai
From: PLai@cup.portal.com (Patrick L Faith)
Newsgroups: comp.ai.philosophy
Subject: Re: "Understanding" (was Evidence that would falsify strong AI.)
Message-ID: <53822@cup.portal.com>
Date: 2 Feb 92 01:08:02 GMT
References: <1992Jan30.172057.7114@oracorp.com>
  <1992Jan31.170339.22643nagle@netcom.COM>
Organization: The Portal System (TM)
Lines: 16

>John Nagle
>  A way to look at "understanding" from the outside is this:
> Understanding is the ability to predict the consequences of your actions.
> ...
> This seems an appropriate way to deal with the question for
> artificial intelligences.

I like to base my AIs on the experimental method: building a connectionist
view of probable expectations from past experience, and correcting
invalid expectations through continuous testing, re-associating the
expectational patterns whenever results diverge.  I think John said the
same thing in a more general way, kinda.  The AI must be structured so that
it has access to a virtual world in which it can gain understanding by
experimenting in that virtual world.
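To make the loop above concrete, here is a minimal sketch (all names and
the toy "virtual world" are hypothetical, not anything from the thread): an
agent holds a probabilistic expectation about the consequence of its one
action, tests it repeatedly, and corrects it with a simple delta rule when
results diverge.

```python
import random

random.seed(0)  # deterministic toy run

class ToyWorld:
    """A trivial virtual world: pushing the switch toggles the
    light 90% of the time (a hidden rule the agent must discover)."""
    def __init__(self):
        self.light_on = False

    def push(self):
        before = self.light_on
        if random.random() < 0.9:
            self.light_on = not self.light_on
        return before, self.light_on

class ExpectationAgent:
    """Keeps a probabilistic expectation of its action's consequence
    and corrects it when results diverge (a simple delta rule)."""
    def __init__(self, learning_rate=0.1):
        self.expect_toggle = 0.5        # initial guess: a coin flip
        self.learning_rate = learning_rate

    def experiment(self, world):
        before, after = world.push()
        toggled = float(before != after)
        error = toggled - self.expect_toggle            # diverging result?
        self.expect_toggle += self.learning_rate * error  # re-associate

world = ToyWorld()
agent = ExpectationAgent()
for _ in range(500):
    agent.experiment(world)

# "Understanding" in Nagle's sense: the agent can now predict the
# consequence of pushing -- its expectation has converged near 0.9.
print(round(agent.expect_toggle, 2))
```

After enough experiments the agent's expectation settles near the world's
true 0.9 toggle probability, which is the "ability to predict the
consequences of your actions" cast as a learned quantity.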

                                        PLai
