From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!sun-barr!newstop!sun!amdcad!netcomsv!nagle Wed Feb  5 11:55:40 EST 1992
Article 3342 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!sun-barr!newstop!sun!amdcad!netcomsv!nagle
From: nagle@netcom.COM (John Nagle)
Newsgroups: comp.ai.philosophy
Subject: "Understanding" (was Evidence that would falsify strong AI.)
Message-ID: <1992Jan31.170339.22643nagle@netcom.COM>
Date: 31 Jan 92 17:03:39 GMT
References: <1992Jan30.172057.7114@oracorp.com>
Organization: Netcom - Online Communication Services  (408 241-9760 guest)
Lines: 13


       A way to look at "understanding" from the outside is this:
Understanding is the ability to predict the consequences of your actions.

       This is a useful working definition.  An observer can tell, after
the fact, whether a prediction was correct.  One can use this definition
to address questions of "understanding" in animals, by observing how well
they handle goal-oriented situations that require prediction.

       This seems an appropriate way to deal with the question for
artificial intelligences.
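
       Nagle's working definition is operational, so it can be sketched
directly.  The following toy Python code (all names and the toy world are
hypothetical, not from the article) shows the idea: an agent commits to a
prediction about the consequence of its action, the world then resolves the
action, and an outside observer scores the agent after the fact by the
fraction of predictions that came true.

```python
def environment(state, action):
    """Toy world: state is an integer position on a line."""
    return state + {"left": -1, "right": 1, "stay": 0}[action]

def agent_predict(state, action):
    """The agent's internal model of the world (here, a correct one)."""
    return state + {"left": -1, "right": 1, "stay": 0}[action]

def understanding_score(agent, env, trials):
    """Observer's after-the-fact check: fraction of correct predictions."""
    correct = 0
    for state, action in trials:
        predicted = agent(state, action)   # prediction made before acting
        actual = env(state, action)        # world resolves the action
        correct += (predicted == actual)   # judged only after the fact
    return correct / len(trials)

trials = [(0, "right"), (3, "left"), (5, "stay"), (2, "right")]
print(understanding_score(agent_predict, environment, trials))  # 1.0
```

An agent whose model diverges from the environment would score below 1.0,
which is the sense in which the observer can grade "understanding" from the
outside without inspecting the agent's internals.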

					John Nagle


