From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!news.u.washington.edu!carson.u.washington.edu!forbis Mon May 25 14:06:16 EDT 1992
Article 5746 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!news.u.washington.edu!carson.u.washington.edu!forbis
From: forbis@carson.u.washington.edu (Gary Forbis)
Subject: Re: Competence vs. Understanding
Message-ID: <1992May19.180447.22266@u.washington.edu>
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington, Seattle
References: <1992May19.003821.9450@Princeton.EDU> <60703@aurs01.UUCP>
Date: Tue, 19 May 1992 18:04:47 GMT

In article <60703@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:

>The point I still see is that this notion of "grounding" leads to a
>situation in which a robot "with semantics" is NOT (in itself)
>distinguishable  from one without.  (Or a human "with semantics" vs one
>without, for that matter.)

>Consider a robot interacting and demonstrating competence against a
>virtual world, and another robot interacting and demonstrating
>competence against the real world.  The two robots will (by hypothesis)
>end up in identical physical states, yet one "has semantics" and the
>other doesn't.

Just prior to reading this I was thinking that maybe one should ask about
competence rather than understanding.  Competence seems to be something
that can be demonstrated.

It seems to me it won't be long after computers start showing competence
in very general domains, and robustness in dealing with new domains, that
we start referring to computers as understanding the domains in which they
demonstrate competence.  When this happens it will seem strange to ask
whether they *really* understand these domains.

--gary forbis@u.washington.edu
