From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!sarah!cook!psinntp!psinntp!scylla!daryl Mon Mar  9 18:36:04 EST 1992
Article 4342 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!sarah!cook!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Definition of understanding
Message-ID: <1992Mar7.160942.22844@oracorp.com>
Organization: ORA Corporation
Date: Sat, 7 Mar 1992 16:09:42 GMT

michael@psych.toronto.edu (Michael Gemar) writes:

>>Chris, why must you always attack strawmen ? Why bother with easy
>>questions ("does a system that shuffles symbols understand the
>>symbols") when more interesting and difficult ones are around ("does a
>>system that models its own symbol shuffling understand the symbols") ?

> And how, pray tell, does the system "model its own symbol shuffling"
> without simply shuffling symbols?  How does this avoid being a regress?

> (As I've noted before, I believe that "self-modeling" explanations
> are merely smoke and mirrors.)

Michael, I believe the problem of infinite regress is real, and I
think it applies just as well to *any* notion of understanding, not
just to AI. How does it help if the brain uses electro-chemical
reactions to cause its thinking? There are still just electro-chemical
patterns in the brain, and they only relate to other electro-chemical
patterns. How do they ever relate to the Real World?

You can't argue against the possibility of artificial intelligence by
using arguments that would apply to *any* intelligence. (Well, you can
if you want to, but I don't see how it proves anything.) 

In my opinion, the most that can be asked of an intelligent being
(computer or human) is:

1. Its internal processing produces the right relationships among its
internal patterns.

2. The being's interaction with the world produces the right
relationship between the internal patterns and the external world.

To the extent that this isn't sufficient for true semantics, mortal
beings don't *have* true semantics.

Daryl McCullough
ORA Corp.
Ithaca, NY
