From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!psinntp!scylla!daryl Wed Dec 18 16:02:04 EST 1991
Article 2179 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Causes and Reasons
Message-ID: <1991Dec17.033356.22762@oracorp.com>
Organization: ORA Corporation
Date: Tue, 17 Dec 1991 03:33:56 GMT

Mikhail Zeleny has written that
> 1. Causality cannot be finitely specified.
> 2. Putnam's argument denies "in principle the type-identity of
>    functional and mental states, which is a necessary condition for the
>    sort of nomological monism you need to espouse in order to take strong
>    AI seriously."
> 3. Since every operation of a Turing machine reduces to a purely syntactic
>    manipulation, the operation of the said Turing machine cannot determine
>    the semantical properties of the program. Consequently, any understanding
>    that may have produced the said program cannot, in principle, be captured
>    by it, though it may very well be interpreted by a human agent perusing
>    it.
> 4. It is a common misconception, one that could be remedied by an
>    elementary course in model theory, that extensional semantic
>    functions can be explained purely by reference to the syntax of
>    the elements manipulated by the program.
> 5. A computer is not going to have the relation of logical
>    consequence in *any* sense of "have", pace G\"odel.

These points have been used to argue that the positions of David
Chalmers, Stanley Friesen, John McCarthy, and Drew McDermott (I hope I
haven't left anyone out) are incoherent, obfuscatory, and thoroughly
discredited by philosophy, logic, mathematics, and common sense.

On the contrary, I believe that neither Gödel's theorem, nor Tarski's
theorem about the undefinability of truth, nor model-theoretical
results such as the impossibility of a theory uniquely determining a
model prove anything at all about the impossibility of Strong AI. To
greatly oversimplify, all of these results stress the limitations of
purely syntactic methods to capture semantics. However, in order for
these limitative results to have any implication for this endless AI
debate, you must first have some reason to believe that humans do
*not* suffer from these limitations. That is where I part company from
Penrose, Lucas, and Searle.

By introspection, I don't see any reason to believe that I can, for
instance, solve the halting problem or determine the truth of
arbitrary mathematical statements any better than a computer can. As I
have said in email conversations, I believe that the argument given by
Penrose in _The Emperor's New Mind_, to the effect that a human
mathematician can always do better than any computer program at
solving mathematical problems, is simply wrong; it is not a valid
argument.
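To make the limitation I am appealing to concrete, here is a sketch of
Turing's diagonal argument (in Python, purely for illustration; the
names `halts` and `diagonal` are hypothetical, and `halts` is the very
function whose existence the argument refutes):

```python
# Suppose, for contradiction, that a total, correct halting decider
# existed. All names here are hypothetical, for illustration only.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no such total, correct function can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # loop forever if predicted to halt
            pass
    return               # halt if predicted to loop

# Feeding diagonal to itself yields a contradiction either way:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) loops;
# if it were False, diagonal(diagonal) halts. So no such `halts`
# can be both total and correct.
```

The point of the sketch is that this argument applies to any
mechanism that answers halting questions, and introspection gives me
no reason to think human mathematicians escape it.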

The only mathematical or logical argument that seems to me to have
philosophical implications for the Strong AI program is Putnam's. I
have to admit that I have only heard his argument second-hand, through
the Net, but if I understand it correctly, it is related closely to
the issue that Joseph Wang has brought up in recent postings: the
question of the uniqueness of the interpretations of computation.
Assuming that it is possible to program a computer so that it can be
consistently interpreted as, say, thinking about cats, there is still
the possibility that it can *also* be interpreted as thinking about
cherries, or chess, or chemistry. A physical system can be
*interpreted* in infinitely many ways.
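The point can be illustrated with a toy example (a sketch in Python;
the transition table and both "readings" are invented for the purpose):
one and the same formal system, under two different external labelings
of its states.

```python
# The bare formal system: a three-state cycle. The machine itself
# fixes only the syntax; any mapping of states to cats or cherries
# is supplied from outside. All names here are illustrative.

transitions = {0: 1, 1: 2, 2: 0}

cat_reading = {0: "cat sleeping", 1: "cat eating", 2: "cat playing"}
cherry_reading = {0: "cherry unripe", 1: "cherry ripe", 2: "cherry eaten"}

def run(state, steps, reading):
    """Run the machine and describe each state under a given reading."""
    trace = []
    for _ in range(steps):
        trace.append(reading[state])
        state = transitions[state]
    return trace

# The identical computation supports both interpretations:
print(run(0, 3, cat_reading))      # the states read as facts about a cat
print(run(0, 3, cherry_reading))   # the same states read as cherry facts
```

Nothing internal to the transition table privileges one reading over
the other, which is just the uniqueness problem in miniature.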

I think that this is a very important point, although it still doesn't
prove that AI is impossible, only that it has strange (though not
inconsistent) consequences. I'm inclined to just bite the bullet and
face up to the possibility (likelihood, in my opinion) that what a
*person* is thinking about is not uniquely determined.

Daryl McCullough
ORA Corp.
Ithaca, NY
