From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!ogicse!das-news.harvard.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl Tue Mar 24 09:56:33 EST 1992
Article 4529 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!ogicse!das-news.harvard.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I
Message-ID: <1992Mar17.215831.26109@oracorp.com>
Date: 17 Mar 92 21:58:31 GMT
Article-I.D.: oracorp.1992Mar17.215831.26109
Organization: ORA Corporation
Lines: 17

christo@psych.toronto.edu (Christopher Green) writes:

> If there is any reference to the meanings of the strings in the rules
> that are executed by the system, then it is not a Turing Machine.

The behavior of a human brain doesn't depend on the meanings of words,
either; it only depends on the laws of physics and the electrochemical
state of the brain. If you are going to present an argument that
purports to show that Turing machines are not capable of meaning, you
need to show why the argument doesn't also apply to humans. The answer
"It is obvious that it doesn't apply, since by introspection our
thoughts have meaning" is simply evidence that your supposedly
air-tight argument has holes in it.

Daryl McCullough
ORA Corp.
Ithaca, NY
