Article 6284 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utgpu!news-server.ecf!utcsri!rutgers!jvnc.net!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!sun-barr!decwrl!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <60837@aurs01.UUCP>
Date: 17 Jun 92 14:56:34 GMT
References: <1992Jun17.132117.9273@Princeton.EDU>
Sender: news@aurs01.UUCP
Lines: 55

> harnad@phoenix.Princeton.EDU (Stevan Harnad)
> My arguments about transduction being an essential part of brain
> function rather than an independent module were based in part on this
> fact, [.. that the retinal surface can be considered a part of the brain..]
> in part on the fact that transduction is sufficient to immunize a
> system against Searle's Argument, and in part on the essential role it
> plays in giving a system TTT capacity.

I still don't see how "transduction" and the TTT immunize a TTT testee
against Searle's argument, especially in the "Chinese android"
incarnation.  This puzzles me particularly when coupled with the
additional point about the TTT that it does not matter what mechanisms
are "inside" the TTT testee.

> Logical point: Not all of the brain is a computational core.

The problem here is that there IS no fact of the matter about whether a
process is "computational", or about whether its inputs and outputs are
"symbolic".  The notions of "computation" and "symbolic" are
interpretations of the process, projections of a model held by some
interpreter.
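
To make that concrete (a toy example of mine, not Harnad's): the single
lookup table below can be read as addition mod 4, or as the transition
table of a four-state machine.  Nothing in the table itself picks out
which "computation" it is doing; both readings, and all the names, are
supplied by the interpreter.

    #include <stdio.h>

    /* One fixed table.  Under interpretation A it is addition mod 4;
     * under interpretation B the very same entries are a state
     * transition table for a four-state machine. */
    static const int table[4][4] = {
        {0, 1, 2, 3},
        {1, 2, 3, 0},
        {2, 3, 0, 1},
        {3, 0, 1, 2},
    };

    int main(void)
    {
        /* Interpretation A: indices are numerals, entries are sums. */
        printf("A: 3 + 2 = %d (mod 4)\n", table[3][2]);

        /* Interpretation B: rows are states, columns are input
         * symbols, entries are next states. */
        printf("B: state 3 on input 2 -> state %d\n", table[3][2]);
        return 0;
    }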

> Empirical point: Large parts of the brain are doing analog processing.

Large parts of the brain seem more like asynchronous digital processing
to me.  But a question: is a slide rule "computational"?  Should
"computational" be equated with "non-analog"?

> Logical point: The meanings of the symbols in a computer alone are
>                ungrounded; they are parasitic on the interpretations
>                we creatures with minds project on them

"The meanings of gestures and utterances of a human alone are
ungrounded; they are parasitic on the interpretations we creatures
with minds project on them".  Hence the argument begs the question of
whether computers can have minds.

> Logical point: A TTT-passing robot is immune to Searle's Argument
>                and the meanings of its symbols are grounded in its capacity
>                to discriminate, identify, and manipulate the objects,
>                events and states of affairs that they are
>                systematically interpretable as being about

But a computerized assembly-line checker with a scanner and a "reject"
lever can "discriminate, identify, and manipulate [...] objects, events,
and states of affairs that [.. its internal symbols ..] are systematically
interpretable as being about".  I therefore conclude that there is some
other magic ingredient in "grounding" that I don't yet know about.
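
To show how cheaply those criteria can be met, here is a minimal sketch
of such a checker.  Everything in it is hypothetical: scan_part(), the
readings, and the tolerance are all invented for illustration.

    #include <stdio.h>

    /* Minimal assembly-line checker: it "discriminates" good parts
     * from bad by a scanner reading, "identifies" each part by its
     * measurement, and "manipulates" it via a reject lever. */

    #define NOMINAL   10.0
    #define TOLERANCE  0.5

    static double scan_part(int i)
    {
        /* Stand-in for the scanner: a fixed batch of readings. */
        static const double readings[] = {10.1, 9.8, 11.2, 10.4, 8.9};
        return readings[i];
    }

    static void throw_reject_lever(int i)
    {
        printf("part %d: REJECT\n", i);
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            double r = scan_part(i);
            if (r < NOMINAL - TOLERANCE || r > NOMINAL + TOLERANCE)
                throw_reject_lever(i);        /* manipulate */
            else
                printf("part %d: pass\n", i); /* discriminate */
        }
        return 0;
    }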


In summary, it seems to me that many of the points, both logical and
empirical, that lead to Stevan Harnad's conclusion are not well 
established.

Wayne Throop       ...!mcnc!aurgate!throop


