From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:56 EST 1992
Article 4327 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Organization: Department of Psychology, University of Toronto
References: <1992Mar4.203455.23960@psych.toronto.edu> <1992Mar06.011031.8634@norton.com>
Message-ID: <1992Mar6.215601.20146@psych.toronto.edu>
Date: Fri, 6 Mar 1992 21:56:01 GMT

In article <1992Mar06.011031.8634@norton.com> brian@norton.com (Brian Yoder) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:

>My claim is not that programs would not be used in constructing such a machine,
>but that looking at the workings of the machine in terms of instructions and
>states is a fruitless direction (as Searle points out).  Much like describing
>human intelligence in terms of neurons, neurotransmitters, and brain waves
>would be useless.  Certainly they are involved, but not in a way that can
>be intelligently mapped from one domain to the other.  Another area of 
>consideration Searle leaves out is interaction with the outside world.  How
>can a thing be said to be conscious if it isn't conscious OF anything?

But Searle *does* consider interactions with the outside world.  The
original CR situation passes inputs in and outputs out.  A more
stimulus-rich version is considered in his response to the "Robot Reply"
in the original BBS article.  Adding inputs and outputs, sensors and
effectors, doesn't change anything.

[much deleted]

>> Look, Searle explicitly states that it may be possible to construct
>> devices that have understanding.  However, his position is that such
>> devices will not have understanding *solely* in virtue of their
>> functional relations.  Yes, if you are able to clone a brain, Searle would
>> be happy to say that such a thing could have understanding (or qualia, or
>> whatever).  But it would be in virtue (according to Searle) of reproducing
>> the *non-computational* aspects of the brain. 
>
>Sure, such as sensory apparatus for example?  That is exactly what I have been
>saying all along.  To consider the philosophical terms for this again, the
>various theories of truth: intrinsic, subjective, objective, and skeptical.
>The last of course is out since who would want an intelligent machine if it
>couldn't ever know anything?  The intrinsic theory is the one Searle attacks
>(actually, a rationalist appraisal of intrinsic truth) and his conclusions are
>right...it's hopeless.  The subjective theory is almost as bad as the skeptical
>one since the machine could just make up anything it wanted and it would be
>(somehow) "true".  What is left is the objective theory which says that
>knowledge derives from the interaction between the knower and the world, and 
>that the "knowledge" is not in one or the other, but in the union of the two.

I would suggest you read the original Behavioral and Brain Sciences
article (1980), where the suggestion of interactions with the environment
is dealt with in detail.  In brief, the argument goes that any input will
still be in "marks" which are uninterpreted, and have no more "inherent"
meaning than do the marks the machine was manipulating before.  But
read the original for the fleshed-out argument.

>> And how does "perception...induction and goal-orientation" arise?  
>> At least AI offers an answer (through functional organization).  You
>> don't seem here to offer any alternative account.
> 
>I have not fully fleshed this out yet (which I guess is why I'm not obscenely
>wealthy yet ;-) but these sub-systems could be constructed out of mechanical
>parts, processors, programs, and the like, just as humans are composed of cells
>and organs, but what would be intelligent is not "the program" as Searle points
>out, but the whole system.  Remember too, that "a program" without a hardware
>platform and all the rest can't do ANYTHING, much less be intelligent.

These sub-systems merely exchange additional symbols with the machine.  This
changes nothing.

- michael
