From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!convex!news.oc.com!spssig.spss.com!markrose Wed Oct 14 14:58:21 EDT 1992
Article 7183 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!convex!news.oc.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct9.175633.8061@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Oct8.174224.20547@meteor.wisc.edu> <1992Oct8.200218.9855@spss.com> <1992Oct9.031847.1603@meteor.wisc.edu>
Date: Fri, 9 Oct 1992 17:56:33 GMT
Lines: 112

In article <1992Oct9.031847.1603@meteor.wisc.edu> tobis@meteor.wisc.edu 
(Michael Tobis) writes:
>In article <1992Oct8.200218.9855@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>But you seem to leave it open to me to describe consciousness as emerging 
>>from the system as a whole, which is all I could ask for.
>
>Well, no, sorry. I don't think that's a meaningful assertion. "Emerge"
>seems like question-begging to me. 

It wasn't intended to explain where consciousness comes from, but only to
summarize my earlier remarks.  Consciousness doesn't arise out of mere
complexity; the system has to explicitly work to produce it.  It is emergent
only in that it is not a property of arbitrary sub-parts of the system.
In a like manner, "accomplishes accounting functions" could be true of a
bookkeeping system as a whole, but not of (say) one JNE instruction within
the code.
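
To make the analogy concrete, here's a toy fragment in C, invented for the
occasion -- no real bookkeeping system looks like this.  "Keeps a balance"
is true of the little program as a whole, but not of the += or the printf
taken by itself.

    /* Toy "bookkeeping" program; the entries are made up. */
    #include <stdio.h>

    int main(void)
    {
        double entries[] = { 100.0, -40.0, 25.5 };  /* hypothetical ledger entries */
        double balance = 0.0;
        int i;

        for (i = 0; i < 3; i++)
            balance += entries[i];      /* no single += "does accounting" */

        printf("balance: %.2f\n", balance);  /* only the whole program keeps books */
        return 0;
    }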

>>Let's say the chess program also drives a robotic arm which moves the pieces
>>and punches the clock.
>
>Well, if the machine deliberately gives us clues about what its symbols
>mean to its designer, then you are correct. But I proposed that we only
>had access to the microcode level, which is presumably equivalent to the
>symbol manipulator, and that the microcode is no more than typically obscure.

You're looking for meaning in an algorithm unconnected to the world.
Unfortunately there are no grounds to believe that *anything* has semantics
under these conditions.  A human brain has semantics, but it *is* connected
to the world.  Allow the computer this connection (making it a robot)
and it can have semantics too.

>>>How can consciousness arise from symbols when symbols cannot exist
>>>without consciousness?
>
>>Why can't they?  Without any arguments to back it up, this statement
>>remains merely a slogan.  
>
>While I don't agree that it's only a slogan, if it were, I wouldn't be
>the only guilty party. "An emergent property of self-referential recursive
>symbol manipulating algorithms" sure seems like a slogan to me.

When I called your statement a slogan, I was inviting you to provide 
arguments to back it up.  If you simply assert it, I can't accept any 
conclusions you draw from it, because I don't agree with it.

The counter-slogan you propose is not mine, by the way.

>Perhaps we disagree on what a symbol means. Could you tell me what a 
>symbol is in a context that involves no conscious participants? If a 
>picture of a tree falls in the forest, is it a picture?

A symbol is something that stands for something else.  This implies an
assignment of that meaning, either explicitly made or implicitly accepted
by a process; and a use of the assignment, both "syntactic" (i.e., acting
on the symbol itself rather than on what it stands for) and "semantic"
(i.e., following the pointer to what's referred to).

Bees use symbolic movements to indicate the direction and distance of
pollen sources.  The bees don't make the assignment of meaning themselves;
they just use it -- presumably it's genetically programmed.  They use the
symbol both syntactically (unlike the information symbolized, the dance fits 
in the hive and can be communicated to other bees) and semantically (after
hearing about the pollen they go out and get it).

Are bees conscious?  I don't know, but I don't see that any of this process
depends on consciousness.  
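
If it helps, here's the bee example as a toy C fragment.  The encoding is
invented, not real bee biology; the point is only that the assignment of
meaning, the syntactic use, and the semantic use are all present without
anything in the program needing to be conscious of them.

    #include <stdio.h>

    /* The symbol: stands for a pollen source it is not.  The assignment of
       meaning is fixed in advance, like the bees' genetic programming. */
    struct dance {
        double angle_deg;    /* assigned meaning: direction relative to the sun */
        double duration_s;   /* assigned meaning: distance, roughly */
    };

    /* Syntactic use: the dance, unlike the pollen source itself,
       fits in the hive and can be passed from bee to bee. */
    struct dance communicate(struct dance d) { return d; }

    /* Semantic use: follow the pointer -- fly to what the dance refers to. */
    void fly_to(struct dance d)
    {
        printf("fly %.0f degrees from the sun for about %.0f seconds\n",
               d.angle_deg, d.duration_s);
    }

    int main(void)
    {
        struct dance seen = { 40.0, 12.0 };     /* hypothetical observed dance */
        struct dance copy = communicate(seen);  /* syntactic */
        fly_to(copy);                           /* semantic */
        return 0;
    }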

>>I still challenge you on the statement that passing the Turing Test is the 
>>goal of "some" AI workers (retreat on previous statements noted).  
>>I want names and addresses.
>
>I can't imagine why you are so fixated on this point. OK, try Paul & Patricia
>Churchland, Dept of Philosophy, UCSD. The following quotes are from "Could a
>Machine Think?", Scientific American, Jan 1990:

Thanks for the quote.  Close enough for government work, I suppose.
However, other researchers differ; Charniak and McDermott in their
_Introduction to Artificial Intelligence_ state that "The ultimate goal
of AI research... is to build a person."

>>Make our day-- ask about the humongous lookup table.
>
>OK, what about the HLT?

It's an alleged way to beat the Turing Test: record a reasonable response
to every possible conversation (up to the length of the test) in a huge
table.  What's controversial is what, if anything, that means for the TT.
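
For concreteness, a cartoon of the idea in C -- absurdly undersized, with
invented example strings; the real table would key on the entire
conversation so far and leave no gaps.

    #include <stdio.h>
    #include <string.h>

    struct entry {
        const char *history;   /* everything said so far, as one key */
        const char *response;  /* the canned reply stored for that key */
    };

    static const struct entry table[] = {
        { "Hello.",                      "Hi.  Nice weather, isn't it?" },
        { "Hello.|Hi.  Nice weather, isn't it?|Do you like chess?",
                                         "I prefer go, but I'll play." },
    };

    static const char *lookup(const char *history)
    {
        size_t i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].history, history) == 0)
                return table[i].response;
        return "Hmm, let me think about that.";  /* the real HLT has no such default */
    }

    int main(void)
    {
        printf("%s\n", lookup("Hello."));
        return 0;
    }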

>>Better yet, suggest a way to systematically trick a Turing tester.
>>It will disturb no AI researcher's repose if you think it can be done
>>but don't suggest how.
>
>I propose, and I suspect that Searle and Penrose would agree on this point,
>that completing the program outlined by the Churchlands above is identical
>to creating a system which systematically produces false positives
>on the Turing test, since none of us would believe a purely algorithmic
>implementation to be conscious in principle, although we would be fooled
>by an instantiation. 

But this belief rests on an unproven assumption that consciousness depends
on something non-algorithmic-- causal properties, quantum randomness, 
souls, phlogiston, whatever.  AI researchers don't share this assumption,
and so have no reason to give up the Turing Test because of it.

>>Well, I'll call and raise: the first SF work to use the word "robot" 
>>described the oppression of artificial life at the hand of conservatives.
>
>Your fascination with Slavic writers is interesting. You don't seem to
>realize that they tend to be allegorical when writing science fiction.

You infer too much.  Be careful, or I'll draw conclusions about you
based on your reference to Star Trek...


