From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!rutgers!uwvax!meteor!tobis Wed Oct 14 14:58:14 EDT 1992
Article 7175 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!rutgers!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct9.031847.1603@meteor.wisc.edu>
Date: 9 Oct 92 03:18:47 GMT
References: <1992Oct5.181741.7241@spss.com> <1992Oct8.174224.20547@meteor.wisc.edu> <1992Oct8.200218.9855@spss.com>
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 106

In article <1992Oct8.200218.9855@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Oct8.174224.20547@meteor.wisc.edu> tobis@meteor.wisc.edu 
>(Michael Tobis) writes:
>I am not at all sure what you mean by "understanding of consciousness is
>incomplete"-- to me it suggests that our Searle-style intuitions about
>where consciousness begins and ends are not to be trusted.  But you seem
>to leave it open to me to describe consciousness as emerging from the system
>as a whole, which is all I could ask for.

Well, no, sorry. I don't think that's a meaningful assertion. "Emerge"
seems like question-begging to me. To satisfy me, your approach requires
much more specificity than I think is available to you.

>To put it another way: dualism is nice, but it offers no program for
>science.  The soul is not an explanation; it is an a priori rejection of
>explanation.  To be a scientist, the dualist must set aside his beliefs
>and use materialistic methods and explanations.

I try to avoid the "s" word for fear of attracting allies I could do without.
I agree with the paragraph, though I wouldn't use the word "rejection".
I suspect that no explanation will arise that is of comparable robustness
and verifiability to those presented by physical science on topics not
directly involving mind. I am more a scientist by inclination than a dualist:
I will abandon my hypothesis if it is proven wrong. However, my dualism is not,
imho, in the immediate danger that so many of you think it is.

>>It is only by the application of
>>the (as yet mysterious) consciousness of the human participant that any
>>meaning is attached to the symbols.

>That would be true if the only way to divine the purpose of a program was
>to ask the programmer; but it isn't.  

>Let's say the chess program also drives a robotic arm which moves the pieces
>and punches the clock.

Well, if the machine deliberately gives us clues about what its symbols
mean to its designer, then you are correct. But I proposed that we only
had access to the microcode level, which is presumably equivalent to the
symbol manipulator, and that the microcode is no more than typically obscure.

>>How can consciousness arise from symbols when symbols cannot exist
>>without consciousness?

>Why can't they?  Without any arguments to back it up, this statement
>remains merely a slogan.  

While I don't agree that it's only a slogan, if it were, I wouldn't be
the only guilty party. "An emergent property of self-referential recursive
symbol manipulating algorithms" sure seems like a slogan to me.

Perhaps we disagree on what a symbol means. Could you tell me what a 
symbol is in a context that involves no conscious participants? If a 
picture of a tree falls in the forest, is it a picture?

>I still challenge you on the statement that passing the Turing Test is the 
>goal of "some" AI workers (retreat on previous statements noted).  
>I want names and addresses.

I can't imagine why you are so fixated on this point. OK, try Paul & Patricia
Churchland, Dept of Philosophy, UCSD. The following quotes are from "Could a
Machine Think?", Scientific American, Jan 1990:

"More specifically these results imply that a suitably programmed symbol-
manipulating machine should be able to pass the Turing test for conscious
intelligence. ... Of course, at present no one knows the function that would 
produce the output behavior of a conscious person. But the Church and Turing
results assure us that whatever that function might be, a suitable symbol-
manipulating machine could compute it. ... The only remaining problem is
to identify the undoubtedly complex function ... and then write the program
by which the symbol-manipulating machine will compute it. These goals form
the fundamental research program of classical AI."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

>As for "systematic false positives", it's a good point, but it's covered.
>Frequently this newsgroup considers ways to cheat on the Turing Test.  
>Make our day-- ask about the humongous lookup table.

OK, what about the HLT?

>Better yet, suggest a way to systematically trick a Turing tester.
>It will disturb no AI researcher's repose if you think it can be done
>but don't suggest how.

I propose, and I suspect that Searle and Penrose would agree on this point,
that completing the program outlined by the Churchlands above is identical
to creating a system that systematically produces false positives on the
Turing test: none of us would believe a purely algorithmic implementation
to be conscious in principle, yet we would be fooled by an instantiation.
I can certainly already write a program that successfully emulates severe
autism (using the original remote-typewriter test, at least):
	int main(void){while(1);}

>Ah, yes, scientific hubris.  There Are Some Things Man Was Not Meant to Know.

No, but There Are Some Things It Would Be Utterly Foolish To Do, even though
they may be within our power to do.

>Well, I'll call and raise: the first SF work to use the word "robot" 
>described the oppression of artificial life at the hand of conservatives.

Your fascination with Slavic writers is interesting. You don't seem to
realize that they tend to be allegorical when writing science fiction.

mt
