From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!sgiblab!a2i!pagesat!spssig.spss.com!markrose Wed Oct 14 14:58:10 EDT 1992
Article 7168 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!sgiblab!a2i!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct8.200218.9855@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Oct5.022907.6131@meteor.wisc.edu> <1992Oct5.181741.7241@spss.com> <1992Oct8.174224.20547@meteor.wisc.edu>
Date: Thu, 8 Oct 1992 20:02:18 GMT
Lines: 105

In article <1992Oct8.174224.20547@meteor.wisc.edu> tobis@meteor.wisc.edu 
(Michael Tobis) writes:
[on whether a single neuron is conscious]
>Well, I think that's a good question! If what the neurons are doing is
>just implementing an algorithm, the algorithm can by Turing be reduced to
>a sequence of trivial operations, comparable to a neuron firing, which
>cannot plausibly be individually considered a conscious process. You take
>this as evidence of an "emergent property" while I take it as evidence
>that understanding of consciousness is incomplete.

I am not at all sure what you mean by "understanding of consciousness is
incomplete"-- to me it suggests that our Searle-style intuitions about
where consciousness begins and ends are not to be trusted.  But you seem
to leave it open to me to describe consciousness as emerging from the system
as a whole, which is all I could ask for.

>Forgive me, I had thought I was bravely fighting a rearguard action
>in support of dualism against a materialist zeitgeist. Your response
>confused me, since it is clearly not a materialist response. Rereading
>Searle's Scientific American article, I see that he points this out, 
>though he considers dualism a fatal flaw and I do not.

The question of dualism vs. materialism is an interesting one, but orthogonal
to one's views on AI.  A materialist can be anti-AI; Searle is an example.
And there's no reason an AI researcher couldn't be a dualist.  Even if we
have souls, science can probe the mind-body interface and see how much
of human behavior and cognition can be explained with purely physical models.

To put it another way: dualism is nice, but it offers no program for
science.  The soul is not an explanation; it is an a priori rejection of
explanation.  To be a scientist, the dualist must set aside his beliefs
and use materialistic methods and explanations.

>This is where I come to the confusion of intelligence and consciousness.
>Clearly, AI has existed ever since a program defeated its programmer
>at chess. However, the attachment of meaning to the symbols output by
>the program was accomplished by an external conscious entity, i.e., the
>human player. Owing to the equivalence (I hope I understand this correctly)
>between NP complete problems, if we had no knowledge of the rules of chess
>and attempted to determine what the machine was doing from the microcode
>level description of the algorithm, we could not derive the rules of the
>game. Hence, the machine can no more be said to "understand" chess than
>the travelling salesman problem, etc. It is only by the application of
>the (as yet mysterious) consciousness of the human participant that any
>meaning is attached to the symbols.

That would be true if the only way to divine the purpose of a program was
to ask the programmer; but it isn't.  

Let's say the chess program also drives a robotic arm which moves the pieces
and punches the clock.  We can now inspect the program, observe the 
correlations with the chess game, and deduce that the program functions
as a chess-playing algorithm.  We connect the program to a backgammon set,
or an auto assembly line, and observe that it does not function in these
roles.  We would have good reason to declare, without recourse to
consciousness or dualism, that the program meaningfully deals with chess.
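To make that deduction concrete, here's a toy sketch (entirely hypothetical --
the `mystery_program` stand-in and the crude format checkers are mine, not
anything from the discussion): treat the program as a black box, feed it
positions, and check whether its outputs fit the move conventions of chess
but not those of backgammon.

```python
import re

def mystery_program(position):
    """Stand-in for the black box: it happens to answer in chess notation."""
    return {"start": "e2e4", "after e2e4": "e7e5"}.get(position, "g1f3")

# Crude "rule checkers": does an output even have the shape of a legal move?
def looks_like_chess_move(move):
    # Chess coordinate notation: from-square then to-square, e.g. "e2e4".
    return re.fullmatch(r"[a-h][1-8][a-h][1-8]", move) is not None

def looks_like_backgammon_move(move):
    # Backgammon moves name points 1-24, e.g. "24/18 13/11".
    return re.fullmatch(r"(\d+/\d+)( \d+/\d+)*", move) is not None

positions = ["start", "after e2e4", "midgame"]
outputs = [mystery_program(p) for p in positions]

chess_fit = all(looks_like_chess_move(o) for o in outputs)
backgammon_fit = all(looks_like_backgammon_move(o) for o in outputs)
print(chess_fit, backgammon_fit)  # -> True False
```

A real deduction would of course check legality against full game state, not
mere notation, but the logic is the same: the program's behavior correlates
with one game's rules and not the other's, and no appeal to the programmer's
intentions is needed.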

The threads about grounding are about something like this: the idea is that
meaning derives from the capacity for successful interaction with the world.

>How can consciousness arise from symbols when symbols cannot exist
>without consciousness?

Why can't they?  Without any arguments to back it up, this statement
remains merely a slogan.  There are other ways meaning can be defined;
I outlined one above.

>I have no problem with AI, only with the assertions like: that artificial
>consciousness is known to be possible, that consciousness is known to
>be an emergent property of symbol manipulation, that consciousness is
>known to be impossible without intelligence, that the domain of science
>is known to be commensurate with all the phenomena of the universe, etc. etc.

Such assertions bother me too, because we *don't* "know" these things.
But we can suppose them and see where that gets us.

>The test is assumed to be infallible in that our intuition is assumed to
>be incapable of systematic false positives. In other areas, e.g., optical
>illusions, it is known that systematic false positives are possible. And
>since passing the Turing Test is certainly _a_ goal, if not _the_ goal,
>of some AI workers, the possibility of systematic false positives must
>be seriously considered. It is my belief that a Turing Test passing
>algorithm is likely in the near future, but that is only because we
>are capable of being systematically tricked, not because we have a handle
>on the nature of consciousness.

I still challenge you on the statement that passing the Turing Test is the 
goal of "some" AI workers (retreat on previous statements noted).  
I want names and addresses.

As for "systematic false positives", it's a good point, but it's covered.
Frequently this newsgroup considers ways to cheat on the Turing Test.  
Make our day-- ask about the humongous lookup table.
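For the uninitiated: the lookup-table cheat is a machine that stores a canned
reply for every possible conversation history.  A miniature sketch (the table
entries and function names below are my own illustration, not anyone's
proposal -- a real table would be astronomically large):

```python
# Map each complete conversation history to a canned reply.
# Three entries stand in for the "humongous" table.
table = {
    (): "Hello! Ask me anything.",
    ("Are you conscious?",): "I wonder about that myself.",
    ("Are you conscious?", "What is 2+2?"): "Four, last I checked.",
}

def lookup_machine(history):
    """Answer purely by table lookup -- no reasoning, no understanding."""
    return table.get(tuple(history), "How interesting. Go on.")

print(lookup_machine([]))                                     # -> Hello! Ask me anything.
print(lookup_machine(["Are you conscious?", "What is 2+2?"])) # -> Four, last I checked.
```

The standard reply, of course, is that for conversations of any realistic
length the table couldn't fit in the physical universe, which is why the
thought experiment is about what understanding *means* rather than a
practical way to cheat.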

Better yet, suggest a way to systematically trick a Turing tester.
It will disturb no AI researcher's repose if you think it can be done
but don't suggest how.

>BTW, the very first (imho) SF story ever written was a cautionary tale about
>scientific hubris and artificial life. Hint: the author's husband was
>a famous romantic poet.

Ah, yes, scientific hubris.  There Are Some Things Man Was Not Meant to Know.
Well, I'll call and raise: the first SF work to use the word "robot" 
described the oppression of artificial life at the hands of conservatives.
