From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!swrinde!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cmcl2!arizona!gudeman Wed Feb  5 11:57:07 EST 1992
Article 3491 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!swrinde!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cmcl2!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <12320@optima.cs.arizona.edu>
Date: 5 Feb 92 08:38:44 GMT
Sender: news@cs.arizona.edu
Lines: 56

In article <12187@optima.cs.arizona.edu> Curtis E. Dyreson writes:
]From article <12184@optima.cs.arizona.edu>, by gudeman@cs.arizona.edu (David Gudeman):
]> All I'm saying (sigh) is that you can't take behavior as a sign of
]> consciousness without some argument for why the behavior should be
]> taken as a sign of consciousness.
]
]High Church Computationalism provides one such argument.  The lynchpin to
]that argument, as you have correctly pointed out, is the assumption that 
]"semantics follows syntax".  You reject this assumption.  It is too great
]a leap of faith to assume that intentionality emerges from syntactic
]manipulation.  Yours is a perfectly reasonable position.  

Actually, I don't have much trouble with the assumption that semantics
must follow syntax in some way.  Some days I accept it and some days I
don't.  I'll admit that some of my articles were written on days when
I didn't, and that this led the argument astray, but I intended to
argue strictly within a monist framework.  What I'm really complaining
about is the assumption that _any_ syntax that generates human-like
behavior must generate consciousness.

I'll grant the possibility that by the time a computer passes the
Turing test the relationship between semantics and syntax might be
solved and so the Turing test might be unnecessary.  I'm only arguing
that we have no reason _at this time_ for the belief that any process
that produces human behavior must lead to consciousness, even if we
accept that _some_ processes that produce human behavior lead to
consciousness.

]However, since you reject this assumption, is there any argument you are
]willing to offer in its place that explains consciousness, why it exists,
]and how it arises; in short, the nature of the beast?  It would be 
]terrific if the theory you reveal provides the foundation for the 
]scientific study of mind.  

Well, I've been meaning to save this for publication in a prestigious
journal, but since you ask I'll give the gist of my theory: basically,
everything we don't understand is caused by little invisible fairies.

]You did give a nice recipe as to why you believe that other 
](human) minds are conscious (the defeatable inference from 
]"no other reason to believe it ain't conscious" to "is conscious").  
]Alas, that recipe offered little as a scientific theory of 
]consciousness (nor did you intend it to be such), although it 
]was a nice bit of folk psychology.

Actually, I think any such argument for why we should believe that
other humans are conscious is really just a rationalization for what
we believe anyway.  It is important for philosophical argument to give
reasons for what you believe, but some things just come with being
human, and one of those things is a belief that (at least some) other
humans experience life like you do.  I certainly believed this long
before I was capable of making any sort of rational argument.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman
