From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!darwin.sura.net!gatech!ncar!noao!arizona!gudeman Tue Nov 26 12:30:58 EST 1991
Article 1444 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1027 comp.ai.philosophy:1444
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!darwin.sura.net!gatech!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Mind, Material, and AI (was: Re: Daniel Dennett)
Message-ID: <9754@optima.cs.arizona.edu>
Date: 20 Nov 91 18:46:04 GMT
Sender: news@cs.arizona.edu
Followup-To: sci.philosophy.tech
Lines: 57

In article  <9745@optima.cs.arizona.edu> Curtis E. Dyreson writes:
]...  Let's put on our Dualist hats and assume that 
]intelligence does not arise from physical processes.  Is it then possible 
]to have a research programme for the scientific study of the mind?  If so,
]what is that programme?

Of course all of the following is entirely speculative, but it seems
not unreasonable.  The first assumption is that there is a causal
connection between the mental and the material.  The material affects
the mental indirectly through the senses, and the mental affects the
material through (psychological) motivation of some sort.

I'd suggest that an exploration of minds properly takes place through
this connection; that before you can use the connection to study
minds, you must isolate it; and that before you can isolate it, you
must know a great deal more about the material basis of behavior.
To this end, I have no complaint about the way research is progressing
in psychology, biology, and AI.  I think they all have something to
contribute to the understanding of cognition, and that this
understanding may eventually lead to a discovery of what it is about
people that is not material.

I should also note that I have no reason to suppose that it is
impossible to design a machine that can pass the Turing test, but that
I would not be convinced that the machine was self-aware without some
sound accompanying theory of how this self-awareness appeared.  The
case is the same as if you were to put me in a room with controls to
some sort of adding machine.  Just because the machine passed all the
tests of an electronic calculator is no reason to suppose that the
machine at the other end is electronic.

Let me expand on this.  Suppose you have a machine with the
computational power of a PC but with a huge amount of disk storage.
Suppose further that this disk contains a decision tree of
conversations, such that for any statement made by party A, it gives
one reasonable response by party B, taking into account the history of
the conversation.  Suppose also that this tree is complete for all
conversations of length t or less, where t is the time the
conversation takes.  Then a simple program could be written for the PC
that would access this tree and could pass the Turing test for any
length of time less than t.  Is the PC aware?
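As a toy sketch of the machine just described (the table entries here
are invented, and a real table would of course be astronomically
large), the program need do nothing but index the conversation so far
into a pre-built table:

```python
# Toy sketch of the lookup-table conversationalist.  The "decision
# tree" is a mapping from the tuple of A's statements so far to one
# reasonable response by B.  All entries are invented placeholders.
tree = {
    ("Hello.",): "Hi there. What shall we talk about?",
    ("Hello.", "Do you like chess?"): "A little. I prefer Go, honestly.",
}

def respond(history):
    """Look up B's reply given the full history of A's statements."""
    return tree.get(tuple(history), "<tree incomplete for this conversation>")

history = []
for statement in ["Hello.", "Do you like chess?"]:
    history.append(statement)
    print(respond(history))
```

Note that the program itself is trivial; all the apparent
conversational competence lives in the stored table.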

Suppose now that instead of a physical decision tree on a hard disk,
you give the machine a set of rules for calculating the decision tree.
You might need a bigger, faster computer, but is this new machine more
aware just because it uses a rule for calculating its response instead
of looking it up in a table?  Suppose now that I add clever heuristics
such that the rules plus heuristics lead to arbitrarily long
conversations--the machine is no longer restricted to conversations of
a limited time.  Does that make the machine aware?
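The rule-based variant presents the same outward interface; only the
mechanism behind it changes.  In the same toy spirit (the rules below
are trivial placeholders of my own invention):

```python
# Same interface as the table version, but the reply is now computed
# by rules rather than retrieved from storage.  From the outside,
# nothing distinguishes the two mechanisms.
def respond_by_rule(history):
    """Compute B's reply from rules instead of a table lookup."""
    last = history[-1]
    if last.endswith("?"):
        return "Good question. What do you think?"
    return "I see. Tell me more."

# Unlike the finite table, the rules impose no bound on length:
history = []
for n in range(1000):
    history.append("Statement number %d." % n)
    reply = respond_by_rule(history)
```

The point of the sketch is only that a bound on conversation length
comes from the table, not from the interface, so replacing the table
with rules removes the bound without visibly changing the machine.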

If so, why?  Why should I take any performance by the machine as
indicative of some special experience by the machine?
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman