From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Thu Feb 20 15:22:01 EST 1992
Article 3856 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Keywords: consciousness,functionalism,meaning
Message-ID: <425@tdatirv.UUCP>
Date: 18 Feb 92 20:49:24 GMT
References: <1992Feb13.045721.29805@cs.yale.edu> <1992Feb13.201109.25439@psych.toronto.edu> <418@tdatirv.UUCP> <1992Feb15.162918.3699@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 59

In article <1992Feb15.162918.3699@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
|In article <418@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
|>
|>Yes, and now the question is 'can computers have minds?' ...
|>It is only by *trying* to make such a computer that the answer can be found.
|
|No, it's not the only way. One can prove that that computer could not
|have a mind, because they are lacking some necessary feature, just as
|one can prove that some substance is not, say, water, because it does
|not do something that any substance containing water must do. This is
|what Searle and Penrose have attempted, ...

But for this to be a conclusive answer it must:

	A) be based on an adequate *definition* of 'mind' that can be
	shown by induction on observations to pass all entities we agree
	have minds and to fail all (or most) entities that we agree have
	no minds.  (That is, it must be *demonstrable* that all humans meet
	the definition and all lumps of clay fail to meet it).  Merely
	enunciating an apparently adequate definition is insufficient in
	the absence of observational evidence of its adequacy - intuition is
	not sufficiently reliable to base a proof on it.

	B) ALL axioms and assumptions of the proof *must* have substantial
	observational evidence in support of them. NO 'pure' axioms allowed.
	(including all mathematical models of intelligence and of computer
	operations - without direct testing it is not clear that such models
	actually capture the relevant features).

	C) The inability of all possible computing machines to achieve the
	necessary feature must either be true by definition or be directly
	observable, or be a necessary result of some observation.  (Note
	that I include 'robots' and other computer systems attached to sensors
	and effectors as computing machines).

	D) It must be possible to verify that there are no hidden assumptions
	in the purported proof.
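Criterion (A), by the way, amounts to treating the proposed definition as a
testable predicate and checking it against a labelled set of observations.
A minimal sketch of that validation step (the entities and the stand-in
predicate here are entirely hypothetical, of course):

```python
# Hypothetical illustration of criterion (A): a proposed definition of
# 'mind' is adequate only if it agrees with every case we already agree on.

def proposed_definition(entity):
    # Stand-in predicate; a real definition would have to test
    # observable features, not these made-up attributes.
    return bool(entity.get("self_report") and entity.get("adaptive_behavior"))

# Agreed-upon labelled observations (toy data).
observations = [
    ({"name": "adult human", "self_report": True, "adaptive_behavior": True}, True),
    ({"name": "lump of clay", "self_report": False, "adaptive_behavior": False}, False),
]

# The definition passes the empirical test only if it matches every label.
adequate = all(proposed_definition(e) == label for e, label in observations)
print(adequate)  # True for this toy data
```

The point is just that 'adequate' is something you *measure* against the
agreed cases, not something you assert by intuition.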


Now, when a 'logical proof' meets those requirements, I will accept it.


But by then it is no longer a purely logical proof, it is an experimental
demonstration of a complex theory.  And this is what I meant above when I
said that only by *trying* can the answer be found.  (Making a sufficiently
detailed, tested model of intelligence will entail most of what is needed
to build a machine intelligence, if such a thing is possible).


Anything short of this is just too ethereal, too abstract, for me to accept
as valid in the real world.


[P.S., I consider the basic axioms of arithmetic to be supported by
observation, and thus empirically true].
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)