From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!spool.mu.edu!caen!nic.umass.edu!dime!orourke Wed Feb 26 12:54:43 EST 1992
Article 4030 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!spool.mu.edu!caen!nic.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <43956@dime.cs.umass.edu>
Date: 26 Feb 92 03:10:42 GMT
References: <43846@dime.cs.umass.edu> <1992Feb24.223405.28054@psych.toronto.edu> <1992Feb25.011840.24663@beaver.cs.washington.edu> <1992Feb25.184610.5199@psych.toronto.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 33

In article <1992Feb25.184610.5199@psych.toronto.edu> 
	christo@psych.toronto.edu (Christopher Green) writes:

>If you really want your argument to rely wholly on the very dubious 
>assumption that there are, somehow, two minds running around inside
>the man's head, feel free, but the utter tendentiousness of the claim
>is patently obvious to everyone not committed a priori to the belief
>that computers JUST GOTTA have minds.

	It is not patently obvious to me, and I don't *think* I have
committed myself a priori to that belief.  Can you explain why it is
patently obvious to you, or is it so obvious that it is beyond explanation?
	Let me give you three reasons why it is not absurd to me that
the system might have different properties than the mind that memorized
and blindly executes the rules:

	1. The mind that memorized the rules is not at all like the
	mind of a normal human.  It is vastly more powerful in
	several ways.
		a. Thus I mistrust ascribing the properties of our 
		puny mind to this mind the like of which we have 
		never seen.

		b. Because this mind is so much "larger" than ours,
		it conceivably has more "room" for housing subminds.

	2. Our own minds house capabilities of which we have little
	direct awareness.  Language formation is one example.  Thus
	the language-formation subsystem of my own mind might be said
	to possess properties that my mind as a whole does not.

I don't claim that these points establish Hofstadter's position.  But
neither do I see why your position is patently obvious.
