From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!munnari.oz.au!spool.mu.edu!agate!stanford.edu!rutgers!hsdndev!husc-news.harvard.edu!zariski!zeleny Sun Dec  1 13:05:42 EST 1991
Article 1656 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1168 comp.ai.philosophy:1656
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!munnari.oz.au!spool.mu.edu!agate!stanford.edu!rutgers!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Consciousness (was Re: Daniel Dennett)
Message-ID: <1991Nov26.212920.5939@husc3.harvard.edu>
Date: 27 Nov 91 02:29:18 GMT
Article-I.D.: husc3.1991Nov26.212920.5939
References: <YAMAUCHI.91Nov26024948@indigo.cs.rochester.edu> 
 <1991Nov26.135953.5926@husc3.harvard.edu> <1991Nov26.211823.20295@newshost.anu.edu.au>
Organization: Dept. of Math, Harvard Univ.
Lines: 93
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Nov26.211823.20295@newshost.anu.edu.au> 
gar@arp.anu.edu.au (Greg A. Restall) writes:

>In article <1991Nov26.135953.5926@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>* I'll suggest a reductio ad absurdum of the AI view.  Assume that
>* the mind is reducible to the functioning of the brain.  Then we may
>* conclude that the mind shares the computational limitations of an FSA.
>* Consider the fact that, in contrast with Turing Machines, there is no such
>* thing as a Universal FSA.  
>* 
>* In other words, why aren't you an Ultra-intuitionist, denying all but
>* practically "feasible" numbers?
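
The contrast quoted above can be made concrete.  A "universal" simulator
for FSAs is easy to write in an ordinary programming language, but only
because the language supplies unbounded memory for an arbitrary machine
description; the simulator is therefore not itself an FSA.  A minimal
sketch in Python (the dict-based DFA encoding and all names here are
illustrative, not from the original posts):

```python
# A "universal" simulator for deterministic FSAs.  It runs any machine
# given its description -- but only because Python supplies unbounded
# memory for the transition table, so the simulator is not itself an
# FSA.  The dict-based encoding is illustrative, not from the posts.

def simulate_fsa(delta, start, accepting, word):
    """Run the DFA (delta, start, accepting) on word."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example: a two-state DFA accepting words with an even number of 'a's
even_as = {(0, 'a'): 1, (0, 'b'): 0,
           (1, 'a'): 0, (1, 'b'): 1}
```

Any single FSA, by contrast, has a fixed state set, and so cannot play
for the whole class the role that simulate_fsa plays here -- which is
precisely the asymmetry with Universal Turing Machines.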

GAR:
>The swift move was the `in other words'.  Unfortunately
>for a logician like myself, I'm not sure of any inference
>that can take one from the first paragraph to the next.  
>This is probably due to my limited acquaintance with the 
>theory of FSAs.  However, I'm prepared to accept that there
>is a legitimate inference from the claim that the mind is 
>"computationally isomorphic" to an FSA, and Ultra-intuitionism,
>if such a move were demonstrated to me.

No inference was implied; note that I promised merely to suggest an
argument, not to present it.

GAR:
>Whatever it could be, I have no idea - for it seems to me
>that the first claim is one in the philosophy of mind, and
>the second, in the philosophy of mathematics.  And while
>the philosophy of mathematics espoused by Ultra-intuitionists
>is intimately tied to the philosophy of mind, it is not
>clear that the former could in any way follow from *any*
>particular philosophy of mind.  After all, on most realist
>views, numbers exist independently of minds - and independently
>of what *kinds* of minds there are.

I agree with nearly all of the above, especially with the claim that the
philosophy of mathematics espoused by Ultra-intuitionists is intimately
tied to the philosophy of mind.  On the other hand, it seems to me that
any philosophy of mind that claims a numerical limitation of the mind
should be wedded to Ultra-intuitionism.  Last I heard, Yessenin-Volpin was still
pursuing his research privately, unencumbered by NSF or DOD grants...

GAR:
>Of course, Mikhail's inference might be enthymematic, but 
>it's pushing it to express an enthymeme with the phrase 
>`in other words', and also, one is open to reject any 
>suppressed premises, instead of being forced into Ultra-
>intuitionism.

It isn't enthymematic but elliptical, as I omitted the conclusion rather
than the major premiss.

GAR:
>So it seems that unless the move is sketched out a lot
>more thoroughly, this suggested reductio fails.

The conclusion is as follows:

Given that an FSA is inherently incapable of modeling itself, how can we
expect an AI theorist to come up with a model of his own intellectual
processes? 

In other words, if so many people in computer science believe themselves to
be finite state automata, why isn't Yessenin-Volpin besieged with lucrative
job offers?  (This is *not* a rhetorical question!)
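
The incapacity claimed here has a concrete counterpart in automata
theory: by the pigeonhole principle, a machine with n states cannot
keep all of the prefixes a^0, ..., a^n apart, so no fixed FSA
recognizes { a^k b^k }.  The following Python sketch (an illustration
of this standard textbook argument, not anything from the original
posts) exhibits, for any DFA, a string on which it disagrees with that
language:

```python
# Pigeonhole sketch: no DFA recognizes { a^k b^k : k >= 0 }.
# The dict-based encoding is illustrative, not from the post.

def run_dfa(delta, start, accepting, word):
    """Run a deterministic FSA, given as a transition dict, on word."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

def find_counterexample(delta, start, accepting, n_states):
    """Return a string on which this DFA disagrees with { a^k b^k }.

    Among the n_states + 1 prefixes a^0 .. a^n, two must land in the
    same state; the DFA then cannot treat a^i b^i and a^j b^i
    differently, although exactly one of them is in the language.
    """
    seen = {}
    state = start
    for i in range(n_states + 1):
        if state in seen:
            j = seen[state]
            good, bad = 'a' * i + 'b' * i, 'a' * j + 'b' * i
            if not run_dfa(delta, start, accepting, good):
                return good   # rejects a string in the language
            return bad        # must also accept this string, not in it
        seen[state] = i
        state = delta[(state, 'a')]
    raise AssertionError("a DFA must repeat a state on a^0 .. a^n")

# Example: the one-state DFA that accepts everything
everything = {(0, 'a'): 0, (0, 'b'): 0}
w = find_counterexample(everything, 0, {0}, 1)
# w witnesses the disagreement with { a^k b^k }
```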

>Best wishes,
>
>Greg

Regards,
MZ


'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know! Don't know!                                    think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


