Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1947 sci.logic:847 sci.math:5985 comp.ai.philosophy:3047
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!aisb!philkime
From: philkime@aisb.ed.ac.uk (Philip Kime)
Newsgroups: sci.philosophy.tech,sci.logic,sci.math,comp.ai.philosophy
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan22.233028.7894@aisb.ed.ac.uk>
Date: 22 Jan 92 23:30:28 GMT
References: <1992Jan19.170838.7805@husc3.harvard.edu> <1992Jan19.233811.18340@bronze.ucs.indiana.edu> <1992Jan20.124249.7832@husc3.harvard.edu> <1992Jan22.203136.24023@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Reply-To: philkime@aifh.ed.ac.uk ()
Organization: Dept AI, Edinburgh University, Scotland
Lines: 56

In article <1992Jan22.203136.24023@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

[Strong AI....]

>The claim that an appropriately programmed computer could think (feel,
>understand,...).  More precisely, the claim that there exists a program
>P such that implementing P is sufficient for mentality.

This may have been the case 30 or 40 years ago, but it is only accurate
if you focus on Symbolic Functionalism. And even then, any Symbolic
Functionalist worth his philosophical salt would be extremely wary of
what is meant by 'program' here, given the (possibly insurmountable)
problems in 'implementing', in an SF way, the necessary and sufficient
conditions for mentality discussed by Dreyfus. Basically, as Dreyfus
says, a solution along the lines you suggest would require SF AIers to
solve the problem of phenomenalism given up as practically impossible by
Husserl and Winograd et al. Thus the 'claim' of Strong AI is not, I
think, quite as naive as this any more (at least amongst those who
bother to take an interest in the essential philosophical issues
involved....a very small minority). The 'claim' of philosophically aware
AI people tends towards something like:

'Nature doesn't have an exclusive monopoly on intelligence.'

                             or

'It is not necessary that intelligence occur as a purely natural
phenomenon.'

This sort of claim is much better than the rather more computational
claims of the individual AI camps, and it results in less of the
implementational dogma that has gripped Symbolic Functionalism for the
last 20 or so years. I find that a lot of AIers brought up on strong SF
AI have begun to retreat towards a weak AI position as the phenomenalist
trauma that beset Husserl towards the end begins to seep into their
minds. Yet still they cling to SF....mainly because of the rallying
claim of which you speak. Sure, we need more computationally specific
claims than those above, but a hell of a lot more evidence needs to be
assessed before we can take a claim and base a research programme on it.
It never ceases to amaze me that most AI students (and lecturers) I meet
know nothing of the neuroscientific, computation-theoretic, or
philosophical results that have a huge bearing on the choice of a
research programme's rallying point (or the rejection of an existing
one). Generally, they don't care either. People who are working in a
programme and are not thinking about its presuppositions and the bearing
that evidence has on them should stick to implementation and keep out of
philosophical debates (this comment is not meant to be directed at
anyone on this thread). If this were adhered to, we might avoid
statements about the 'intuitive' or 'obvious' status of philosophy of
mind issues when they are NEVER either of these.


>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."
