From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima Sun Dec  1 13:06:15 EST 1991
Article 1710 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Arguments against Machine Intelligence
Message-ID: <289@tdatirv.UUCP>
Date: 27 Nov 91 20:32:37 GMT
References: <43772@mimsy.umd.edu> <1991Nov27.111048.4933@odin.diku.dk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 48

In article <1991Nov27.111048.4933@odin.diku.dk> kurt@diku.dk (Kurt M. Alonso) writes:
|
|I think that the main objection that has been presented in this group
|against the strong AI thesis is that a programmed computer cannot
|have understanding, in the human sense.

And my response is always "why not?"
I have yet to see a non-mystical treatment of 'meaning' that cannot
be computed, at least in theory.

|Now, just to clarify things I will give my definition of understanding:
|"understanding is the phenomenon we experience when upon exposure to
|an isolated mental construction we find that this construction is
|coherent with previous knowledge we had. Such previous knowledge may
|consist of intuitively true 'facts' or of other mental constructions".

Hmm, interesting.  I still think this is computable.  In fact it sounds
suspiciously like a standard AI technique: knowledge-base maintenance.
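To make the parallel concrete, here is a minimal sketch of that coherence
idea in Python: a new "mental construction" is assimilated only if it does
not contradict prior knowledge.  All the names here (KnowledgeBase, coheres,
assimilate) and the tuple representation of facts are illustrative
assumptions, not taken from any particular AI system.

```python
# Sketch: "understanding" as coherence with prior knowledge.
# Facts are (predicate, subject) tuples; the KB also records
# explicitly denied facts, against which coherence is checked.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()       # what is currently believed
        self.negations = set()   # what is explicitly denied

    def coheres(self, fact):
        """A construction coheres if its denial is not already held."""
        return fact not in self.negations

    def assimilate(self, fact):
        """Absorb the fact if it coheres with prior knowledge."""
        if self.coheres(fact):
            self.facts.add(fact)
            return True
        return False

kb = KnowledgeBase()
kb.negations.add(("can_fly", "penguin"))
print(kb.assimilate(("can_fly", "tweety")))   # True  - coherent, absorbed
print(kb.assimilate(("can_fly", "penguin")))  # False - contradicts prior denial
```

Real truth-maintenance systems do far more (inference, dependency tracking,
belief revision), but even this toy version shows the definition quoted
above is mechanizable: "coherence with previous knowledge" reduces to a
computable check.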

|Now, what some people object against the strong AI thesis is that
|the formalism of Turing machines does not allow one to model the human
|semantic intentionality involved in understanding, mainly because
|the subject-object relation present in meaning-giving per se transcends
|the subject, and consequently, no theory of meaning can be formulated
|such that a TM can implement it.
|
|This critique is clearly issued from strong philosophical premises,
|namely that in assigning semantics, man is in some sense transcending
|himself, approaching ontologically far entities.

Aha! So this is the core of the argument.  I find this premise *extremely*
questionable.  We only transcend our own self-model.  Not the same thing
at all.

|The point we should now elucidate is whether by 'knowing' or giving
|meaning to entities man is in fact transcending himself, and in that
|case, whether this implies that no well-defined formalism in
|a logical sense can describe such a semantics.

In this respect, at least, I am a materialist.  I am convinced that all mental
processes are grounded in the physico-chemical interactions of the neurons
(and perhaps glia) making up the brain.  [Note, I do not necessarily mean
that mentation can be strictly *reduced* to such a description, only that it
is hierarchically built *on* *top* of this lower layer.]
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



