From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!news.funet.fi!sunic!dkuug!diku!kurt Sun Dec  1 13:05:48 EST 1991
Article 1666 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!news.funet.fi!sunic!dkuug!diku!kurt
From: kurt@diku.dk (Kurt M. Alonso)
Newsgroups: comp.ai.philosophy
Subject: Re: Arguments against Machine Intelligence
Message-ID: <1991Nov27.111048.4933@odin.diku.dk>
Date: 27 Nov 91 11:10:48 GMT
References: <43772@mimsy.umd.edu>
Sender: kurt@rimfaxe.diku.dk
Organization: Department of Computer Science, U of Copenhagen
Lines: 101

kohout@cs.umd.edu (Robert Kohout) writes:



>There are any number of people who have claimed that machines are
>incapable of producing intelligence, and in particular that
>modern computers are not up to the task. Implicit in this statement
>is that there is something requisite to intelligence that is not
>Turing computable. 

>This is a very strong statement, and before I am convinced, I would
>like to know what it is.

>The quick and dirty answer is 'consciousness', whatever that is. This
>is very convenient, because even if a machine were conscious, we
>would not be able to prove that it was, and we will be able to debate
>this issue forever. But if I am to be convinced, I'd like someone to
>show me that

>1) Consciousness is prerequisite to intelligence, and

>2) Consciousness is not possible in a digital system.

>or, alternatively,

>3) Whether consciousness is involved or not, intelligence is not possible
>on a digital system.

[lines deleted]

>The symbol-grounding argument is the modern-day equivalent of Zeno's
>paradox. Just because we find symbols defining symbols does not mean
>the process must continue infinitely, and just because we do not
>understand the way such a system might converge on a representational
>system does not mean it cannot be done. If this is to be taken as a
>serious objection, please show me that it is computationally impossible
>to ground symbols on a digital machine.

>The most frequent response to this challenge is an appeal to semantics,
>which generally also implies an appeal to consciousness. These arguments
>most commonly involve an intuition that 'meanings' cannot be conveyed
>by formal digit flipping. Why not? And even if this is true, why are
>such representations required for intelligent behavior? Once again,
>since we see such representations in the brain, what properties of
>brain architecture are not present in digital machines, and why aren't
>discrete representations of analog information sufficient? Please don't
>offer up the Chinese Room, for not only is it a flawed argument at the
>most basic level, but it presumes an intelligent machine for the purposes
>of demonstrating that the symbol cruncher cannot be said to 'understand'
>anything. If this is the strongest objection one can raise to the
>digital approach, I will sleep easily.

>I confess to having great difficulties following some of the various
>philosophical stances. Perhaps that's why I prefer mathematics. 
>Besides, I remember what Nietzsche said of 'old Kant' - that he essentially
>proved what he wanted to prove, but that his desire for the result
>was prior to the proof. If such an objection can be raised against Kant,
>who among all great philosophers stands out as one of the most methodical
>and even ponderous in his methods, I must remain sceptical of such 'proofs'.

I think that the main objection that has been presented in this group
against the strong AI thesis is that a programmed computer cannot
have understanding, in the human sense.

Now, just to clarify things I will give my definition of understanding:
"understanding is the phenomenon we experience when upon exposure to
an isolated mental construction we find that this construction is
coherent with previous knowledge we had. Such previous knowledge may
consist of intuitively true 'facts' or of other mental constructions".

That understanding according to this definition requires self-consciousness
should be clear. It should also be clear that the subject experiencing
understanding is intentionally directing a desire to give
meaning to the mental construction. 

Now, what some people object to in the strong AI thesis is that
the formalism of Turing machines does not allow one to model the human
semantic intentionality involved in understanding, mainly because
the relation of subject to object present in meaning-giving per se
transcends the subject, and consequently, no theory of meaning can be
formulated such that a TM can implement it. 

This critique clearly issues from strong philosophical premises,
namely that in assigning semantics, man is in some sense transcending
himself, approaching ontologically distant entities.

The point we should now elucidate is whether by 'knowing' or giving
meaning to entities man is in fact transcending himself, and in that
case, whether this implies that no formalism that is well defined in
a logical sense can describe such a semantics.

Kurt.


>Bob Kohout

>---------------------------------------------------------------------
>When I tell people machines are my friends, they tell me that's 
>dehumanizing. As if the world needed to be completely humanized.
>---------------------------------------------------------------------


