From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Tue Jan 28 12:16:40 EST 1992
Article 3068 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Intelligence Testing
Message-ID: <1992Jan23.215711.6793@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <11775@optima.cs.arizona.edu>
Date: Thu, 23 Jan 1992 21:57:11 GMT

In article <11775@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <1992Jan22.203042.453@gpu.utcs.utoronto.ca> Andrzej Pindor writes:
>
>]On a more serious note - has it ever happened to you that you had this
>]self-awareness of understanding and then decided that you really did not
>]understand the problem in question? I am sure it has. Try to remember
>]now what made you realise that you did not understand?
>
>This is irrelevant.  I never said that the ability to answer questions
>in humans cannot be used to judge understanding.  In fact, I
>specifically said that it could.

I did not realise that you insist on applying different criteria to establish
understanding in a human and in a machine (which you have stated clearly
in another posting). In the face of such a severe anti-machine bias (:-))
discussion may be futile. Still, let me ask: what would it take to convince
you that a machine understands? Please give me a practical answer, not vague
statements of no practical value. All this talk about semantics is useless
until you tell me how you would determine that a machine has semantics. You
say that machine understanding is syntactic and human understanding is
semantic. Can you tell me a _practical_ way of establishing that someone's
understanding of a subject, say group theory, is semantic and not syntactic?
On another occasion I tried to coax people into spelling out more clearly
what is meant by 'semantic processing', but there were no takers. Someone
originally stated that semantic processing is *internal* to the system and
not based on external information, and I suggested that it is then
equivalent to hardwired information and could be implemented in a machine
too. The suggestion was completely ignored - too stupid or too blasphemous?
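
To make the point concrete, here is a toy sketch of my own (in C, purely
for illustration; every name in it is invented, and it is nobody's actual
proposal) of what 'internal, hardwired information' could mean in a machine:

/* Toy illustration: a program whose "knowledge" of a few group-theory
 * terms is hardwired internally rather than read from any external
 * source.  Whether answering from such internal state counts as
 * "semantic" is exactly the question under discussion.
 */
#include <stdio.h>
#include <string.h>

struct fact {
    const char *term;
    const char *meaning;        /* internal, hardwired "content" */
};

/* Everything this program "knows" is fixed at compile time. */
static struct fact facts[] = {
    { "group",    "a set with an associative operation, identity, inverses" },
    { "abelian",  "a group whose operation is commutative" },
    { "subgroup", "a subset of a group that is itself a group" },
};

static const char *lookup(const char *term)
{
    size_t i;

    for (i = 0; i < sizeof(facts) / sizeof(facts[0]); i++)
        if (strcmp(facts[i].term, term) == 0)
            return facts[i].meaning;
    return "no internal state for that term";
}

int main(void)
{
    printf("abelian: %s\n", lookup("abelian"));
    return 0;
}

From the outside this thing answers questions like anything else; nothing
in its behaviour tells you whether its internal states deserve to be
called 'semantic'. That was all I meant.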
I am not a hard-core AI supporter, but I have yet to see a convincing
anti-AI argument (i.e. one showing that a machine cannot be made to
duplicate all the functions of a human mind). All this talk about
self-awareness, feelings, pain etc. is a waste of time until we have
_objective_ ways of detecting them. Perhaps they are just artifacts of a
tremendously complex system? How do you know that a similarly complex
machine wouldn't have them too? Would you know how to detect their
presence or absence? Until it is established how such states manifest
themselves, and until it is proven that a machine cannot have them, I keep
my mind open. Of course I assume that if a machine, passing through some
internal states, said (even with the proper intonation) 'I am unhappy' or
'No one understands me' or the like, that would not constitute proof for
you. Until there is a better way of establishing mental states (raised
blood pressure, sweating palms, etc. are in the same category; there is no
reason a machine could not show such accidental behaviour - after all,
your bike squeaks if you let it rust), I withhold my judgement.
By the way, if I understand Penrose correctly (based on his lecture; I
haven't digested his book yet), he does not claim that a machine cannot be
made to duplicate all the functions of a human. He only thinks that such a
machine would have to exploit quantum mechanics (just as you can't build a
laser without QM).


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


