Article 3147 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!news.cs.indiana.edu!arizona.edu!arizona!gudeman
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11906@optima.cs.arizona.edu>
From: gudeman@cs.arizona.edu (David Gudeman)
Date: 25 Jan 92 21:29:46 GMT
Sender: news@cs.arizona.edu
Lines: 69

In article <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> Andrzej Pindor writes:
]
]I did not realise that you insist on applying different criteria to establish
]understanding in a human and in a machine (which you have stated clearly
]in another posting).  In the case of such a severe anti-machine bias (:-))
]discussion may be futile.  However, let me ask what it would take to convince
]you that a machine understands?  But please give me a practical answer, and
]not some vague statements which have no practical value.

If a computer acquired intelligence "accidentally" (as in many science
fiction stories) and no one could account for the machine's actions in
terms of its construction and programming, I would at least consider
this evidence for the machine's understanding.  If the machine further
started talking about having feelings, preferences, self-awareness,
etc, then (assuming I didn't suspect cheating) I would be pretty much
convinced.

I don't have any problem believing that machine intelligence is
possible; I just don't think you can say that some behavior is a sign
of intelligence when you can completely explain the behavior without
referring to intelligence.  That sort of belief is completely
unmotivated.  (Or motivated by sloppy thinking.)

] ...Can you tell me a _practical_ way of establishing
]that someone's understanding of a subject, say group theory, is semantical
]and not syntactical?

Yes.  Clearly there is no set of syntactical rules that is simple
enough for a human to use and yet powerful enough to answer hard
questions about group theory.  So ask the person some hard questions
about group theory.  The important point is that if the person answers
correctly, there is no reasonable explanation other than that the
person understands.

]On some other occasion I've tried to coax people to spell more clearly what is
]meant by 'semantical processing', but there were no takers.

Semantical processing of a sentence about X involves thinking about
X, not about the sentence.  Syntactical processing of a sentence about
X involves only the sentence and not X.  However, if you didn't
already know that without my saying it, then you almost certainly do
not have the background to understand it (I know I didn't before I
took my first course in pragmatics).

]... All this talk about self-awareness, feelings, pain etc. etc.
]is a waste of time till we have _objective_ ways of detecting them.

It is the pro-AIers who are causing this waste of time by claiming
that the external appearance of internal experiences _is_ an objective
way of detecting them.

] Perhaps
]they are just artifacts of a tremendously complex system?

Until you can give an account of how such a thing might be possible,
even in principle, this will have to remain a completely unfounded
speculation.

]Of course I assume that if a machine, passing through some 
]internal states, said (even with a proper intonation) 'I am unhappy' or 'No
]one understands me' or the like, it would not constitute a proof for you.

Certainly not.  I could write a program today that would pass through
some set of internal states and produce exactly those sentences.  And
it would clearly not be conscious.
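
For instance, a trivial state machine (in the same toy Python as
above) "passes through internal states" and then announces an
emotion, and there is plainly nothing conscious about it:

    # A machine that passes through some internal states and then
    # reports an emotion.  Nothing here is conscious.
    STATES = ["booting", "idle", "brooding", "unhappy"]

    state = STATES[0]
    for next_state in STATES[1:]:
        state = next_state        # the "internal states"

    if state == "unhappy":
        print("I am unhappy")
        print("No one understands me")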
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman