From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!scylla!daryl Tue Jan 28 12:18:12 EST 1992
Article 3179 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan27.175154.7158@oracorp.com>
Date: 27 Jan 92 17:51:54 GMT
Organization: ORA Corporation
Lines: 22

David Gudeman writes:

> If a computer acquired intelligence "accidentally" (as in many
> science fiction stories) and no one could account for the machine's
> actions in terms of its construction and programming, I would at least
> consider this evidence for the machine's understanding.  If the
> machine further started talking about having feelings, preferences,
> self-awareness, etc, then (assuming I didn't suspect cheating) I would
> be pretty much convinced.

It is interesting that in the last century, a common-sense argument
against evolutionary theory was that it was implausible to believe
that something so wondrous as human beings could arise by accident;
there must have been a designer.

Now, a century later, common sense has flip-flopped on this issue; it
now seems implausible that intelligence could be the result of design,
although it could very well arise by accident.

Daryl McCullough
ORA Corp.
Ithaca, NY
