From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!scylla!daryl Fri Jan 31 10:26:48 EST 1992
Article 3245 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan29.042359.12172@oracorp.com>
Date: 29 Jan 92 04:23:59 GMT
Organization: ORA Corporation
Lines: 44

Several people here, including David Gudeman and Jeff Dalton, have
claimed that we *don't* use the Turing Test to determine that other
people are conscious, intelligent, or whatever; rather, we use an
argument by analogy:

    1. By introspection, I am conscious, intelligent, etc.

    2. My mental properties are caused by (or at least correlated with)
       properties of my brain.

    3. Other people have brains similar to mine.

    4. Therefore, I have good reason to believe that other people are
       conscious, intelligent, etc.

Whether or not this is a reasonable argument, it is certainly not
anything like the reasoning *I* use. I came to the conclusion that
other people were conscious long before I knew anything much about the
brain's role in mental processes (I *still* don't know much about it).

I am pretty sure that the reasoning I use for deciding whether other
people are conscious is precisely the Turing Test (or something much
like it). People are conscious because they act conscious. Coffee cups
are not, because they don't act conscious. That may be fallacious
reasoning, but it is what I use, and it has served me well enough.

In a hypothetical situation where a frog (something I wouldn't
normally consider intelligent) starts acting intelligently, I would
eagerly adjust my opinion of the frog. If a frog starts talking to me,
and I assure myself that it isn't a trick (a hidden speaker, or
ventriloquism), I am not about to say "I'm sorry, Mr. Frog. Amphibian
brains are too dissimilar from human brains for me to use the argument
by analogy, so I can't consider you conscious". If the frog talks
sensibly, then I'll give it the benefit of the doubt, and assume that
it understands what it is saying.

So, I actually *do* use the Turing Test in judging whether others are
conscious, and it would take a pretty strong argument to show why I
can't use it in judging a machine, or why a machine should be judged
any differently than humans.

Daryl McCullough
ORA Corp.
Ithaca, NY
