From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Dec 18 16:02:15 EST 1991
Article 2197 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Causes and Reasons
Message-ID: <1991Dec17.154142.21021@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1991Dec17.033356.22762@oracorp.com>
Date: Tue, 17 Dec 1991 15:41:42 GMT

In article <1991Dec17.033356.22762@oracorp.com> daryl@oracorp.com writes:

>Assuming that it is possible to program a computer so that it can be
>consistently interpreted as, say, thinking about cats, there is still
>the possibility that it can *also* be interpreted as thinking about
>cherries, or chess, or chemistry. A physical system can be
>*interpreted* in infinitely many ways.
>
>I think that this is a very important point, although it still doesn't
>prove that AI is impossible, only that it has strange (though not
>inconsistent) consequences. I'm inclined to just bite the bullet and
>face up to the possibility (likelihood, in my opinion) that what a
>*person* is thinking about is not uniquely determined.

*WHAT*????!!!!!!


I honestly don't mean to be rude, but I take such statements as evidence
that the person making them is so committed to a theoretical position that
they are willing to say things that are *clearly* wrong.

*I* uniquely determine what *I* am thinking about.  I am the sole arbiter
of the content of my conscious thoughts.  How could it possibly be otherwise?

- michael